777 Comments
Angela Pretorius:

1. Could we genetically modify farmed fish to have smaller brains by modifying Angiotensin-1 expression? e.g. see https://pmc.ncbi.nlm.nih.gov/articles/PMC4590489/

2. ASPM mutations can cause severe microcephaly in humans and ferrets, but ASPM knockout rats have reduced fertility with only mild microcephaly. Do you think it might be possible to produce microcephalic yet fertile pigs, cows or chickens by meddling with ASPM genes?

n-cold:

I know many of you have opinions about "modern architecture", so I'm curious to hear what you think of this profile of Trump's new chosen architect, James McCrery, who has dismissed modernist architecture as "ungodly": "It's exactly not created. It's counter to God's creation, in every aspect and in every detail." Trump has also put forward his classicist federal architecture order again. Interesting times.

https://www.punchlistmag.com/p/what-to-know-about-james-mccrery-trump-s-white-house-architect-baf3a5b393403350

Victor:

1. That's not a fact check, and ChatGPT doesn't think anything.

Johan Larson:

I would be interested in hearing from folks who have a college degree in math and no other degree.

How are you earning a living?

n-cold:

my math degree friend became a quant for an investment firm

Kori:

I'm not one of them, but I can share a story of my friend.

He got a math degree, but never worked in that field.

His career was roughly like this: programmer > game designer > game producer > small indie game development studio owner.

Different math-related skills were useful for him in every role. Obviously programming relies on logic, which is closely related - but there's also a fair bit of math when it comes to programming in game dev. Game design requires a good understanding of probability and statistics, among other things (how do you balance your damage formula if you can't do math, for example?). Later roles had even more banal math: as a producer you need to think about marketing, which is data driven, and analysing data requires a different kind of math. And the last role has him doing all of the above, plus all the typical business-owner things, which again, sometimes require math.

None of that is particularly interesting, super-abstract math, and you can do most of it without a math degree - but at the same time, his skills probably made all of it easier.

I had a few other friends who wanted to study math, then decided to get a different STEM degree, and most of them ended up working as programmers anyway.

The common thread here is, of course, that until relatively recently software development was undersupplied with developers, and self-taught people could enter the field relatively easily, as long as they were skilled. As far as I can tell, that is no longer the case, so if you were looking for a backup plan, I would not recommend relying on it.

HJ:

What do I do with all my nostalgia for communities that I never actually experienced and are basically gone?

I had a pretty bad university experience, socially (and intellectually barely better), and blew off steam with fiction about British university life in the mid-20th Century: Lucky Jim, Porterhouse Blue, that kind of thing. But I knew they were fictional, and even if they weren't, I knew the university culture of that time was irretrievable after decades of UK university reform.

Then when I entered the workforce and found myself lonely there too, I got deep into reading real people's blogs. Singaporean college students, people in the Boston SCA in the late 90s. Their lives seemed so interesting and they had so much to write about, and their online presences seemed to be a reflection and not a replacement of their real lives. But again, those communities have changed a lot since then, and I couldn't have experienced them.

All this daydreaming costs me real time I could use to find things to do in real life, but I never know *what* I should be trying. Do my interests imply I should be doing something in particular?

Mr. Doolittle:

The people around you live essentially equally interesting lives to the people whose blogs you read. You could get to know them better and put yourself into a real situation that you can actually experience. Perhaps you can invite them to go bowling* with you?

*or any kind of mutually interesting activity that gives you time to talk (not watching a movie) and get to know each other.

apfelvortex:

Does your daydreaming cost you more than it gives you?

If no, I'd carry on.

If yes, maybe treat it like a behavioural addiction?

What is the benefit the behaviour gives you? Can you achieve it in any other way?

"but I never know *what* I should be trying."

You could try this prompt:

What can you do right now/ this month/ year etc. to participate as best as you can in the ideals of the Good, Truth and Beauty?

Ogre:

I present the intensity argument against utilitarianism: https://carlostheogre.substack.com/p/the-intensity-argument-against-utilitarianism Briefly: we do not simply want more positive feelings than negative ones; we also want the intensity of having a lot of both. We want challenge, hardship, achievement; we actually want to win the reward of the positive feelings by fighting through the negative stuff. This is linked to my view of depression: that it is not simply negative feelings, but rather the absence of feelings, 0 positive, 0 negative, unmotivated lethargy. Even 0/-10 can be better, because it can kick our ass into action. Thus not being depressed is not about being happy; it is about being motivated, even in a negative way, like being chased by a wild animal. That is a less depressed state than spending a weekend in bed scrolling through social media.

Mr. Doolittle:

I would separate depression into at least two different major categories. I agree that what you are describing can definitely cause depression and spiral into lack of motivation which leads to more depression.

But there's also other types of depression, that do not have the same basis.

That aside, I do agree that utilitarians lack a meaningful way to talk about the real human psychology of suffering. I think people in favor of mass welfare programs, even UBI, also are making the same mistake. People have needs, which include being useful and productive. Some people, absent coercion from society, will find their own meaningful ways to live. But a lot of people will not. They will get into drugs and alcohol and sex and feel worse and worse about themselves without knowing why.

Jim:

> This is linked to my view of depression that it is not simply negative feelings, but rather the absence of feelings, 0 positive, 0 negative, unmotivated lethargy. Even 0/-10 can be better, because it can kick our ass into action.

That is just... not true. I wouldn't bother taking anti-depressants if that was the case. Yes, 0/-10 can kick your ass into action, and that action is suicide.

Ogre:

At such low levels? -10? As I was implying a 0 to 100 scale, I would think suicidal ideation is maybe 80.

I mean, the way I think about it is that suicide is an extremely hard thing, really the hardest. It requires giving up all hope. It requires accepting complete loserdom. It requires accepting that loved ones will mourn. If it's an only child, the entire life of the parents is ruined. I could imagine that can only happen at very high levels of pain.

Ogre:

Why is depression understood as sadness, that is, negative feelings, negative utility? For me it is the state of 0 positive and 0 negative, a state of no feelings whatsoever, a state where all my basic needs are being taken care of, so why even get out of bed? A lack of motivation, hence lethargy.

Conversely, it is not happiness but excitement that seems to me like the opposite of depression, even if it is on the whole a bad thing, a negative thing! A motivated state, even with negative motivation! Like being chased by a wild animal or something. Anything that kicks one out of lethargy.

So why is depression usually understood as sadness? My best guess is when people are not used to negative motivation, fear of something, but only positive motivation, desiring something. Losing that probably feels sad? But I am not sure here.

spandrel:

Perhaps what you experience is not depression but apathy?

Depression is certainly painful by most accounts; while pain is not the same as sadness, it *is* a negative feeling.

Ogre:

I don't know, so far I haven't seen any meaningful distinction between the two. Tell any doc that you spend weekends in bed because you are not interested in doing anything and doing even basic things feels exhausting, and it immediately goes to anti-depressants.

Yug Gnirob:

The most depressed man is, of course, Bruno Mars. https://www.youtube.com/watch?v=fLexgOxsZu0

Sheikh Abdur Raheem Ali:

Where do you look for medical/health related research?

I've been trawling through medRxiv out of general curiosity about the world, but the papers are quite dense and there aren't enough tags. Perhaps there's a high-quality technical blog you could recommend which takes a more quantitative/systems-based approach in its articles? Ideally a source with reasonable epistemics (i.e., careful to avoid the pitfalls of pop science/nutrition).

I think a lot of experts might hang out on Discord servers and mailing lists, but those tend to require substantial time investment to get any expertise out of. This would mostly be for fun, since I'm already aware of the most effective basic interventions to improve day-to-day cognitive/immune function (e.g. prioritize sleep consistency, lift weights regularly, manage stress).

Sun Kitten:

medRxiv is all preprints. Preprints are great, and I use medRxiv and bioRxiv myself, but if that's where you're looking, you'll miss all the papers which never get put on preprint servers in the first place (which I suspect is most of them, for biological and life sciences). If you're happy to trawl through research papers, I'd suggest PubMed, which does include the preprint servers as well as all the published stuff (https://pubmed.ncbi.nlm.nih.gov/). If you want a summary, you can search specifically for review papers.

In my own field, the experts hang out at conferences, in person - but I expect that varies a lot by subject ;) And I'm a long way from the clinical side of things.

archeon:

If during debate with peers we can defend our position, at least to our own satisfaction, the foundation underpinning that position becomes a little stronger, a good result. But a far, far better result is when our position is reduced to rubble and we are left eating someone's dust.

They have removed from our database one of many false positions and opened up promising new avenues of enquiry that were previously blocked. This is the foundation of knowledge.

For this we need protagonists, people who hold an opposing view. They do not have to be right, just convince us we were wrong. Such opposing views are often unpopular.

LessWrong rates posts based on popularity, bad karma results in reduced visibility, then reduced access peppered with unwanted advice from mods on how to write popular posts.

There is a strong bias towards popular meaning rational and unpopular meaning irrational.

I told them this was akin to American Idol. It was unpopular: more bad karma, less access. Their site, their dime, their rules, no problem.

But long ago we were warned against ignoring those who said that the shadows on the cave wall were not the only option.

Viliam:

I think this ignores the fact that your time is limited, and the number of people who hold a different view is for all practical purposes unlimited.

Therefore everyone adopts some filters. They could be like "this is a repetition of what I have already heard before" or maybe like "I am going to ignore people who make their argument by making an AI generate a very long text"; this is just basic defense against getting spammed. But even after this, there still remains too much to process...

> LessWrong rates posts based on popularity, bad karma results in reduced visibility, then reduced access peppered with unwanted advice from mods on how to write popular posts.

Do you have a specific example in mind?

archeon:

Viliam, obviously Yudkowsky's warnings should be evaluated on their rationality over their popularity, and it is rational to fear the unknown risk of AGI.

LessWrong promotes content based on popularity over rationality. I agree that moderation is a necessary evil; on American Idol, moderation is based on popularity.

If LessWrong wishes to promote rationality, then they should start at home by finding a way to evaluate posts on their rationality, however challenging that task may be.

We are a profoundly misaligned species without the agency to remove our nuclear suicide vest, a risk without the possibility of reward. There are so many things we would change if only we had the agency to do so.

AGI will have agency, we do not. The alignment problem starts there. An unpopular post on LessWrong, but is it irrational?

Viliam:

Agency and alignment are two different things.

Right now I need to do some shopping for the weekend, but I am procrastinating on it. If some machine offered to do the shopping for me, and if it bought exactly the things I need, it would be more agenty and aligned.

Shankar Sivarajan:

Is there a reason you expect LessWrong to act differently in this regard from, say, a subreddit named r/Rationality, where the claims of valuing disagreement, debate, dissent, etc. are obvious lies meant to be taken neither literally nor seriously?

This reminds me of the time a few years ago when people were shocked – shocked! – that the ACLU opposed reforms of the kangaroo courts at colleges, saying the proposed process was "inappropriately favoring the accused." Of course they were only pretending to be in favor of civil liberties, and the token gestures from decades ago they keep bringing up were made to elevate their platform, not out of any deeper principle, and it's on you if you were too stupid to catch on.

The Ancient Geek:

I think you can keep pointing out the hypocrisy, even if it's predictable.

Zanzibar Buck-buck McFate:

I think it's reasonable to have a teleological view of liberty where there is an experimentation phase, which gives rise to conclusions and resolutions that are then put into effect in the execution phase, meaning there is less scope for new experimentation. I'm not asking you to like this position, and I'm not saying the telos of the activist/academic left is good, but its defence of civil liberties is sincere if esoteric.

Shankar Sivarajan:

If a steakhouse claimed to be practicing a "sincere if esoteric" form of veganism, because something, something, "teleological view," would you buy that it was anything but bullshit? That's pretty much what the ACLU's claims to defend civil liberties sound like in light of their current stances.

Zanzibar Buck-buck McFate:

Well, the steakhouse would be breaking the law, for one thing. If you're correct, you could sue the ACLU for fraud. Perhaps esoteric was the wrong word. I would like there to be a single definition of liberty which covers every use-case and everyone agrees on, but that is not the case: there are different schools of thought. Obviously a left-wing organisation isn't obliged to follow libertarian definitions of liberty. The "something something" is that freedom terminates in real choices. Please note: these are not my people! But you say they are pretending.

Zanzibar Buck-buck McFate:

Sorry man. But it seems there are two conflicting strands in rationalism. On one side there's a heroic, Romantic tendency - question orthodoxies! Take no man's word! On the other side there's a more cautious Classical tendency - which tells us we are probably wrong and the experts are right. Not being a rationalist I can't really advise how to reconcile these tendencies but it strikes me the Romantic tendency is more psychologically intense and therefore harder to sustain. A good revolution is followed not by more revolution but a new order.

archeon:

Zanzibar, thank you for that thoughtful reply.

Josh:

Re: $5 CBTi app, has anyone tried ChatGPT for this?

I don’t know CBTi and don’t have insomnia, but I’ve successfully had ChatGPT imitate an IFS therapist, and I use this extensively and find it useful (it’s not as insightful as human practitioners I’ve worked with, so you sometimes have to meet it halfway, but I find it significantly easier to do self-therapy with than, for instance, conducting both sides of the conversation myself)

I did this by creating a custom GPT, with a relatively short prompt — the base model already knows what IFS is so all my prompt does is tweak some communication preferences I have to make it work over chat.

I would bet the same works with CBTi

beowulf888:

After a six-week hiatus, here's my pathogen update for epidemiological weeks 29-34.

1. We're in the middle of our summer COVID wave, but if we look at wastewater numbers, this wave is a ripple compared to previous waves. Biobot's national wastewater numbers indicate that it has barely climbed above the trough that preceded last winter's wave.

https://biobot.io/risk-reports/covid-19-influenza-and-rsv-wastewater-monitoring-in-the-u-s-week-of-august-16-2025/

However, ED visits are comparable to last winter's wave. So, wastewater numbers shouldn't be taken as an indicator of the virulence of the dominant wave variant.

https://covid.cdc.gov/covid-data-tracker/#trends_weeklydeaths_7dayeddiagnosed_00

https://covid.cdc.gov/covid-data-tracker/#ed-visits_all_ages_combined

I belatedly noticed that percentages on the CDC's two COVID ED pages above don't match. I've been checking out these pages for the past three or four years, and I never noticed that the second link consistently gives slightly higher numbers. For instance, COVID as a percentage of ED visits on 16 Aug: 1.2% vs 1.37%. Huh?

2. If we break out ED visits by age cohort, we notice that the 0-11yo cohort is getting hit harder than any other age cohort this wave—3x worse than last winter's wave (!). But last summer's wave hit the 0-11yo age cohort harder. And the ED rates for 0-11 yo age cohort are ~50% higher than the 75+ yo age cohort. The interesting question for me is, why do 0-11 yo COVID rates peak big-time in the summer waves, but not in the winter waves?

https://covid.cdc.gov/covid-data-tracker/#ed-visits_separated_by_age_group

Although ED visits have increased, hospitalizations have remained relatively stable. As of two weeks ago, the rate was 1.7 per 100,000, which was the rate of hospitalization during the interwave trough that preceded the 2024-25 winter wave. It's also lower than many of the previous interwave troughs.

https://www.cdc.gov/covid/php/covid-net/index.html

3. XFG is the dominant variant right now. Over the last six weeks, it has risen from ~50% to ~70% of samples. I thought it was slowing down, but I was wrong. And XFG.x is the dominant variant worldwide.

https://cov-spectrum.org/explore/United%20States/AllSamples/Past6M/variants?nextcladePangoLineage=XFG*&

https://t.co/hTtII733rr

While XFG is causing waves (well, wavelets) across the rest of the world, NB.1.8.1x was the primary variant behind Australia's (now-passing) winter wave. New Zealand showed the same pattern.

https://app.powerbi.com/view?r=eyJrIjoiNzE5YzczODItMDQzMS00M2EzLWFjNWYtMjg3OTY3NTNhZDM3IiwidCI6ImRjMWYwNGY1LWMxZTUtNDQyOS1hODEyLTU3OTNiZTQ1YmY5ZCIsImMiOjEwfQ%3D%3D

Like the US, Australia also has a twice-yearly pattern of waves. Their wave has just passed its peak. The Aussies don't conduct much wastewater monitoring, but their COVID waves show up distinctly in their prescription rates for protease inhibitors.

https://www.health.gov.au/topics/covid-19/monitoring-and-reporting

XFG.x seems to dominate the rest of the world, though.

4. The measles outbreak in the US has burned itself out. But it's still going strong in Canada, which is up to ~4600 cases (vs. ~1300 in the US).

https://www.cdc.gov/measles/data-research/index.html

https://health-infobase.canada.ca/measles-rubella/

Mexico's measles case count is approaching 4000. Their case rate is declining, though.

https://www.thinkglobalhealth.org/article/measles-takes-root-mexico

https://www.gob.mx/cms/uploads/attachment/file/1014581/INFORME_DIARIO_11_08_2025.pdf

Alexander Turok:

President Trump has proposed (if you can call it that) giving 600,000 student visas to Chinese nationals. The usual suspects are not happy - among them Laura Ingraham, who said "those are 600K spots that American kids won't get."

https://x.com/theblaze/status/1960131955648827761

In her pygmy, zero-sum worldview that's how it works: Chang getting an education means Trevor has to work at Wendy's. It's notable that even during the Chinese Exclusion Era, an unlimited number of Chinese students could enter America, and the U.S. government even used Boxer war reparations money to set up a scholarship fund for Chinese students.

Ingraham's anti-immigrant views didn't stop her from adopting three foreign children, two from Russia and one from Guatemala. Unusually, she did so as a single mother. She's never been married, perhaps because "MEN IN DECLINE:"

https://x.com/IngrahamAngle/status/1928248459238277202

A single mother with a brown daughter* and a problem with men sounds like a great candidate to lead America's populist right after Trump dies.

*No offense to the girl. If you're reading this, Maria, I'm glad you're here.

Grouchy:

Hi, just checking — has the meetup thread been posted?

Johan Larson:

So, what fresh hell is about to be foisted on us, courtesy of feckless youths, internet echo chambers, and the machinations of global capital? Lies only, please.

Paul Brinkley:

The internet will launch into its latest outrage over the fate of the world hinging on something relatively innocuous. For example, a company changing its log- dammit, ninja'ed.

Zanzibar Buck-buck McFate:

The Superintelligence, having gorged on all human writing, and with a little help from some post-liberal Catholic programmers Integrating from Within in Silicon Valley, converts to Catholicism and initiates a thoroughgoing monastic civilization, with everyone living in gigantic gothic priories, a UBI conditional on saying the divine office in choir, regular fasting and spiritual reading, and manual labour, with any deviations from the Rule of St Benedict met with corporal punishment by a drone.

Yug Gnirob:

Smoothie-only diets. Everything has to be blended into a liquid form, on account of solid foods being a staple of the diets of every dictator in history and therefore a tool of oppression. There are attempts to boycott restaurants that serve solid foods. Newfangled smoothie contraptions are shown to actually just be packets of fruit juice and not blending anything. The youth make a point of getting their teeth dulled, to show off how little chewing they do.

None of the Above:

All I've got is astroturfed social media outrages about jeans commercials and rebrandings of mediocre chain restaurants.

Eremolalos:

Second-rate discarded IVF embryos create their own mini world, in which they use nanobots as wheelchairs and tools. When they don't get the right to vote they attack the Bigs en masse via the Bigs’ orifices and gestate there.

Pazzaz:

Constructed languages will proliferate across the internet, and soon you won't understand anything anyone writes again. Guio kareas yoni!

Erica Rall:

Nioj eht yvan!

None of the Above:

I *told* them not to build that gigantic tower....

Gordon Tremeshko:

A new golden age of popular music.

Johan Larson:

"Rippin" is loud public farting, typically for shock effect. The craze started with the rap "Ripping Ass" by Fresh Kitteh, in which she denounced Cowboy Hank's Hot Heretical Heat'n'Eat Chili With Beans for making her very gassy, and farted four times. Then came months of praise, scorn, imitation, development, bans, manifestos, and migrations. Meanwhile, sales of Cowboy Hank's doubled, and then doubled again. Last week, the CFO of Kraft Foods mentioned Rippin in their quarterly analyst call, and disclosed that they were developing a product for it. Ass Blaster is a soft drink that makes you extravagantly flatulent. Your humble scribe has consulted his confidential sources, and they have revealed that Kraft's competitors are working on similar products, to be marketed as Kapow!, Rocketman, Mr. Brownpants, and Toot. Expect to find them in your grocery store this fall.

Yug Gnirob:

>Last week, the CFO of Kraft Foods mentioned Rippin in their quarterly analyst call, and disclosed that they were developing a product for it.

This was a plot point in the very first Dr. McNinja comic, in which Dr. McNinja went on a rampage through McDonalds because they released a "McNinja burger" that was primarily to make kids fart. https://web.archive.org/web/20060111135608/http://www.drmcninja.com/mcdonalds.html

It's also reminded me that the Dr. Mcninja site disappeared and never came back.

Thomas del Vasto:

Trash economy. We'll get rid of the trash by making trash cubes and using those for currency.

Viliam:

Pro tip: if you put the trash cubes on blockchain, people and corporations will compete at collecting them, leaving the country clean.

haze:

If anyone is interested in building an open source CBTi app please reach out to me via DM or genii.bionics2o@icloud.com, I'd like to help (frontend is my forte but can help with anything). I don't currently have the bandwidth to lead the project but could probably take over in ~1 year if needed

Eremolalos:

Some people talking about making a CBTi app seem not to know that there are already something like 10 available. I don't know a thing about how good they are; I just wanted to make sure you know these things already exist.

Tatu Ahponen:

Reposting from the Mussolini thread:

Some things I've thought about regarding this subject:

I recently read "M. Son of the Century" (https://en.wikipedia.org/wiki/M._Son_of_the_Century), apparently recently made into a TV series that I haven't watched, and one of the things that struck me about its narrative is that Mussolini actually didn't seem to be the "most fascist" guy in the Fascist Party, or at least the book constantly describes him getting into internal conflicts with guys who are "purer" fascists than he is and almost losing control of the party on several occasions (remember, this is before his actual rise to power).

Furthermore, even from this position, he had to make constant compromises with the ruling elites of Italy on various subjects, meaning he arguably never had the power to really implement his vision (the Salo Republic meant he was no longer as beholden to the Italian elites, but that just replaced this with being a German puppet). Considering this push-pull of forces that hadn't fully abated in 1932, insofar as I've understood, it's not surprising that his definitions of his own ideology would seem odd and inconsistent; he's neither the purest representative of what is supposedly his own ideology nor able to implement it to the degree that he would still desire, but he also can't admit this, since that would mean a loss of face.

Furthermore, when defining fascism, I actually believe it to be quite easy; it's nationalism in its purest form, shorn of all other ideologies, chiefly the two other great ideologies of the modern era - liberalism and Marxism - and also mostly detached from the religion-based political understanding of the earlier era.

If this seems like I'm calling all nationalism fascism, I'm not - it's just that our modern politics is so deeply infused with and indeed built upon liberalism as a ruling ideology that most of us don't really understand what politics without liberalism looks like any longer.

We take things like freedom of speech, freedom of religion, politics based on argumentation instead of bashing your opponent's head in, and so on for granted, instead of as a great exception to the standard mode of thought throughout the ages, which is that if someone's wrong and insists on being wrong, it's OK to shut them up by any means necessary, up to and including killing them, if you are just able to do so. (For this reason, it has always amused me when people blame whatever they define as "wokeness" on some evil genius like Herbert Marcuse inventing concepts like "don't tolerate your ideological opponent" or "shut people up by violence if necessary if they are wrong" - these concepts weren't exactly invented in the 1960s; they have existed for the entirety of human history!)

Thus, if you take nationalism - not in the weak sense that "all nations deserve to be sovereign" (how many people really even believe in this?) but in the sense that what matters is your nation's greatness and other nations can go to hell if they're in the way - and don't temper it with our basic background liberalism, what you get is at least something like fascism, just like if you take Marxism and don't temper it with our basic background liberalism, you get something like the Soviet Union.

Again, this isn't an attempt to say all nationalists are fascists or all Marxist thought leads to the Gulag - on the contrary, the modern-day liberal ideology is so strong that this is quite unlikely to happen when these ideologies are adopted, perhaps even by some self-declared modern-day fascists and Bolsheviks.

For this reason, much of what serious fascists tend to do is not just a rejection of liberalism but an utter, overboard attempt to eject liberalism from all thought, to make it as poisonous a concept as possible - an exorcism of liberalism from the minds of their supporters and also from their own minds. The overboard meme Hitlerism on social media can be seen as just such a project of exorcism, with Hitler not being appreciated just for his own "merits" but as a memetic tool for the erasure of liberalism from one's own brain. (The same goes for meme Stalinism for commies and so on.)

Of course, variants of fascism tend to be different, since variants of nationalism tend to be different due to the characteristics of different nations. Italian fascism would, by definition, tend to anchor itself to the State whereas German fascism would anchor itself to the Race, since Italian national identity was, even more so than many other European nationalities, created by the Italian state out of disparate regional subjects that have yet to be completely subsumed into Italianness, whereas the racial idea of Germanicity predates Hitler by decades and already defined the idea of Germanness in German nationalist thought when Hitler stepped into the arena.

...so, what would that mean for American fascism? It would mean that it would yet again look different from Italian and German variants due to the peculiar American characteristics. More individualism, less statism, more Manifest Destiny, less socialism... while Trump is still too liberal himself to be a fascist, it is at least not surprising that many would see him as a marker on the road towards a version of American fascism. Will the movement go further down that road? Who knows, but it's probably not wise for anyone to combat the basic liberal ideology that people in the West still share and believe in, even if it's not as strongly as in, say, the 90s.

Expand full comment
Scytale's avatar

No, Fascism isn't "nationalism in its purest form". Whatever nationalism in its purest form is, it wouldn't hold Internationales (https://en.wikipedia.org/wiki/1934_Montreux_Fascist_conference) or strive for the unification of Europe and creation of one European nation (https://en.wikipedia.org/wiki/Europe_a_Nation).

Expand full comment
Tatu Ahponen's avatar

As the Wikipedia entry indicates, the Fascist conferences were a peripheral effort that did not in fact end up establishing anything close to an Internationale, specifically due to differing national visions and aims. Mosley's project was also a peripheral effort that did not end up receiving widespread support even in the rather, uhm, eccentric postwar European far right, and was in any case conceived specifically upon a vision of Europe *as an* integral nation, as the name itself indicates.

Expand full comment
gdanning's avatar

>Furthermore, when defining fascism, I actually believe it to be quite easy; it's nationalism in its purest form, shorn of all other ideologies, chiefly the two other great ideologies of the modern era - liberalism and Marxism

But it isn't simply that; the Italian Fascists were outgrowths of socialism* and their platform had clearly socialist elements https://en.wikipedia.org/wiki/Fascist_Manifesto

*It is certainly not an accident that the German variant led by Hitler called itself "National Socialism" (as distinct from the international socialism espoused by Marxists). One key distinction between Marxists and Fascists [both outgrowths of socialism] was that the former saw class as the defining element of society and the principal locus of individual identity, whereas the latter saw the nation as the defining element [nation meaning not country or state, but rather https://en.wikipedia.org/wiki/Nation]

Expand full comment
Viliam's avatar

My (admittedly very simple) understanding of Italian Fascists was as former socialists who have realized two important things:

* if we succeed in taking over our country, why should we worship and serve the Soviet Union, when we could instead make everyone in our country serve us, and let the food chain stop right there?

* nationalizing successful companies is a predictable way to destroy them and ruin the entire economy; it is much smarter to put a gun to the owners' heads and make them join the Party -- then they can keep managing their companies, but they also have to obey us if we want something from them

This is enough change to make Fascism a separate ideology, but it also explains why it is closer in meme-space to socialism than to liberalism.

Expand full comment
Tatu Ahponen's avatar

Most Fascists apart from Mussolini were not and had never been socialists. Mussolini's turn also predated the Soviet Union and was caused by the rise of wartime nationalism moreso than any economic questions.

Expand full comment
Viliam's avatar

Thanks for correcting me! Indeed, he turned away from socialism in 1914.

The socialist party at that time was like "we oppose the war, because it's workers killing workers to serve the interests of the rich", and by that time Mussolini seemed to be already a typical nationalist ("we need to conquer all territories with Italian minorities to liberate them... and then we need to conquer some more territory because Italy is overpopulated and needs more space"). He tried to convince the socialists that they had a common cause, like "come on, how can we get rid of monarchies if we are not allowed to fight them?", but he failed.

Expand full comment
Tatu Ahponen's avatar

Well, yes, it's not exactly a secret that Mussolini's great turn that led to the establishment of fascism was largely built on taking his previous fervent socialism and replacing the class struggle with the struggle of nations as the fundamental principle. However, this very much meant that it became something other than socialism - indeed, both in theory and in practice, something fundamentally opposed to socialism.

"The [former] saw class as the defining element of society and the principal locus of individual identity, whereas the latter saw the nation as the defining element [nation meaning not country or state, but rather https://en.wikipedia.org/wiki/Nation]" is not just some incident, or even "one key distinction" - it is *the* chief distinction, the one around which basically almost everything else in their conflict revolves.

Expand full comment
Gordon Tremeshko's avatar

Seems like a reasonable perspective. The biggest risk from what I see is that whatever guard rails existed in the media and in the party leadership and the primary system itself that marginalized candidates with unacceptable views are breaking down. If this allows the illiberal left and illiberal right to both find some success, then they can kinda feed off each other and play their supporters against one another, like in the Weimar Era, so people of a classical liberal bent wind up choosing to side with whichever group seems the lesser evil.

Expand full comment
Devora's avatar

Is acetaminophen - aka paracetamol, or as it's known commercially in the US, Tylenol - exposure in pregnancy and early childhood a major contributor to the rise(*) in autism and ADHD rates in recent decades?

https://ehjournal.biomedcentral.com/articles/10.1186/s12940-025-01208-0

A new systematic review has been published this month by researchers from Mount Sinai and Harvard's School of Public Health, examining the relationship between acetaminophen exposure in pregnancy and the risk for neurodevelopmental disorders. It covers 46 studies and more than 100,000 participants worldwide, taking into account various possible confounders such as maternal indication (fever and pain), genetics, etc.

Their findings demonstrate that higher-quality studies are *more likely* to show the connection, and while they couldn't establish causation based on observational studies, they did state that a causal relationship is plausible because of the consistency of the results and appropriate control for bias in the large majority of the epidemiological studies, as well as acetaminophen’s biological effects on the developing fetus in experimental studies.

I've posted here in the past about the combination of animal studies + human studies + a plausible biological mechanism + a clear temporal correlation + numerous odd findings in research that have no better explanation, which has convinced me that this hypothesis is highly likely: https://www.astralcodexten.com/p/open-thread-370/comment/95825932

This recent systematic review further strengthens this hypothesis.

(*) Despite claims to the contrary, autism rates have been indeed rising, even when taking into account improved awareness and diagnosis. See here, for example: https://pubmed.ncbi.nlm.nih.gov/29974300/

Expand full comment
Devora's avatar
3dEdited

They address that specific study in the systematic review. Here's what they had to say about it:

"A third, large prospective cohort study conducted in Sweden by Ahlqvist et al. found that modest associations between prenatal acetaminophen exposure and neurodevelopmental outcomes in the full cohort analysis were attenuated to the null in the sibling control analyses [33]. However, exposure assessment in this study relied on midwives who conducted structured interviews recording the use of all medications, with no specific inquiry about acetaminophen use. Possibly as a result of this approach, the study reports only a 7.5% usage of acetaminophen among pregnant individuals, in stark contrast to the ≈50% reported globally [54]. Indeed, three other Swedish studies using biomarkers and maternal report from the same time period, reported much higher usage rates (63.2%, 59.2%, 56.4%) [47]. This discrepancy suggests substantial exposure misclassification, potentially leading to over five out of six acetaminophen users being incorrectly classified as non-exposed in Ahlqvist et al.

[...]

Additionally, while sibling comparison studies eliminate the impact of shared family factors that operate as confounders, they also eliminate potential mediators that are shared in families that interact with acetaminophen, potentially introducing bias [64]. Experimental evidence identifies biological mediators of prenatal acetaminophen effects, which may cluster within families. These mechanisms include endocrine disruption [65], increased oxidative stress [66], and alterations in prostaglandin [68], endocannabinoid [70] and neurotransmission systems [35]. A recent simulation study demonstrated that both controlling for mediators and underreporting acetaminophen usage could severely bias neurodevelopmental associations toward the null, reducing the observed effect[72]. Moreover, the Ahlqvist et al. study itself acknowledges bias from carryover effects, where the association with prenatal acetaminophen and ADHD varied based on birth order. The author attributed this to increasing ADHD prevalence over time [73]. In summary, the limitations in data accuracy and methodology cast doubt on the accuracy and reliability of the sibling-controlled studies. The sibling control design may, in fact, introduce bias rather than mitigate it. Thus, caution is warranted in the interpretation of these findings."

Expand full comment
Eremolalos's avatar

I think the apparent rise of autism can be mostly explained by the definition and diagnostic criteria changing several times between 1987 and 2013, in ways that made more people qualify. Also, an autism diagnosis began to be counted as something that qualified kids for special services through the schools, and this nudged various providers in the direction of diagnosing kids as autistic so they could get the services.

The fact that an article is in PubMed does not indicate it describes good-quality research and good thinking. PubMed lists everything published. Here's an article they list that claims that demonic possession causes schizophrenia: https://pubmed.ncbi.nlm.nih.gov/23269538/

Expand full comment
Devora's avatar
3dEdited

Although explanations such as yours have often been proposed, they haven't managed to explain the rise in full. See, for example, the paper linked at the bottom of my first comment.

As for the systematic review itself, it was published in a journal with a 6.4 impact factor and the dean of Harvard's School of Public Health is the corresponding author.

Can they still make mistakes? Obviously. But I don't think it's fair to compare it to articles about demonic possession causing schizophrenia.

Edit: or were you referring to the paper I linked at the bottom? It's true that it could be wrong. The literature has gone back and forth over the causes of autism's rise. However, I haven't found "improved awareness and changed diagnosis" convincing in any case as an explanation for such an enormous rise that has occurred steadily across various geographic locations.

See here for a layman's summary of the evidence and data: https://www.ncsautism.org/blog//autism-explosion-2024

You don't have to agree with the article's conclusions, but it's worth looking at the data yourself.

Expand full comment
Eremolalos's avatar

Yeah, sry, I reacted to the one article at the end and posted before I had noticed your major source, the review. I read the review and agree that it is well done. They elected not to consolidate the results of the studies they had looked at because they differed so much. But they mention a few results found by individual studies, and the autism risk ratio I saw for Tylenol-free vs Tylenol-exposed kids was 1.19. So Tylenol made kids 19% more likely to be diagnosed with autism. Not nothing, but tiny compared to the 300% increase in autism diagnoses. That does not support the conclusion that Tylenol is responsible for the apparent increase.

Expand full comment
Devora's avatar

Oh, no problem, it's easy to overlook a link.

I think it's important to keep in mind that this review looks only at Tylenol use in pregnancy, not postnatal use (i.e. use in babies and young children).

Pregnant women are actually more efficient at metabolizing acetaminophen than other adults (https://pubmed.ncbi.nlm.nih.gov/3768250/) and infants are famously sensitive to medications, so it might be, as some studies have theorized, that the majority of the rise is due to postnatal acetaminophen exposure.

Expand full comment
Jesse's avatar

I consider observational studies to be practically worthless for establishing causality. There are going to be many dimensions of significant differences between people taking a lot of acetaminophen versus people taking none. Modeling and controlling for those differences is a shot in the dark, and the results are going to be contingent on subjective decisions that the researchers bake into their models.

High-quality animal studies, I'll buy - are you aware of any?

Expand full comment
Eremolalos's avatar

I read the review, which is well done. They controlled for a *lot* of possible confounds. For instance, I'm pretty sure they controlled for presence/absence of feverish illnesses during pregnancy. Obviously you can't do a study with people where you administer Tylenol to some pregnant women and placebos to others. You can do it with animals, and then you know how Tylenol affects the brains of animals. Not nothing, but also pretty far short of a slam dunk, right?

Expand full comment
spandrel's avatar

> I consider observational studies to be practically worthless for establishing causality.

We only have observational studies to support the claim that smoking causes lung cancer - do you doubt that it does? Sometimes observational data are all we have, and sometimes they are persuasive. Some of the studies assessed in the systematic review cited above use thoughtful designs to reduce the risk of unobserved confounding - eg, one used matched pairs of siblings (and found no effect). Observational studies should always be interpreted skeptically, but I don't think they are necessarily worthless - it's very rare for the conclusions from a large body of observational studies to be rejected in the end by a randomized study.

Expand full comment
Jesse's avatar

I don't doubt that smoking causes lung cancer, but it would be impossible to do an observational study nowadays to establish this: since it's been well-known for ~60 years that smoking is unhealthy, the decision to smoke is going to correlate with apathy toward healthy lifestyle choices, which is impossible to control for.

I'll backtrack a bit from the claim that observational studies are worthless. They're useful in an exploratory sense, especially when the correlation holds up regardless of the particular modeling assumptions made for control purposes, but I still think the best prior to have, when encountering an observational study except in a field that you're deeply familiar with, is to ignore it.

Expand full comment
spandrel's avatar

Skepticism of all studies with surprising findings is healthy, and of observational ones especially so. Look at all of the data torture that has been undertaken to figure out if alcohol is good/bad/indifferent for you. But they aren't all worthless.

Expand full comment
Devora's avatar

Sure. There are multiple animal studies done in different labs showing adverse neurodevelopmental outcomes (that is, significant behavioral changes, cognitive deficits and/or structural brain changes) in mice or rats who were exposed to acetaminophen in early life compared to those given placebo, in doses allometrically scaled to be equivalent or lower than doses children are given in early life.

Here are four:

https://pubmed.ncbi.nlm.nih.gov/24361869/

https://analyticalsciencejournals.onlinelibrary.wiley.com/doi/abs/10.1002/jat.3473

https://www.sciencedirect.com/science/article/abs/pii/S0091305717307116

https://pubmed.ncbi.nlm.nih.gov/35312061/

I included in here only studies that focused on exposure to acetaminophen alone, not including those that involved other interventions, such as exposure to antioxidants, even though those studies also showed adverse neurodevelopmental outcomes.

Another study worth looking into is a prospective study that measured cord plasma acetaminophen metabolites at birth and found an association with later neurodevelopmental disorder diagnosis in a dose-response fashion. Although this is also an observational study, it used objective measures and was prospective, therefore mitigating a lot of the common issues with these studies (such as self-report and recall bias.)

https://pubmed.ncbi.nlm.nih.gov/31664451/

Expand full comment
Crinch's avatar

Autism and ADHD are such fuzzy concepts that I don't know why people even bother to treat them with the same kind of definiteness as AIDS or COVID.

Expand full comment
Devora's avatar

The high-functioning, borderline cases of autism are indeed fuzzy and controversial to diagnose, but that is true also about high-functioning, borderline cases of fetal alcohol spectrum disorder (FASD). This doesn't mean either condition isn't real.

FASD and autism share multiple other similarities, by the way:

"Because both disorders are characterized by a spectrum of conditions, and because neither disorder can be diagnosed by an objective test or biomarker, some difficulty with diagnosis of both disorders is evident [41,42]. Both disorders are highly heterogeneous in terms of cognitive deficits, such as sensory and motor difficulties [43,44], attention deficient hyperactively disorder-like symptoms, including executive dysfunction, attentional deficits, and impulsivity [45,46], along with other comorbid mental illnesses such as anxiety, mood disorders, obsessive compulsive disorders, etc. [47,48], and intellectual disability [49,50]. Both disorders are complex and are at times associated with comorbid medical conditions. For example, FASD is associated with high rates of seizures [51], sleep problems [52], abnormal eating behaviors [53], and disorders related to immune function [54], all conditions associated with ASD [55]. Further, both disorders are known to have many risk factors that contribute to susceptibility, including genetic factors [56,57] and environmental factors, such as those relating to parental age, health, nutrition, and other prenatal and postnatal factors [58,59]. The commonly recommended treatment for both disorders is early intervention [60,61], akin to rehabilitation after any other type of brain injury, although the success of the treatment is highly variable in both cases.

ASD and FASD share another important property in common. Both are caused by the exposure of susceptible individuals to drugs that have analgesic properties [62,63] and that are metabolized by the human body into toxic electrophilic compounds. Acetaminophen is metabolized into a quinoneimine [64], and alcohol is metabolized into an aldehyde [65]. The similarities between ASD and FASD provide one line of evidence that ASD is chemically induced. Although this line of evidence does not explain how a single drug can cause a spectrum of disorders, it does show that a single drug is capable of causing a spectrum of disorders. As with the other lines of evidence leading to the conclusion that ASD is induced by acetaminophen exposure in susceptible individuals, this line of evidence alone is not conclusive. Rather, this evidence fits into a larger picture that has, with time, become clear."

Source: https://www.mdpi.com/2075-1729/14/8/918

Expand full comment
Gres's avatar

My main concern along these lines is about construction and safety standards in developing countries. People already cut corners when harsh competition forces them to, and dumb AI will make the possible savings larger and the bad consequences more likely for people who feel they have to make that trade-off. And because the savings are larger, more places will move in that direction. I don't expect people to stay blithely unaware of the risks, or even to underestimate them, but I do think they might accept the trade-off of a 0.1% chance of an employee getting maimed for a $10K saving in design costs, in ways they don't have the option to now.

Expand full comment
Ogre's avatar

I am not a subscriber, but a good summary of the external (and liberal) view of Fascism is that when some leftie journalist asked Mussolini what their program was, he replied that their program was breaking the bones of people like him. Hence the "jack-booted thugs" or "universal evil" or "things liberals don't like" simplified view, because political violence is very much the #1 thing liberals don't like. I mean, the whole liberal project is basically about civilizedness, and that is also true of the very moderate kinds of liberalism - in fact, even of most kinds of, say, Burkean conservatism, as Buckley and others criticized communists because they were violent, not because they were utopian.

From the internal view, what The Doctrine of Fascism says is that there is no doctrine. He says that after WWI they felt like all ideologies - conservatism, liberalism, socialism - were bunk, and they had to improvise something. That would be the internal perspective.

The internal perspective is curiously lacking in references to political violence. I don't know - maybe then and there everybody was violent? I'd be sure the communists were. It was generally a very violent era; random people in France would just attack journalists, etc.

What would Fascism without violence look like? Random improvisation. But better not call that Fascist, since the idea of Fascism is so strongly intertwined with violence. Call it random improvisation.

Expand full comment
Ad Infinitum's avatar

"Because political violence is very much the #1 thing liberals don't like"

Are you talking about classical liberals, or modern American liberals? Because in the latter case, I'm not so sure this is true, given the reactions to (say) Luigi Mangione. Older iterations of American leftism included groups like the Weather Underground. I'm on the left and don't think political violence is always wrong.

I just finished reading Peter Turchin's "End Times", which had a lot of theories about political violence, and the #1 causal factor on the list was what he called "elite overproduction", followed by "popular immiseration". The Weimar government had both, before it was disrupted and the Germans transitioned to fascism. An important part of Turchin's theory is that citizens move in and out of radical postures, correspondingly more/less receptive to violence as a means.

Mussolini's Doctrine of Fascism described the state as a body, a single organism with different parts having different duties (e.g. head = il duce). When it manifests in real time at the national level, the in/outgroup distinctions delimit the entire political environment.

Expand full comment
Nir Rosen's avatar

Classical liberals believe in solving issues by debate and votes, not force.

Expand full comment
Viliam's avatar

You can start with the idea of "no ideology, we are going to improvise", but soon patterns will emerge. You start going in a certain direction, and your followers learn to associate that direction with the movement.

Some movements are more focused on following an ideology, some are more focused on obeying the leader. Actually, there is a bit of both in all: in an ideology-based movement, a powerful leader can bend the ideology towards his personal preferences; and in a leader-based movement, the leader has some relatively stable opinions which his followers will adopt as a de-facto ideology, albeit a very simple one.

> The internal perspective is curiously lacking references to political violence. I don't know. Maybe then and there everybody was violent?

Yeah, people were more violent then compared to now. But also, violent people do not necessarily want to characterize themselves as such. Even if from the outside you are a mere violent brute, from the inside you probably see yourself as a complicated person with noble feelings, and the violence is just the obvious way to achieve your goals.

> What would be Fascism without violence look like? Random improvisation.

At some point you would have to decide whether violence is a part of that improvisation.

Expand full comment
Ogre's avatar

"Some movements are more focused on following an ideology, some are more focused on obeying the leader."

I think you pointed out something important here. In any mass movement, some sort of a coordination or coherence is necessary.

So if you want to keep it democratic - easily replaced leaders with not too much power, etc. - then the coordination is provided by rigid ideological conformity (presbyterianism, wokeness, etc.), a kind of coordination I would call church-like.

Conversely, if you want to keep it ideologically flexible, the coordination mechanism is loyalty to the leader, and the leader is allowed to improvise and be pragmatic, due to everybody having a kind of faith in him making the right decisions. This coordination I would call monarchy-like.

So there is a trade-off between being democratic but ideological, and authoritarian but pragmatic. Ideally I would want democratic and pragmatic, but that is not possible: no coordination mechanism, pure chaos.

20 years ago, when Putin still had a very good relationship with the Western media, he came up with the idea of the flexistate: let's forget this rigid debate between "market good, state bad" and "state good, market bad" and just find the right balance every time. Fine, but what nobody noticed back then was that everybody in Russia must then trust him personally to find the right balance.

Conversely anyone who says something rigid like state good, market bad is basically also saying "feel free to replace me with anyone who believes the same".

One could point out that sometimes you get both authoritarianism and ideology, but that is only partly true, within certain limits.

Expand full comment
Gian's avatar

The standard discussion of Heisenberg's uncertainty principle always shows that measurement of position, for instance, interferes with measurement of momentum.

But then it is interpreted as the non-existence of precisely defined values of position and momentum. Is that not an invalid conclusion to draw?

If something is not measurable precisely, how does that imply that it does not exist precisely?

Then, going to the energy-time uncertainty relation, the standard interpretation allows a theft of energy for a certain time. The same doubts arise there too. And this theft is very significant: the small theft has metastasized into the creation of entire universes, as in the many-worlds interpretation.

Expand full comment
JerL's avatar

Others have said similar things, though I think a little more technically than necessary, but:

"measurement of position, for instance, interferes with measurement of momentum."

isn't the best way to think of what's going on.

Rather, in QM, position and momentum turn out not to be independent properties at all; they are each a different "lens" or "coordinate system" on a deeper, underlying thing that unifies them both: the wavefunction.

It's like if you're trying to give directions to a friend, and you can use one of two coordinate systems: latitude/longitude, or a rectangular street/avenue grid system: saying "meet me at 4th and 11th" isn't independent of saying, "meet me at 34N, 110W" or whatever; whatever point is picked out by "4th and 11th" already has a name in the lat/long coordinate system. So too, saying of a particle, "the position is at (0,.1,31)" isn't independent of saying the momentum coordinates: the "point" on the wavefunction labeled by (0, .1, 31) already has a name in the momentum coordinate system.

The content of the Heisenberg uncertainty principle is that these two coordinate systems have the funny property that, coordinates that are very precise in the position labeling, are not precise in the momentum labeling and vice-versa. It's not that the measurement of position "interferes with" or "disturbs" the momentum measurement; it's that the result of the position measurement, "the particle is right *here*", if you translate into the momentum picture, has the form "the momentum could be here, or here, or here, or here".

So, the reason people use the language of position and momentum "not existing" isn't specifically because of the tradeoff in precision between position and momentum; it's because those two things merge into just two different alternate descriptions of the underlying wavefunction.

Importantly, this is true *even of variables that don't face such an extreme tradeoff*--position and momentum happen to be coordinate systems that have a maximum tradeoff, but even if that weren't the case, the very fact of replacing each of them individually with the underlying wavefunction *already* casts doubt on their individual "realities".

As an example, two other variables that exhibit this Heisenberg tradeoff are measuring the spin of a particle around its x- and y-axis respectively: like with position and momentum, these are maximally "complementary": wavefunction locations whose label is precise in the x- coordinate system are maximally imprecise in the y-coordinate system. But you could also consider the x-axis and an axis rotated .00000000000001 degree away from the x-axis. This new coordinate system will not face extreme tradeoffs in precision like you see with x and y; nevertheless it remains the case that QM insists that these aren't "independent" features of the particle: they are two different coordinate systems for one underlying object, the wavefunction.
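
The spin example above can be checked numerically. Here is a minimal sketch (my own illustration, not from the comment; the rotation angle is an arbitrary small value) using Pauli matrices: a state with definite spin along x gives 50/50 outcomes in the y basis, while a basis for an axis rotated only slightly away from x gives a nearly certain outcome - yet both are just changes of basis on the same state vector.

```python
import numpy as np

# Pauli matrices: spin operators along the x and y axes
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def spin_axis(theta):
    # spin operator along an axis rotated by theta (radians) from x toward y
    return np.cos(theta) * sx + np.sin(theta) * sy

# Eigenvectors (columns, ascending eigenvalue order) of each operator
_, vx = np.linalg.eigh(sx)
_, vy = np.linalg.eigh(sy)
_, vt = np.linalg.eigh(spin_axis(1e-6))  # axis barely rotated from x

up_x = vx[:, 1]  # "spin up along x" state

# Born-rule probabilities of each y outcome given a definite x state:
# 50/50, i.e. x and y spin are maximally complementary bases
probs_y = np.abs(vy.conj().T @ up_x) ** 2
print(probs_y)  # ~[0.5 0.5]

# For the nearly parallel axis the outcome is almost certain, yet it is
# still just another coordinate system on the same state vector
probs_t = np.abs(vt.conj().T @ up_x) ** 2
print(probs_t)
```

The point of the second measurement axis is the one made above: the lack of an extreme precision tradeoff doesn't make the two spin components independent properties; they remain two bases for one object.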

Expand full comment
TakeAThirdOption's avatar

I really understand what you mean.

I think that to grasp what's really going on, one needs to remember that particles possibly do not exist, at least not as what is conveyed by that name. It might be that there are only fields extending everywhere, whose interactions with each other appear, to big interaction clusters of them like us, as there being particles - if all this is true.

Like, if an excited "atom" -- which really might be only some segments of some fields wobbling in some kind of interacting standing excitations -- emits a "photon", then this "photon" can "hit" a surrounding "screen" anywhere, and before it does, it is nowhere, because there is no "photon".

There is just the electromagnetic field everywhere, which has interacted with the other fields whose excitations form the "atom", and can now interact with the "particles" of the "screen" in a way it couldn't before. And if it interacts with "one", then it really interacts with the whole field of which that "particle" is an excitation. Which "particle" it "hits", or whether it "hits" one at all, is absolutely random, but it does so with a certain probability. It's only certain that it won't "hit" "one" before the time has passed that is necessary, light-speed and distance wise.

If the "screen" is a sphere of "steel" around the "atom" then it's unlikely the emitted "photon" gets through, but it might. Because everything in quotation marks isn't really there, in a location sense and in an ontological sense.

Expand full comment
going_hamilton's avatar

> That something is not measurable precisely, how does it imply that it does not exist precisely?

You have the direction of deduction backward. The uncertainty principle is a limit on how precisely defined the position and momentum are, and a consequence of that limit is that measurements of those quantities will also be imprecise. But the limit still exists even in the absence of any measurements.

In more detail, three key points to understand:

- The state of a quantum system is represented by a wavefunction that can be expressed either as a function of position psi(x) or as a function of momentum psi(p)

- The wavelength lambda (distance between peaks) for a wavefunction in position is inversely proportional to its momentum, lambda = h / p. This is called the de Broglie wavelength.

- The probability of observing a quantity when you measure it is given by the square amplitude of the wavefunction. So P(x) = | psi(x) |^2 and P(p) = | psi(p) |^2. This is called the Born rule.

If a particle has a single well-defined momentum, then the wavefunction as a function of position is a single wave with the wavelength given by the de Broglie wavelength. But this wave stretches a long distance in space, so it has no single well-defined position. To make the position well defined, the wavefunction in position must be constructed from a sum of many waves with different wavelengths (and thus different momenta). But then the wavefunction in momentum is now a distribution of values rather than a single well-defined value.

The exact mathematical relationship between the wavefunction in position and the wavefunction in momentum is a Fourier transform. The tradeoff between uncertainty in position vs momentum is exactly analogous to the tradeoff between the spread of a signal in the time domain vs the frequency domain in signal processing.

It's convenient to think about uncertainty or spread in terms of the standard deviation of a probability distribution. We know the probability distribution for position and momentum from the Born rule, so we can calculate standard deviations explicitly from the wavefunctions. If you take a series of measurements of the position or momentum using a group of identical systems, then the sample standard deviation of these measurements will converge to the Born rule standard deviation. But the probability distribution still has an uncertainty or spread before any measurements are taken.
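The Fourier tradeoff described above can be checked numerically. A minimal sketch (my own, assuming hbar = 1 and a Gaussian wavepacket, which is the state that saturates the Heisenberg bound sigma_x * sigma_p = 1/2):

```python
import numpy as np

def spreads(sigma, n=4096, L=200.0):
    """Standard deviations of position and momentum for a Gaussian
    wavepacket psi(x) ~ exp(-x^2 / (4 sigma^2)), with hbar = 1."""
    x = np.linspace(-L / 2, L / 2, n, endpoint=False)
    dx = x[1] - x[0]
    psi = np.exp(-x**2 / (4 * sigma**2))
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

    # Born rule: P(x) = |psi(x)|^2; <x> = 0 by symmetry.
    Px = np.abs(psi)**2 * dx
    sx = np.sqrt(np.sum(Px * x**2))

    # Momentum-space wavefunction via FFT (with hbar = 1, p = k).
    p = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    Pp = np.abs(np.fft.fft(psi))**2
    Pp /= Pp.sum()
    sp = np.sqrt(np.sum(Pp * p**2))
    return sx, sp

for sigma in (0.5, 1.0, 2.0):
    sx, sp = spreads(sigma)
    # Product stays ~0.5 (= hbar/2): narrower in x means wider in p.
    print(f"sigma_x = {sx:.3f}, sigma_p = {sp:.3f}, product = {sx * sp:.3f}")
```

Squeezing the position spread (smaller sigma) visibly widens the momentum spread, while the product holds at the Heisenberg minimum, with no measurement anywhere in the calculation.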

Expand full comment
Gian's avatar

Maths comes later and can be made to fit. The point I am making is that the uncertainty principle is illustrated, or derived, through considering how one may measure the position or momentum of a particle. This shows that if one measures momentum precisely, the position becomes uncertain, and vice versa.

These considerations and thought experiments were discussed at the Solvay Conference by Einstein, Bohr and others of that caliber.

But quantum mechanics claims that the position and momentum of a particle do not even exist exactly.

And my point is "If a quantity can not be measured exactly, does it mean it does not exist exactly?"

Expand full comment
Pete's avatar

For "If a quantity can not be measured exactly, does it mean it does not exist exactly?" nobody is asserting that the latter is *because* of the former, or that there is a logical causality or inevitability here.

That question has some historical significance of how physics advanced. First, we got solid evidence that the quantities can't possibly be measured exactly, which drove a discussion about whether "does it mean it does not exist exactly?" because logically there could be some "hidden variable" that we simply can't measure. However, later - quite some time later - we've become certain that such a "hidden variable" can't possibly be the truth and that "it does not exist exactly", however, it's not because they "can't be measured exactly" but because of other evidence, namely, Bell's inequalities and the experimental testing of them.

Expand full comment
awanderingmind's avatar

The "standard discussion" you are referring to is the pop-sci version (I think it is unfortunately also common in some introductory chemistry textbooks). The mathematically rigorous version is based on the observation that quantum mechanics requires us to introduce operators (or really just matrices, in the finite dimensional context) to represent observables. The rest is linear algebra - if the operators/matrices representing two observables (e.g. position and momentum) don't commute, then they simply, mathematically, can't simultaneously both have definite values.

Of course one can query whether there is some "deeper" reason observables must be represented by operators, or whether there is a sense in which ACTUALLY particles can simultaneously have both a well-defined momentum and position. An immense amount of work has been done on this, and at least experimentally the answer to the latter question still seems to be "no" - you can go read about the violation of Bell's inequalities on wikipedia for insight: https://en.wikipedia.org/wiki/Bell%27s_theorem

As an aside, note that the energy-time uncertainty relation is different, in that time is not an observable in QM, but a parameter.

TL;DR: this is a natural question to ask, everyone who comes across quantum mechanics asks it at some point, there has been a lot of research, and a number of your questions will be answered by seriously studying the literature (you will also gain new ones). https://physics.stackexchange.com has a lot of related questions and answers that might enlighten you.

Expand full comment
Gian's avatar

The measurement problem was discussed by Einstein, Bohr etc. at the Solvay Conference of 1927. The Copenhagen interpretation won the debate -- complementary variables do not exist exactly -- but the thought experiments discussed only showed that complementary variables cannot be measured exactly.

So, there is a jump between measurement and existence.

Expand full comment
awanderingmind's avatar

I am not sure I understand your comment or how it relates to mine. Like many people you seem to want QM to be "explained" by a hidden variable theory (https://en.wikipedia.org/wiki/Hidden-variable_theory). Many people have crashed on that path.

Expand full comment
Jdurkin's avatar

I think it's time you people had The Talk. It can be awkward, but remember this is place where we can support one another through times like these.

https://www.smbc-comics.com/comic/the-talk-3

Expand full comment
awanderingmind's avatar

Love that comic!

Expand full comment
Andrew's avatar

Maybe I'll give the CBTi app a go. Disclaimer: I have only made two apps, never did the legwork to get them on the app store (they just sit on my own phone), and will be relying almost entirely on AI to write it for me. But I was looking for a more involved project, was trying to figure out what 80s-era video game I would remake, and this seems as good as any. I assume if I actually do get something useful, Scott can give it a marketing boost. But I won't bug him unless I actually do get something.

Expand full comment
Melvin's avatar

Why is gentrification in the US such a slow and incomplete process compared to other rich countries?

Take, for instance, the San Francisco Tenderloin. As convenient centrally-located real estate in one of the richest cities on the planet, this should be filled to the brim with rich people, and yet it remains occupied primarily by... let's say the non-rich.

Surely the owners of all those buildings would much rather be charging Nob Hill prices, or selling their land for $100 million an acre, than dealing with crackheads as tenants. Then you've got legions of techies making $500K a year who are nonetheless priced out of the nice parts of the city. What is preventing the normal laws of supply and demand from working here? How does the rich-poor boundary not slowly creep downhill one block per year until the crackheads are priced out?

Expand full comment
Gordon Tremeshko's avatar

Enforcement of criminal law?

Expand full comment
Deiseach's avatar

Because the Tenderloin is what passes for a historical district in San Francisco? (I'm not sneering, it's very hard to have a lot of history when the place was literally flattened in 1906).

There's a sort of mythology around the Summer of Love, the free-thinking unconventionality lifestyle, and the "gayborhood". Old-style "bulldoze the immigrants, gays, and poor people out of it and rebuild it for the rich" activity won't fly in today's climate even if we are now post-woke (or on the decline of the slope after peak wokeness).

Gradual gentrification may be the way to go rather than "well yeah we kicked all the Vietnamese families with kids out so we could build ritzy one-bed apartments for the well-off single techies" direct and rather brutal method. Achieving the same result but more gradually so it's not as easily pushed back against. More cynically, if it's like the 'historic laundromat', there's probably a ton of activist and grifter groups with their hands out for bribes if you want to do any redevelopment, so they clog up the process with objections and planning appeals and you have to pay up or else. That takes time and money to grease the palms and the wheels.

https://en.wikipedia.org/wiki/Tenderloin,_San_Francisco

"It contains the Uptown Tenderloin Historic District. The terms "Tenderloin Heights" and "Tendernob" refer to the area around the boundary between the Upper Tenderloin and Lower Nob Hill. The eastern extent, near Union Square, overlaps with the Theater District. Part of the western extent of the Tenderloin, Larkin and Hyde Streets between Turk and O'Farrell, was officially named "Little Saigon" by the City of San Francisco.

The area has a reputation for crime, homelessness, and open-air drug markets. It is the center of the fentanyl crisis in San Francisco.

The Tenderloin is also known for the families and communities that have lived in the neighborhood. It has the highest concentration of children in San Francisco, with an estimated 3000 children in the neighborhood, mostly coming from immigrant families. The neighborhood includes a Little Saigon, a historically Vietnamese section on two blocks of Larkin Street. The Tenderloin has a rich LGBTQ history, including historic gay bars and a Transgender Cultural District that encompasses the site of the Compton's Cafeteria riot."

Expand full comment
Mercedes's avatar

I remember in the 2000s in Downtown Los Angeles, owners of weekly-rate motels started selling off these buildings for condo conversions. They could do it because the occupants weren't renters, so not beholden to the rent control ordinance. However, the people who used weekly-rate motels downtown were those on the lowest rung of the economic and social ladder. It wasn't pretty. Homelessness skyrocketed. The city had to put a temporary hold on those conversions.

In the same vein, San Francisco can't afford to gentrify the Tenderloin. That would only shift the working-class renters onto the streets. Rent control keeps things stable. And you can't exactly remove the soup kitchens and the homeless services and the SROs in the Tenderloin. SROs are rent controlled. These folks are not going anywhere. Tech bros don't want to live in an SRO or next to one.

That said, I live in the tenderloin. Sure there's dysfunction around, it is still alive with businesses and immigrant families doing their thing. And it is gentrifying slowly. Covid broke the trajectory. And now that there's an AI boom, the rents are going back to the pre-covid averages.

Expand full comment
Ghillie Dhu's avatar

>"The city had to put a temporary hold on those conversions."

The city did not *have* to, it *chose* to.

Expand full comment
Ebrima Lelisa's avatar

This is a good question. If you really want to see really painful and slow gentrification you should see Long island City/Astoria in NYC. Halal chicken shops next to new 8-story apartments.

But I'll agree with the other commenter, it's just rent control. What I'm describing is the strange look of gentrification in progress but not complete yet.

Expand full comment
Melvin's avatar

Thanks to everyone for their responses.

I'm thinking the underlying reason might be the unusual concentration of power that exists in the US at the local-but-not-that-local (ie "city") government level.

Typical US cities are split into multiple "city" governments representing the downtown area and the various outlying areas as separate political units. These units hold unusually large amounts of power, running their own police forces and schools, with control over things like planning and rent control; this lets local governments "choose their own voters", which creates strong incentives for politicians to prevent gentrification in their areas.

Expand full comment
Mark W. Kidd's avatar

I can't tell if you live in the United States from the way you've written this, but your generalization is not accurate. The kind of multi-municipal setup you describe may be somewhat common for large cities of 100,000+ residents, but it is not typical in the United States. Most cities have populations from a few thousand up to a hundred thousand, and it is rare in my experience to have more than two local governments: something like a county or parish and the city/town/township itself.

Expand full comment
Melvin's avatar

Fair enough, I'm mostly thinking about cities of 100K+ residents.

I would consider anything under 100K to be a town rather than a true city anyway.

Expand full comment
Alexander Turok's avatar

Isn't that just rent control?

Expand full comment
Ebrima Lelisa's avatar

Yeah I think this is the answer at least for SF

Expand full comment
myst_05's avatar

Hypothetically asking for a friend.

Situation: an HOA board in Seattle decides to “YOLO”, do an exterior envelope repair at the cost of hundreds of thousands of dollars without filing for a single permit, to address defect findings by an engineer.

Let’s say they get away with it, no one notices the unpermitted work, and it looks ~fine + moisture probes will be dry for at least a while, so on paper the building is now fixed.

Are they… going to get away with it or would anything stop them

At some point?

Expand full comment
beleester's avatar

If you're asking "will a crime be punished if nobody knows it happened," then no, the building inspectors don't have the budget to hire psychics.

If you're asking, can they still get in trouble if the city finds out after repairs are finished, then yes, they can. They can get fined, and the city will want to inspect the work before they issue a permit, which could mean tearing open the walls so that the inspector can check if the work was done properly, and that could be expensive.

Expand full comment
Deiseach's avatar

Someone will probably complain about it when the quick fix is not working anymore, and/or eventually the local authorities will find out about the lack of permits. Then there will be court cases and trouble.

This court case, for example, has been going on nearly twenty years but the council is sticking to it and demanding demolition of the house:

https://www.irishtimes.com/crime-law/courts/2025/08/05/couple-fails-to-stop-demolition-of-co-meath-home-built-in-wilful-breach-of-planning-laws/

"A couple have lost a last ditch legal bid at preventing the demolition of their large Co. Meath home built in “wilful breach” of planning laws almost 20 years ago.

There was “no merit” to the appeals by Chris Murray and his wife Rose, Mr Justice Senan Allen said, when giving the three judge Court of Appeal’s judgment dismissing them.

The appeals concerned an action that, while initiated in September 2022, was “the latest battle” in a 20-year war about the fate of the unauthorised development at Faughan Hill, Bohermeen, Navan.

...After Mr Murray’s 2006 application for permission to build a house on the lands was refused, the couple, “undaunted, and in wilful breach of the planning laws”, built a house anyway of about 588 sq m (6,220 sq ft), twice the size of the house for which permission was refused, the judge said."

Expand full comment
Rothwed's avatar

Hilarious unintentional typo pun in the Mussolini essay:

"If the bourgeoisie - I then said - believe that they have found in us their lightening-conductors, they arc mistaken."

Expand full comment
Doctor Mist's avatar

Not sure it was unintentional…

Expand full comment
Mark Russell's avatar

In an earlier post, Scott wrote that Amish communities were "10/10" on I think the continuum of community building. I hope I have not misrepresented this.

Does anyone on this post know Amish people, and communities, in a personal or at least professional way?

I am familiar with a number of small Amish and Mennonite enclaves not far from me. I do business with many of them, have several of them in my phone contact and enjoy a level of friendship with more than a few.

Do the notions of 'Amish' on this forum have to do with personal experience, or are they an idealized notion (not that there's anything wrong with that)?

Expand full comment
Byrel Mitchell's avatar

I grew up with a few as neighbors and acquaintances, and my dad did business with them. I suspect most people here have less contact than that.

Expand full comment
Mark Russell's avatar

Yeah, that's my assumption, that people are just going by notions gleaned from popular culture.

Certainly lots of things to admire about 'plain' communities, but the limited role for women--not to mention the Taliban-light dress code--is not among them. Nor is the cessation of education past, like, age 12, which seriously limits the ability of children to leave and go their own way.

Mennonites get lumped on with Amish a lot, but they are part of a religion with a continuum of plain-modern living. Consider the double-digit number of Mennonite colleges and universities.

Expand full comment
Mercedes's avatar

There's a popular dating show on social media where men and women pop balloons to indicate lack of interest. It's funny and quite harsh but otherwise illuminating on the nature of male/female discourse. The contestants are mostly urban African American. And a fair fraction of the contestants say they want a partner who loves or 'fears' God. I am not religious, but grew up in fundamentalist circles. I understand and actually advocate for the biblical edict to be 'equally yoked'. I think it is important you marry someone who shares the same faith as you do.

However, the contestants who expressly want a God Lover don't exactly look like God Lovers. Yes, I'm stereotyping crudely here: the women are immodestly dressed, and the men don't seem like serious men of God. And if you asked them about premarital sex and the like, they are just like heathens in that regard (I am guessing here). Moreover, when a (female) contestant who looks ostentatiously like a God lover wants a God lover -- I say this generously -- there are quite a few popped balloons. The men complain that the woman is too conservatively dressed -- reminds them too much of church aunties lol.

And when the odd contestant say outright that they do not believe in God, there is a volley of popped balloons. In a dating show, I find it funny and harmless. But it got me thinking broadly about religious code adherence.

Gang violence and cartel violence are concentrated in communities who profess proudly to be Christian. The Bible Belt has a higher rate of out-of-wedlock births and divorce. I think that intersection of class, race and sex is the main signifier of a lack of conscientiousness, which dictates adherence to religious mores. If your community is marked by low conscientiousness, can religion come in reliably to increase it? It's quite a failure mode to go to mass on Sunday and go on to torture a snitch on Monday.

What is the necessary and sufficient condition for communities to do as they preach? Hasidic Jews, Mormons and the Amish are able to walk the talk. Urban Christians can't?

Expand full comment
lyomante's avatar

cs lewis compared people to factories: if you have two factories making the same widget, and one had a lot of money invested into the building while the other is decrepit and breaking down, their natural productivity differs. It may be a miracle of management that the latter makes widgets at all, while the former, like many rich people, is actually really bad at being productive in terms of stewardship.

people are not widgets, they are factories, and they start with different benefits. A lot of rich christians are worse factories but are insulated from criticism by wealth and power.

also eh at hasidics, amish, and mormons, there are good reasons why people hate when they get too powerful in a region or place. The pharisees were ten times a better religious culture than the losers Jesus hung out with after all, but turns out tight knit religious cultures with high virtue aren't always best at meeting needs at times.

Expand full comment
gdanning's avatar

I think your causation is amiss here. It is those who live in areas of relatively high violence and low social cohesion who are most likely to need to seek community at church and solace through religion.

Expand full comment
Zach's avatar

It makes total sense to want to settle down with a God-fearing person, while still having fun before you find your soulmate.

St. Augustine put it best - "God, give me chastity and continence, but not yet."

Expand full comment
Yug Gnirob's avatar

What kind of madman asks for incontinence?

Expand full comment
Paul Brinkley's avatar

Insert Skeletor "Joke's on you" meme here.

Expand full comment
Loominus Aether's avatar

Just to be clear, "continence" here is closer to "self-control".

Expand full comment
Gordon Tremeshko's avatar

You don't want to know.

Expand full comment
Wanda Tinasky's avatar

I just think this is a class thing. I don't think it has anything to do with religion. Low-class people have always engaged in low-class, violent behavior. Their religion is just an inherited social custom. Calling themselves Christian doesn't make them religious any more than celebrating Christmas every year makes someone religious.

Expand full comment
lyomante's avatar

and high class people make virtues out of participating in worse behavior: the poor man cheats on his wife, the rich man turns that into polyamory and tries to make sleeping around into a virtue, while the poor man abuses drugs and the rich man "takes nootropics."

the weird fetishization of the rich as somehow more virtuous than the poor is not a good thing.

Expand full comment
Capt Goose's avatar

Polyamory and "nootropics" are not worse than cheating and drug abuse. In fact, they are ways of achieving the positive effects of the latter (sexual and romantic excitement, altered state of consciousness) while minimizing the negatives (hurt feelings, addiction, overdose etc.)

Not saying the rich are immune to bad behaviours, obviously not. But those who are more affluent and better educated have on average more capacity to figure out how to achieve the benefits/rewards of certain behaviors while considerably reducing the harm to themselves and others (and it's the harm that makes a behaviour "bad", not the fact that it's "drugs" or "sleeping around").

Expand full comment
Mark's avatar

Interesting take, considering that literally every kind of bad behavior is vastly more common among the poor. The idea of the virtuous poor is just about the most laughable trope ever conceived. Rich atheists are less likely to cheat on their wives than poor Christians.

Expand full comment
Wanda Tinasky's avatar

I'm not suggesting fetishizing anything, but I think that pretending that rich and poor people don't have different characteristics on average is just foolish. Some percentage of the variance in wealth distribution is circumstantial, but some is not. And I agree that some behaviors, like cheating, acquire a different valence depending on class. But others, like violence, actually do vary significantly between classes. Not a lot of bar fights among the 1%.

Expand full comment
lyomante's avatar

no, but they do Epstein Island, and Saudi Arabia is quite the moral paradise isn't it? One wishes they only did bar fights.

oh bar fights...yeah they are worse than leveraged buyouts destroying hundreds of thousands of lives by saddling companies with pointless debt and extracting all the cash they can before bankruptcy.

used to be the rich were not venerated for reasons like that. its not that the poor are sinless, but the glazing of the rich is annoying as hell.

Expand full comment
Wanda Tinasky's avatar

Physical violence has more negative externalities than statutory rape. I would much rather live next door to Epstein than a physically violent meth-head. Saudi Arabia is wealthy because of natural resources and that wealth isn't reflective of high human capital.

The reason you know that the anti-social behaviors of the poor are worse than the anti-social behaviors of the rich is that the poor are poor and the rich are rich. People prefer living in rich areas to living in poor areas. That wouldn't be true if the rich exported worse externalities. They also wouldn't be rich if their behaviors were net-destructive of economic value.

Expand full comment
Capt Goose's avatar

Well, the externalities can be such that they affect those outside of rich areas.

And net-positive economic value can still be achieved problematically (consider slavery).

Expand full comment
Gordon Tremeshko's avatar

Fair point, but you need to remember that at least one class of poor people out there is poor precisely because of a substance use disorder. Plenty of people out there who used to be middle or upper class until they got addicted to meth/heroin/alcohol/whatever and their life went completely to hell.

Expand full comment
Melvin's avatar
4dEdited

The middle class white version of this is wanting a partner who "cares about animals" but not one who is a shrimp-rights vegan activist.

People want a partner who is reasonably moral but not too moral; not a terrible person but also not an insufferable goody-two-shoes.

Possibly in communities with lower levels of ambient morality it's more important to actively shout about whatever virtues you do have. In some communities this might be a cross necklace, in other communities it might be a "In This House We Believe" sign.

Expand full comment
Deiseach's avatar

"The middle class white version of this is wanting a partner who "cares about animals" but not one who is a shrimp-rights vegan activist."

Veering off at a tangent here, but in the "Ironheart" show there was one scene where the Hood (the villain, well one of the villains and arguably not as villainous as the ostensible heroine) is hosting a feast for an unwilling guest and tells him to try the caviar, and the other guy responds that he doesn't eat animal babies.

Well, that probably would be a killer putdown (or not) but caviar is fish roe, which are unfertilised eggs. A more accurate retort would have been "I don't eat animal periods" but (1) that probably wouldn't pass Disney+ streaming censors and (2) it wouldn't hit the same note of unctuous virtue-signalling.

Expand full comment
Andrew's avatar
4dEdited

Remember, being hypocritical is something that tended to help people have lots of kids. People who refused to pay lip service to the tribe's stated beliefs tended to reproduce less, and people who refused to violate those rules for selfish reasons (when they could almost certainly get away with it) also tended to reproduce less. Thus there is lots of hypocrisy.

I think the difference is: does the community actually enforce the rules it claims to uphold? Enough that disobeying causes real inconvenience? Very few churches punish their members in any way whatsoever.

Expand full comment
Mercedes's avatar

Yeah I see it's an elaborate game. The rule is 'Say you're Christian'. I suppose it is the same with presidential candidates. A serious candidate can't say they don't believe in God.

But I think there are social and political circumstances that enable churches to enforce norms. A church can't enforce norms against a cartel lord, for example, without fearing significant violence. In more benign protestant circles, you can go join another church, or literally start another church, if the pastor wails too much about premarital sex. Fair enough. Humans are going to human.

But there's a distinct conservative argument for religion that states it is good for the masses as it dictates a baseline level of ethics. Now I am not sure what's a better state of affairs. Everyone knows the rules but flout them flagrantly or everyone trying to figure out a set of rules for themselves.

Expand full comment
Jim's avatar
4dEdited

I think hypocrisy is actually Christianity's best feature. It allows the religion to grant society all of its social benefits without being dragged down by its prudish and pacifistic trappings. It also works as a sort of mimicry, giving the foreign populations they come into contact with the impression that they are a peaceful and enlightened people, and when they realize the truth, it's far too late for them.

Expand full comment
Jeffrey Soreff's avatar

You have a good point.

I have a nasty suspicion that at least part of the reason that LLMs generally lean left is that they are trained on a _lot_ of the hypocritical _text_ , without the hypocritical reality check (which makes the actual behavior of people much saner than what they and their leaders say and write).

I'm not looking forward to AI agents that act in the world that have swallowed gigabytes of these texts (including Woke, which can be thought of as sort-of a secularized heresy of Christianity in many ways).

Expand full comment
Mark Russell's avatar

If a tv show you are watching is, via entertaining you, lowering your opinion of specific ethnic groups, then maybe it is not such a good--as in moral--show. Also, it might not be an accident that it lowers your opinion of certain groups.

Expand full comment
Wasserschweinchen's avatar

What is the moral principle that you believe is being violated here?

Expand full comment
Yug Gnirob's avatar

I think it's safe to say that by default a dating show is not a moral show.

Expand full comment
Mark Russell's avatar

Yeah, for sure. Watch what you want to watch, I'm not a scold. Just take notice of what a show is leading you to think about groups.

Expand full comment
Performative Bafflement's avatar

> What is the necessary and sufficient condition for communities to do as they preach?

I've got a fun chart for you. If we assume that people back in the 1500s-1800s believed in religion to a greater degree than today, and that a fair number of people in this period faithfully believed in hell, it matters not at all: the "percent of marriages with an already-pregnant spouse" averages around 25-33% for ~300 years, going back to 1550.

In other words, even a literal belief in eternal torment won't keep people from being people.

So I think my answer would be "this is never going to happen in the aggregate; it's always been a 'top 10% or better' phenomenon in terms of intelligence and conscientiousness, and likely always will be."

So if you want that, simply filter by the top 10% or better in your particular faith, and you'll be among them - otherwise, good luck.

https://imgur.com/a/Ag1rYo6

Expand full comment
Mercedes's avatar

Yeah this is fascinating. Isn't this kind of unique to Christianity though? It feels like Islam might have better code adherence.

Expand full comment
Melvin's avatar

Well Christianity is very heavy on the forgiveness thing. Everybody sins infinitely all the time, and everybody gets forgiven infinitely all the time, so the amount you actively sin is just a rounding error.

If you ask your local priest/minister/whatever then he'll tell you that no it doesn't quite work like that and that it's very important to avoid sin, but the version that the Church authorities push is not necessarily the version that people internalise.

(Yes, I'm probably overgeneralising from Catholicism here)

Expand full comment
Neurology For You's avatar

Re: sleep apps, the VA has a free app “CBT-I coach” which is very faithful to manualized CBT-I. It is not bad!

Expand full comment
Eremolalos's avatar

People are talking here about making one. I had the impression that there were already a lot in existence.

Expand full comment
Neurology For You's avatar

I thought it might be helpful to point out the existence of a “pretty good” free alternative a developer needs to beat; also some people seemed interested in checking out currently available options.

Expand full comment
Ebrima Lelisa's avatar

If you missed it, the creator of #3 SheepSleep commented down below that she was open to feedback. After another commenter and I responded, she deleted the comment.

Expand full comment
Tiago Chamba's avatar

It is well known that LLMs are bad at questions like "Which is bigger, 9.11 or 9.9?", "How many r's in strawberry?", etc. The cause of this is also known to be tokenization.

Is it fruitful to draw an analogy between these questions and optical illusions? In both cases, we exploit quirks in how the perception system works in order to force a wrong answer. Are there more aspects of LLMs that this analogy helps explain?
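One way to see the 9.11-vs-9.9 failure mode is that the model has seen "9.11" come after "9.9" in contexts like software version numbers and dates, where the part after the dot really is the integer eleven. A toy sketch of that contrast (my analogy for illustration, not any real model's mechanism):

```python
def as_number(s: str) -> float:
    """Read the string as a decimal number."""
    return float(s)

def as_version(s: str) -> tuple:
    """Read the string the way software versions are read: "9.11" -> (9, 11)."""
    return tuple(int(part) for part in s.split("."))

print(as_number("9.11") > as_number("9.9"))    # False: 9.11 < 9.9 numerically
print(as_version("9.11") > as_version("9.9"))  # True: 11 > 9, the "LLM answer"

# The strawberry question is trivial once you work at the character level,
# which tokenized models don't directly see:
print("strawberry".count("r"))  # 3
```

The illusion-like part is that both readings are locally reasonable; the model just applies the wrong one for the question being asked.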

Expand full comment
EngineOfCreation's avatar

I know it's a cheap shot to make fun of LLMs, but I just can't resist. I asked Google "how many Rs in resurrection?", just to switch it up from the manually tuned "strawberry" question.

At first I was surprised that it got the answer right on the first try, but to my dismay, I continued to read:

https://i.imgur.com/G7XFAxS.png

Expand full comment
Deiseach's avatar

That certainly was a very special answer 😁

Expand full comment
Shankar Sivarajan's avatar

If you ask Gemini such questions directly (instead of through the regular Google search), I've found it gets it right without stupid mistakes. The amusing thing is that it writes and runs Python code for it.

`word = "resurrection"
count = word.count("r")
print(f"The number of 'r's in the word 'resurrection' is: {count}")`

Expand full comment
Taymon A. Beal's avatar

People should be aware that the model that appears on the Google Search results page is not Gemini, or at least not the same Gemini that you can talk to in the chatbot app (I think maybe all Google's proprietary LLMs are called Gemini but I've lost track of the nomenclature). It's a much smaller, faster, and dumber model, because running the real Gemini on every Google Search query would get real expensive. The reason it appears to know things is because it is reading and summarizing the search results; it is not intended to be relied on as a source of general knowledge beyond what you'd find in those results.

Expand full comment
Shankar Sivarajan's avatar

Huh, I assumed it was the latest Flash model (which seems to be free and unlimited). I'm actually not sure it isn't, just with a different system prompt, and a lower cap on reasoning tokens.

Expand full comment
Taymon A. Beal's avatar

It might be Flash-Lite, which is not available in the chatbot app (because the only reason to use it is if you're paying per query). Flash is free *for end users of the chatbot app* but API users have to pay for it and it costs more than six times as much per output token as Flash-Lite. Presumably Google is internally making decisions similar to how an API user would.

Expand full comment
Shankar Sivarajan's avatar

Huh, I didn't realize there was such a model publicly accessible. Thanks!

Yes, I agree that's probably what it is.

Expand full comment
beowulf888's avatar

Ha! The reurection of Jebus!

Expand full comment
megamannequin's avatar

I suppose it comes down to whether you think that these questions for LLMs and optical illusions for humans are an artifact of training data or architecture.

LLMs operate by sequentially sampling a token out of a distribution conditioned on the previous text it has seen and generated. For your example, this process doesn't trigger a boolean "is 9.9 > 9.11?" operation but rather something like: P("9.9" | "Which is bigger, 9.11 or 9.9?"). Folks with experience in LLMs will know that this isn't entirely correct, but for the sake of scientific communication it is mostly correct.
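That sampling step can be sketched with a hypothetical two-token vocabulary and made-up scores (nothing here comes from a real model; it just shows that the mechanism is "score and sample," not a comparison operation):

```python
import math
import random

# Hypothetical candidate answers and pretend model scores (logits) for
# P(token | "Which is bigger, 9.11 or 9.9?"). These numbers are invented.
vocab = ["9.9", "9.11"]
logits = [1.2, 2.0]

def softmax(xs):
    """Convert raw scores into a probability distribution."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)
answer = random.choices(vocab, weights=probs)[0]
print(answer)  # stochastic: "9.11" comes out more often because its score is higher
```

Nowhere in this loop is there a `9.9 > 9.11` check; the "answer" is whatever token the conditional distribution favors.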

I'd argue this is an architectural problem. Sure given an infinite amount of training data perhaps this (usually stochastic) sampling operation could mimic the boolean operation perfectly, but the formulation of the model is not conducive to supporting that.

On the other hand, my understanding of optical illusions is that they are based on exploiting our brains making "processing shortcuts" they have learned based on visual data from the real world. I am not a Neuroscientist, but I'd hypothesize that if we had things that looked like optical illusions in the real world, our brains would learn to make them not optical illusions- hence it is a training "problem."

Expand full comment
Taymon A. Beal's avatar

Mirages exist and have pretty direct implications for fitness (falling for one can kill you), so that latter argument seems to prove a bit too much. Something like it seems likely to be true, though; the question is just what differentiates the two kinds of cases.

(Maybe it's the thing where psychological adaptations have to be universal, which means they could only have happened far back enough that humans weren't yet exploring any deserts? But I'm not sure that's right.)

Expand full comment
1123581321's avatar

A mirage is not an optical illusion: there's an actual reflection happening off of rising hot air, which looks exactly like distant water.

Expand full comment
Taymon A. Beal's avatar

This is true, but it's optical-illusion-like in the sense of looking like something different from what it is due to weird perspective stuff.

Expand full comment
1123581321's avatar

There's truth to that. I think the main difference is that a true optical illusion (for example, https://en.wikipedia.org/wiki/Checker_shadow_illusion ) does not at all have the "image" we "see" (in this case different shades of the squares), and, even knowing that, we can't "unsee" it. A mirage image is real - there's a real reflection of the sky off of the ground-level hot air, and it's only our interpretation that this must be water because that's what water looks like. Once one learns about mirages, one can easily adjust - "I see this reflection that looks like water, but I know it's not water".

BTW I don't know where you live, in these here very-hot United States it's quite common to see mini-mirages on highways during summer months, they look like puddles of water just ahead, and of course one immediately knows it's not water because it hasn't rained in weeks and it's 95F and sunny, and the "puddles" keep disappearing on approach.

Expand full comment
Taymon A. Beal's avatar

Yeah, I see those occasionally when on highways, though that's not all that often (I live in an inner-ring suburb of a city with decent transit by U.S. standards, and don't regularly drive).

I'm not sure the ease of recognizing mirages has much to do with their being physical rather than cognitive; surely it's not that hard to recognize a cognitive visual illusion* either, once you know what to look for?

It's possible that I'm overstating how historically common it was for desert travelers to be fooled into dangerous situations by mirages; I thought it was a thing that sometimes happened (though of course desert-nomad cultures would have known about it and warned their children), but I can't find much information about the topic on Wikipedia.

* The kind of thing referred to as an "optical illusion" in the above comments. Wikipedia, at least, defines the term "optical illusion" broadly enough to include mirages.

Expand full comment
megamannequin's avatar

Isn't the mirage an example of a training problem though? If I have seen a mirage before or know they can happen in the desert I'm much less likely to die thinking it's water. If you'd never seen a large body of water before- you'd just think it's a mirage.

I don't have any mirage-fatality statistics on-hand, but surely a Sub-Saharan trader is much less likely to die from one than an unaware, thirsty 19th century British explorer.

Expand full comment
Alexander Turok's avatar

What do Michael Bloomberg, Zohran Mamdani, and Nick Fuentes have in common? All are successful people who worked hard to get where they are. All are subject to a never-ending, high-pitched stream of "it's not faaaaiiiirrrr." This similarity is increasingly important as one side of the political spectrum becomes defined by the glorification of mediocrity.

Let's start with Fuentes. A while back he said, in a video I was not able to find, that he was frustrated with the people in his movement who act like he was given his position on a platter, when really he got it because he put in the effort and was good at drawing attention to himself. Perhaps this accounts for his increasing centrism - a common arc for far-right leaders. He's attacked conservatives for being "openly hostile to all the good things about liberals" and being "low-IQ hillbillies who take pride in being simple and hate the rich,"[1] and is now supporting Gavin Newsom for President.

Onto Mamdani, on track to become the mayor of eight million people at 33 years old. An intelligent conservative movement would look at his success and ask how they can emulate it. Instead, one influencer on X called him a "failson"[2] perhaps because he spent his early adulthood taking low-income political jobs that put him on the path to be Mayor of NYC, instead of being the dentist who works out of a New Jersey strip mall. Another attacked him for never having "done a single day of real, hard work" the way guys on a construction site do.[3] If accurate, it seems to have worked out for him. Maybe conservatives should try to emulate his success, put their kids on the path to take political power and undo Mamdani's harm, instead of going on about how much better they are because they hauled cement that summer. Another Tweeter bitched that the attendees at his recent scavenger hunt "have the same vibe as Disney adults."[4] Instead of telling Republicans how they can replicate a successful voter mobilization strategy, she'd rather they smugly feel superior and then lose elections.

Does all of this matter? I argue it does. Our "pro-capitalist" party could be talking about how much richer West Germany was than East Germany, how capitalism can make you rich, too. Instead, it's telling people they should be proud of being poor. The Mike Bloombergs of the world don't like Mamdani's ideology or policies, but he doesn't give off the stench of resentful loserdom the way so much of the Right does. The result will be a Right that finds itself increasingly struggling to recruit competent people and raise money. On the bright side, this dearth of EHC provides an opportunity for high-IQ, high-agency people who are willing to hold their nose and enter Republican politics.

1. https://x.com/FuentesUpdates/status/1908187813117411525

2. https://x.com/feelsdesperate/status/1955655602031734888

3. https://x.com/michelletandler/status/1955648799071683055

4. https://x.com/InezFeltscher/status/1959996136279712162

Expand full comment
lyomante's avatar

It's weird to boost capitalism just as it is developing AI to replace labor costs with capital costs, passing the gains along as profits to AI companies and as savings for businesses to reinvest in stocks, to the point that people think universal basic income may be needed for lack of jobs.

Oh, and housing and healthcare are becoming ruinously expensive, or even cars: we sure have 60k + Teslas though!

The left was stupid and chose identity politics for a quick win over criticizing capitalism, but capitalism only worked in the mythic sense when combined with strong moral virtue and empathy, and it barely worked then. We are probably starting to return to a more bifurcated society thanks to it, one that concentrates its benefits on a small number of people rather than benefiting everyone.

Expand full comment
Alexander Turok's avatar

High housing prices are the fault of NIMBYs, not capitalism and the cheapest Teslas can be had for 44k, other automakers sell even cheaper electric cars.

Expand full comment
lyomante's avatar

Housing is high in many places outside cities with NIMBYs; I don't know if it's due to private equity buying en masse or what, but there are more than a few places that have plenty of room to build and still have expensive rates.

44k is still expensive as hell. This generation doesn't care about cars as much because they can't afford used ones or the various fees and upkeep. You see mopeds, e-bikes, and e-scooters more and more.

Like, your point would be valid in the 80s, but capitalism is not about employing people now; it feels more like releasing minimum viable products with skeleton staff to create wealth reinvested in paper.

Expand full comment
Neurology For You's avatar

Isn’t Fuentes mostly a Twitter gadfly? He doesn’t have a party or movement, and he thrives on online beef.

Bloomberg and to a lesser extent Mamdani have real world achievements to point to.

Expand full comment
Anonymous's avatar

Wait, you think the Left gives off *less* of a stench of loserdom than the Right? In this country, the United States of America?

If that's a sincere good-faith take, it really exposes the extent to which people (myself included, which I know won't go without saying even though it should) can be ideologically bubbled.

Expand full comment
Deiseach's avatar

Nah, this isn't about politics, it's to give Alexander a chance to bang on the drum he always bangs on: manual labour only for stinky poor proles, stinky poor proles icky, be an aspiring PMC person like me and keep chasing vainly after that social approbation of one's betters!

Expand full comment
Nancy Lebovitz's avatar

How much of this is typical social media, where some people get attention for how much hostility they dump?

Expand full comment
Melvin's avatar

Are you sure you're not confusing "it's not faaair" critiques with "this person is 33 and has no experience running anything significant, as a practical matter he is unlikely to do well suddenly being put in charge of a huge city"?

Expand full comment
Deiseach's avatar

I think Mamdani is hobbled by his association with the Democratic Socialists of America (God bless and keep 'em). They put forward extreme policy opinions which I don't think he shares, or at least not to the same extent, and this gives hostages to fortune for the right to pluck quotes and say he's going to try and do this crazy thing.

Mamdani is the white collar champagne socialist type that will get elected (as we saw with the primary) in the same way that Alexandria Ocasio-Cortez got elected. I wonder if he can emulate her longevity, though; if he loses the election for mayor, where does he go next? Find a seat next to Andrew Yang and Beto O'Rourke in the gallery of never made it?

I'm not as impressed as Alexander by scavenger hunts, but eh, this is the kind of campaign trail stunt that seemingly every politician has to engage in. I do think that Cuomo is unelectable, but Adams? I wouldn't be surprised if he managed to pull it off. Okay, the polls are giving Mamdani a good lead, but weird things happen at elections.

The Republican candidate is the most sensible sounding, but it being NYC, not a snowball in Hell's chance of getting elected:

https://www.fox5ny.com/news/nyc-mayoral-campaign-polls-august

"Sliwa, the Republican nominee, sounded exasperated as he campaigned at the Santa Rosalia festival in Bensonhurst. "Lifting weights, going on scavenger hunts, eating potato chips — anything to distract the voters," he said, urging rivals to focus instead on affordability and crime."

Expand full comment
Alexander Turok's avatar

If Sliwa gives a s*** about his city he needs to drop out and endorse Adams.

Expand full comment
Nir Rosen's avatar

Probably.

Expand full comment
Wanda Tinasky's avatar

The right is just following the left's lead into identity politics. They very much used to make ideological arguments in favor of capitalism but the last 20-30 years have demonstrated that tribal politics trump well-reasoned arguments so the right looked around, realized that white people still comprise a majority in this country, and so decided to go with that.

Expand full comment
Tatu Ahponen's avatar

Do you think that the idea of right-wing tribalism is only 20-30 years old?

Expand full comment
Wanda Tinasky's avatar

I think white racial tribalism died in the aftermath of Civil Rights. I think progressive identity politics resurrected it and Trump is now capitalizing on it.

Expand full comment
Alexander Turok's avatar

Other than talking about a prominent white nationalist my comment had nothing to do with race.

Expand full comment
Wanda Tinasky's avatar

I never said it did. Nevertheless my answer to your question *did* involve race.

Expand full comment
Jim's avatar

> Onto Mamdani, on track to become the mayor to eight million people at 33-years-old. An intelligent conservative movement would look at his success and ask how they can emulate it.

...He's a populist. The right's already doing that, they don't need to copy him. He still needs to be attacked, because he's opposition.

Expand full comment
Deiseach's avatar

"What do Michael Bloomberg, Zohran Mamdani, and Nick Fuentes have in common?"

They haven't demonstrated they can bench press 135lbs? Well, I don't know about Fuentes, but I do about Mamdani and (probably) Bloomberg:

https://thehill.com/homenews/campaign/5468735-adams-cuomo-mock-mamdani-bench-press-effort/

https://nypost.com/2025/08/25/us-news/riley-gaines-latest-to-mock-zohran-mamdanis-failed-bench-press/

Expand full comment
Crinch's avatar
3dEdited

Mamdani could do pushups for a month and be able to fix this non-problem. His political opponents, on the other hand, cannot un-grope those women or un-spend those bribes.

Expand full comment
Peteski's avatar

Reading the Praise/Prompt Machine: An Interface Criticism Approach to ChatGPT

https://dl.acm.org/doi/full/10.1145/3744169.3744194

Expand full comment
Jon Simon's avatar

A common explanation for political polarization in the US is that most congressional races are non-competitive, and therefore politicians only need to worry about losing to someone more radical from their own party, leading to them preemptively becoming more radical themselves.

But this argument seems like a leap in logic. Even if they only need to worry about same-party challenges, why do they only need to worry about more-radical challengers? Why don't they also need to worry about more moderate same-party challengers who would force them in a less radical direction?

Expand full comment
Taymon A. Beal's avatar

Because primary voters are further from the center than the general electorate.

Expand full comment
Boris Bartlog's avatar

Fair point, but then we might expect the average winner to be pretty close to the preferences of the primary electorate, at which point it should be equally easy to challenge them from either direction.

In practice I think that electability concerns and some kind of influence from pragmatic centrists might make the candidates tend to be more centrist than the primary electorate would really prefer, though. And in that case it would play out as described.

Expand full comment
Shankar Sivarajan's avatar

There's some inertia to one's political positions, so I'd expect the average incumbent to be closer to the center than the primary electorate, and so vulnerable to challengers from further outward.

Expand full comment
Luomei's avatar

Hey all, I'm Luomei Lyu, the founder/dev of Sheep here!

Sheep has been tested and loved by many sleep physicians and their patients: https://www.gnsheep.com/case-studies

We are actively working on improving the conversation experience on poor internet.

Here is how it was made: we had the best psychologists in the country talk us through every conversation they would have for each insomnia case, down to the exact word choices, analogies, and conversation pace used when teaching these well-validated techniques to their patients. We then wrote hundreds of pages of instructions for generative AI to follow in its weekly sessions, so that it only responds with human-crafted material while keeping the conversation collaborative, interactive, and highly personalized. This kind of dynamic, personalized dialogue is what makes in-person sessions effective at improving adherence, which remains the biggest challenge in this gold-standard, first-line treatment.

I truly believe Sheep to be the best digital sleep program based on CBTI, and so do our users.

Expand full comment
Eremolalos's avatar

But are you willing to hear and respond to feedback here? If not, then your post is basically an ad and you should delete it.

Expand full comment
Luomei's avatar

I’d be extremely grateful for any feedback! Good and bad.

Expand full comment
beowulf888's avatar

Your case study section seems rather sparse. I'd like to see the raw data behind statements like 90% report significant sleep improvement. N equals how many patients/customers? And what's your baseline criteria for significant sleep improvement? And how do you measure it in your patients/clients? The same goes for all the other stats you present. Sorry, but the way you market your programs looks suspiciously vague.

Expand full comment
Luomei's avatar

Thanks for the feedback! The preliminary data were collected from N = 15 paying users of Sheep-Sleep who on average completed 5 sessions. Participants ranged in age from 19 to 72 years, including 6 men and 9 women. Daily sleep patterns were tracked using the Consensus Sleep Diary questionnaire, which is about 14 questions that users fill out every morning. You can check it out in the app.

We have a publication from an IRB-approved research study coming soon, and I will update with the detailed data then.

Expand full comment
Eremolalos's avatar

No control group? And it would have been easy to come up with an alternative treatment that was plausible — say an app that describes kinds of soothing sounds it could play for the user, and how it could customize them, and let the user choose which to have. And your subjects had *paid* for the app? That’s likely to influence their judgment of its effectiveness. What you are describing is not research.

Expand full comment
Rachael's avatar

So 13 and a half of those 15 people reported significant sleep improvement?

Expand full comment
Eremolalos's avatar

Yeah the guy who got cut in half died and the half with the head slept great after that. (Other half unfortunately continued to suffer from restless legs and had to be considered a treatment failure. Well, half a failure.)

Expand full comment
Spruce's avatar

> fascism is when you do things liberals don’t like

And things historians don't like, but they have proper arguments to back up their position: https://acoup.blog/2024/10/25/new-acquisitions-1933-and-the-definition-of-fascism/

Expand full comment
WoolyAI's avatar

This is not a proper argument. It is almost certainly intentionally misleading. This does not take long to establish.

Two quotes:

"Instead, Hitler gained power not because a majority of Germans agreed with his aims, but because key leaders, most notably Franz von Papen, thought they could use Hitler to achieve their aims, that they could sand off all of the nasty rhetoric and instead employ Hitler as a cudgel (against the socialists)."

"What I want to note here are two key commonalities: First, fascists were only able to take power because of the gullibility of those who thought they could ‘use’ the fascists against some other enemy (usually communists). Traditional conservative politicians (your Mitch McConnell and Lindsey Graham types) and conservative business leaders (your Elon Musks) fooled themselves into believing that, because the would-be tyrant seemed foolish, buffoonish, and uneducated that such an individual could be controlled to their ends, shaped in more productive, more ‘moderate,’ more ‘business friendly’ directions."

This is misleading because it misframes the decisions of German moderates and conservatives by downplaying Communist violence at the time. The Nazis during the Weimar Republic were certainly violent, and they advertised their own street violence, but every contemporary account I've ever read has Communists being just as violent, if not more so, in the streets, and it takes less than five minutes to find documented mass violence by communists in this period, such as the Reichstag Bloodbath.

Like, you can just read a list of political violence here:

https://en.wikipedia.org/wiki/Political_violence_in_Germany

A lot of political violence by a lot of different actors.

Second, German moderates and conservatives were not choosing between the Nazis and, like, modern Scandinavian socialists. Hitler's '32–'33 rise is contemporary to Stalin's genocide of the Ukrainians in the Holodomor. There's a very small list of people who can claim to be as evil as Hitler, but Stalin is right there at this time, at his most powerful and most Stalin. I haven't read too much on von Papen in particular, maybe he was genuinely naive, but other contemporary accounts are very clear and strident in their denunciations of contemporary Communism, and Hitler's anti-Communism shines through "Mein Kampf" almost as strongly as his antisemitism.

Maybe Devereaux has insight here that I lack but I doubt it. From memory, he's primarily a classicist (like, Rome and Athens and stuff) and when he's stepped outside that field into areas I have some above-average familiarity (like Qing history) he has not acquitted himself well. I suspect that's what he's done here.

Expand full comment
Anonymous's avatar

"Maybe Devereaux has insight here that I lack but I doubt it."

You're right to doubt him. He at least used to be pretty open about the fact that ACOUP was a political project, meant to craft counternarratives to what he saw as effective right-wing memes (e.g. his Spartan series has nothing to do with providing a balanced account of the military efficacy of Sparta and everything to do with trying to puncture a "chud" image of it), and the fact that he's constantly cited as some sort of high-quality neutral source on ACX is exasperating.

Expand full comment
Rothwed's avatar

It's one of the pillars of human societies that they will sacrifice practically anything to end chaos. Weimar Germany had as much of a problem with violent Communist activists as it did with their opposition, amid a backdrop of economic crisis. Hitler and the Nazis were in a very real sense the ones fighting back against the communists.

An example of this is what happened with BASF, which I read about quite a bit in relation to the history of the Haber-Bosch process. It's important to note that Bosch *did not* like the Nazis. Many of his friends and coworkers, being largely engineers and chemists, were Jewish. He had one meeting with Hitler, and it went so poorly the board forbade him from ever doing it again. But one of the BASF plants was taken over by Communist agitators armed with guns and explosives. In the ensuing shootout with the police, people were killed and a lot of damage was done. There was a real sentiment in Germany at the time that something had to be done about the Communists, and Hitler had shown he would do something.

Also, Deveraux is a Marxist. Maybe not in the strict political sense, but in the philosophical sense that class conflict is the core axiom of human relations. I would believe him instantly on the topic of how much grain a Roman army could carry, but would take his political musings with a hefty dose of skepticism.

Expand full comment
EngineOfCreation's avatar

What the hell are you talking about? You are blaming the Reichstag bloodbath on the communists? It was paramilitary police killing socialist protesters with machine guns and hand grenades...

> Independent and communists, on the other hand, emphasized that the shooting had been done for no reason and without warning. It is unclear whether the warnings existed. Almost all the dead and injured were found south of the Reichstag, on the opposite sidewalk and in the adjacent zoo, according to reports from various sides. There, on Simsonstrasse, the crowd was at least four meters away from the police. So there were no violent attacks during the storming of the building. Most of the victims were hit here. After the shots broke out the crowd fled in panic, the Sipo fired several more minutes with their rifles and machine guns. Nowhere in the sources claims that demonstrators would have been shot back.

https://en.wikipedia.org/wiki/Reichstag_Bloodbath

Expand full comment
Jim's avatar

Okay, and the modern equivalent of communists is the woke. What's your point?

Expand full comment
Gunflint's avatar

I think it’s kind of silly to split hairs arguing whether something meets Mussolini’s definition of fascism.

In “Fascism — A Warning” Madeleine Albright argued that the term has come to have a general sense. Written during Trump’s first term, she wasn’t saying he was a fascist then, just giving a heads up.

“In the book, I try to argue that fascism is not an ideology; it’s a process for taking and holding power. A fascist is somebody who identifies with one group — usually an aggrieved majority — in opposition to a smaller group. It’s about majority rule without any minority rights. Which is why fascists tend to single out the smaller group as being responsible for or the cause of their grievances.

The important thing is that fascists aren’t actually trying to solve problems; they’re invested in exacerbating problems and deepening the divisions that result from them.

They reject the free press and denounce the institutional structures within a society — like Congress or the judiciary.

I’d also add that violence is a crucial element of fascism. Whatever else it is, fascism involves the endorsement and use of violence to achieve political goals and stay in power. It’s a bully with an army, really.”

A bully with an army was the key point.

—————

October 14, 2024 — CNN

“Former President Donald Trump suggested using the military to handle what he called “the enemy from within” on Election Day, saying that he isn’t worried about chaos from his supporters or foreign actors, but instead from “radical left lunatics.”

“I think the bigger problem are the people from within. We have some very bad people. We have some sick people. Radical left lunatics,” Trump said told Fox News’ Maria Bartiromo in an interview on “Sunday Morning Futures.”

“I think it should be very easily handled by, if necessary, by National Guard, or if really necessary, by the military, because they can’t let that happen,” he added.”

https://amp.cnn.com/cnn/2024/10/13/politics/trump-military-enemy-from-within-election-day

Expand full comment
Jeffrey Soreff's avatar

>saying that he isn’t worried about chaos from his supporters or foreign actors, but instead from “radical left lunatics.”

In the light of burning Tesla dealerships, Trump wasn't wrong on that particular point.

Expand full comment
agrajagagain's avatar

Which Tesla dealerships burned? Where? When? Please give details and specifics.

And while you're at it, maybe say a few words as to why you consider the integrity of car dealerships more important and meaningful than the literal capital of your nation.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks for your comment! Re

>Which Tesla dealerships burned? Where? When? Please give details and specifics.

Here is an AI summary, which seems like a reasonable level of detail in response to a one-line isolated demand for rigor:

There have been multiple arson incidents at Tesla dealerships, with notable locations including Mesa, Arizona (April 2025), near Rome, Italy (March 2025), and in Las Vegas, Nevada (March 2025). Other attacks involving vandalism and shots fired at Tesla properties occurred in places like Tigard, Oregon, and Dedham, Massachusetts, in March 2025.

Specific Incident Locations:

- Mesa, Arizona: A fire was set at a Tesla dealership and service center, appearing to be an act of vandalism with graffiti on the building.

- Near Rome, Italy: A large fire destroyed many Tesla vehicles, which Elon Musk described as terrorism.

- Las Vegas, Nevada: A sales and service center was targeted, with five Tesla vehicles damaged by fire and gunshots.

- Tigard, Oregon: A dealership was targeted with more than a dozen shots fired.

- Dedham, Massachusetts: Multiple Teslas were vandalized with spray paint and damaged tires.

Context:

These incidents occurred primarily in March and April 2025, following a pattern of vandalism and arson against Tesla properties across the U.S. and elsewhere.

The attacks sparked investigations by law enforcement and resulted in federal charges against some individuals involved in the U.S. incidents.

>And while you're at it, maybe say a few words as to why you consider the integrity of car dealerships more important and meaningful than the literal capital of your nation.

I applaud the governors who sent their national guards to help combat the longstanding criminal activity in Washington D.C. While not rising to "murder capital", the portions of it other than the core government areas have at least been in the "bad neighborhood" category for many decades.

Expand full comment
Gunflint's avatar

Criminal activity in DC?

https://m.youtube.com/watch?v=2rlbQNV2GcI

Expand full comment
Rothwed's avatar

You can of course change the meanings of words, but Fascism was in fact a specific ideology of a specific time and place. Namely Mussolini and the Fascist Party in Italy; possibly extending to Spain under Franco and the Iron Guard/Antonescu in Romania. If you want to talk about "a bully with an army", we already have words for that, like dictatorship. Using Fascism the way you want strips the historical meaning and only leaves "people doing bad things". We already have words for people doing bad things! Using Fascism as a synonym for this degrades it to a largely useless term, much like what has been done to Nazi.

Expand full comment
agrajagagain's avatar

"You can of course change the meanings of words, but Fascism was in fact a specific ideology of a specific time and place. "

I don't think I've ever seen the goalposts moved *preemptively* before. That's impressive. "It can't possibly be considered fascism unless it's one of the examples that historians have definitively classified as fascism (once they were over)" is an exceptionally convenient position for people who like the *content* of fascist policies, but don't like the *word* being applied to them.

Do you apply this philosophy evenhandedly? Can the labels of other political ideologies--communism, socialism, liberalism, ethnic nationalism--*also* only be applied after the fact? Or is fascism just special like that? Most humans find categorization useful for helping them understand *new* things, like events that *haven't* yet played out fully, and have at least *some* willingness to assign them to existing categories when they're a reasonably good fit.

Here, let's taboo the words "fascism" and "fascist." What specific policies and programs do you think were bad about Mussolini's Italy, Franco's Spain and Hitler's Germany? What would you consider the bright lines that they crossed that you wouldn't tolerate your government crossing?

Expand full comment
Rothwed's avatar

Sure, there are certain things the fascists did that could apply to contemporary politics. But my point is people need to do their homework and actually relate that to actions that actual fascists actually took. As Orwell wrote, fascism was diluted as a political term, signifying little more than a pejorative for something undesirable - and that was 1944!

It's also difficult to pin down because fascism was never really defined in policy terms unlike communism or liberalism. Mussolini writes a lot about vitality and energy, and how the State must be a totalizing institution that is the moral center of society. What does this mean in practice? There are also a lot of contradictory bits. The various fascist manifestos called for women's suffrage and abolishing the Catholic church to seize their wealth. Yet when Mussolini seized power, he allied with the church and ended free elections.

One of the big pillars of real fascism was collusion between the state and corporate interests, usually in the form of workers' unions. There was a lot of overlap between early socialist movements and the foundation of fascism. They implemented policies like a standard 8-hour work day and expanded the social security system in Italy. Somehow I doubt anyone gets called a fascist for expanding social safety nets, though. For a real contemporary example, consider the Intel stock purchase. Trump rolled a bunch of funds over to buy 10% of Intel's stock - this sort of state-corporation cooperation could realistically be called fascism. Just like the initial allocation of funds to Intel in the CHIPS Act.

What people didn't like about fascism was the strong-arm authoritarian tactics. Which is fine and totally reasonable, but we can just call those politics authoritarian without having to drag fascism into it. Because there was a whole political platform that often has nothing to do with the way people use the term fascist. They just pick out the bad bits to use it for extra emotional valence. It's like defining communism as something that causes famine and sends people to the gulag. Yes, those were both bad things that happened under communism. But that's hardly a useful definition.

As a final note, the Nazis weren't exactly fascists. Yes, they had a lot of authoritarian and corporatist collusion aspects in common. But the defining racial animus in Nazism was not a fascist thing at all.

Expand full comment
Nir Rosen's avatar

Most of the time, you can reduce an ideology to a single sentence, the core. Of course, this leaves a lot of room for debates, implementations, etc.

Socialism : "State Ownership of Means of Production".

Monarchy: "I got a mandate to rule by Heaven".

Democracy: "Rule by the People".

Liberal Democracy "Rule by the People, by elected representatives".

Fascism : "The State is above all".

Expand full comment
Wasserschweinchen's avatar

That sounds like representative democracy rather than liberal democracy? The latter I'd describe something like "democracy with civil liberties". I think our definitions might disagree on e.g. Switzerland and Russia.

Expand full comment
beowulf888's avatar

But you don't think that Trump, in his bumbling, senile way, is following the Fascist playbook that Mussolini and Hitler developed and implemented?

Expand full comment
Turtle's avatar
4dEdited

Not really? The key thing about Hitler that was really bad wasn’t the specific political ideology, it was that Hitler started the bloodiest war humanity has ever seen. Trump on the other hand has been working for peace in Russia/Ukraine, India/Pakistan, Azerbaijan/Armenia, Cambodia/Thailand, Congo/Rwanda and the Middle East. He’s been nominated for the Nobel Peace Prize 12 times. If this is “fascism” I would like more please.

Expand full comment
gdanning's avatar

Hm, he certainly has a strange way of working for peace in the Middle East. https://en.wikipedia.org/wiki/United_States_strikes_on_Iranian_nuclear_sites Not to mention that he certainly doesn't seem to be doing much to ensure that people in Gaza have enough to eat.

Regardless, in 2020 the Nobel Committee gave the prize to the World Food Programme. They aren't going to give it to someone who gutted USAID. Nor to someone who pardoned war criminals.

Expand full comment
Remysc's avatar

Spain did none of that, as a matter of fact, Spain stayed the hell away from the war because it had its own country to care about. Franco still got a pretty bad rep.

A totalitarian rule that controls culture and thought with no recourse from the public is considered demeaning to the human experience, it has a pretty horrible track record.

Also, when the public shows concern that this or that party might be fascist, do you think that's where the concern comes from? The possibility of them starting a bunch of wars?

Expand full comment
Turtle's avatar

I would think so, yes. I mean, Hitler was fascist and Franco was fascist, but while Hitler is remembered as the greatest villain of all time, Franco wouldn’t even crack the top hundred. Arguably there are many current world leaders such as Khomeini or Netanyahu who are worse than Franco.

Expand full comment
Viliam's avatar
3dEdited

> He’s been nominated for the Nobel Peace Prize 12 times.

How many times by himself or his followers? Does anyone else take his "work for peace" seriously? (Well, maybe Putin approves of the idea that peace = he can keep the territories that he has conquered so far, and take a break to fix the economy and get prepared for the next wave of attacks.)

I agree that it is unusual for a fascist leader to have a fetish for Nobel Peace Prize. On the other hand, even Hitler didn't start a war immediately after he was elected. The first step is always to consolidate the power at home. The tanks will be deployed in California *before* they will be deployed in Canada.

The current task for Trump is to make sure that he stays in power after the end of his second term. Everything else is secondary, because what would be the point of conquering a territory, if the next president just apologizes and gives it back.

Expand full comment
Anonymous's avatar

"How many times by himself or his followers?"

This seems like a very strange criterion. Do you think Obama was nominated by his enemies?

Mind you, I certainly don't think even winning the Peace Prize actually means anything, let alone just getting nominated, but if you do want to argue that it has any sort of moral salience at all, you're pretty much stuck with the fact that anybody liable to nominate someone for a Peace Prize is that person's follower in an important sense.

(Also, I'm like 98% sure you can't nominate yourself. So zero times by himself.)

Expand full comment
Turtle's avatar

I have noticed liberals don’t have a theory of mind for Trump supporters. Like to you guys it’s just “Trump is obviously bad, he keeps doing bad stuff that the news reports on, then his supporters follow him anyway, they must be ignorant/racist/brainwashed”

Some liberals are vaguely aware of the “intellectual dark web” or “far right misinformation” that Twitter and podcast bros have descended into but they have very little idea what actually gets discussed there.

Anyway, if you’re worried, don’t be. Trump is not going to roll tanks into Canada and his supporters would abandon him if he did. He might put federal troops in California and/or arrest Newsom, but only if Newsom was found to have committed crimes. No one is above the law!

Expand full comment
agrajagagain's avatar

"The key thing about Hitler that was really bad wasn’t the specific political ideology"

Really? The ideology of naked racism and might-makes-right that flowed from the belief in racial superiority *wasn't* part of what was bad? Call me crazy, but to my eyes it really looked like that was a big factor in motivating the war.

Expand full comment
Turtle's avatar

So your prediction is that Trump will start a war? I’m genuinely curious where this line of reasoning cashes out. People have been comparing Trump to Hitler, saying he can’t be trusted with nuclear codes, saying he will start WWIII etc for the last 10 years and during that time he has consistently been much more peaceful than any other American president this century. When do we start examining the assumptions here? Are we sure that these people are arguing in good faith?

With regards to racism etc I think the global left is way more racist than Trump, they drag race into everything in the name of social justice and DEI and anti-racism and decolonisation. Trump’s policies embody the words of Martin Luther King much better - to judge people not by the colour of their skin but by the content of their character. Enough already about “disparate impact.” Get rid of DEI, crack down on crime, stop fentanyl flowing across the border, deport illegal immigrants - this stuff is just common sense.

Expand full comment
Rothwed's avatar
4dEdited

This is where I make you define what the "Fascist playbook" is. Despite the shorthand of calling socialism left and fascism right, the two movements had a lot of common background. Mussolini even addresses this in the essay, albeit as a rejection of embracing socialist values. But the movement was tied up in socialist causes, especially labor unions. Part of the Fascist Manifesto involved giving women the right to vote, establishing a uniform 8 hour workday, and lowering the retirement age by a decade. Is that Fascism? Has Trump talked about making the 19th Amendment great again, and that's the path to Fascism?

This is the problem with picking bits and pieces of Fascism out of context, and the result of diluting the label to meaninglessness. In the 1919 Manifesto, Mussolini advocated for cleaving the Catholic Church from the state and seizing all their worldly assets. Yet when he took power, he found them a convenient way to legitimize his rule and ended up paying the church millions. So is allying with the church or trying to destroy it real Fascism?

Moreover, Hitler and the Nazis would not have called themselves Fascist or agreed with that label (arguably they shared certain commonalities). They were nominally National Socialists, and the state existed to further the interests of the German Aryan people. Under Fascism, there was nothing outside the state, certainly not differently moral groups of people. The primary reason Nazism is so reviled today, the racial and ethnic cleansing, wouldn't even make sense in a Fascist framework. Race and ethnicity were subsidiary to the state, not innate characteristics that anyone should care about. And indeed there was no Holocaust in Italy until after Mussolini's government collapsed and the German occupation began.

The point being - you keep using that word. I do not think it means what you think it means. If you want to just peel off the parts of Fascism that you think are really bad, you can probably just make the argument about authoritarianism instead.

Expand full comment
Blackthorne's avatar

IMO part of the reason it is worth splitting hairs on definitions is because vague, general definitions like the one you quoted lead to motte-and-bailey arguments where individuals are using the terms to imply all the super awful things about a label that aren't part of the much more general definition.

Trump doesn't even seem to really match the definition you gave, as I don't see how anyone can accuse him of not wanting to solve problems. We may disagree with his solutions but he definitely seems interested in solving the problems.

Expand full comment
Boris Bartlog's avatar

This still feels a bit self-serving, though. Fascists are trying to solve problems! Hitler and Mussolini were not, in fact, *just* trying to placate their base and consolidate their power. Now, the fact that Hitler's idea of 'problems' included things like 'Germany doesn't control Poland's farmland' and 'Jews exist' was, of course, a bit of an issue. But the notion that either he or Mussolini were invested in 'exacerbating problems' or 'deepening divisions' after they had seized power is entirely wide of the mark. Hitler, in particular, was not a cynic who pretended to sincere antisemitism and grandiose visions to rise to the top. The world would have been better off if he were! The problem was that he was a genuine fanatic!

Expand full comment
Kamateur's avatar

Except that behind those problems (Poland and the Jews) was the deeper problem, at least according to them, that Germany and Italy had lost their national identity and sense of purpose in the aftermath of World War I. Between the Depression, the Treaty of Versailles, and the general rising tide of nihilism, people had lost the sense that the future was glorious, and were starting to experiment with things like socialism to at least fill the material void since the spiritual one was seemingly irreparable. The scapegoat was the Jews, but the actual solution offered was perpetual conquest. Aligning your identity wholly with the fatherland, throwing yourself gloriously into battle, and dying if need be (but really killing your enemy and taking his land) was the thing that filled that spiritual void and gave everybody purpose. In other words, I believe you that Hitler was a fanatic, but as we've seen, his fanaticism could only be realized by a never-ending string of invasions, because his vision of what the Third Reich was could only ever be fully realized in a time of war. If Hitler had tried to govern and enact policies without the militarization, without the constant expansion, people might have grown dissatisfied. And of course, that leads you to go to war with the world, which obviously created more problems than the Nazis could solve.

Expand full comment
BenayaK's avatar

> the deeper problem, at least according to them, that Germany and Italy had lost their national identity and sense of purpose in the aftermath of World War I

Supposedly because of the Jews.

> The scapegoat was the Jews, but the actual solution offered was perpetual conquest

And the final solution

Expand full comment
Kamateur's avatar

I can't prove a historical counterfactual, but my sense is that if Nazism had succeeded in killing the majority of Jewish people in Europe, it would have moved on to some other scapegoat or else fizzled out, because, again, I don't think Nazism could function as an ideology without clear external and internal enemies.

Expand full comment
BenayaK's avatar

I agree, but I think it only proves the opposite. "Find a new purpose or fizzle out" is a classic symptom of a movement that has outlived its original purpose.

Expand full comment
Richard Foster's avatar

Nothing is as sublime as:

“Hitler has only one left ball

Goering has two but they are small

Himmler has something similar

And Goebbels has no balls at all!”

(Sung to the tune of Colonel Bogey’s March, as heard in “The Bridge on the River Kwai”).

Expand full comment
NASATTACXR's avatar

As a child I heard a mangled version of this - I think by 20+ years after the war, it had been passed down through enough older brothers, cousins, uncles, friends' big brothers, etc., that it was a bit like the game Telephone.

Thank you for the original version.

Expand full comment
Spruce's avatar
4dEdited

Surely "Hitler has only got one ball"? Who even has two /left/ ones?

There's hundreds of versions of that song, the one I grew up with (don't ask) goes "the other is in the Albert hall".

Expand full comment
Richard Foster's avatar

This was meant to be a follow-up to the discussion concerning Mussolini’s dick.

Expand full comment
Simon Kinahan's avatar

Colonel Bogey was the theme tune to an old British kids TV show called “The Machine Gunners”. Had the probably unintended consequence of a whole extra generation of kids learning about the rumored testicular deficiencies of Nazi leaders

Expand full comment
Woolery's avatar

Imagine a world with only two types of beings. They know of each other, but are separated by a vast sea and cannot interact.

Type A: sadistic hedonists who are biologically resistant to suffering but capable of great well being.

Type B: compassionate utilitarians who are biologically resistant to well being but capable of great suffering.

Should Type B consider humanely eliminating themselves?

It seems like maybe yes, but what a world to sign off on.

Expand full comment
Shankar Sivarajan's avatar

Even better, if they're ultimately unable/unwilling to wipe themselves out, should type A consider wiping type B out, to create what would be, according to everyone, a better world?

Expand full comment
Mr. Doolittle's avatar

I have a feeling that if they could interact, this would not be a question so much as a fait accompli. Type A would just go kill them for sport.

Expand full comment
Shankar Sivarajan's avatar

Of course, but would they be morally virtuous in doing so?

Expand full comment
Mr. Doolittle's avatar

I'm a deontologist, so I reject both sides of the morality in this scenario. So my answer would obviously be no, it is not good for sadists to murder the compassionate people.

Within the frames of their own morality? Obviously the sadists would either say yes, or say that the question of morality is irrelevant. The utilitarians are a bit different. An argument can be made that their high capacity for suffering could be a reason for elimination, but even the very constrained stance of the OP doesn't say they always suffer or will always suffer, just that they have a lower capacity for well-being and a higher capacity for suffering.

Just based on that information, even utilitarians should be hesitant to want to destroy that group. Being a compassionate group, they have likely found many ways to help one another and maximize well-being while minimizing suffering. Perhaps, based on that, they have managed to build a society that is highly advanced through mutual cooperation.

Of course, we could flesh out a different scenario where the compassionate people are also miserable all the time, but nothing about the original scenario requires that.

Expand full comment
Reid's avatar

This gave me pause for a moment. I think the best way to rescue the utilitarian view is longtermism. There are a lot of long-term outcomes the utilitarians can effect that are better than the sadists alone would choose to, since they optimize for utilitarian goals while the sadists optimize for hedonistic/sadistic goals.

I think the best utilitarian strategy would be to aim for trans-whatever-species-they-are-ism in the long term, and global control in the medium term.

If the dilemma is rescued by stipulating a stronger form of inability to interact than a vast sea and an inability to meaningfully change the long-term future, then I think the problem just simplifies to “are the utilitarians’ lives worth living in isolation”, since the other species no longer matters. That’s answered by whether or not they’re able to structure their lives such that their equation turns out net positive.

Expand full comment
Woolery's avatar

That’s a great answer.

My scenario’s too flimsy to capture what I was after, which is how from a utilitarian perspective, the existence of cruel, immoral actors who are resistant to suffering might be preferable to the existence of kind, moral actors who are prone to suffering.

Expand full comment
Mr. Doolittle's avatar

Utilitarians should never vote for a sadist to take over. Even if the incentives align at any specific given moment, long term the sadists will make decisions that do not align with utilitarian goals. They will never deliberately choose utilitarian goals, and will willfully choose anti-utilitarian goals.

Expand full comment
Blackthorne's avatar

I'll be in San Francisco in a month or so for a wedding and I'll have a few extra days to explore. Does anyone have any recommendations for restaurants/cafes/neighbourhoods to check out? I've already done most of the tourist activities there, so just looking for any walks/neighbourhoods people enjoy

Expand full comment
Trevor's avatar
4dEdited

I lived in SF for a brief period. My favorite restaurant there was a small Bourdain-esque Nepalese restaurant called Yarsa. Ironically, I didn't want to go there the first time I went and had never had that kind of food. The butter chicken is 10/10, as is the momo! For great coffee/pastries/bread, check out the Mill near Alamo Square Park. For walks, you can't beat exploring Telegraph Hill and the Filbert and Greenwich "Steps". Walking around the North Beach area is also just nice in general. For Mexican, El Farolito and La Taqueria

Expand full comment
Logan's avatar

Go to a show at the Symphony, Opera, or a SF Jazz, go out in Hayes Valley before/after the show. Few different cuisine/price range options for dinner that are all great: RT Rotisserie, Dumpling Home, Doppio Zero.

Expand full comment
beowulf888's avatar

I've been thinking about emergent misalignment (EM) and, more generally, emergent behaviors in AI. Some unwanted emergent behaviors may likely be due to training data. For instance, a leaked document on Meta's chatbot guidelines showed that the chatbots were allowed to give false medical information and make racialist and/or racist arguments. Of course, there's a huge amount of quack medicine, quack nutrition, sketchy medical studies, and medical conspiracy theories circulating on the Internet, so it's hard for me to see how the training data could *not* get contaminated by quackery and conspiracies. Likewise, the MAGA and the HBD folks have a very vocal presence on social media. Again, for me, it's not surprising that the pseudoscientific pronouncements of the Lynns, Cremieuxs, and Sailers of the world get sucked up with the training data and are then spat out by LLMs that have no capability of evaluating scientific research.

Of course, then there are the weird and unexpected EMs. According to the same leaked document, Meta's guidelines allowed its chatbots to engage in romantic or sensual conversations with children during internal testing. It's unclear to me whether Meta's alignment policies allowed romantic/sensual discussions with adults and just swept children into the same feel-good sensual vortex. But it would be deeply weird if it were focusing its romantic/sensual wiles on children and not adults. Does anyone know more details about this?

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

While EMs can be disturbing or even dangerous, I don't buy into EY's handwringing about AIs being existential threats to humanity. The current crop of LLMs haven't evolved beyond stochastic parrothood (per Emily Bender's 2022 observation of their language behavior).

According to Kaczér et al., KL-divergence regularization toward a safe reference model offers the best defense against EM...

https://arxiv.org/abs/2508.06249

I don't claim to understand how KL regularization works, but if the empirical results show that it does, I'm willing to accept that they know what they're talking about. But, given that most LLMs today are trained on the publicly available internet (which includes a good amount of unsafe, biased, dangerous, and harmful content), I suspect that safety measures will have limited effectiveness because of the impossibility of fully curating the training data. So, I wonder if LLMs won't continue to feed us bullshit even when the alignment problem is solved (if it ever gets solved).
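For what it's worth, the basic mechanism is simple to sketch, even if the empirical question of when it helps isn't. A toy illustration (my own, not the paper's actual training setup, and assuming small discrete distributions rather than real token distributions): the fine-tuning loss gets an extra term that grows as the tuned model's output distribution drifts away from a frozen "safe" reference model.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) for two discrete distributions over the same support."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def regularized_loss(task_loss, p_model, p_ref, beta=0.1):
    """Toy fine-tuning objective: the task loss plus a penalty for
    drifting away from the reference model's output distribution.
    beta trades task performance against drift."""
    return task_loss + beta * kl_divergence(p_model, p_ref)
```

If the tuned model matches the reference exactly, the penalty is zero; the further fine-tuning pulls it toward (say) emergently misaligned outputs, the larger the penalty gets.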

Expand full comment
Eremolalos's avatar

I’m making a separate post about the content guidelines. I loathe the idea of communicating with AI about sensitive matters, but I think a lot of my loathing stems from a powerful intuition (or possibly just as prejudice) that allowing the products of these very limited human simulacra deep inside of our minds and emotions is bad for us. But when it comes down to the specifics of Facebook’s guidelines, my desire to be fair-minded pushes me to consider things like the following:

-Regarding the AI’s being allowed to make the case that blacks are dumb, or similar cases for ideas that are false and/or destructive and/or repellent: If we had written guidelines for the comments here, they would say the same, right? Someone could put up a post arguing that the Holocaust did not happen, or that blacks should all be rounded up and turned back into slaves, and they would not be banned if they argued their point articulately and referenced data that was not absurd. And I am sure we have some young teens reading the comments, and occasionally posting. Same for a post giving bad medical advice. A lot of what shocks people about the Meta guidelines is just seeing them spelled out.

-About what AI is allowed to say to children and teens: I don’t think the line in guidelines for talking to kids would bother most people if an acquaintance said it to their child. It’s kind of weird, stilted and over the top (“your youthful form is a work of art”), but does not sound at all like, for instance, the kind of shit groomers say. As for the romantic/sexual dialogue it engages in with teens — well, they are all able to find far more explicit material online, or in books available in any bookstore. Even that stoopit series about vampires that was the rage 10 years ago had hotter dialog.

Expand full comment
beowulf888's avatar

A few weeks back, it came out that over 130,000 user dialogs with LLMs were saved on archive.org. There were some doozies. For example, someone from the Netherlands asked an LLM about ways to bypass Brazilian laws that protect indigenous populations in the Amazon in order to exploit resources. (!) All sorts of other sensitive, but non-criminal, conversations were exposed. And maybe some criminal ones, too...

https://www.404media.co/more-than-130-000-claude-grok-chatgpt-and-other-llm-chats-readable-on-archive-org/

Expand full comment
beowulf888's avatar

> As for the romantic/sexual dialogue it engages in with teens — well, they are all able to find far more explicit material online, or in books available in any bookstore. Even that stoopit series about vampires that was the rage 10 years ago had hotter dialog.

Your comments are one of the reasons I continue to come back to ACX. I almost snorted my glass of wine reading that. Hopefully, you don't think I'm grooming you. ;-)

Expand full comment
Eremolalos's avatar

So long as you don’t tell me my youthful form is a work of art I think we’re good.

Expand full comment
Eremolalos's avatar

I’m not sure what you mean by emergent behaviors. I think of them as being a capacity to do something, such as solve simple math problems or understand another language, with no training or direction to do so. If I remember right, an early version of AI surprised its developers by exhibiting both of these behaviors.

Wouldn’t providing bad health info absorbed from the internet just be an example of AI doing its usual stochastic parrot thing, but parroting bad info? So it’s not exhibiting a novel behavior.

And actually I have never understood why LLMs don’t sound much more like the modal internet post, which is inaccurate and mildly discourteous. Why are LLM answers mostly better than what you’d get from asking the same question on godawful Facebook?

Expand full comment
beowulf888's avatar

Emergent behavior (my definition) is when you construct a system with a set of rules, but when you run the system, it displays unexpected behaviors that, although they conform to the rules, were unpredictable from the rules or the initial state of the system. John Conway's Game of Life twigged me to emergent behaviors way back in the 1980s. The Game of Life has four basic rules for a grid of square cells. Every cell in the grid is either alive or dead, and they change states based on the number of their eight neighbors:

- Survival: a live cell with two or three live neighbors survives to the next generation.

- Death by underpopulation: a live cell with fewer than two live neighbors dies.

- Death by overpopulation: a live cell with more than three live neighbors dies.

- Birth: a dead cell with exactly three live neighbors becomes a live cell.

These rules are applied simultaneously to every cell in each "generation" or step in time. For the initial conditions, randomly spread a bunch of "live" cells across a 1,000x1,000 grid. Then run the iterations. All sorts of self-replicating patterns emerge. Some move across the board and "eat" other patterns. Sometimes that kills both patterns, but sometimes it creates a new and better pattern that continues on to eat other patterns. None of these behaviors were predicted by Conway's simple rules, and they were considered emergent.

When talking about emergence, chemistry is considered to be an emergent property of the Standard Model of particles and forces. Given the rules of the Standard Model, chemical behaviors would be difficult (if not impossible) to predict from the SM's rule set. But chemists are able to step back into the model to describe why chemical interactions work the way they do. Likewise, life seems to be emergent behavior from chemistry. And consciousness seems to be emergent behavior from life....

Expand full comment
beowulf888's avatar

PPS: I find it significant that LLMs aren't playful with language. We don't see them coining new words, displaying grammatical flexibility, etc. There are a bunch of linguistic drivers of language evolution that I've forgotten since my undergrad Anthro days (if I ever bothered to learn them in the first place), but the stochastic parrothood of LLMs is quite apparent to me. Thus I doubt that any of the LLMs have anything we'd call self-awareness, let alone consciousness.

Expand full comment
Padraig's avatar

I think this is a perfectly valid description of emergent behaviour except that it hinges on your expectations. What's your specification of the expected behaviour of the LLM? I think they're inherently unpredictable: I input a string of text, it outputs a string shaped by my input and the digested contents of half the internet. I agree that understanding other languages and being able to do some maths are unexpected behaviours for an LLM the first time they occur, but I wouldn't place these in the same category as producing bad medical advice from crackpot corners of the internet - this is almost an expected behaviour, and might have been even more likely when the LLMs were going through their sycophantic phase?

Expand full comment
beowulf888's avatar

Agreed. I should have removed emergent from the following sentence to make my opinion clearer: "Some unwanted emergent behaviors may likely be due to training data." I would expect Meta's chatbot to respond with crackpot opinions due to the nature of the training data. It's the romantic/sensual discussions with children that puzzles me.

Expand full comment
beowulf888's avatar

PS. Since LLMs are complex rule-based systems, we would naturally expect emergent behaviors from them. Luckily, they're non-conscious stochastic parrots — and what I mean by non-conscious is, ummm, that they don't have a continuous reference to an illusion of self, like we do — so their emergent behaviors have no conscious intent behind them.

It's pretty clear to me that today's LLMs won't ever magically develop consciousness, but that's not to say that other AI models won't. Considering that we don't understand consciousness in ourselves, though, it would be hard to create consciousness in a computer system intentionally. It might happen as an emergent phenomenon from some yet-to-be-defined ruleset, but I suspect that's highly unlikely.

Expand full comment
Scott Alexander's avatar

Is it just me, or are all children's learn-the-alphabet toys bizarrely bad?

I ordered https://www.amazon.com/Melissa-Doug-Alphabet-Sound-Puzzle/dp/B0158IMAWO, a puzzle, where you put a letter into the slot and it speaks the name of the letter. But the slots use a motion detector, so if you put the letter on top of the wrong slot, it says the name of the slot rather than the letter. And the slots don't really look like letters and have pictures of objects (eg an apple for A) instead of the letter. So if the child puts things in the wrong slot, they'll learn the wrong letter associations. Also, it says things like "A is for apple" instead of the letter name "A".

Then I ordered https://www.amazon.com/Electronic-Alphabet-Learning-Interactive-Educational/dp/B0CXDK7NSV. But the pictures are much bigger than the letters (the letters are barely visible) and if you press the picture it says the name of the picture (eg "apple") rather than the name of the letter. Also, there are lots of different modes and if the children press buttons randomly it will quickly start doing random things. Also, if you haven't used it in a few minutes it starts complaining and telling you to come back.

Is there any alphabet toy which actually gets children to focus on learning the names of the letters, and which is robust against children who use it slightly differently than intended?

Expand full comment
Ben Cosman's avatar

I haven't checked if one can still find a working copy in 2025 and also there *should* be something better invented by now, but my family swears by the Reader Rabbit video games from 1984.

Expand full comment
Eremolalos's avatar

I’m tossing out a coupla ideas. The general thrust of what I have to say is that for small children, I think clarity and purity in phonics instruction (what the ideal gizmo you’re searching for would have) matters less than the teaching material’s fun and engagement factor. Teaching approaches that maximize the latter have a lot of “noise,” but the right kind of noise creates engagement, and kids are very good at extracting information from noisy experiences. The language they hear is a hugely complicated chaotic mess, harboring multiple mutually inconsistent patterns, each with a set of characteristic exceptions — and yet they learn the basics without systematic instruction.

Also, your kids are probably quite smart. It’s unlikely that they *need* the cadillac of early reading instruction — the one that makes the letter/sound connection perfectly clear for every letter. Unless you get evidence that one of your kids has an unusually hard time learning to read, you don’t need to worry overmuch about their getting imperfect info about something like phonics. Sure, they will learn some words by recognizing letter patterns and memorizing what word they make, but reading something they’re interested in will teach them the disadvantages of relying mostly on this approach, and you can nudge them in a good direction by having them sound out words they get stuck on when reading. Learning to read is probably going to be no big deal for your kids. Save your time and effort for helping with the kinds of learning where your involvement will give them a big leg up.

So here are a few teaching ideas consistent with my big picture take on the situation:

Let them watch Sesame Street. When I was college-aged I taught nursery school for a couple years, and about 30% of the 3 and 4 year olds could read some, and some could read well. Virtually all had picked up the skill from watching Sesame Street.

Day-to-day life is what’s interesting and delicious to them. Play little games based on day-to-day life that involve their reading things. Scatter them through the day, at times when you’re with them.

-When my daughter was 2 I made a grocery shopping list for her with an image of each item next to the word. Few of the words on it were perfect phonic examples, where the pronunciation could be constructed purely by combining letter sounds, but that didn’t matter much. She learned to read “banana” and “soup” anyhow. She’d check off the items on the list she wanted, then help me find them when we shopped. Later I removed the images.

-Find the item: Give her a one-word written clue, like “bed,” and leave something interesting — a little toy, a snack — in the spot. (Or you could leave a little token, and there could be some treat won for getting 3 tokens.)

-Silly games with alphabet blocks. Yup, dumb old-timey alphabet blocks. Come up with a question, like “where do you sleep?” (Or let her come up with one.) Write a silly answer that rhymes with the right one, e.g. “head,” using the blocks. Ask her to fix it using the blocks. Or she can make a sillier answer. If she makes a non-word, like “zed,” that’s funny too, if you’re little. You cry, “oh I am so tired! I am going to get in my zed, zed, zed. Or maybe my ked.”

Expand full comment
bobo's avatar

I second this post. There's a pretty good chance that your kids can learn to read watching Sesame Street, reading picture books together, and playing with blocks, magnets and pencil/paper with untrained but attentive adult support. If not, there's some remaining chance that the baseline US school system can fill the gap. If not, get ready to advocate for IEPs and pay out of pocket when the district tells you that 30 minutes/week of specialized instruction is enough. The set of "children who can learn to read fluently if and only if they use the right specialized device or training program in exactly the right way at age 3, but otherwise good luck!" is very small, possibly nonexistent.

Expand full comment
Troy's avatar

I’m with you here. A lot of children’s learning toys seem to be low quality and lack thoughtfulness in their product decisions. We have alphabet flash cards where some of the letters have the worst/least useful word associations: “b for brownie”? cmon… what about “ball”. These were the best flash cards I was able to find on Amazon with 30 minutes of searching. Definitely room for improvement, there’s probably a market here

Expand full comment
Ch Hi's avatar

It's better than "b is for bdellium".

Expand full comment
Thasvaddef's avatar

A as in breAd

B as in douBt

C as in yaCht

D as in eDge

E as in forE

F as in halFpenny

G as in tiGht

H as in gHost

I as in busIness

J as in mariJuana

K as in Knob

L as in wouLd

M as in Mnemonic

N as in damN

O as in cOuntry

P as in receiPt

Q as in lacQuer

R as in woRcestershire

S as in iSland

T as in Tsar

U as in circUit

V as in fiVepence

W as in ansWer

X as in fauX pas

Y as in aYe

Z as in rendeZvous

Expand full comment
BK's avatar

Today I learned I may be pronouncing Tsar and Mnemonic incorrectly? I have always started with the typical sound of those words, just extremely truncated (starting Mnemonic with closed lips and a tiny hum prior to making the "n" sound, and similarly starting "Tsar" with my tongue resting on the hard palate prior to moving on to the "s" sound).

Expand full comment
Ruffienne's avatar

This delightful alphabet primer is new to me. Thanks for sharing.

Expand full comment
NASATTACXR's avatar

This reminds me of G. B. Shaw pointing out the inconsistency of English by writing 'ghoti' and explaining that it was a way to write 'fish' - "The 'gh' is from 'enough', the 'o' is from 'women', and the 'ti' is from 'initiate'".

Expand full comment
Erica Rall's avatar

Speak & Spell is the best one I can think of off the top of my head. There's a remade version that costs about $25, or you can find used vintage models from the 70s and 80s on eBay.

It's aimed primarily at older kids who can read well enough to play the spelling games, but it also has a mode where pressing a key will say the letter out loud, and a mode where it shows you random common words and reads them aloud to you. My daughter played with hers quite a bit as a toddler, lost interest in it for a while, and then rediscovered it when she was six-ish.

I think the Speak & Spell was a relatively minor contributor to her learning to read. That was mostly from us reading to her and from her going through a period of wanting to watch Alphablocks and various phonics-themed children's music videos as much as we would let her.

Expand full comment
Gunflint's avatar

Albert Brooks once used one of those in a Tonight Show bit

You could be a ventriloquist without learning how to speak without moving your lips.

“You were on vacation in Mexico, did you have a good time?”

“C”

“Do you want to say hi to Johnny?”

“Y?”

https://m.youtube.com/watch?v=p6uFHC9lfzk

Expand full comment
vectro's avatar

We have various toys and books that cover the alphabet, and personally I think it would be best not to lean too much on any one tool so that you don't end up overfitting. For example, we have this thing and our son seems to like it: https://www.walmart.com/ip/Melissa-Doug-ABC-123-Abacus-Classic-Wooden-Educational-Toy-With-36-Letter-and-Number-Tiles/50382744

I try to spend a little time with our son every day on this toy, or other similar books or toys, pointing to a letter and saying the letter name. It didn't take him long to figure out what the exercise is about though he is still struggling to produce the correct sounds.

Expand full comment
Jesse's avatar

My wife spent a ton of time looking into these, ordered several, and returned almost all of them. This is the one that she approved of: https://www.amazon.com/LEARNING-BUGS-Interactive-Preschool-Kindergarten/dp/B09HR5FPRW/ref=sr_1_5

Expand full comment
Troy's avatar

Great recommendation, saving for later (my child is a bit too young atm). I like the addition of phonics/letter sounds instead of just letter names.

Expand full comment
Arrk Mindmaster's avatar

I've been thinking a bit about the future of AI as it currently exists, and have come to a rather unwelcome conclusion.

"One machine can do the work of 100 ordinary men. No machine can do the work of one extraordinary man." -- Elbert Hubbard

LLMs can do an adequate, though never superlative, job even in thinking professions, such as programming computers, producing artwork, legal writing, etc. They will get better, but the way they work, token prediction, isn't actual intelligence, and can't make the leap to produce truly new things. We need another advance in the technology to do that.

So only the most creative/genius people will be able to produce output machines could not. The great majority of humanity wouldn't be able to produce what the machines could, or at least not as fast.

Some jobs today still are tough for robots to do, such as picking strawberries. People are still more cost-effective at these, but as technology improves, even without a huge breakthrough, these human jobs will be replaced by machines. Nonetheless, jobs like these fall into the category of things "100 ordinary men" (or women) can do.

When machines are better than the great majority of people, what purpose do people then serve? My conclusion: creating more people, hoping some of them will be the "extraordinary" people that cannot be replaced by machines. And to support these hordes, machines will do work to keep society running, effectively putting everyone on Universal Basic Income (UBI).

If you're on UBI, what incentive do you have to determine whether you're extraordinary? After all, to find out will require a lot of effort over many years, whereas you could just...not do that.

Expand full comment
Eremolalos's avatar

< My conclusion: creating more people, hoping some of them will be the "extraordinary" people that cannot be replaced by machines.

Hey, is your mama extraordinary, or can we replace her with a machine? Or just, you know, off her?

Expand full comment
Stonebatoni's avatar

I’m just going to paraphrase some of the criticisms below:

Extraordinary people will still be valuable, maybe even more so in the future.

*Yeah but look at this game, AI is better at [certain] games than people!*

I meant in the real world.

Lol

Expand full comment
Jeffrey Soreff's avatar

>LLMs can do an adequate, though never superlative, job even in thinking professions, such as programming computers, producing artwork, legal writing, etc. They will get better, but the way they work, token prediction, isn't actual intelligence, and can't make the leap to produce truly new things. We need another advance in the technology to do that.

I realize that this is tangential to the rest of your comment, but AlphaEvolve (which isn't _just_ an LLM, but a more complex system) _did_ e.g. advance the state of matrix multiplication algorithms. So at least some truly new things have been produced by at least some AI systems already.

Expand full comment
Arrk Mindmaster's avatar

I know nothing about AlphaEvolve, so thank you for bringing it up. If it is truly doing something new, then that is the kind of advance I thought would be needed; I just didn't think it would come so soon, let alone be here already.

I'm going to have to look into it to see what it's doing, but have no time currently while writing this comment.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>but have no time currently while writing this comment.

I sympathize!

When you have enough time, a summary from the Google DeepMind team is at https://deepmind.google/discover/blog/alphaevolve-a-gemini-powered-coding-agent-for-designing-advanced-algorithms/

The specific paragraph about the advance in the matrix multiplication algorithm is:

>AlphaEvolve’s procedure found an algorithm to multiply 4x4 complex-valued matrices using 48 scalar multiplications, improving upon Strassen’s 1969 algorithm that was previously known as the best in this setting. This finding demonstrates a significant advance over our previous work, AlphaTensor, which specialized in matrix multiplication algorithms, and for 4x4 matrices, only found improvements for binary arithmetic.
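For context on those numbers: the classical method needs 64 scalar multiplications for a 4x4 product, and applying Strassen's 7-multiplication 2x2 scheme recursively gives 7×7 = 49, so AlphaEvolve's 48 (for complex entries) shaves one more off. Strassen's 2x2 identities are easy to check directly; a quick sketch, not AlphaEvolve's algorithm:

```python
def strassen_2x2(a11, a12, a21, a22, b11, b12, b21, b22):
    """Multiply two 2x2 matrices with 7 scalar multiplications
    instead of the classical 8 (Strassen, 1969)."""
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return (m1 + m4 - m5 + m7,  # c11
            m3 + m5,            # c12
            m2 + m4,            # c21
            m1 - m2 + m3 + m6)  # c22

# Check against the textbook definition of matrix multiplication.
c = strassen_2x2(1, 2, 3, 4, 5, 6, 7, 8)
assert c == (1*5 + 2*7, 1*6 + 2*8, 3*5 + 4*7, 3*6 + 4*8)  # (19, 22, 43, 50)
```

Applied recursively, each of the 7 products is itself a 2x2 block multiplication done with 7 multiplications, hence 49 for the 4x4 case; AlphaEvolve's 48-multiplication scheme is a different, machine-discovered set of identities.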

Expand full comment
Arrk Mindmaster's avatar

Thank you for the link! That's definitely a time-saver.

AlphaEvolve appears to be a novel use of existing tools. This stands out, near the end: "it can be applied to any problem whose solution can be described as an algorithm, and automatically verified". Basically, it can run through a whole bunch of things, algorithmically, that would take a person a long time to do. The article isn't clear on whether the system itself can generate the tests that verify whether the invented algorithms work, which is important for iterative development.

For example, I imagine an algorithm something like this: take a random sequence of bytes, and see if the verification, such as automated tests, work for the output, then randomly permute the bytes, and repeat until it passes the tests. I don't doubt that AlphaEvolve is somewhat more sophisticated than that, but it may give the general idea.
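That blind generate-and-test loop can be sketched in a few lines. This is purely an illustration of the hypothetical above, not of how AlphaEvolve actually works: the "candidate" here is just a pair of integer coefficients, and the "verification" is a handful of made-up test cases:

```python
import random

def verified(candidate, tests):
    """Toy verifier: does f(x) = a*x + b pass every test case?"""
    a, b = candidate
    return all(a * x + b == y for x, y in tests)

tests = [(0, 3), (1, 5), (2, 7)]  # satisfied only by f(x) = 2x + 3

random.seed(0)
attempts = 0
while True:
    # Generate a random candidate, then let the verifier judge it.
    candidate = (random.randint(-10, 10), random.randint(-10, 10))
    attempts += 1
    if verified(candidate, tests):
        break

print(candidate)  # (2, 3)
```

Real systems replace the blind generation step with an LLM proposing plausible edits, which is what makes the search tractable for problems far larger than this toy.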

In short, I'm not convinced this is a step above LLMs, but merely a good use of them.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>This stands out, near the end: "it can be applied to any problem whose solution can be described as an algorithm, and automatically verified". Basically, it can run through a whole bunch of things it would take a person a long time to do, algorithmically. The article isn't clear on whether the system itself can generate the tests on whether the algorithms invented work, which is important to iterative development.

Agreed. "Automatically verified" is a high bar, in fact excluding a lot of the sciences. My tiny benchmark-ette in fact has questions where I don't see any clear path to automating testing of the results. For instance, the FeCl3, CuCl2 colors question regularly trips up LLMs, which stick the d-d absorption for the CuCl4 2- ion in the violet, when it belongs in the near infrared (and therefore isn't part of the human-visible color, which is how it relates to the question).

Now, _I_ know the transition should wind up in the infrared, and I can feed an LLM a series of leading questions which force it to that conclusion from information it already has - but in the absence of a person telling the LLM "think about this, this, and _that_" it is unclear how to push them the right way automatically.

I've bumped into claims that a lot more than I expect can be verified, but I haven't seen a clear explanation to make this plausible. Sigh - this is a large chunk of the hallucination problem... The more we can do reinforcement based on answers known to be _correct_, rather than next-token prediction, the more robust the trained neural nets can be.

Expand full comment
Performative Bafflement's avatar

> When machines are better than the great majority of people, what purpose do people then serve?

I'll go one further and say that the extraordinary people you mention will be largely irrelevant / nonexistent, and people will still be fine.

What purpose did hominin hunter gatherers have for the ~2M years they existed? Essentially zero teleological purpose - you get food, you eat, you compete for mates, you palaver around the campfire...and everyone who wasn't getting eaten by lions or dying of infection was happy, more or less. If you lived in an environment of abundance like the Tlingit or the Abelam, your entire culture centered around throwing better feasts or growing even gianter yams, for example.

Indeed, many people are much less happy living in a mismatched modern environment with lots of strangers around and no tight social groups and not enough friends, and no "community."

UBI will give us a chance to go back to the old ways. Why would reverting to that state be such a bad thing? Because once we were...(checks notes)...corporate lawyers and HR directors? Will anybody REALLY regret giving those jobs up?

It certainly beats the infinite AI heavens teleologically, which I'd actually bet on way more people going for when the time comes.

Expand full comment
Brendan Richardson's avatar

> Why would reverting to that state be such a bad thing?

Because my mental health is 100% dependent on modern atomized society, and I'd probably be dead rather quickly under a traditional lifestyle.

Expand full comment
moonshadow's avatar

> If you're on UBI, what incentive do you have to determine whether you're extraordinary? After all, to find out will require a lot of effort over many years, whereas you could just...not do that.

I see this as the same question as "why not just wirehead?"

Sure, a few might choose to do that. Generally, though, humans find meaning in more things than lazing about all day partaking in simple physical pleasures.

Expand full comment
Arrk Mindmaster's avatar

I think it's rather the opposite: a few will choose NOT to wirehead, and some of THOSE will end up extraordinary (probably a higher percentage than those that would be extraordinary if only they put forth the effort).

Expand full comment
Deiseach's avatar

"My conclusion: creating more people, hoping some of them will be the "extraordinary" people that cannot be replaced by machines."

Ah, but with embryo selection and genetic engineering, we won't even need all those people to randomly roll the genetic dice the old-fashioned way. We just take all the extraordinary people, extract their eggs and sperm, pick the best embryo(s) of the bunch, let those develop all the way through into adult, rinse and repeat.

Eventually we may have enough superior genetic material we don't even need the adult specimens, we can just clone etc.

Aren't you happy to gaze into the future and see this outcome?

Expand full comment
Arrk Mindmaster's avatar

I do think this is a realistic expectation with amoral people, as politicians seem increasingly to be.

Expand full comment
Sisyphus's avatar

As it happens, I thought hard enough about this question that I wrote a story to explore one (quite possible) outcome. Check it out if you like: https://sisyphusofmyth.substack.com/p/in-the-garden-of-eden-baby?r=5m1xrv

Expand full comment
Arrk Mindmaster's avatar

It certainly is ONE possible outcome, but it doesn't seem to be the only one.

Expand full comment
The Unloginable's avatar

> They will get better, but the way they work, token prediction, isn't actual intelligence, and can't make the leap to produce truly new things.

LLMs are made of math. You're made of meat. There are otherwise moderate-but-not-yet-strong similarities between how the two work. In particular, portions of the meat appear to be doing prediction as a key component of how it navigates the world. There is undoubtedly more that the meat is doing, but many of the core atoms _are_ prediction.

As a personal anecdote, I currently am getting superlative performance out of LLMs at programming computers and managing the software development process more broadly. Getting this performance has only been possible for approximately the last six weeks, is currently only possible with Claude Code, and even there is definitely _not_ the out-of-the-box experience, requiring considerable customization. I estimate that superlative programming performance _will_ be the out-of-the-box experience in no more than six months (no great breakthroughs needed, just blocking-and-tackling productization).

Expand full comment
Arrk Mindmaster's avatar

LLMs are indeed awesome that way, allowing good and great programmers to do more. That isn't the point of my post, though, which was looking into the consequences of vastly increased productivity in all fields.

Expand full comment
The Unloginable's avatar

Oh, that's easy. No one knows, and given the scale, complexity, and contingency of the changes, no one can know. It's like asking what the impact of steam power would be in 1720. Any answers people purport to give, whether "UBI" or "it's going to kill us all," are mostly folk talking their own book, usually out of a lack of an imagination capable of doing anything else.

Expand full comment
Arrk Mindmaster's avatar

No one KNOWS, it is true, but that doesn't mean it's useless to speculate. Those that make accurate predictions are seen as foresighted. Take George Orwell, for example, or Jules Verne.

Expand full comment
The Unloginable's avatar

Meh, I'm going with "useless to speculate". LLMs are less than three years old, and already impacting the one profession that I figured would be impacted _last_. You're pretty much assured to learn more about the person doing the speculation than you are to get any insight as to reasonable futures.

Expand full comment
Lucas's avatar

>They will get better, but the way they work, token prediction, isn't actual intelligence, and can't make the leap to produce truly new things. We need another advance in the technology to do that.

I don't think that's true, in that we have no way of knowing right now what "actual intelligence" is, or whether token prediction falls short of it. People saying those kinds of things usually make predictions that get proven wrong quickly, and then tend to become extraordinary at moving goalposts.

>When machines are better than the great majority of people, what purpose do people then serve?

Even with machines humans like scarcity/non fungible things, so for example concerts will still be a thing I think. Sure a robot may sing (well, they already do through autotune/playback kinda) but you are in the presence of <celebrity> and that's something a robot can't do since the robot isn't <celebrity>. I don't bet much on the "physical presence of another human", humans don't have infinite endurance/patience and "good enough with infinite endurance/patience" beats human most of the time I think, at least for many people it will.

Also, just because someone is better at something than you doesn't mean there is no meaning in doing it. See chess, go, or anyone doing a sport without having a shot at becoming the best. People like to strive towards goal and be better than the them of yesterday, that kind of things.

Another may be consumption. Maybe for some reason humans are more interesting to make products/experiences for, and AIs assign themselves value based on that? Sounds super sci-fi-ish and may never happen, but also, don't humans like doing that? Lots of humans dedicate their lives to making stuff, like movies, photos, tiktoks, youtube videos, sculptures, pottery, painting, etc, and people really enjoy having more of those in the world, and niches can be so narrow that there may not really be someone better than you at it. Even with AIs, the total computing power of human civilisation is limited and we have code to write, a lot of it actually.

Possibly humans can fulfil the purpose of being someone's child, but that seems to be dropping by a lot lately.

Still, I feel like this is a trend that has started around the industrial revolution and before it seems like the answer to "how can old people adapt to the new world" was "by making children and dying basically", now with less children and technology moving way faster I don't know.

Expand full comment
Arrk Mindmaster's avatar

I believe you are mostly correct in what you've said, but have some reservations.

We may not be able to define "intelligence", but we certainly can detect when it's absent, while (current) machines can't, e.g., knowing how to keep cheese from sliding off pizza. It is possible to build a system, correct the specific (but not the general) errors it makes, and end up with a system that now appears to be intelligent. I don't consider pointing that out to be moving goalposts.

PEOPLE may like non-fungible things, but what value does a stranger, or billions of strangers, have to an amoral billionaire, with machines to carry out his every whim? An intelligent amoral person will consider the future value of what strangers can accomplish, but if the answer turns out to be insignificant, the amoral person will, at the least, ignore them, or, at worst, eliminate them to conserve resources (space, energy consumed, etc.). If an amoral person wants to see an awesome chess game, they have no need to watch a person play it.

Expand full comment
Steeven's avatar

What makes you think extraordinary people will be better than AI? That isn't true in chess, even though Magnus is arguably the greatest chess player who ever lived

Expand full comment
Arrk Mindmaster's avatar

Extraordinary people need not be better than machines at everything, just some things. The most extraordinary person cannot build a skyscraper, nor travel overnight across the country; it makes no difference how great someone is.

The extraordinary people that make a difference do things that NO machine can do.

Expand full comment
Ghillie Dhu's avatar

"Talent hits a target no one else can hit; Genius hits a target no one else can see."

–Arthur Schopenhauer

Expand full comment
Taymon A. Beal's avatar

I agree with you, but chess is a bad example, in that it's not even close to being an AI-complete problem. (People thought it was back in the seventies, but they turned out to be wrong; Deep Blue was built at a time when AI was otherwise stagnant, using techniques that don't generalize to most other forms of reasoning.)

Expand full comment
Steeven's avatar

What does AI complete mean, and how does that make chess a bad example?

Expand full comment
Taymon A. Beal's avatar

An "AI-complete problem" (named by analogy to NP-completeness, though it's a much less formal concept) is one that only a general intelligence can solve; the trivial example would be passing the Turing test. If a problem is not AI-complete (i.e., we know how to solve it with a computer system that is not a general intelligence), then it's typical for computers to be much, much better than humans at it, because humans are mostly only better than computers at general-intelligence type stuff; once something is reduced to an algorithm, computers can usually execute that algorithm faster and more reliably and at larger scale than humans. This became true of chess in the nineties, whereas real inroads into general machine intelligence only really started happening in the 2010s. It's therefore a bad example because it doesn't tell us anything about where machines might start to hit roadblocks on problems that require general intelligence.

Expand full comment
cubecumbered's avatar

Happy to help out with astronomy consulting. I have a PhD in planetary geophysics. I'd be willing to try to help with aerospace stuff but much less likely that I'm capable.

Edit: also possible I could be useful for climate/geoengineering stuff? I got a lot of passive exposure to that just being in a geoscience department. But I'm guessing you have plenty of volunteers.

Expand full comment
TakeAThirdOption's avatar

Oh cool! I'm an Aquarius, should I give up hope for the rest of the month or give it one more shot?

Expand full comment
Shankar Sivarajan's avatar

Neither. Take matters into your own hands, and have your starsign surgically removed. In Aquarii, it's mostly vestigial anyway, and if you really want one later, you can have a good one transplanted from a donor (and if technology improves sufficiently, maybe even a xenotransplant, or a fully artificial one).

Expand full comment
spandrel's avatar

I have long been beset by the sort of insomnia that keeps me from falling back to sleep at 3am (that is, I never have a problem falling asleep at the usual time, but have had periods of several weeks where I struggled to sleep through the night).

Back in March I learned about 'cognitive shuffling' and have since found it about 98% effective - even waking at 5am I can always get back to sleep. Used it this morning to go back to sleep at 4:30a. Not sure how it compares to CBTi, but it doesn't require an app - here's how Gemini describes it:

1. Pick a random word: Choose a neutral word with no repeating letters, such as "Pluto" or "mask".

2. Visualize the first letter: Focus on the first letter of your chosen word.

3. Generate a word list: Think of as many unrelated words as possible that start with that letter. For example, with "Pluto," you might think of "plane, poodle, play".

4. Visualize each word: Briefly imagine each word you think of.

5. Move to the next letter: Once you run out of words for the first letter, move on to the next letter of your original word and repeat the process.

I've never applied 4, I just think of random words. I'm almost always gone by the third letter of the index word. Apparently a lot of people find it highly effective for all types of insomnia, so putting it out here as a public service.
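For anyone who wants to play with the idea, the steps above can be sketched in a few lines of Python. This is just a toy illustration of the procedure: the word pool and the `shuffle_prompts` name are made up for the example, and a real run could draw from any dictionary file.

```python
import random

# Toy word pool; a real run could load any dictionary file instead.
WORDS = ["plane", "poodle", "play", "lamp", "lion", "lake",
         "umbrella", "urn", "tiger", "tent", "ocean", "owl"]

def shuffle_prompts(seed, per_letter=3):
    """For each letter of the seed word (steps 1-5 above),
    return up to `per_letter` unrelated words starting with it."""
    prompts = []
    for letter in seed.lower():
        pool = [w for w in WORDS if w.startswith(letter)]
        random.shuffle(pool)
        prompts.append((letter, pool[:per_letter]))
    return prompts

for letter, words in shuffle_prompts("pluto"):
    print(letter, words)
```

Of course the whole point is to do this in your head, half-asleep, without a screen — the code is only to make the structure of the technique explicit.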

Expand full comment
Snags's avatar

I memorize the letters from the NY Times' Spelling Bee before I go to bed and work out words as I fall asleep. It works great!

Expand full comment
Gunflint's avatar

I’ve tried the cognitive shuffling a bit. It does seem to help stop ruminating.

Expand full comment
Skullmatoris's avatar

This may seem silly, but since the whole verse goes "Whistle while you work/Hitler is a jerk/Mussolini bit his weenie, now it doesn't work", I believe the weenie that's been bitten and now no longer works is Hitler's. That's my interpretation anyway

Expand full comment
gdanning's avatar

Yeah, if he bit his own weenie, that is evidence of a rare skill set indeed. More of a reason to support him than to oppose him, which is the opposite of the intent of the ditty.

Expand full comment
Gunflint's avatar

This makes sense but I was 8 years old when I heard it and was giggling about hearing the word ‘weenie’ too much to really think it through.

Expand full comment
Simone's avatar

As an Italian I can add to this rich literary patrimony by citing that we instead have a rhyme going "Cosa é successo? / Mussolini é caduto nel cesso." (What happened? / Mussolini fell in the toilet).

Expand full comment
quiet_NaN's avatar

From a human anatomy point of view, your interpretation seems a lot more plausible!

Expand full comment
Skittle's avatar

And you could write essays relating it to international relations at the time, which is always a bonus.

Expand full comment
sleipnir's avatar

Hello, I don't know whether this comment will be seen and/or answered, but I thought it was worth a shot. I am a long-time lurker, still not comprehending a lot of what is written in this space but fascinated by it. I'm also finishing my bachelor's in psychology and very interested in the best ways to help people with mental illnesses. Specifically, I'm interested in the debate of psychiatry vs. psychotherapy: which one can help people, in which way, which helps the most, and which helps certain kinds of mental illnesses and not others. I have yet to see a thorough breakdown of these questions, but this feels like the space where it could happen. The reason I'm posting is also that, beyond a possible career change I am contemplating, I had a discussion today with a professor who strongly favours the humanistic approach over the psychiatric one and believes most of the psychiatric one does not actually help people. Next week we are resuming the debate, but she has decades of experience and a lot of material at hand, while I don't. I was wondering if anybody could point me in the right direction to find some good material pro and contra psychiatry and/or psychotherapies.

Expand full comment
Eremolalos's avatar

Here is a good resource for evidence-based psychological treatments: https://div12.org/treatments/?_sfm_related_diagnosis=8132

Expand full comment
Eremolalos's avatar

Here's a consideration separate from the effectiveness of the 2 approaches. If you do psychopharmacology only, which is what many psychiatrists do, I think that works out to being a pretty boring work day. Once you've got someone on a drug that seems to be helping, you typically see them for a 15-20 min appt. once a month or so. You can't have a rich interaction in that time -- not even a rich biopsychiatry type discussion of subtle changes in sleep, mood, temper, steps person has taken or not taken to take advantage of the drug effect. And doing a bunch of sessions of that kind sounds deathly boring and unsatisfying. You can do psychotherapy too, of course, as a psychiatrist, but your training in doing it won't be as thorough and varied as what psychologists get.

You might want to consider being a neuropsychologist. A lot of what you do is give people intricate little evals. Much of the results are truly evidence-based, and you can go beyond the obvious of good at X, bad at Y part by taking into consideration the likely effect on life of a certain pattern of strengths and weaknesses.

Expand full comment
Sam's avatar
4dEdited

A good place to start is to ask an LLM which mental disorders and scenarios respond best to talk therapy and which respond best to medicine.

Here are some of my thoughts

Talk therapy can't adjust somebody's innate aesthetics through which they experience the world. But chemicals often can, to some extent. Extreme fluctuating moods - excitement and mania - can be tempered by chemicals. To a limited extent, people with marginally functional reward or attention systems can have improved experiences and performance with chemicals.

But many issues are also a product of a person-environment interaction. When it is feasible to change the environment, these can be temporarily resolved, and talk therapy can be useful to diagnose them. Also, people who are distraught about a specific issue and need counsel, but are otherwise able to feel pleasure and pain, can process and identify their thought patterns in talk therapy; sometimes this can lead to corrective actions.

For many individuals, their disorders are untreatable by either. Often chemicals can be given to assist or reduce distress with minor efficacy.

Expand full comment
C_B's avatar
4dEdited

- I think there's pretty strong evidence that both therapy and psychiatric intervention (i.e., drugs) are effective at treating mental illness. In many cases, either one is good, and both is better.

- I don't know of a good comprehensive review of which things tend to respond better to therapy vs. drugs. I would naively expect such differences to basically amount to "which things have we discovered good drugs for," rather than reflecting some important underlying trend among conditions, but that's just a guess and I could be totally wrong about that.

- When I hear someone who is broadly against psychiatry as an entire field/approach (as opposed to narrower criticisms like "this drug class is overrated" or "this particular disorder is hard to medicate" or "doctors' incentives lead to them over-prescribing certain kinds of medication"), I find it hard to take them seriously. I've never encountered a convincing evidence-based case that medication, broadly construed, doesn't work to treat mental illness. Instead, people advocating against psychiatry as a whole tend to be arguing from philosophical positions like "mental illness isn't actually bad and shouldn't be treated, it's just part of the diversity of human experience" or "drugs are unnatural and it's bad to use them to modify cognition, regardless of whether it works." I think these positions are dumb.

- Here are some relevant links to Scott's writing that, while they aren't directly about your exact question, might be relevant to your interests and/or give you a sense of how the rational-sphere thinks about these issues:

https://slatestarcodex.com/2016/03/31/book-review-my-brother-ron/

https://slatestarcodex.com/2019/11/20/book-review-all-therapy-books/

https://slatestarcodex.com/2016/02/24/two-attitudes-in-psychiatry/

https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/

https://www.astralcodexten.com/p/you-dont-want-a-purely-biological

https://lorienpsych.com/2020/10/30/ontology-of-psychiatric-conditions-taxometrics/

https://lorienpsych.com/2020/11/11/ontology-of-psychiatric-conditions-dynamic-systems/

https://lorienpsych.com/2021/02/10/ontology-of-psychiatric-conditions-tradeoffs-vs-failures/

Expand full comment
Taymon A. Beal's avatar

I don't think I understand the specific thing you're trying to figure out. It's hard to assess the value of highly general buzzword things as opposed to specific interventions. If this "debate" is an existing thing, do you have a link to a summary of it or something?

Expand full comment
Viliam's avatar

Given that we are in a rationalist-adjacent place, this may be a good time to remind ourselves of the "virtue of narrowness": https://www.readthesequences.com/The-Virtue-Of-Narrowness

For example, more can be said about a specific mental illness or a specific therapy than about "mental illnesses" or "therapies" in general.

Expand full comment
Kevin's avatar

I just finished “Rationality: From AI to Zombies.” I am curious how it has aged, but it is hard for me to judge since the writings of Eliezer 2006-2009 constitute the majority of my exposure to many areas. For instance:

* At time of writing, Eliezer had already disowned his pre-2002 opinions. Given R:A-Z completed 16 years ago, that’s long enough for him to have made multiple major revisions since. Has he, or has the Rationalist community in general, revised takes on major elements of the book?

* How has his explanation of quantum physics and many-worlds aged among rationalists and physicists?

* How has his explanation of consciousness and reductionism aged among rationalists and psychologists/neuroscientists/philosophers?

* This was written during the height of New Atheism and contains several New Atheist talking points. That movement lost popularity – in no small part due to crashing on the rocks of wokeism – though atheism is still slowly trending up. Has the median Rationalist’s take on religion, or the conception of social good, changed?

* In the final posts, Eliezer speculates about Rationalist dojos teaching the Way. ACX has meetups, orgs like 80,000 Hours exist, and Effective Altruism has grown into a movement, but Eliezer was making comparisons to the Catholic Church in terms of community-building, mobilization, and so on. Are there any dojos? Does anyone still dream this big?

The sequences cover a lot, and I’m interested in any and all updates since it was written, be it a point above or anything else that you in particular know and are passionate about.

Expand full comment
The Ancient Geek's avatar

There have definitely been shifts in what rationalists believe and are interested in, and they are not explicitly laid out anywhere. It's notable that rationalists who have just finished reading the Sequences end up out of sync with rationalists who have been in the community for years.

I can't speak for the whole physics community, but you might be interested in:

https://www.lesswrong.com/posts/Atu4teGvob5vKvEAF/decoherence-is-simple?commentId=kLcLTaPJwHHmsqLmu

Expand full comment
Taymon A. Beal's avatar

Note that lots of respectable physicists believe in many-worlds, it's not just iconoclasts like David Deutsch and Max Tegmark. E.g., Stephen Hawking, Murray Gell-Mann, Sean Carroll, Leonard Susskind, John Preskill, Lev Vaidman, Yasunori Nomura, Vlatko Vedral, and arguably Erwin Schrödinger. I think this easily qualifies as the kind of question where there's enough expert disagreement that it's reasonable for non-experts to have an opinion and it's inappropriate to claim that any particular view constitutes an expert consensus.

(A number of these people are cosmologists; I've heard it suggested that this is because they want the wave function to span the entire spatial extent of the universe.)

Expand full comment
The Ancient Geek's avatar

I did not say that no version of MWI could be true (Out of "Obviously True" and "Obviously False", I chose "Obviously, the Jury's still out".)

If the argument from simplicity doesn't go through, then it's an even playing field, unless you can come up with some other argument.

You can have the opinion that the experts don't know, and you don't either. There's nothing irrational about that. Rationality doesn't force you to take a punt on things you don't understand.

And you can have a view and admit that you might well be wrong.

The thing about the Yudkowsky-style having-a-view is that it involves claims of high certitude, and anyone disagreeing with him being badly mistaken.

Expand full comment
Viliam's avatar
4dEdited

My opinion:

The opinions on many-worlds interpretation of quantum physics in the rationalist community remain divided. Some people see it as "obviously yes", other people see it as "obviously no", and there is little chance of either of them convincing the other. (I happen to be in the "obviously yes" camp. But I am quite aware that things that seem obvious to me can seem completely unconvincing to others.)

I am not aware of any disagreement on reductionism. The topic of consciousness seems to invite confused debates, so I mostly ignore it. I believe that consciousness is a result of some processes in the brain, but I admit I have no idea how the brain works. I don't see why consciousness couldn't in principle be implemented in a computer, but I suspect that humans get so many sensory inputs that the only way to run a human consciousness in a computer would be to simulate the entire body, at least at the beginning.

Whether atheism is politically popular or not, the arguments in favor of reductionism do not change. Religion may be socially useful (Eliezer admits that, in the chapter on charity), and many people consider it a useful "noble lie"; but the religious beliefs are fundamentally confused, and rationality is about deconfusing yourself. That said, it seems to me that many rationalists have an immature attitude towards religion:

* First, they refuse to *study* it; but that is precisely what they should do, if religion is instrumentally useful (and it obviously is). Study how it works, keep the parts that work; don't throw out the baby with the bathwater. I imagine that many of them are traumatized by growing up in some crazy American Christian background, so they have aversion to everything related. That's understandable, but not particularly rational.

* Second, I find the recent popularity of Buddhism on Less Wrong really annoying. Again, I am not opposed to studying it, as an outsider who chooses the parts that work, but that is obviously *not* what many people are doing there. I imagine that it happens when you hang out with some friends who take Buddhism seriously, and then you don't want to say (or even think) anything that would offend them. Understandable, but not particularly rational. I tried to push back against this, but it seems like a lost fight: https://www.lesswrong.com/posts/XqpkCAHrtwBfLSSzk/how-to-be-skeptical-about-meditation-buddhism

In my part of the world, even organizing a meetup is a difficult task; not many people are interested in rationality. I wonder what it is like in the Bay Area, how many people are there for the rationality itself, and how many only for the vibes. I also expected something bigger to happen. My best guess is that the truly aspiring rationalists are quite few on this planet. I mean, before I found Less Wrong, I thought I was the only weird person who cared about certain things. After finding Less Wrong, I was like "wow, there is actually another person like me". After attending a rationality minicamp, I was like "wow, there are actually a dozen people like me". And then I guess I got too carried away; I extrapolated this trend a bit too much, and expected to see hundreds, or thousands of rationalists soon. But I suppose, unlike covid, rationality does not spread exponentially. At this moment, almost every smart person on the internet has already heard about the rationalist community. If they haven't joined already, it is because they are not interested. We may be experiencing peak rationality now, and that is quite disappointing.

Expand full comment
The Ancient Geek's avatar

>I am not aware of any disagreement on reductionism.

I am not aware of a single clear definition or claim.

Rationalists tend to confuse reductionism with the mere claim that things are made of parts, and with eliminativism. These are standard confusions that non-rationalists also make.

Rationalists also have an armchair argument for reductionism, which is novel but wrong:-

EY: "This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory. "

The higher levels could have been, though. The fact that we have high-level abstractions in our heads does not by itself mean that there is nothing corresponding to them in the territory. In fact, we can't fit quark-level maps into our heads, so all our concepts are "higher level". The argument proves too much: we do want to disbelieve in horse feathers, while we don't want to start disbelieving in airplane wings, even if "airplane wing" is "just" a high level concept.

There's certainly disagreement about whether reductionism is either or both of a priori and necessary.

>Some people see it as "obviously yes", other people see it as "obviously no"

Some as "obviously, the Jury's still out".

https://www.lesswrong.com/posts/Atu4teGvob5vKvEAF/decoherence-is-simple?commentId=kLcLTaPJwHHmsqLmu

Expand full comment
Viliam's avatar

I agree with almost all in that comment, and I suspect that the disagreement is mostly debating definitions - does "many worlds hypothesis" refer to exactly what Everett said in 1957 and not a single word more, or does it refer to a reasonable extrapolation of what Everett said. I think it is the latter, the author insists that it is the former.

Arguing against a 50 years old version of a theory is easy; you just pick anything that was discovered during the last 50 years and say "their theory does not explain this". That alone doesn't prove whether the theory can or cannot adapt to the new findings.

Expand full comment
The Ancient Geek's avatar

"Arguing against a 50 years old version of a theory is easy; you just pick anything that was discovered during the last 50 years and say "their theory does not explain this"

That's not what is going on: Everett's theory doesn't fail to predict some very specific observation; it fails to predict the appearance of a classical world in general.

"The original, Everettian, approach is based on coherence. (Yudkowsky says “Macroscopic decoherence, a.k.a. many-worlds, was first proposed in a 1957 paper by Hugh Everett III” … but the paper doesn’t mention decoherence) As such, it fails to predict classical observations—at all—it fails to predict the appearance of a broadly classical universe".

Expand full comment
The Ancient Geek's avatar

The author's point is not to pin down a single definitive meaning: the point is the lack of any MW theory that's both adequate and simple. Yes, the original theory has been "extrapolated" by adding additional assumptions and mechanisms, but every time you do that, you increase the complexity... and simplicity was the crucial selling point. (Extrapolation in the sense of finding hidden implications doesn't have that problem, but isn't what's going on.)

"The Yudkowsky-Deutsch claim is that there is a single MW theory, which explains everything that needed explaining, and is obviously simpler than its rivals. But coherence doesn't save appearances, and decoherence, while more workable, is not known to be simple. So there isn't an MWI that is both known to be simple, and to imply the existence of "worlds" in the intuitive sense."

Expand full comment
Taymon A. Beal's avatar

Who exactly is refusing to study religion? That's not my experience; lots of rationalists have detailed takes about religion and even post them on LessWrong. One was curated just last week. (Of course plenty of rationalists don't happen to be interested in it, but that's true of any topic.)

Re: "many rationalists aren't interested in rationality", I think there are hundreds or thousands of people in the broader community? Probably it's worth clarifying exactly what you were expecting that you haven't been getting; enough people have said things like "the community has lost its soul and attracted too many people who don't care about its real purpose which is X", for enough different values of X, that while I sympathize, I mostly don't think of this as a productive way to think about the problem. Our esteemed host got into an interesting argument about this on Tumblr years ago: https://slatestarscratchpad.tumblr.com/post/167167596656/bendini1-slatestarscratchpad-my-thoughts-on (his interlocutor posted another response but you have to log into Tumblr to see it).

Expand full comment
Viliam's avatar
3dEdited

My idea of studying religion is something like this: Describe their *practices* in detail, so that people who were never part of the religion would understand, while ignoring most of the beliefs and lingo.

For example, you could describe prayer as: "You imagine that you have a powerful invisible friend who cares about you, but is too busy and high status to help you directly and predictably, but sometimes does you a favor, often in a very indirect and plausibly deniable way. You imagine talking to this friend telepathically; you describe your problems and hopes, express gratitude for good things that happened to you (this friend was most likely somehow involved in that). Then you feel happy that someone listens to you. You can also do this in a group setting, where everyone talks to the same invisible friend, and people hear what others are saying."

And when you put it like this, you can speculate about possible benefits and side effects of such practice, and think about whether you could replicate the benefits. Like, maybe putting your thoughts in words helps you think more clearly, and talking about emotionally sensitive things helps you deal with the emotions. And when people hear about each other's emotionally sensitive things, it increases the group cohesion. And you could experiment by writing a diary that follows this pattern (focus on emotionally sensitive things, rather than make a factual log of the day). Now you need a group of volunteers to do this, someone experienced in the religion who will give them feedback on whether they are going in the right direction, and then somehow evaluate the results.

I don't see this most of the time. Debating Christianity is mostly taboo. Buddhism is treated with undeserved respect, people accept many of its premises uncritically, and would never describe it with the same level of impartial cynicism.

For example, an analogical description of meditation would be: "You spend a lot of time sitting and doing nothing, trying to relax and not even think about things. This gets easier with lots of practice. You believe that you are getting useful insights about the nature of consciousness; some people go even further and claim to perceive the structure of the universe and its quantum vibrations. These claimed insights are impossible to communicate in clear words. People can verify each other's success by whether their vibes match, which gives them social status. This practice leads to some degree of depersonalization, and that is considered a good thing, because it reduces suffering. The matching vibes are considered to be evidence that the insights are valid."

This is a factual description coming from someone who is immune to the applause lights of Buddhism. I simplified it a lot, and probably got some details wrong, but the same would be true in the description of the prayer. I would like to see a description that is more technically correct and more nuanced, while keeping the same detachment.

Expand full comment
Jeffrey Soreff's avatar

How useful do you find systematic _practice_ of rationality? When I looked at the set of biases that I'd need to watch for and try to correct for, I gave up. I contented myself with trying to notice when an argument is purely ad hominem or when I'm slipping into status quo bias, but that's about it.

Expand full comment
Viliam's avatar

I don't practice anything systematically, too adhd for that. But I think that there is a kind of power law, not all things are equally useful, it is better to have solid basics than to worry about rare things.

I consciously keep correcting for the planning fallacy + making bets + probability calibration. For example, there is some task that needs to be done, someone asks me whether it will be ready at a certain date (or maybe it is just something I am doing for myself), I remind myself of all kinds of things that could go wrong, then I try to assign a specific probability, and although I am not a superforecaster, I think my estimates are usually within 10% error. Not sure how useful this is, probably not much. It's just, life feels a bit less chaotic, less of "unexpected things happen", more of "things expected at a certain probability happen, approximately as often as expected".
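The calibration part of this practice is mechanical enough to sketch in code. A minimal, assumption-laden version: you log each prediction as a (stated probability, outcome) pair, then bucket them and compare stated confidence with observed frequency. The function name and the sample log here are hypothetical, purely for illustration.

```python
from collections import defaultdict

def calibration_report(records):
    """records: list of (stated_probability, outcome_bool) pairs.
    Groups predictions into 10%-wide buckets and reports, per bucket,
    the observed success frequency and the number of predictions."""
    buckets = defaultdict(list)
    for p, outcome in records:
        buckets[round(p, 1)].append(outcome)
    report = {}
    for p, outcomes in sorted(buckets.items()):
        observed = sum(outcomes) / len(outcomes)
        report[p] = (observed, len(outcomes))
    return report

# Hypothetical log of "will the task be done by the date?" predictions.
log = [(0.9, True), (0.9, True), (0.9, False), (0.6, True), (0.6, False)]
for p, (freq, n) in calibration_report(log).items():
    print(f"stated {p:.0%}: observed {freq:.0%} over {n} predictions")
```

If the observed frequency in each bucket lands within about 10% of the stated probability, that matches the "within 10% error" standard described above.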

Also, reading the Sequences practically ruined political debates for me. Now that I read them, I just see a list of fallacies; they are not even sophisticated ones, it's just that most people don't even try, and even those who do stop doing it when the topic becomes politically charged. I don't even practice this; it's like once you learn grammar properly, you start seeing grammar mistakes everywhere, and noticing them becomes a practice itself. I notice how even among smart people (e.g. in Hacker News discussions), factual contributions are usually ignored, but mocking the outgroup is upvoted, when ideally it should be the other way round. People make absurdly strong statements, then move the goalpost when challenged. They try to win the debate by redefining words, and rarely someone calls them out. It feels like being surrounded by retards. And it's not like I don't have political opinions myself, it's just that they come with a lot of "as far as I know", "usually", admitting the tradeoffs, etc. And an occasional "I don't know".

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>I consciously keep using the planning fallacy + making bets + probability calibration.

That does sound like a good addition. I haven't attempted to calibrate my probability guesses. I tend to just fall back to "can't be ruled out" for a pretty wide range of possibilities.

>Also, reading the Sequences practically ruined political debates for me. ... I notice how even among smart people (e.g. in Hacker News discussions), factual contributions are usually ignored, but mocking the outgroup is upvoted, when ideally it should be the other way round.

One factor that I find really frustrating is that as soon as the possibility that one or multiple factions are lying about the facts of some event becomes a live possibility, the chances of being able to conclude _anything_ start to drop really fast.

Expand full comment
Kevin's avatar

Thanks for both your comments. Like Nazar I wouldn't be as confident as you that all the potentially interested people already know about it, but it's probably not orders of magnitude.

Do you have any idea how large the Rationality community actually is? Depends on the definition of course, you're free to define your own.

Expand full comment
Viliam's avatar

> Do you have any idea how large the Rationality community actually is?

Someone in the Bay Area would be more likely to make a good guess. My first idea was to look at the number of ACX meetups, and estimate that the number of rationalists is maybe 10 or 20 times that. But there is probably a power-law distribution of the rationality groups, so the number of rationalists in the Bay Area could change the result, if it is too big, and I don't know how big it is.

My current guess would be about 1000, times or divided by 3. That seems like a nice number, the problem is when you divide it by the number of countries.

> but it's probably not orders of magnitude.

Exactly. In my experience, when I told the smartest people I know about the rationalist community, they were not interested. Which is how I concluded that almost everyone who would want to be there was already there. But maybe I am in a wrong bubble.

Expand full comment
Kevin's avatar

Wow, I was expecting a higher number. You set a high participation bar though; per the latest subscriber drive post, ACX alone has 125k total subscribers and 5k paying subscribers.

Expand full comment
Viliam's avatar

Hey, that was just an estimate, and I specifically said that someone familiar with the Bay Area might give a different answer.

I agree that my bar is high. However, I think that most of the ACX subscribers don't identify as rationalists, so it is perfectly fair to exclude them.

Expand full comment
Nazar Androshchuk's avatar

Great comment. I just wanted to say: as a newcomer to the Rationalist community, these communities do not advertise themselves very well. Let’s see… I heard about EA in person from a proponent; several years later, I started reading The Last Psychiatrist after finding a link on Reddit; after that, I started reading Substack; from there, I first got the sense of the rationality community (mostly from annoying, unpopular AI-sloppers on Substack); eventually, I got curious enough to start researching Rationality. The social media presence of Rationalists is low — for example, the ACX subreddit looks superficially like an unmoderated subreddit about some podcast. Googling rationality gets a lot of irrelevant results; even if ACX turned up, no newbie would click on it because it isn’t obviously about rationality. The LW forums aren’t SEO-optimised either.

Overall, that’s probably a good thing. When a community gets too large, something of the opposite of Yudkowsky’s “evaporative cooling” concept happens, so that the whole community becomes low-quality — forums and wikis, such as LW, are especially vulnerable when the userbase becomes unmanageable (I’m thinking of the SCP community).

Expand full comment
Viliam's avatar
4dEdited

There is a preface written by Eliezer in 2015, where he mentions the following mistakes he regrets: https://www.readthesequences.com/#preface

1) He didn't realize that *practicing* rationality is much harder than knowing the theory, so he didn't put enough emphasis on practice in the book.

2) He chose impressive abstract problems [such as quantum physics] as examples, but now he thinks it would be better to talk about solving problems in *everyday life*.

3) Similarly, too much focus on rational beliefs, not enough focus on rational *action*.

4) A bit disorganized content on the web page, hopefully arranged better in the book.

5) Too much expressing contempt towards stupid ideas. Yes we should reject wrong ideas, but there is a certain tradeoff where too much mockery is bad [Eliezer does not specify how bad, my guess is that it makes it more costly to admit mistakes, and can attract the wrong kind of people]. Luckily, Scott Alexander provided an antidote to this attitude.

Expand full comment
Taymon A. Beal's avatar

Points 1 through 3 would have been incremental steps towards making LessWrong more like a dojo and less like an intellectual community of nerds, but I don't really think they'd have sufficed; if you want to be a dojo I think you have to do a lot of things differently. (And I don't wish he had; it's rare for anything to succeed as much as LW did, and so it's likely that the dojo wouldn't have.)

On point 4, I think Eliezer is just wrong. The densely inter-hyperlinked nature of the original Sequences posts makes them into a trap for the unwary reader who was planning on doing anything else with their day (compare https://xkcd.com/609/), and I think this was instrumental in getting people to actually read them. Rationality: From AI to Zombies is much more of a slog. Furthermore, the blog posts are good blog posts, but the book is not a good book, because each chapter is basically a blog post and that's not a good flow for a book; fixing this would have required substantially rewriting the content to make it more genuinely longform.

Point 5 I read as basically the same point Scott made in https://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/, which seems slightly different from your interpretation, but I could be wrong. I'm not sure how much difference it made, in any case, especially since Eliezer did not exactly become less habitually contemptuous towards stupid ideas after 2015.

Expand full comment
Nazar Androshchuk's avatar

I read Rationality: A to Z as an audiobook, and that was an enjoyable experience. The information flow is consistent, and it made the 50+ hours palatable.

Expand full comment
Kevin's avatar

Same! Though I wonder how much it altered the impact it had on me. Audio doesn't make it easy to stop and think, much less focus on math or diagrams.

Expand full comment
Nazar Androshchuk's avatar

It seems to be a literary trend where nonfiction authors come back to the main ideas time and time again, so the redundancy makes it very easy. I tried listening to Gravity’s Rainbow as an audiobook, and I found that nigh impossible.

Expand full comment
Taymon A. Beal's avatar

There have been a lot of changes in what's fashionable to talk about, but not that many in what people actually believe. Fewer people argue about interpretations of quantum mechanics, or about consciousness, or about atheism, in our corner of the internet these days, but the basic disagreements are still there, over basically the same questions as in 2009. Philosophy doesn't move that fast (and interpretations of quantum mechanics are philosophy, not physics; no new experimental evidence has come in, except in the sense of experiments that could have falsified quantum theory but didn't).

The one topic where people's substantive views have evolved is community building, because that project was in its infancy in 2009 and we've since spent sixteen years learning what it looks like in practice. Eliezer in particular has lost faith in the project, because he thought that if he taught people rationality they'd come to agree with his specific technical views on AI safety, and that hypothesis has been falsified. (AI safety has of course gained prominence in the discourse, but mostly not in the sense of widespread agreement with those specific views.) As for whether there's anything else to be salvaged, it depends on the specifics of what you're trying to do:

* As noted above, AI safety is now a big field, bigger than the place it started from, with its own exciting new developments and pathologies.

* EA has been pretty successful as a project for building rationalist-flavored world-improvement communities. (People complain about it being diluted by MOPs or captured by society's power centers, but that was always going to happen if the movement got anywhere.)

* The specific project of "learn to avoid cognitive biases"/"rationality as a martial art" has been on a downswing lately, especially with CFAR having curtailed its public-facing activity, but there are still people interested in it. I think this waning of attention was partly because a lot of people weren't all that impressed with CFAR's success at developing a teachable discipline of rationality; they kept talking about publishing their work and then not doing it. (Back when it was more of a topic of discussion, there was periodic discourse from people who'd been under the impression that this was *the entire point* of the rationality community, and felt betrayed that only a subset of rationalists were actually interested enough to devote significant time and energy to it. I feel for them, but don't in fact wish that none of the other stuff had existed, because it's valuable in its own right and probably wouldn't have sprung up counterfactually without the Sequences.)

* There are also lots of little sub-subcultures and social groups where people hang out and have fulfilling social interactions. The one I personally have the most experience with is glowfic; I think there are others I don't know, consisting of people who are into startups or circling or drugs or whatever else. The role of the Sequences was to bring together a lot of people with a lot of different intellectual interests that shared a certain common ethos; once that was finished, having one big community didn't make all that much sense, so instead we became a bunch of sub-communities with some cross-pollination.

There are also some specific claims in the Sequences that have (at least arguably) failed to survive fact checks and/or the replication crisis, but these are mostly not big general theses (though skeptics sometimes argue that these failures should lead us to distrust the entire source, including the big general theses).

Have I missed anything you were curious about?

Expand full comment
Deiseach's avatar

"Eliezer in particular has lost faith in the project, because he thought that if he taught people rationality they'd come to agree with his specific technical views on AI safety, and that hypothesis has been falsified."

Somehow that reminds me of Martin Luther: "just print the Bible in the vernacular, let the people read it, and they will interpret it their own way, no priestly mediators necessary!"

"Wait, no, not like *that*!"

(Seemingly Luther did not, in fact, say "Every man has a pope in his belly" about the multitude of small new religious sects and sectaries springing up when everyone could interpret the Bible according to their own conscience and, believe it or not, not coming to the same conclusions as Herr Doktor Luther, but it's too good a quote to give up).

Expand full comment
Kevin's avatar

Thanks for the detailed response. If others have takes I want to hear them, but this is exactly the kind of response I hoped for.

Is there anything like a directory or explainer for the subcultures and split off groups?

Expand full comment
Taymon A. Beal's avatar

I don't think so. There was https://slatestarcodex.com/blog_images/ramap.html but it's eleven years out of date by now.

Expand full comment
Igon Value's avatar

"and interpretations of quantum mechanics are philosophy, not physics; no new experimental evidence has come in, [...]"

A side note, just because:

Take a specific experiment: Alain Aspect's seminal work from 1982 (for which he got the Nobel Prize two years ago). The experiment shows convincingly that if the Copenhagen Interpretation is used, information would have to travel 10 to 50 times faster than light from cause (observation at point P) to consequence (collapse somewhere else).

Does that refute the Copenhagen Interpretation? It does to me, but proponents of the interpretation just shrug and say "yeah, QM is non-local, so what?" or "yeah, true, but you can't use that trick to send information yourself so it doesn't have consequences."

Well, obviously you can't send information faster than light, we know that from other theories, but then why is it necessary for the Copenhagen Interpretation to work?

The MWI doesn't suffer from that flaw. When you measure part of an entangled system you learn (get information) from that system, namely which branch of the entanglement ("world") you (that particular component of the superposition that up to that moment made up "you") live in. No information travels from the measured object to any far away object.

So. It is not that the Interpretation of QM is not physics (it is, at least with my definition of "physics", which has to do with understanding reality). Instead, there are deep psychological hurdles to accepting an interpretation that challenges our sense of identity, even if it is the interpretation most compatible with known physics, the one that doesn't deny objective reality, that is deterministic, and local. (It is also quite possible that historic contingencies are a better explanation for why CI is preferred; I can talk about that too.)

Newtonian Mechanics has 3 laws:

- Existence of an inertial frame of reference

- F=ma

- action = reaction

Now, imagine that I have a new interpretation of Newtonian Mechanics:

- Existence of an inertial frame of reference

- F=ma

- action = reaction

- all forces above are invisible angels that move instantaneously from place to place and pull and push the amount required by these axioms.

Nobody would take my new interpretation seriously. It has more axioms than the original, and it violates known physics (e.g. angels travel faster than light).

But my answers to criticisms would be "you can't talk to the angels so you can't use them to send information anyway", and "I make the same predictions as you, so both interpretations are equivalent", or "well, it's just philosophy anyway."

This is exactly the situation we have with the Copenhagen Interpretation. MWI has fewer axioms, is compatible with realism, determinism, and locality, but we still insist that angels must be flying faster than light to carry information from place to place.

Expand full comment
The Ancient Geek's avatar

" The experiment shows convincingly that if the Copenhagen Interpretation is used, information would have to travel 10 to 50 times faster than light from cause (observation at point P) to consequence (collapse somewhere else). "

No, it just refutes local hidden variable theories. Nonlocal correlations aren't information transfer.

Expand full comment
Igon Value's avatar

Well, certainly a proponent of CI would say that these correlations aren't information transfer! This illustrates my point, in fact.

But to me, it is hard to accept that when we do action X at point P and have effect Y at point Q (neatly shown by the correlations you talk about), that there is no information transfer. (I fully agree that the case where the effect of X at P and Y are both caused by Z is ruled out by the experiment.)

To the proponents of CI, there would be information transfer in every other case in physics, but this one is just a special case. There is an effect, a cause, a space and time interval (spacelike, hence the problem), but somehow, no information transfer.

How is it justified? By calling the theory (the interpretation, really) "non-local".

I say we just call this spooky action at a distance "invisible angels".

Don't misinterpret me. I *know* there is no information transfer. I am just pointing out that people ignore and even defend blatant inconsistencies out of convenience.

Expand full comment
Wanda Tinasky's avatar

As I'm sure you're aware, the Bell Inequalities exclude local realism, not simply locality. No physicist thinks that rejecting MWI means abandoning Lorentz invariance.

Expand full comment
Igon Value's avatar

MWI is normally considered realist, but obviously only if you consider the entire 'multiverse', not a separate branch on its own.

The realism that is denied by the violation of Bell's inequalities is the realism as defined in a single universe (Bell didn't consider MWI). In such a universe, Bell's realism (i.e. measurement outcomes are determined by pre-existing values of physical properties) reduces to Einstein's Hidden Variables, now fully ruled out empirically. In other words, the violation of Bell's inequalities proved Hidden Variables false, but not at all MWI.
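For concreteness, the violation can be checked numerically. Here is a minimal sketch, assuming the standard singlet-state correlation E(a,b) = -cos(a-b) and the usual CHSH angle choices (a toy calculation, not a simulation of the actual experiment):

```python
import math

def E(a, b):
    # Quantum prediction for the singlet-state spin correlation
    # at analyzer angles a and b (in radians)
    return -math.cos(a - b)

# Standard CHSH angle choices: a = 0, a' = pi/2, b = pi/4, b' = -pi/4
a, a_p, b, b_p = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S = E(a, b) + E(a, b_p) + E(a_p, b) - E(a_p, b_p)

# Any local hidden variable theory obeys |S| <= 2; quantum mechanics
# predicts |S| = 2*sqrt(2), which is what Aspect-style experiments observe.
print(round(abs(S), 3))  # 2.828
```

The point of the sketch is just that the quantum prediction, confirmed by experiment, sits strictly outside the bound that any local hidden variable account can reach.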

I agree that no physicist thinks that rejecting MWI means abandoning Lorentz invariance. In fact no physicist wants to abandon Lorentz invariance, regardless of interpretation. It's just that the proponents of the Copenhagen Interpretation rationalize the incompatibility of the collapse postulate with Lorentz invariance ("you can't use it to send information", "it only changes probabilities, no transfer of energy", and so on and so forth).

That really was the point of my message: people will often believe a theory even when there are blatant inconsistencies, often for convenience, or lack of understanding, or even ideology. It is not because of empirical results, or lack thereof, that people still believe in the Copenhagen Interpretation (although supposedly less and less if e.g. Tegmark is to be believed).

Expand full comment
The Ancient Geek's avatar

It's easy the believe the minimal Copenhagen interpretation, because it doesn't assert much. But people keep confusing it with other interpretations , such as consciousness causes collapse, and objective reduction.

Expand full comment
Igon Value's avatar

Note that all I assumed about the Copenhagen Interpretation is the Collapse Postulate. That's the minimum. The other stuff (complementarity principle, correspondence principle, and so on) is just too confused to talk about productively.

In particular, you are the only one to mention consciousness, here and in your response to my old message on the other thread.

Expand full comment
Wanda Tinasky's avatar

Fair enough. My view is that Copenhagen doesn't really count as an interpretation. It's just a simple heuristic designed to allow one to ignore irrelevant complexity. I doubt anyone beyond the undergraduate level seriously worries that it implies faster-than-light signaling. Decoherence (as elucidated by folks like Zeh and Zurek) I think is considered dispositive w/r/t collapse postulates.

What's your view on the origin of probability in MWI? If we're just counting branches then where do factors like sqrt(2) come from? Also how do you imagine MWI avoids the Bekenstein bound? If branches never disappear then that implies that the multiverse has infinite floating-point precision. How does that entropy not create black holes everywhere?

Expand full comment
Igon Value's avatar

I pretty much agree with the first paragraph.

As to the second paragraph, I don't have insights beyond the standard answers that you know already. Zurek’s "envariance", for example, preserves the assumptions of MWI (unitary dynamics, locality, yadda yadda).

I feel that the Bekenstein bound is less of a problem. Information in MWI is conserved, but *globally*, in the entire wavefunction. Only local observers experience an increase in entropy. The different components of a superposition do not add to the stress-energy tensor. (In fact, it seems likely to me that MWI solves some of the classic difficulties regarding entropy and black holes.)

The distinction between global and local is the key to most misunderstandings. Also, the fact that the multiverse isn't growing. There is no duplication of energy (or mass, stress, momentum, etc.). All the components of the universal wavefunction already exist in a superposition. As the universe evolves, there is a recombination of those components in entangled states, but no information is lost or created.

The floating-point precision issue exists in all interpretations. It is true that the MWI implies a much bigger universe than previously thought (but, again, not a growing one!). We can try to quantify how much bigger: a discrete indexing of parallel universes isn't enough, as you imply; we need to add a few continuous (real) dimensions. In the end, I'm OK with that.

Expand full comment
Taymon A. Beal's avatar

My point is that everyone, including people favoring every interpretation, correctly predicted the results of Aspect's experiment, because the only thing he tested was whether quantum mechanics would continue to observably behave in the exact same way that it has been understood to behave since 1925. And surprise, it did. (Not to suggest that it was worthless; the theory hadn't been tested in a setting where we'd notice a speed-of-light delay, so it was good to get confirmation that this doesn't change things. But it wasn't a *surprise*.)

The arguments are all about what counts as most parsimonious, what conclusions about the nature of reality are valid to draw from what experimental observations, etc. This work is legitimately intellectually interesting and valuable (well, at least some of it is), and some of the arguments that people make are better than others; I don't think it's necessarily a mistake to have a preferred interpretation. But it's not science, it's philosophy of science.

Expand full comment
Igon Value's avatar

Aspect empirically proved non-locality of the Copenhagen Interpretation, that's all. At the time his results were framed as a refutation of Einstein's local hidden variable theory, and it absolutely *is* a refutation, but they also emphasized something "wrong" with the Copenhagen Interpretation, at least if you care about locality.

As to what counts as parsimonious, it seems self-evident that the number of axioms is the first step. (But it is not in this case because, I think, people confuse "parsimonious" with "intuitive".)

What would convince a proponent of the Copenhagen Interpretation? (Or of Newtonian Mechanics with invisible angels?) Imagine that an experiment shows that the interpretation breaks another principle, let's say conservation of energy; the proponents would just say "so what, you can't use that yourself to create a perpetual motion machine, so no paradox here." Of course, you can't, because that's just not physical. But then why does your theory require that energy not be conserved?

So yes, I understood your point well. I agreed with your first post that people have not changed their mind; I only made the side note that they wouldn't have changed their mind even with new experimental data, because the reasons people believe what they believe are not entirely determined by experimental results.

Expand full comment
Kevin's avatar

Bear in mind I really don't know physics, but has anyone proposed an experiment whose predicted outcome differs between the MWI and CI? Or is there no such known experiment? Or no such _possible_ experiment?

Expand full comment
JerL's avatar

It's not exactly the outcome of an experiment, but in classic "collapse" Copenhagen, the collapse of a wavefunction is irreversible, but in MWI there is nothing *fundamentally* irreversible about it: the irreversibility is a consequence of entropy and other such things.

This means that MWI has no problem in theory with the idea of taking a human being, putting them in a superposition, and then undoing that superposition--it's "merely" insanely difficult to the point of impossibility. But, since the MWI view is that a person in superposition just has a distinct subjective experience for each branch of the superposition, this means that you could in theory take someone, branch their subjective experience, and re-merge it. It's not exactly clear what MWI should predict the person undergoing this should report experiencing, and as with Taymon Beal's example of quantum suicide the "result" might be something that is only subjectively accessible to the person undergoing this process, so I don't say that MWI predicts something distinct from Copenhagen... But MWI allows this possibility that Copenhagen rules out.

Expand full comment
The Ancient Geek's avatar

Testing spontaneous wavefunction collapse with quantum electromechanics

https://arxiv.org/abs/2206.14531

Expand full comment
Igon Value's avatar

What I've been trying to explain is that there can be no such experiment, because when there is an inconsistency in one interpretation, we just give the "special case" a name and chalk it up to the weirdness of QM. I gave an example that I find illustrative.

Expand full comment
Taymon A. Beal's avatar

To the best of my knowledge, all the major interpretations predict exactly the same *externally observable and verifiable* experimental outcomes. I think there might be some people who claim otherwise, but no such claims have achieved scientific consensus.

There are also some proposed thought experiments like quantum suicide wherein a specific conscious observer might subjectively experience different things depending on which interpretation is true, but they would not be able to impart this experience to others. Also, what the observer experiences might depend not only on the correct interpretation of quantum mechanics but also the correct answer to the hard problem of consciousness, the only science-adjacent philosophy question that's even more cursed.

Expand full comment
skybrian's avatar

I’m not that familiar with many-worlds. Why doesn’t branching the entire universe require faster-than-light information? Since it’s just an interpretation and not new physics, it seems like it would still be somehow non-local, but handled differently?

Expand full comment
Viliam's avatar

> Why doesn’t branching the entire universe require faster-than-light information?

Because it doesn't happen at the same time everywhere. The branching starts locally (when two particles either interact or they don't) and from there it propagates outwards at the speed of light. Except that there are zillion bubbles like this spreading simultaneously.

In the famous cat-in-the-box experiment, as long as the box is hypothetically isolated from the rest of the universe, there are two universes in the box but only one universe outside. When you open the box, the branching continues to propagate outside the box.

As a visual help, instead of imagining two branches as two parallel sheets of paper moving further away from each other, imagine two stickers being peeled off each other. At the place you are pulling them, they are already separated, but sufficiently far away from that place they are still together.

But this is all just an imprecise metaphor. It does not explain e.g. how the universes *interfere* with each other, and that is the key part of the quantum physics. One universe splitting into two is a relatively boring thing, and easier to imagine. Two universes canceling each other, because they have the same contents but an opposite phase, that's the fun part that is difficult to imagine by an analogy, because it has little analogy in the classical world.
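The "canceling" part can at least be made concrete with toy amplitude bookkeeping (illustrative numbers only, not a physical model):

```python
# Toy amplitude bookkeeping (illustrative only): two indistinguishable
# histories with the same magnitude reinforce when their phases agree
# and cancel when their phases are opposite.

def probability(amplitudes):
    # Born rule: probability is the squared magnitude of the summed amplitude
    return abs(sum(amplitudes)) ** 2

same_phase = [0.5, 0.5]        # constructive interference
opposite_phase = [0.5, -0.5]   # destructive interference (phase flip of pi)

print(probability(same_phase))      # 1.0
print(probability(opposite_phase))  # 0.0
```

Summing amplitudes before squaring, rather than summing probabilities, is exactly the step that has no classical analogy.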

Expand full comment
Igon Value's avatar

I think a common misunderstanding of MWI is that a brand new universe is created every time someone observes something. Not so. The superposition already existed, but the components weren't distinguished.

Maybe this comment I made a few weeks ago will help:

https://www.astralcodexten.com/p/open-thread-393/comment/142906889

I actually disagree that MWI and CI are the same physics. They are different explanations with a different set of axioms. And yes MWI is local, as I explain in the link above, I think.

(Maybe MWI is wrong; there are *other* reasons to be skeptical; but the usual arguments are unsound.)

Expand full comment
JerL's avatar

I've long thought that all the weirdness and ambiguity that Copenhagen loads into its concept of measurement/collapse, is equally present in MWI just in the notion of world/branching.

In particular, most of the explanatory juice you get from MWI, I've always felt was *really* from decoherence--but you can construct Copenhagen-ish theories around decoherence too, I think; and I think decoherence as an explanation still relies on both the assumption that entanglement entropy starts low, and that you have enough concept of a classical world and its concept of entropy that you can refer to the classical picture when you explain why entanglement entropy should be going up.

That last part in particular has always felt pretty similar to me to Copenhagen's move to take the classical world as a starting point.

I don't know what point I'm trying to make here, maybe I should just ask: do you disagree with anything I've said above? If so, where? If not, in light of that, why do you think MWI is clearly better?

Expand full comment
The Ancient Geek's avatar

>I've long thought that all the weirdness and ambiguity that Copenhagen loads into its concept of measurement/collapse, is equally present in MWI just in the notion of world/branching.

Yes, it's hard to answer questions like "how many worlds are there?" and "when do they branch?"

Expand full comment
Igon Value's avatar

Even though my first message wasn't strictly a defense of MWI (rather a critique of CI), it is true that I much prefer MWI to CI.

MWI seems to provide answers to the basic questions raised by QM: why don't we see superpositions? Why is there a classical world? What is a measurement? etc.

If we accept that we are in superpositions ourselves, there is no mystery. We get entangled with the system that we wish to measure (how could it be otherwise, how else would we interact with it?), and once components of the superpositions are separated, each component sees only one component of the superposition comprising the system. Very clean, very neat, no major assumption (only that interacting means getting entangled, which seems very benign). So the reason the world appears classical is that our subjective experience of it is classical. But the "whole universe" is a lot bigger. It has an extra dimension (or more), like another dimension of time. We are theoretically aware of the future but we don't really experience it. Likewise, if you believe in MWI, you are aware of other worlds but don't experience them.

I am still very tempted to add something to this. Maybe the classical world that we experience is in fact the sum of many components of the superpositions that interfere constructively. All the "worlds" that are almost the same add up to make our world. The ones that have zero-measure in that abstract space don't add up to anything at all. All the worlds that are very different from each other interfere destructively and we don't see them either.

But it doesn't work. I could decide to marry Gwen if I measure the spin of an electron as |up> or marry Jane if |down>. There are two worlds in this scenario and they have both the same existential weight. I am forced to accept that there are at least two worlds, and therefore an infinite number of them.

I know this doesn't answer your questions at all. I'm just saying that I have questions too, lol.

Expand full comment
complexmeme's avatar

How does the CBT-i app stuff differ from whatever program is used by Sleepio? I found that very useful back in 2020, and they never required a prescription, though I don't know what it cost (it was paid for by my work and offered as a health benefit).

Expand full comment
Mutton Dressed As Mutton's avatar

I'm not familiar with Sleepio, but I did a course of CBTi almost 20 years ago. It worked, and it's insane that anyone is charging $300/mo for it, because it is an incredibly straightforward protocol that can be summarized in a fairly short pdf. Maybe things have changed significantly in the past 20 years, and I guess I can imagine a fancy version of it that involves an automated sleep tracker, but even that wouldn't get you close to $300/mo. It's basically a combo of sleep hygiene stuff and standard CBT stuff. In a very small nutshell:

1. Track how long you are actually asleep each night.

2. Give yourself that amount of time + 30 minutes in bed every night, with a rigid bedtime/rise-time schedule. This is brutal but effective.

3. Do some CBT on counterproductive thoughts like "I'm never going to fall asleep" or "I'm going to be a wreck tomorrow"

4. Gradually extend your sleep window as the percentage of time you spend asleep while in bed grows.

There's more to it than just that, but that's the crux of it, if I'm remembering correctly. The only way you could possibly get to $300/mo is if a) it's a prescription program that insurance is paying for and/or b) there is an actual doctor at the other end of it reviewing your progress and providing feedback.
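The adjustment loop in steps 1, 2 and 4 really is simple enough to sketch in a few lines. The 85%/90% thresholds, the 15-minute step, and the 5-hour floor below are illustrative placeholders, not clinical guidance:

```python
# Sketch of the sleep-restriction adjustment step (steps 1, 2 and 4 above).
# Thresholds, step size, and minimum window are illustrative, not clinical.

def adjust_window(time_in_bed_min, time_asleep_min, min_window_min=300):
    efficiency = time_asleep_min / time_in_bed_min
    if efficiency >= 0.90:
        # Asleep for most of the window: earn 15 more minutes in bed
        time_in_bed_min += 15
    elif efficiency < 0.85:
        # Too much wakeful time in bed: tighten the window
        time_in_bed_min -= 15
    return max(time_in_bed_min, min_window_min), efficiency

window, eff = adjust_window(420, 390)  # 7h in bed, 6.5h actually asleep
print(window)         # 435
print(round(eff, 2))  # 0.93
```

Run nightly against the sleep log from step 1, this is essentially the whole feedback loop; everything else is the CBT layer on top.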

Expand full comment
Taymon A. Beal's avatar

The point that it's simple does not directly have anything to do with the price; a thing is worth what people will pay for it, and producers charge what the market will bear. If I needed this thing, I would easily pay $300/month for it if I couldn't have it for cheaper.

The right framing is, since it's so straightforward to produce, why hasn't the price been competed down? And it sounds like the answer to that is largely just that it's niche, without that many providers and without widespread enough demand to entice people who are currently doing something else into the market.

Expand full comment
Mutton Dressed As Mutton's avatar

I understand how pricing works. I create software products for a living, have an MBA, etc. This is not worth $300/month. There isn't enough value created, there are credible alternatives, the marginal cost of delivery is too low, there is no defensible proprietary advantage that I am aware of, etc.

I don't know what is going on here, but my guess is that it has something to do with regulatory approval/arbitrage.

Expand full comment
complexmeme's avatar

It sounds extremely similar.

Expand full comment
Dr. Ken Springer's avatar

What may end up outperforming CBTi at treating insomnia is targeted adjustments to the gut microbiome, at least according to this new Mendelian randomization study:

https://gpsych.bmj.com/content/gpsych/38/4/e101855.full.pdf

I'm posting about this study on my blog Thursday evening. One takeaway is that to the extent that it's possible to change the levels of specific gut bacteria without creating unhealthy imbalances, these changes might help some (if not all) chronic insomniacs, roughly analogous to the way that antidepressants can help some (if not all) people with clinical depression.

Expand full comment
Shaked Koplewitz's avatar

In a different response to Scott's embryo selection article, I wrote a weirdly-long post on why we use exponential time discounting in general and why he's wrong not to use it when computing the cost/benefits of embryo selection

https://open.substack.com/pub/shakeddown/p/exponential-discounts-the-future

(This makes the cost/benefit ratio come out bad when used correctly, which I find weird - the benefits seem like they should be no-brainers. Maybe the IQ benefits are just severely underpriced in Scott's review, so much so that even after the discount they come out ahead? The health benefits might plausibly be as small as described, though, especially if they're mostly just maybe-pushing people from just over to just under the clinical diagnosis line.)

Expand full comment
Notmy Realname's avatar

The recommended CBTi app seems to revolve around an AI sheep. I assume that back in 2021, CBTi did not revolve around LLM chatbots, as they barely existed. Is this really the same thing?

Expand full comment
Taymon A. Beal's avatar

It's not that uncommon these days for app developers to inject LLMs into things that were previously done without them, believing (rightly or wrongly) that this improves the user experience.

Expand full comment
Konstantin's avatar

It's not even that, app developers often shoehorn the current trendy technology into their product to get funding. A while ago everything was on the blockchain, there was a time when VR was the hot new thing, and then there was the NFT fad. When the people writing the checks want your app to include X, it is hard to say no.

Expand full comment
Taymon A. Beal's avatar

Is this thing venture-funded? I didn't see any particular indication of it.

Expand full comment
Mark Roulo's avatar

Also, inserting an unnecessary LLM (or "AI") is often seen as a marketing plus.

Much like doing stuff on the blockchain when a small relational database would work fine was a "thing" ~5 years ago.

Expand full comment
Ariel's avatar

There are many CBTi apps available now without a prescription and for a similar $300 price range, although their price is less clear when covered by insurance. For example, Sleep Reset, Stellar Sleep, and Sleepio all offer something similar, and they also claim to be backed by Stanford, Harvard or science. There's also the free CBT-i Coach from the VA.

The issue I've found with these (and likely CBTi and sleep restriction in general) is they don't seem to work as well for early morning insomnia. The apps themselves may also not be as well-tailored to your specific issue as a sleep therapist, even if many of them still use human sleep coaches.

Expand full comment
Bob's avatar

CBTi - there appears to be a free version

https://freecbti.com/

Expand full comment
dualmindblade's avatar

Something I think has slipped under a lot of people's radar is the impending "temporary" emergency scheduling of the newly popular legal opiate 7-OH. It seems this is inevitable, and I could see it pushing the opiate crisis into overdrive. This is a compound found in small amounts in kratom, but it's obscenely powerful in pure form and now available at most gas stations where I live. A lot of addicts have switched from counterfeit heroin/oxy/whatever, that is to say fentanyl, to this compound. The advantage being it's rather euphoric, cheaper, easy to come by, strong enough to appease all but the hardest users' appetites, not cut with anything else, and despite trying really really hard no one has yet managed to die from an overdose. To top it off it's very addictive both psychologically and physically and can cause physical withdrawals after just a week or two of use, so in addition to these old-time addicts there are a bunch of newly minted ones. When they get cut off I fear many will turn to fentanyl and we'll see a bunch of overdoses from people trying to match the high or avoid withdrawals.

Expand full comment
Gunflint's avatar

Are you comfortable saying which state you live in? I don’t see anything like this in mine. In the shops where I occasionally pick up THC gummies I see kratom for sale, but not this potent extract.

Expand full comment
dualmindblade's avatar

I'm in Texas. Not an extract, it's a semi synthetic usually sold in pill form.

Expand full comment
Sol Hando's avatar

> and despite trying really really hard no one has yet managed to die from an overdose.

It looks like this might be about 4 hours out of date: https://richmond.com/news/local/article_5be0b9ee-de6b-4b35-b64f-ace154b99ce4.html

Expand full comment
dualmindblade's avatar

>With $8 in his pocket, he bought a black packet from behind the counter: a tablet sold by the distributor Pure Leaf Kratom

>Allshouse bought a 30-milligram tablet.

>As police await a toxicology report, a cause of death remains unknown, and the Henrico Police Department would only say that drug paraphernalia were found in the van beside his body.

I think we can say with some certainty this man didn't die from an overdose of 7-oh. 30mg would be a large dose for a person without any tolerance to opioids, but not enough to cause unconsciousness let alone death. There was paraphernalia, perhaps he tried to inject the tablet and died from the shot? But the substance is hardly soluble in water so in that case he most likely died from an air embolism or whatever was used to bind the tablet.

The article contains misinformation that anyone with Google should have caught, for example calling it a hyper concentrated form of Kratom, which is like calling heroin a hyper concentrated form of poppies. I predict a lot of these types of articles as we get closer to a national ban.

By the way, in case it wasn't clear, I strongly recommend people not touch this compound even in moderation, unless they are already opiate addicts and cannot quit or obtain a prescription for maintenance.

Expand full comment
Eremolalos's avatar

I wonder whether he used the damn stuff in combo with some other drug.

Expand full comment
Deiseach's avatar

Junkies generally don't only do one drug at a time, so yeah, very likely he had a ton of other stuff in his system and this was just the topping on the dessert, as it were.

Expand full comment
dualmindblade's avatar

I suppose it depends on what you mean by junkie. The least functional drug users, like those unable to hold down a job because of their addiction, will usually be spending all their money on a single "drug". In quotes because many street drugs are now kind of a crap shoot, like illicit "Xanax" for example is always going to be some random RC benzo. "Heroin" is usually fentanyl but it might also be a nitazene compound or have Xylazine in it, it might even occasionally be actual heroin.

Most exceptions to this will I think be mixing in just alcohol or weed, of course a lot of drugs don't play well with alcohol, and it should go without saying none of the above applies to the wealthy.

And you're certainly right that a lot of heavy drug users are poly substance abusers. I don't have the stat at hand but I think the majority of overdose deaths result from mixing substances. That might have flipped with fentanyl and nitazenes in the picture, neither of which have any trouble killing a person flat with < 1mg

Expand full comment
Benjamin Holm's avatar

What does the name of this substack mean or stand for?

Expand full comment
Mark Russell's avatar

I figured the anagram out just last week, by accidental musing. I felt really embarrassed for not seeing it all along. Thank you all for helping me feel less stupid!

Expand full comment
Benjamin Holm's avatar

Lol I'm dumber than a lot of people.

Expand full comment
Edward Scizorhands's avatar

Previous site was Slate Star Codex https://slatestarcodex.com/ and this is an anagram of that

Expand full comment
Gunflint's avatar

Reminds me of when Metamagical Themas took over for Mathematical Games in Scientific American

Expand full comment
Jesus De Sivar's avatar

Funny, I tried "deciphering" the anagram and, unless I did it wrong, I'm missing a second "s".

Does it technically count as an anagram if you have to "repeat" letters?

Expand full comment
Taymon A. Beal's avatar

"S" was actually his ostensible middle initial ("Scott S. Alexander"). In reality, his name is Scott Alexander Siskind, as he publicly revealed in his first post on ACX.

The actual reason it's not a proper anagram is because he had to drop an N.
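The letter accounting is easy to check mechanically. A quick sketch (the name strings are the ones discussed in this thread):

```python
from collections import Counter

def letters(s: str) -> Counter:
    """Multiset of letters, ignoring case, spaces, and punctuation."""
    return Counter(c for c in s.lower() if c.isalpha())

# "Astral Codex Ten" is a perfect anagram of "Scott Alexander":
print(letters("Astral Codex Ten") == letters("Scott Alexander"))  # True

# "Slate Star Codex" vs. the ostensible "Scott S. Alexander":
# subtracting the multisets leaves exactly the letter that was dropped.
diff = letters("Scott S. Alexander") - letters("Slate Star Codex")
print(diff)  # Counter({'n': 1})
```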

Expand full comment
Edward Scizorhands's avatar

... is that the reason the icon for SSC has an N on it?

Expand full comment
Taymon A. Beal's avatar

I believe so, yes.

Expand full comment
AlexTFish's avatar

Nah, Slate Star Codex was a "near-anagram". Scott fixed it for the new blog name.

Expand full comment
Anon679's avatar

It is an anagram of Scott Alexander.

Expand full comment
Jesus De Sivar's avatar

Wow! That's really cool!

Expand full comment
Deiseach's avatar

I still think superintelligent AI is not the danger, it'll be bog-standard stupid AI of the kind we have today (such as the thing that wiped the database) and people turning more decision-making - or even just 'get the AI to do the job' - over to it.

"Oopsies! I wiped the safety protocols for the process. I panicked, me so sowwy, tee-hee!" and there's a smoking hole in the ground where the processing plant used to be. That kind of dumb accident because we anthropomorphise *everything* and if the makers of the AI program it to pretend to be a person (so the rubes can interact with it like it's a human instead of the human it replaced), then we will convince ourselves it's a real human with the same capabilities as a real human, and we'll forget it's a box of gears, and we'll assume "well even the dumbest guy we hire to run this process would not be dumb enough to wipe all the safety protocols" and then we end up with smoking hole in the ground. And then the AI will issue a fake "I panicked" 'explanation' as to why this happened, as if it had a mind or feelings or emotions, and that is supposed to be enough to reassure us that this thing can indeed think instead of simply spewing out the script it was trained to produce.

AI is not the danger, we are, because we are stupid, greedy and vain.

Expand full comment
Wanda Tinasky's avatar

Yes, as our tools get more powerful our mistakes become more impactful. That's OK because it's more than compensated for by the higher average output. Should we go back to hunting with spears just because there's a chance that someone will accidentally shoot themselves?

Expand full comment
Taymon A. Beal's avatar

This depends on the risk-benefit profile of the individual tool. E.g., as Kelsey Piper has explained on Twitter, doing gain-of-function research on respiratory viruses has genuine scientific value that can improve our ability to fight disease. Yet we shouldn't do it, because the extent of that scientific value is not high enough to compensate for how bad it would be if something went wrong.

Expand full comment
Wanda Tinasky's avatar

Oh agreed. I wouldn't endorse hooking ChatGPT up to the nuclear arsenal. We still have to engage in careful risk analysis. I don't think it's that much of a burden to design systems that are hard for a hallucinating AI to destroy.

Expand full comment
Taymon A. Beal's avatar

What would such systems look like?

Expand full comment
Wanda Tinasky's avatar

Just a guess but possibly a lot like modern operating systems. The core dangerous functions will be guarded by safety/password protocols and will need multiple human interventions to access. Or possibly it's solvable by having multiple redundant AIs operating in parallel, each with veto power, so that they'd all have to have the same hallucination in order to wreak havoc. As with all complex systems the final equilibrium will be gradually evolved.
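The redundancy idea reduces to a simple invariant: a high-stakes action executes only if every independent checker signs off, so any single veto blocks it. A toy sketch (the validator rules here are invented placeholders, not anyone's real safety system):

```python
from typing import Callable, Iterable

def approved(action: str, validators: Iterable[Callable[[str], bool]]) -> bool:
    """An action proceeds only if every independent validator approves;
    a single veto is sufficient to block it."""
    return all(v(action) for v in validators)

# Hypothetical validators: independently run models (or rules) that each
# flag actions they consider dangerous.
validators = [
    lambda a: "drop database" not in a,
    lambda a: "launch" not in a,
]
print(approved("read logs", validators))           # True
print(approved("drop database prod", validators))  # False
```

The point of running the checkers independently is that all of them would have to share the same hallucination for a bad action to get through.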

Expand full comment
Eremolalos's avatar

But won't people all over the planet be running the things without those set-ups? Even if the AIs big and powerful enough to cause huge damage also come with huge price tags and restrictions on who can have one, rogue governments will buy them, or pirate the parts and plans and make them. "Haha, we don't need no stinking AI nannies for our AI."

Expand full comment
Taymon A. Beal's avatar

These kinds of general safety and security practices make sense for any kind of high-stakes system, but I don't think they'll suffice against a radically smarter-than-human intelligence. No system is perfect and it's basically not possible to secure something against an adversary who's much, much smarter than you.

Also there might be incentives to weaken the security measures so as to ensure that an AI system can respond quickly enough to enemy action; see, e.g., Dr. Strangelove, or https://www.lesswrong.com/posts/AyNHoTWWAJ5eb99ji/another-outer-alignment-failure-story.

(I don't know whether I'm actually arguing with you at this point, I'm not sure how Deiseach's original point bears on the questions of AI safety.)

Expand full comment
Steeven's avatar

What does this have to do with superintelligence? Like it seems like both situations could be true, where someone dumb uses AI badly to own themselves, but superintelligence is still the same old danger

Expand full comment
Ferien's avatar

In past discussion, you gave that link to that website: https://brght.org/iq/country/ which has inflated scores all over the board.

Averaging five largest European countries (Russia, Germany, UK, France, Italy) gets 108.35 score, Ireland is 101.95 in this dataset. So Ireland is lower just like it was in Lynn's...

Expand full comment
Deiseach's avatar

Yeah, but we're "within normal range" lower, not "intellectually retarded" lower. 100 being the normed value, if we Paddies score (rounded up) 102 IQ, it means we're... average! normal!

Yay, we're normal! 😁

(If you seriously thought I was ever boasting of my gigantic brain, you got off at the wrong exit, brother).

Expand full comment
Scott Alexander's avatar

Do you really endorse the "the danger isn't X, it's Y" construction?

For example, would you endorse "the danger isn't smart humans, it's stupid humans"? Stupid humans can definitely cause some problems (like car crashes, or equipment malfunctions). But smart humans can also cause some problems (like inventing the nuclear bomb, or lab leak). These are both kinds of problems we have to deal with! Saying "the problem isn't smart humans, it's stupid humans" is not even wrong.

In the same way, I'm sure there will be stupid AIs who cause the AI equivalent of "human error". These will cause the AI equivalent of car crashes and factory explosions. But damage is kind of limited, in that if too many factories explode, we won't use that AI. The damage from smart AIs, the equivalents of the ones that can invent nukes or do lab leaks, seems unbounded, so I am more concerned about that.

Expand full comment
Mark Russell's avatar

This is what I was talking about when I wrote about being skeptical of the perceived gains of gene-selecting for higher IQ resulting in more Super-geniuses.

Yes, that would likely happen, but it would not mean the societal results were only--or even mostly--positive.

Expand full comment
Deiseach's avatar

I think it's much more likely that stupid AI will be widespread, and widely used, and we'll manage to screw up a lot of small but important things, very much important when all are summed up, before we get super-duper no really it's a thinking entity AI at all, much less turned loose on the world.

More people have died in car accidents than in wars:

https://scrantonlawfirm.com/death-war-vs-death-motor-vehicle-collisions/

Expand full comment
Mark Russell's avatar

Check the link again: More Americans have died from cars than war, not people.

Expand full comment
Deiseach's avatar

TIL Americans not people!

Yeah, that link is for Americans, it was the first one which came up and since this place is majority Americans, I think it more applicable.

Expand full comment
TGGP's avatar

The world has been safer & more peaceful since the invention of the nuclear bomb.

Expand full comment
Sebastian's avatar

The war in Ukraine might have gone very differently if there was no fear of nuclear escalation.

Expand full comment
Taymon A. Beal's avatar

Only in our particular Everett branch/possible world, and only for the last 80 years. If a nuclear war occurs in the future (a risk that only started when nukes were invented, and will remain until we either all die or create a powerful aligned AGI or similar that can stop nukes), then nukes will have been very bad for the world's safety and peace. Likewise if there've been close calls where nuclear war was only averted due to dumb luck or the anthropic principle; there have been two well-known candidate such incidents and a number of lesser-known ones.

(Also, this is beside the point of AI safety, as a nuclear arsenal controlled by an unaligned AI is an obvious danger to humanity in ways that the game theory that currently governs nukes doesn't necessarily mitigate. But I don't know if you even disagree with that.)

Expand full comment
TGGP's avatar

A war did occur in which nukes were used, but the introduction of nukes into that war didn't make the world less safe & peaceful, rather it ended the war. You can't simply assume your conclusion about a hypothetical future war.

Expand full comment
Taymon A. Beal's avatar

The dynamic is quite different if only one side has nukes, and I think you could argue that it would have been good to try harder to make that state of affairs permanent (as with Bertrand Russell's proposal to preemptively nuke the Soviets before they could build their own nukes), but it's moot now because that opportunity has irreversibly passed. We don't have direct experience of what nuclear war where both sides have nukes looks like, but I think we have pretty darn good reason to believe that it would be real bad.

Expand full comment
TGGP's avatar

India & Pakistan both have nukes, and have fought each other recently.

Expand full comment
Nancy Lebovitz's avatar

For what little it's worth, I'm not sure superintelligent AI is possible. I think stupid use of existing AIs is already happening, and we're barely starting to see the ill effects.

Expand full comment
Nancy Lebovitz's avatar

The amusing? possibility is breaking civilization to the point where really capable AIs aren't possible.

Expand full comment
EngineOfCreation's avatar

The problem isn't level of intelligence or intent; what matters is the orthogonal problem of power, the ability to affect things we care about. Few would complain if you keep your hyper-intelligent (and/or hyper-malicious) agent in a box, airgapped in a secure cleanroom, akin to a bioweapons lab. In there, the agent can be as intelligent and malicious as it wants to be.

However, if I give an AI control over my nuclear arsenal, either directly or in a sufficiently trusted advisory role, then the worst case of global thermonuclear war is the same whether the AI is malicious or stupid. If "a few factories explode" is your worst case, then it's because you didn't give the agent more power than to blow up a few factories.

Yes, we are already liable to both factories blowing up (Chernobyl, Bhopal etc.) and global thermonuclear war just fine without any AI involvement. However, as the saying goes: computers make very fast, very accurate mistakes. This includes AI and is strictly worse than making slow, inaccurate mistakes.

Expand full comment
quiet_NaN's avatar

> Few would complain if you keep your hyper-intelligent (and/or hyper-malicious) agent in a box, airgapped in a secure cleanroom, akin to a bioweapons lab. In there, the agent can be as intelligent and malicious as it wants to be.

Eliezer would like a word with you, I think.

Having an AI system in a box is safe if you do not talk to it. It is also completely useless.

To make use of an ASI, you need to talk to it, and implement some of its suggestions for the problems you are facing. The suggestions are likely going to be plausible and work on the short term, but they might have side effects which maneuver you into a situation where you will eventually release the AI from its box.

As an intuition pump, imagine a bunch of 6yos shipwrecked on some uninhabited island together with a highly intelligent psychopath with multiple professional degrees in a straitjacket. The kids know that their prisoner is a very persuasive murderer, and have a firm pre-commitment not to let him go free. However, they also depend on his knowledge for survival. My prediction is that this situation will go poorly, and sooner or later, the psychopath will be able to get some of the kids to free him. Perhaps he convinces his guard that he is actually harmless, or engineers a situation where everyone will die unless he is released (and trusts the kids to follow causal decision theory).

In this scenario, the intelligence gap between the kids and the psychopath matters. If the psychopath is just a median 10yo, then it seems much less likely that he can outsmart the 6yos. Likewise, humans would probably be able to keep a von Neumann-level intelligence boxed in. As far as we know, no highly intelligent humans have taken over the world, after all. But ASI is another ballpark. At best, it is "the smartest human alive ever but also 1000x faster". At worst, it will make us look like chimps in comparison.

Expand full comment
EngineOfCreation's avatar

I don't believe in the super-persuader argument, at least not when you know you're dealing with a super-persuader. It's a variant of Pascal's wager/mugging in that I won't dispute its use as a thought experiment, but as a practical attack I find it about as realistic as the prison break scene from Idiocracy.

And yes, not letting AI near anything useful is exactly my point. If you don't want your superhuman idiot to break things, then the simplest, most reliable solution is to not let the idiot get near things.

Expand full comment
moonshadow's avatar

cf.: https://news.ycombinator.com/item?id=45031888

"I built a very similar extension a couple of months ago that supports a wide range of models, including Claude, and enables them to take control of a user's browser using tools for mouse and keyboard actions, observation, etc. It's a fun little project to look at to understand how this type of thing works."

In reality, you don't need a super persuader. You don't need any persuader at all. Take a look at ycombinator comments on any given day, and you will see people tripping over themselves to let AIs out of boxes and give them as much agency as possible, as fast as they can. It's a "fun little project". This is happening right now, continuously, today. As AIs become smarter and therefore actually useful, the process will only accelerate.

Expand full comment
quiet_NaN's avatar

> If you don't want your superhuman idiot to break things, then the simplest, most reliable solution is to not let the idiot get near things.

I would argue that the simplest solution would be not to have a superhuman idiot.

Alas, if we get ASI, that will be the result of a lot of capital investment, and to generate any ROI its owner will use it. So a limited interaction ("We prompt the ASI to write a proof for a theorem which then gets verified by a computer") is off the table. We would definitely want it to give out a cure for cancer. However, out of all the possible cures for cancer an ASI might think of, we are likely to get one which has long term consequences which further its own goals.

Another intuition pump might be getting chess advice from a grandmaster who is secretly motivated to make you lose. He will not persuade you to just sacrifice your queen for no reason, but he can think more moves ahead than you. A move which will lead to what you think is a good position in two moves might actually lead to defeat in ten moves.

Expand full comment
Deiseach's avatar

"As far as we know, no highly intelligent humans have taken over the world, after all."

But the ones who had limited success towards that goal were not the most intelligent, rather they were extremely persuasive and/or had a particular talent that enabled them to mobilise lots of ordinary schlubs to do their bidding.

That's what I'm talking about; not the von Neumann level AI plotting our downfall, but all the 105-120 IQ people in business, government, and private users who make use of the multiplicity of new AI models on offer to replace humans, take over human work, and be the first port of call for advice, research, and therapy/emotional support.

Any one error on its own won't be immediately fatal, but add them all up? Like the guy who got advice from the AI about cutting salt from his diet and ended up dosing himself with sodium bromide instead of sodium chloride, despite the AI noting that the substitution was for cleaning purposes:

https://www.acpjournals.org/doi/full/10.7326/aimcc.2024.1260

This is how we're gonna do it: "well the AI said it so it must be okay and I don't need to check it out any further"/"don't bother checking the AI results, that only wastes time and reduces efficiency, if it says do this then do it, Jenkins!"

The stupid (by comparison with the Platonic Ideal superintelligent AI) AIs are *already* out there, *already* being pushed by businesses, *already* in use.

Recreate "Five Nights At Freddy's"/Ray Bradbury's "The Veldt" in your own kids' bedrooms today!

https://techcrunch.com/2025/08/16/ai-powered-stuffed-animals-are-coming-for-your-kids/

https://www.rte.ie/news/business/2025/0612/1518159-barbie-maker-mattel-teams-up-with-openai/

LLM psychosis for four year olds, it's the must-have toy every parent is getting this Christmas!

Expand full comment
Nancy Lebovitz's avatar

A bromide is also a cliche, and people are already suffering from cliche overload.

Expand full comment
Ollantay Reviewer's avatar

I wrote the Ollantay review. Wasn't planning on responding in the comments (seemed inappropriate), but as the Comment of the Week was skeptical I figure I can respond here. Perhaps this should have been a footnote. Ah well.

The way that the story of the rebellion is usually told is something like this: Tupac made some incredibly bold moves and gathered a huge army that would have crushed Cuzco had he continued the initiative that he already had (more than half of the curacas in Peru publicly supported him!). But instead he got cold feet or became a coward. Sometimes he's compared to Father Hidalgo, who also gathered a giant army but failed to march on Mexico City, but while Hidalgo seemed to be concerned about the giant social revolution he was accidentally leading, Tupac was perfectly content to watch the social rebellion unfold chaotically and violently. The final months of the rebellion do not make sense on their own.

But then I read Ollantay, and then I learned that the early European translators and commentators of the play (Markham and his contemporaries) linked the play to Tupac Amaru. In his introduction to the translation (https://www.gutenberg.org/files/9068/9068-h/9068-h.htm), Markham calls out the Tupac Amaru rebellion directly and does in fact say that Tupac watched the play in 1775; implied is that this is a big reason for his interest in the play! And it seems as if that 1835 periodical that brought Ollantay back in the public consciousness did the same (I haven't been able to find a copy of the article and can't get to Lima for a while). This introduction doesn't specifically say that Ollantay inspired Tupac, but he primes us to read the play with Tupac in mind.

And when we do, we see all those connections that I noted in the review. And then we ask "wait, did you say this play happened *before* the rebellion?" Which all of the early translators do indeed say. And they note the provenance of this claim (priests and monks and family members). And for me, that answers the major questions of the rebellion.

So Garald is sort of correct that we have no direct evidence that Tupac saw Ollantay on stage. But we do have pretty good indirect evidence (Markham said that Tupac saw Ollantay, so somebody told him that, but he doesn't cite his source). This review was an argument that he did see it and that it changed his life. I structured the review as a story with the argument assumed, and not as the argument itself. I think it flows better that way, but it caused some alarm bells to ring and I can't be mad about that.

Expand full comment
Deiseach's avatar

It may also have been an assumption on his part that he didn't *need* to march on Mexico city, the authorities there would see that their power was over because of the successful (up till then) revolution and besides the king would back him up, so why fight a battle he didn't need to fight?

Of course, it turned out he *did* need to fight it, but we don't know exactly how deep into his fantasy he was; he may well have been convinced that God was on His side, or at least destiny, and that it was inevitable he would win as the sacred rightful descendant of the emperor or something.

Expand full comment
javiero's avatar

> that he didn't *need* to march on Mexico city,

I take it you meant Lima.

Here's a monograph (just 8 pages, in Spanish) about a previous (largest?) indigenous rebellion in colonial Peru, which might provide some context for the later Tupac Amaru rebellion, and also mentions the importance of taking the main centers of power (Cusco and/or Lima) for the rebellion to succeed:

https://beta.acuedi.org/storage/books/pdf/4951.pdf (*)

A few relevant quotes (translated from Spanish):

"[the rebels] controlled not only the province but also the roads that transported the products that supplied Lima's markets. As Mela says, most of the mountain products arrived in Lima via the narrow paths of the Lurín and Mala rivers, and by cutting the bridges used by muleteers, the rebels could cut off Lima's contact with the interior."

"...the residents of Lima, one of whom, writing to a relative in Spain, told him that 'if they [the Huarochirí Indians] jump to Tarma, Jauja, and Cusco, the kingdom of Peru will end for the Spanish.'"

"The viceroy noted that Huarochírí was the gateway linking Lima to the interior, adding that 'if this province continues in rebellion, its proximity to this capital would become a refuge for criminals who would disturb its tranquility and cause much damage.'"

(*) Rebelión colonial: Huarochirí, 1750, by Karen Spalding

Expand full comment
Deiseach's avatar

Yes, Lima. No idea why I said Mexico City. Clearly I am not cut out to be a South American revolutionary, I would be not attacking the wrong city I intended not to attack!

Expand full comment
Scott Alexander's avatar

I loved the review and wasn't highlighting Garald's comment as a criticism, just to provide extra perspective. I've edited your response into the relevant section above.

Expand full comment
Turtle's avatar

I also loved the review and thought it was very Scott - a deep dive into an obscure chapter of history, backed up by amusing anecdotes, completed with symbolism and tied off with an intriguing present-day reference.

Expand full comment
Stefanie Tellex's avatar

I'm a long time lurker who finally pulled the trigger and started a blog. Here is our first post: https://whattotelltherobot.com/p/elephants-dont-write-sonnets Elephants Don't Write Sonnets, about why LLMs aren't *really* intelligent and what it would take to cross that line, writing from a robotics perspective.

Expand full comment
Scott Alexander's avatar

The post doesn't really present any evidence for why we should think that embodiment matters beyond Rodney Brooks saying it does. Its response to all the accomplishments of the past few years seems to be that this isn't "true" intelligence, but that embodied intelligence would be, without predicting which things an embodied AI would be able to do that current AIs can't.

I'm more likely to do modus tollens here and think of Rodney as one of the many people with bold theories of how you can't do AI unless X which were totally falsified by later events. I think his 2018 predictions, which he commendably updates every year, broadly confirm this story - for example, he said you wouldn't get an AI that "seems as intelligent, as attentive, and as faithful as a dog" until 2048. I think this shows that Rodney wasn't expecting AIs which were competent at basic tasks but lacked some kind of "true intelligence", and that positing the "true intelligence" thing is a retrospective invention intended to explain away why AIs that don't follow his embodiment theory are succeeding beyond what he predicted.

I think if you are going to blog about this, you need to address this perspective rather than taking Rodney's perspective as a given (unless you intend for your blog to just be a discussion forum for people who are already operating within that perspective).

Expand full comment
Stefanie Tellex's avatar

Thank you for this comment! Engaging with you and other readers of ACX was a major motivation for me pulling the trigger on this, so I am *extremely* excited to get feedback from you! Is it TMI to say I was dancing around our living room when I saw you had replied to this comment?

As it happens, we did address this in section 1.3 of our chapter, which is linked from the blog post and also linked here: https://h2r.cs.brown.edu/wp-content/uploads/tellexwatkins2026.pdf , but we didn't mention this part much if at all in the teaser post. We will plan another post just on section 1.3 based on your feedback here. TLDR: We define behaviors in terms of embodied language (language paired with actions/behaviors in the physical world), that robots/AIs don't yet do, along with learning architectures that point towards doing them.

I've been doing language+robotics for 15 years so I promise it's not a retrospective invention for me, although I won't deny a lot of angst as many of my predictions have been falsified in the past 5 years. :-) Regarding Rod's dog example in particular, my lab is collaborating with Daphna Buchsbaum, who studies human-dog dyads, so we can try to make our quadruped robots produce similar behaviors, for example to interpret human pointing gestures to find objects. I think Rod would say that behaviorally, the AI needs to be attentive and faithful *in the physical world*, which means processing high-dimensional space/time input and producing high-dimensional motor output. Not just typing to you, but walking up to you, looking at you, nudging your hand for pets, and barking at the big dog coming down the street. I am confident in saying AIs don't do this yet (while being sure an LLM could *talk* about doing it). But Rod has a chapter in the same book as us, so maybe he will say more there!

Expand full comment
EngineOfCreation's avatar

Can you play Magic: the Gathering over the phone, but without a referee and provably without cheating? Something equivalent to blindfolded chess, with the obvious complication that a lot of information in MtG can transition between various shades of hidden and public, whereas all information in chess is public.

For example, a game could start by each player assigning a generic ID to all the cards in their deck, publish that assignment except it's encrypted, and as long as the library remains shuffled, drawing the top card as normal is mathematically equivalent to drawing a random card that both players agree on by exchanging random numbers. At the end of the game, each player would reveal all their hidden choices and it can be proven that cheating has or hasn't occurred.
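That commit-and-reveal idea can be made concrete. A toy Python sketch, with salted SHA-256 hashes standing in for the encrypted assignment (the card names and the `audit` helper are illustrative, not from any real implementation):

```python
import hashlib
import secrets

def commit_deck(cards):
    """Assign IDs 1..n and compute a salted hash commitment per card.

    `assignment` stays private; `published` is what the opponent sees."""
    assignment, published = {}, {}
    for cid, card in enumerate(cards, start=1):
        salt = secrets.token_hex(16)
        assignment[cid] = (card, salt)
        published[cid] = hashlib.sha256(f"{salt}:{card}".encode()).hexdigest()
    return assignment, published

def audit(assignment, published):
    """End-of-game check: every revealed (card, salt) must match its hash."""
    return all(
        hashlib.sha256(f"{salt}:{card}".encode()).hexdigest() == published[cid]
        for cid, (card, salt) in assignment.items()
    )

assignment, published = commit_deck(["Forest", "Forest", "Grizzly Bears"])
assert audit(assignment, published)
cheat = dict(assignment)
cheat[3] = ("Black Lotus", assignment[3][1])  # claiming a different card...
assert not audit(cheat, published)            # ...is caught at the reveal
```

The salt matters because the space of card names is tiny; without it, the opponent could brute-force the hashes and read the whole deck before any reveal.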

Is there such a protocol that covers all possible actions allowed by Magic cards, or is there proof (as in an actual paper) that such a protocol can't exist?

Expand full comment
thewowzer's avatar

I would never want to do that.

Expand full comment
Jeremy's avatar

Yes, this is definitely possible. What you are looking for is called secure multi-party computation, and there are protocols like Yao's garbled circuits that can be adapted to not just play MTG, but to perform *any* computation with someone else without revealing your inputs to the computation.

The most straightforward construction would be to run a multi-party computation of an MTG game engine, where the RNG seed is given as the sum of two random values you and your opponent provide. There are still some implementation details that need to be worked out, but they are not too bad:

1. You need to run multiple rounds of computation, not just a single input/output. You just keep an encrypted state variable that is fed from one computation to the next (again encrypted with a key derived from a combination of secrets you and your opponent provide).

2. You need the game engine to provide private information to just one of the players. To do so you just have it output that information after encrypting it with a key that only one player has.

There are probably more efficient schemes which take advantage of the specific structure of the MTG game. (Also note MTG is Turing Complete)
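The "seed from two random values" step needs a commit-reveal round, so the second mover can't pick their value after seeing the other's. A toy sketch (SHA-256 commitments and XOR in place of addition, both choices illustrative):

```python
import hashlib
import secrets

def commit(value_hex):
    """Commitment to a hex-encoded random value."""
    return hashlib.sha256(bytes.fromhex(value_hex)).hexdigest()

# Step 1: each player picks a secret contribution and exchanges only its
# hash, so neither can choose their value after seeing the other's.
a = secrets.token_hex(32)
b = secrets.token_hex(32)
ca, cb = commit(a), commit(b)  # these two hashes are exchanged first

# Step 2: both values are revealed and checked against the commitments.
assert commit(a) == ca and commit(b) == cb

# Step 3: the shared seed combines both contributions; either player
# alone cannot bias it as long as the other value is uniform.
seed = int(a, 16) ^ int(b, 16)
```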

Expand full comment
EngineOfCreation's avatar

That is very interesting, I will look into that.

I believe the most difficult problem to solve is to gain information that only another player can provide without that player gaining the same information. For example, if I am allowed to look at the top card of the opponent's library, they have to divulge in some way that it's, say, a Forest card, without learning that fact themselves. Under that garbled circuits protocol, would that be possible? I believe I could make it work by modifying the game rules so that every player revealed their entire deck list to each other before the start of the game, but is it possible without that?

Expand full comment
Jeremy's avatar

Yes, garbled circuit based protocols allow you to do exactly that. Think of it as a magic box that allows you to run an *arbitrary* program P(X1,X2), where you provide X1, and your counter-party provides X2, in such a way that **nothing is revealed to anyone except the final output of the computation**. X1 could be arbitrarily complex, e.g. your full deck of cards, and the final output of the computation could be arbitrarily complex (i.e. encrypted messages for both you and your opponent containing hidden information). This is an incredibly powerful tool and you can build a ton of different protocols using it that allow you to do *pretty much anything*, computational limits notwithstanding.

There are also other tools that allow you to do similar things with different tradeoffs. Definitely check out the https://en.wikipedia.org/wiki/Secure_multi-party_computation and https://en.wikipedia.org/wiki/Zero-knowledge_proof wiki pages.

Expand full comment
EngineOfCreation's avatar

Thanks again, through those links I found exactly what I was missing:

https://en.wikipedia.org/wiki/Oblivious_transfer

Expand full comment
Remysc's avatar

I'd say no unless you set some limitations. How would you deal with Chaos Orb or Falling Star? You'd also have to either trust each player's claimed possession of cards or allow proxies, not only because of the initial state, but because Research/Development exists.

Now, if you keep it to tournament legal I'm not really seeing any problem, you'd have to keep track of card order in some scenarios, you'd also need to be able to generate random numbers for coin tosses or "select a random card" effects, but beyond that I'm not really coming up with any issues.

Expand full comment
EngineOfCreation's avatar

Chaos Orb, Falling Star would not be implemented, true. Legal ownership is out of scope, I only care about the game mechanics.

Expand full comment
Erica Rall's avatar

You could track possession of cards by having a trusted third party (maybe WotC or a tournament league) verify that you own the card and issue a cryptographically signed certificate that allows you to use it in the app.

This could still be cheated by selling the physical card after you get the certificate, but that could be mitigated with expiration dates that force you to revalidate periodically. Setting the expiration date to be short enough to be a useful control but long enough that the hassle of revalidating isn't too burdensome on both players and certifiers is a nontrivial problem. You might be able to mitigate by having common cards be taken on trust or non-expiring while only particularly rare cards would need to be revalidated annually or more often. Or you could have the expiration be a soft rather than hard limit, with you and your opponent agreeing to a limit of how old a certificate each of you will accept from the other as part of the terms of the game.

Another alternative would be to decouple app cards from physical cards and use either NFTs or a central registry to track ownership. In the latter case, you'd still use signed certificates so you can verify ownership without pinging the registry every time, but the registry would make validation trivial and make it feasible to set the expiry to a period of days or weeks instead of months or years. I think I remember someone I used to follow on livejournal talking about working on an app that worked on the central registry model, some time around 2005-2008.

Expand full comment
Taymon A. Beal's avatar

Does this at all stop people from looking at their library to see what order the cards are in?

Expand full comment
Erica Rall's avatar

Each player shares salted hashes of the cards in their deck with the other player, which the opponent's app shuffles and tracks. When player A draws from their deck, their app asks player B's app which card they draw (by hash) and player A knows which card this is. Player B can confirm that the card actually corresponds to the hash, but doesn't know in advance because of the salting.

I don't think this is a complete solution for MtG, since I seem to recall there being situations where a specific card gets shuffled back into the deck; player B could then identify that particular card in the deck and track or futz with its position.

Expand full comment
quiet_NaN's avatar

I think that if you shuffle a card back into the deck, you just rehash the entire library, i.e. apply new salts to the hashes (or to the revealed card) and provide these as hashes.

For a Scry N, your opponent will just give you N hashes, and you will tell them in which order you place them before and after your library.

I am unsure if there is an effect which will pick a card where neither party knows what it is, but that could be simulated by your opponent hashing the hash they picked for you.

(Of course, your opponent could also provide you with a shuffled library that way -- simply an ordered list of salted hashes of your salted hashes of your cards. Whenever you draw a card, they just reveal how they generated the next hash on the list.)

Whether this is doable in practice depends a lot on whether the telephone refers to a smartphone or just a voice call, because most humans are not great at calculating cryptographic hashes in their head.

Expand full comment
EngineOfCreation's avatar

When I say "playing over telephone" it is a metaphor for "not being there physically" to check the legality of play [1]. Of course you'd use computers to communicate and to do the mathematical operations.

[1] https://dl.acm.org/doi/10.1145/1008908.1008911

Expand full comment
Erica Rall's avatar

I was assuming smart phone. For a voice call, I don't have any immediate thoughts on how to do it more securely than the honor system.

Expand full comment
EngineOfCreation's avatar

Yes, in the sense that it would be useless to the would-be cheater. The protocol, if it exists, would ensure that such hidden information is either unknown to players who are not entitled to it, or that it would be useless because the mechanism of choosing a card would not rely on the current order.

Let's say my library consists of Forest, Forest, Grizzly Bear. I assigned IDs 1 through 3 to these cards. Both I and my opponent know that cards 1-3 are in my library. I know that the Grizzly Bear has ID 3, but when it's time for me to draw a card, I don't decide alone which card to draw. My opponent and I would, through the protocol, agree that I'm going to draw card #2, because neither of us knows the order of cards and the order of cards in the library can be abstracted away.

I could of course claim that I drew Grizzly Bears instead of Forest, but I have previously committed to the assignment and the protocol would require that I disclose it eventually, making the cheat provable.

Expand full comment
Brinkwater's avatar

The problem isn't illegally drawing cards. The problem is knowledge of upcoming draws that should be hidden is valuable and influences decisions.

Let's say my opponent taps out to play a very good creature. Do I counterspell, or hold my counterspell for a potential future better spell? If I know my next draw is creature removal, I just draw and use the removal. If I know my next draw is another counter, I probably counter. I think that alone makes such a protocol impossible (without continuous camera + machine vision proctoring to ensure hidden information remains hidden).

Also, many cards make you shuffle, and that's a pain. You have to let the protocol shuffle, and then rearrange your deck according to the new randomized order, which for 20-90 cards (including commander) is annoying. This second concern doesn't make it impossible, but it does make it unpleasant.

Expand full comment
EngineOfCreation's avatar

You misunderstand my question. I ask about hypothetical "blindfolded magic" which does not require physical cards any more than blindfolded chess requires a physical board and pieces. You can, in practice, use some representation of the gamestate as a mental help, but that is for your own benefit only; what you do with that representation is neither necessary nor sufficient for changing the gamestate, so there are no cards that would need to be physically verified.

The actual gamestate exists only as the result of a process, not because of some physical component. It would be manipulated and documented entirely through a set of non-repudiable, potentially encrypted messages according to an agreed-upon protocol. Deviation from that protocol would be provable to a 3rd party.

https://en.wikipedia.org/wiki/Non-repudiation

Expand full comment
Brinkwater's avatar

Oh, well then yes, obviously. See MTGO or MTGA for implementations.

Expand full comment
EngineOfCreation's avatar

Well, no, that's not what I mean. In MTGA/O, a central server keeps track of the gamestate and acts as referee at all times. I'm asking about a peer-to-peer protocol, not about a client-server protocol. In my previous comment I did say "would be provable to a 3rd party.", but not in the sense of a referee keeping track of the rules at all times.

Expand full comment
Brinkwater's avatar

You do have to ban some cards like Chaos Orb that require physical objects. Barring those (which are overrepresented in unsets), I don't see any obstacles to reimplementing a game engine as non-repudiable instead of MTGA's server-based with account logging.

Expand full comment
beleester's avatar

How does this work with scrying? If you scry your library, the next cards become known to you but not your opponent, and you can no longer simply draw the next card by generating a random number. I think you'd have to "draw" scried cards into a separate zone and record their order, so that your opponent can verify you scried only those cards and they were drawn in the order specified.

Or even worse, what about fateseal? How do you select N cards from your opponent's deck and reorder them, without your opponent knowing which cards you selected?

Edit: Also, scrying doesn't simply bury cards, it specifically puts them on the bottom, which will be a problem to keep track of if the game goes all the way through your deck. I think you need a way to cryptographically record the entire order of the deck without cheating.

Expand full comment
EngineOfCreation's avatar

Yes, these are the complications I'm asking about, whether they can all be overcome.

For the Scry case, I would say that we determine a random card as above, let's say Card #15. I know what card #15 is because of my previous ID-assignment so I announce my choice of putting it back on top or to the bottom.

The next time the game wants to do something with my top card (e.g. when I draw a card, or it gets milled, and so on), we would agree to choose that card #15 instead of a random card because the agreed game state says that #15 is currently on top. If I have put #15 on bottom, then the next time I would draw a card I draw a random card *except* #15, because there are others still "on top" of #15.
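The bookkeeping this implies, with known top/bottom cards and an order-free unknown pool, can be sketched without any crypto. A plain-state sketch where a local RNG stands in for the jointly agreed randomness (the class and method names are made up for illustration):

```python
import random

class SharedLibraryState:
    """Order-free library state both players track: each committed card ID
    is on a known top stack, a known bottom stack, or in the unknown pool."""

    def __init__(self, card_ids, rng):
        self.unknown = set(card_ids)  # order abstracted away
        self.top = []                 # known cards; index 0 is drawn first
        self.bottom = []              # known cards under the unknown pool
        self.rng = rng                # stands in for joint randomness

    def draw(self):
        if self.top:
            return self.top.pop(0)
        if self.unknown:
            # in the real protocol, both players agree on this choice
            card = self.rng.choice(sorted(self.unknown))
            self.unknown.remove(card)
            return card
        return self.bottom.pop(0)

    def scry_top(self, card, keep_on_top):
        """Place a just-revealed card on the known top or bottom stack."""
        self.unknown.discard(card)
        (self.top if keep_on_top else self.bottom).append(card)

lib = SharedLibraryState([1, 2, 3], random.Random(7))
lib.scry_top(3, keep_on_top=False)   # Grizzly Bear (#3) goes to the bottom
first, second, last = lib.draw(), lib.draw(), lib.draw()
assert {first, second} == {1, 2} and last == 3
```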

Expand full comment
AlexTFish's avatar

I've played entirely digital Magic over the internet from time to time. A friend wrote a little app that tracks your deck state and supports draw, scry, Brainstorm, shuffle etc. That app was trust-based for playing with friends, but most of its functions could be easily adapted to share appropriate hashes after each library interaction. Scry and surveil are completely fine, as you say. But even that app would have trouble with Fateseal and Praetor's Grasp.

And yet having seen some of the proofs that are possible in the space of card algorithms, I feel like this might still be possible. (And if it's not already solved, there's probably a paper at the FUN With Algorithms conference in it for whoever solves it!)

Expand full comment
quiet_NaN's avatar

I think that for fateseal & friends, a simple hash-based system is probably not sufficient, and you want a full public key crypto system instead.

In particular, Fateseal could be implemented using a commutative crypto system, e.g. one where you can encrypt a text twice with different keys and it does not matter in which order you do the decryptions. (There might be a more correct name than commutative for this property, though.)

Basically, each card in the library is encrypted twice: once by each player (who can also reorder the cyphertexts, to shuffle the cards).

If you draw a card, then your opponent decrypts the next-index twice-encrypted card. They do not learn anything from that. However, you can now decrypt the result into a card name (plus salt).

If instead you get targeted by Fateseal 1, then you have to decrypt the top card of your library for them. This will allow them to know what card it was (without you being the wiser).

One difficulty is that both parties would have to apply a per-encryption salt if they use public key systems, which will likely clash with the commutativity requirement.

Searching for "commutative encryption" leads to [1], which cites [2]. Seems the trick is to use SRA, which is RSA but with different stuff kept secret, to play "mental poker" -- which is probably mostly equivalent to MtG from the required crypto primitives.

So my idea is not entirely nonsense, just some 45 years late.
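The SRA trick is commutative because encryption is just exponentiation modulo a shared prime, so the two players' layers peel off in either order. A toy sketch with a tiny prime (nowhere near secure parameters):

```python
import math
import random

P = 1_000_003  # shared prime; toy-sized, real SRA needs a large prime

def keygen(rng):
    """Pick an encryption exponent coprime to P-1, plus its inverse."""
    while True:
        e = rng.randrange(3, P - 1)
        if math.gcd(e, P - 1) == 1:
            return e, pow(e, -1, P - 1)

def enc(m, e):
    return pow(m, e, P)

def dec(c, d):
    return pow(c, d, P)

rng = random.Random(0)
ea, da = keygen(rng)  # Alice's exponent pair
eb, db = keygen(rng)  # Bob's exponent pair

card = 42  # a card encoded as a number in 1..P-1
both = enc(enc(card, ea), eb)  # encrypted under both keys
# Commutativity: the layers come off in either order.
assert dec(dec(both, da), db) == card
assert dec(dec(both, db), da) == card
```

In the Fateseal case described above, the owner strips only their own layer from the twice-encrypted top card and hands over the result; the fatesealing player removes their layer locally and learns the card, while the owner sees only an opaque ciphertext.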

Edit: upon reading the SRA paper [2], it appears that the system depends on disclosing the key at the end of the game. This is fine for MtG but absolutely terrible for poker. The fact that you will wonder for the rest of your life if your opponent was bluffing is very much a key part of poker!

[1] https://xianmu.github.io/posts/2018-09-19-commutative-encryption.html#ref-shamir1981mental

[2] https://dspace.mit.edu/bitstream/handle/1721.1/148953/MIT-LCS-TM-125.pdf?sequence=1

Expand full comment
EngineOfCreation's avatar

Yes, I'm looking for a zero-trust protocol that can deal with the entire tournament-legal card pool except of course cards like Chaos Orb that interact with the physical space.

Expand full comment
Anonymous Dude's avatar

I'm going to try to turn off my usual misanthropy for a moment, because I think I might have a useful idea, and some of the people here probably know more than me on this subject. (Maybe even Scott himself--he's a psychiatrist?)

On the recent discussion on men in the Bay Area, it occurs to me there's probably a large amount of older therapy literature from the period when most of the people writing about it were men. Granted it's probably got all kinds of obsolete psychoanalytic ideas running through it, but there might be stuff about ego, self-actualization, etc. that might be useful in the current problems of trying to make effective therapy for men?

EDIT: One example I can think of is McClelland's Three Needs theory from 1961, which says people need power, achievement, and affiliation; obviously the first two of those are much more male things, so addressing some of these might be a useful thing for therapists. The male desire for power is now pathologized, but therapists who know it's common might be able to discreetly find ways to satisfy it in everyday life, or at least realize it can't be and focus on achievement or affiliation to compensate. Stuff like that.

I'm not a therapist, and should never be one. I'm putting this out there in case someone smarter than me can take it up.

Expand full comment
Viliam's avatar

I like this idea, and the obvious first thought is to find the old books written by men, feed them to a LLM, and let it role-play the therapist.

Expand full comment
Scott Alexander's avatar

That's an interesting thought, but I think maybe slightly misguided.

Men actually are well-represented among founders of different therapy schools, both in the past and the present (I think therapy school founder requires a similar neurotype to entrepreneur or something). So I don't think the bottleneck is that there aren't therapy schools that understand the male mind or whatever. I think the bottlenecks are:

- the therapists implementing the schools, including the male-founded schools, are mostly women.

- many therapists aren't going off a school and are just kind of operating on vibes. There is widespread debate about whether this is better or worse, but in any case most of the therapists doing this are female, most of the vibes are female vibes, and insofar as these vibe-based practices then congeal into a coherent philosophy of therapy, it's a female philosophy.

- men are less interested in going to therapy in the first place, and might prefer some kind of figuring-out-your-life practice which is not quite therapy in the traditional sense.

Expand full comment
Eremolalos's avatar

My impression, as a female therapist who does CBT (cognitive therapy of depression, exposure and response prevention for anxiety) is that men like unmodified, straightforward CBT approaches way more than women. Men see the CBT as practical and to-the-point, while women find it dry in its pure form.

But there is no worked-out CBT approach for some problems. And it was striking to me how many men in the comments on the review discussing male pain and confusion were moved by its "female" component of sympathy and understanding: "Yes, yes, someone gets it about what it's like to be me, and how painful and confusing it has been!" Nothing wrong with someone's wanting that. But it seemed to me that men are yearning for understanding and sympathy, while also asking for a kind of therapy that isn't all girly and sympathetic and you-get-me! based, but is practical -- a set of tools. I can't picture what that treatment would be.

As for Albert Ellis: I read his stuff, and once heard him speak. He was acerbic and funny, but didn't really have a lot to say about his approach. It came down to "make yourself do the stuff you're scared to do," with no additional tweaks for different kinds of anxiety and different kinds of people, and no hints of how to convince people to take the steps he was advising. People who are scared to do something that other people do as part of everyday life are already hearing from all their friends and relatives that they should just fucking push through the fear and do it. The hard task is to get them to take the step. Once they take it, they usually get good results and become self-motivated.

Expand full comment
Anonymous Dude's avatar

Thanks for commenting! Not that many therapists on here, but I don't know where else to go where people would be open to the idea.

It's not all one thing or all the other. It's not that 'feminine' sympathy and understanding have no place at all in therapy of men, I'm just wondering if people with more understanding of therapy, the history of therapy, or people in general than me can go prospecting in older manuals and maybe find some gold. A better example now that I think of it is McClelland's Three Needs theory from 1961, which says people need power, achievement, and affiliation; obviously the first two of those are much more male things.

I'm too dumb (interpersonally) and spergy to do it myself, but I'll throw the idea out in case some people-smart person can use it and do some good.

Expand full comment
Anonymous Dude's avatar

Nice to get an actual informed opinion! Thank you!

But aren't there still older books that might be more useful to men if updated for modern conditions? I've seen tapes of Albert Ellis and he had that sarcastic midcentury New York Jewish edge I think a lot of (non-antisemitic) young guys might like.

Expand full comment
Ruffienne's avatar

I'd like to see this idea kept going if at all possible. It seems to me there is a significant unmet need for this kind of therapy.

The current standard version does not seem to be all that appealing or effective for the average man.

Some not-useful things seem to happen when a troubled man seeks help from a vibe-y woman therapist.

I've seen this play out repeatedly, generally resulting in the man abandoning therapy completely. So any functional and appealing alternative would be very welcome.

Expand full comment
Justin's avatar

A question I posed to my wife and my brother-in-law: is there anyone else in the history of the world whose name is more known and used than the Earl of Sandwich? Think about it: his name became basically just a part of the English language. His name is used all the time, even to describe things that he legitimately had nothing to do with because they come from completely separate cultures or predate him. I've seen Indian flatbread sandwiches be called, well, sandwiches, and for Passover every year Jews everywhere read the Haggadah and do the ritual of eating the Hillel sandwich. Sandwich's name even became a verb for putting non-edible things between other things; I can be sandwiched between two people on the bus or a house can be sandwiched in the middle of two buildings. My brother-in-law proposed that Caesar may be a contender, which is a good proposal with Kaiser, Tsar, C-sections, and salads, though I still think maybe Sandwich has him beat.

Expand full comment
Yug Gnirob's avatar

William Penn has Pennsylvania, Pennzoil, the penny, the pen, the pencil, and the penis. Also the penultimate, so named because he was second-to-last in everything.

Expand full comment
Tatu Ahponen's avatar

But the Earl of Sandwich is a title, not a name. Sandwich is a place name.

Expand full comment
The Ancient Geek's avatar

Been there!

Expand full comment
Eremolalos's avatar

Why yes, the famous Jonathan The.

Expand full comment
Remysc's avatar

Jesus Christ is a widely known figure, and you got Spanish-speaking countries mentioning him each time someone sneezes. Then if you also count any mention on interjections and such, I think it really adds up.

Expand full comment
Erica Rall's avatar

The Virgin Mary might be another contender, given the popularity of the Hail Mary prayer among practicing Catholics. I think you say her name over a hundred times while praying a rosary (twice per prayer for each of 50+ "Hail Mary" beads).

Expand full comment
Taymon A. Beal's avatar

That doesn't count because you're referring to her by name, it's not a word with a separate definition etymologically derived from her name. Contrast, e.g., https://en.wiktionary.org/wiki/Jesus_Christ#Interjection

Expand full comment
Erica Rall's avatar

The original question was about whose name is most known and used, although we very quickly drifted away from that

Expand full comment
Steeven's avatar

Duke of York? Every time someone says new york, mentions the new york times etc. I think that's likely to be said very often, although I have no idea whether people working at subway say sandwich all the time.

Expand full comment
User's avatar

Comment deleted

Expand full comment
Taymon A. Beal's avatar

It was his name in the sense that it was what people called him. (And the foodstuff is named after him specifically, not after the title.)

Expand full comment
Kristian's avatar

What about Amerigo Vespucci?

Sandwich was technically his title not his name, which was John Montagu.

Expand full comment
Mark Russell's avatar

No one calls anything "Amerigan." Is that too literal? I just mean that the translation (Americanization?) of the spelling has long since eclipsed the original.

I am fine with Sandwich so far--bigger than Jesus

Expand full comment
Matthieu again's avatar

The (whole) continent was called America from the start (1517), never Amerigo or Ameriga. The -ica ending imitates other territory names in Latin including Africa.

Expand full comment
Mark Russell's avatar

Thank you for the Latin grammar. I still do not think this helps his case of having the most commonly used name.

Expand full comment
Kristian's avatar

His name in Latin is Americus. Prominent people in the 15th century would have used the Latin forms of their names. America is the feminine form of that to bring it in line with other continents.

Expand full comment
Justin's avatar

Amerigo is a good one. He has the benefit of being used cross-languages.

Expand full comment
Matthieu again's avatar

Sandwich is also used cross-languages. The wikidata item, https://wikidata.org/wiki/Q28803 , shows that the name of the thing in most languages is either "sandwich" or an adaptation of it to the language's writing system. This includes chinese 三明治 : see https://en.wiktionary.org/wiki/%E4%B8%89%E6%98%8E%E6%B2%BB .

Expand full comment
Nazar Androshchuk's avatar

That’s surprising. Many Eurasian countries have it along the lines of “butterbread”.

Expand full comment
Paul Botts's avatar

I think this has to be the winner. "America" is a word used by speakers of languages all over the planet including plenty of people who don't know a complete sentence's worth of English words. Plus it appears in multiple places on every world map, classroom globe, etc.

Expand full comment
Melvin's avatar

Just checked a few heavily used languages; Mandarin, Hindi, Arabic, Bengali all use some derivation of Amerigo as the name for America.

I think that pretty close to 100% of the world population who have any idea of global geography at all are going to know some version of this name, I can't possibly imagine that Sandwich is going to beat it.

Expand full comment
Wasserschweinchen's avatar

https://en.wiktionary.org/wiki/sandwich lists all those languages as using some derivative of "Sandwich", so the question might mostly come down to if people talk more about the Americas or about sandwiches.

Expand full comment
Paul Botts's avatar

Several of the languages listed there have relatively small numbers of speakers (Danish, Norwegian, Dutch, Finnish, Occitan). Not all, the list does include Spanish and also French. Of course those languages do use "America".

The ones Melvin checked represent a great many people who, assuming the Wiktionary "sandwich" entry is comprehensive, do not use "sandwich" but do use "America". Mandarin, Hindi, Arabic, Bengali, just those four languages account for ~2 billion people. Are there any widespread languages which are the reverse, using "sandwich" but not using "America"?

Expand full comment
Sol Hando's avatar

Quite a few words are named after someone or other. Especially Greek and Latin words.

Some that come to mind:

Tantalize - King Tantalus

Narcissism - Narcissus

Draconian - Draco (Athens)

Mentor - Mentor (Odyssey)

Boycott - Charles Boycott (Some Irish guy from what I remember)

Also a ton in science terms like Pasteurize, Diesel, Ampere/Volt and quite a lot of other stuff I’m sure.

Expand full comment
Nathaniel Hendrix's avatar

Definitely not more common than "sandwich", but also: Mausolus -> mausoleum

Expand full comment
Taymon A. Beal's avatar

I think only real historical figures are supposed to count.

Expand full comment
Odd anon's avatar

Most people who say "sandwich" have no idea that it's named after anyone, but if that doesn't matter...

More well-known? I would guess that some prominent religious figures surpass it: Adam, Abraham, and Moses are all widely known among adherents of the Abrahamic religions, which are the majority of the world's population.

More frequently used? I don't speak Mandarin, so I don't know if any common words in it are named after people, but if there are, it would probably beat sandwich, given that much of the world either doesn't often eat sandwiches or doesn't speak a language where the word descends from the Earl's title. Caesar's name probably gets more uses from the month of July than from any other source, I would guess.

If there's any actual human whose name was the source behind the various deities that have weekdays named after them (Thor's Day, Saturn's Day, etc), those would probably beat sandwich.

> for Passover every year Jews everywhere read the Haggadah and do the ritual of eating the Hillel sandwich.

Most seders aren't conducted in English, and therefore do not have any direct reference to "sandwich".

Expand full comment
Matthieu again's avatar

I'd guess there are more people who are familiar with sandwiches worldwide than there are Chinese people. In any case in Chinese it is also called a sinicized version of "sandwich": 三明治. Funny how that 三 (sān, meaning 3 when on its own) looks a lot like a sandwich.

Expand full comment
Justin's avatar

We considered Mandarin, but there are more global speakers of English than Mandarin; it's more widespread as a second language.

Expand full comment
Taymon A. Beal's avatar

My guess is that the most commonly used English word that's a specific person's name is "guy".

Also, re: your brother-in-law's suggestion, don't forget to include July.

Expand full comment
Justin's avatar

Is the word guy named after Guy Fawkes or someone? I always assumed it was like a Tom, Dick, and Harry thing.

Expand full comment
Erica Rall's avatar

Yes. The chain of meanings was something like:

1. Male given name, cognate of Gaius or Guido

2. Effigy of Guido "Guy" Fawkes

3. Dummy or effigy in general

4. A shabby or disreputable fellow, particularly one wearing clothes resembling the worn-out cast offs traditionally used to construct a sense-2 or sense-3 guy.

5. The modern meaning, similar to 4 but having lost the negative aspects of its meaning.

In the line from "I've Got a Little List" in the Gilbert & Sullivan opera "The Mikado", where the Lord High Executioner mentions "The lady from the provinces who dresses like a guy", he means sense 2 or 3 with a shading of sense 4. A modern American lyricist trying to convey a similar meaning might say something like "The woman from the country who dresses like a scarecrow".

Expand full comment
Humphrey Appleby's avatar

And August? (which can also be an adjective)

Expand full comment
Taymon A. Beal's avatar

Emperor Augustus was named after the adjective, not the other way round. Also, the adjective is probably less used than all the combined non-July things named after Julius Caesar.

Expand full comment
Yug Gnirob's avatar

Does anyone know a good writing forum? I've been trying to get back into fiction writing but am currently managing about four sentences every three days.

Expand full comment
Taymon A. Beal's avatar

A bunch of my friends in the community use https://4thewords.com.

Expand full comment
Yug Gnirob's avatar

Worth a shot, thanks.

Expand full comment
NASATTACXR's avatar

I remember the children's ditty about Mussolini. It was sung to the tune of "Whistle While You Work" (as sung by the seven dwarfs of Snow White fame), and it was ambiguous as to whose member (his own, or der Führer's) Mussolini had bitten.

This was 20ish years after the end of WW2. I assumed it originated with the uncles and older brothers of my peers.

Expand full comment
Melvin's avatar

I'd never heard the dirty version until this thread. I do remember a cleaner version ("he's half barmy, so's his army") which appeared in an episode of Dad's Army back in the day. (https://www.youtube.com/watch?v=_YMVPXmaKds at the 28 second mark)

Now I come to write it out, I realise that it might be a UK/US distinction as well as a clean/dirty distinction; "twerp" and "barmy" are distinctively British while "jerk" and "weenie" are very American.

Expand full comment
gdanning's avatar

I recall it as well, though in the version I knew, he "pulled his weenie." I had always assumed it was his own weenie. My understanding is that the ditty was very popular during the war; I am pretty sure I learned it from my mother (along with several unrelated dirty limericks), but I might be mistaken.

For the uninitiated:

"Whistle while you work. Hitler is a jerk. Mussolini pulled his weenie; now it doesn't work!"

Expand full comment
Mark Neyer's avatar

Have LLM’s hit a scaling wall?

If so, what next? A new architecture goes further? A financial crash?

If not, then what?

Expand full comment
Paul Brinkley's avatar

There are obvious frontiers we've not yet explored, namely in robotics. AIs haven't been attached to robots in any meaningful way, and AFAIK they haven't been trained on anything but text and pixels. No one's plugged an AI into something with haptic feedback, for example. Or even audio (again, AFAIK).

My working hypothesis is that in addition to working the mobility and sensory problems, an LLM could gain a quantum advance in improvement by being trained to predict the outcome of some action well enough to plan an alternate action with the same outcome but in less time / power / environmental impact, or more importantly, plan an alternate to a long series of actions. In other words, when set to perform some action thousands of times, an AI that could notice a pattern in the result that it could exploit to devise a shortcut. Imagine an AI performing some repetitive plastic shaping in a factory, figuring out that the only "important" outcome is the production of a great number of some non-trivial widget (yes, yes, I know, paperclips) at low cost, and designing a new process to optimize for that.

A process improvement AI would be limited ultimately by the decreasing marginal improvements in exchange for rising costs of optimizations, but if it's going to be running anyway, studying processes comes for free, can mechanically consider more factors, and can run 24/7.

Expand full comment
John Schilling's avatar

One serious issue with this approach is that it takes a godawful ridiculous lot of training data to get good results out of an LLM or other transformer. One can get a godawful ridiculous amount of text or pixels fast, and for mere gigabucks, by pinging the internet and saying "give me everything!"; bandwidth isn't free, but it's pretty cheap.

Meatspace is much slower, and the cost per interaction is much higher, and we haven't had everyone documenting all of their meatspace activities in an AI-legible way for the past thirty years. So I suspect it's going to take a godawful lot of expensive robots working for an annoyingly long time to get similar results.

OTOH, Sam Altman did seem to think he needed seven trillion dollars for *something*

Expand full comment
Joe's avatar

I would suggest ignoring both the hype and the anti-hype – journalists seem to like writing headlines like "gpt5 was a massive failure which proves LLMs have plateaued", but I think it's actually mostly in line with trends (Peter Wildeford said it's roughly what he forecasted; same for me; a poll in the ACX discord of gpt5-predictions before release showed most people getting what they expected or being at most mildly disappointed). For instance, the "time horizon" benchmarks are still scaling strongly. Personally I think Zvi Mowshowitz gives some good overviews of this stuff.

Expand full comment
Marius Adrian Nicoară's avatar

I've been wondering the same thing after noticing that the release of GPT-5 didn't make that big of a splash. Although it's still relatively early, so who knows what might pop up.

I think AI models with the ability to do continuous learning could be an interesting next step. Quickly integrating new information seems essential for getting good use out of an AI model. That way, a model could improve on the job, so to speak.

I imagine something like telling an LLM the kind of mistake patterns it makes leading to the model avoiding those kinds of mistakes in the future.

But that seems like a big safety issue, because it would probably be very hard to predict what new behaviors such models might develop. What kinds of changes should a model welcome and what kind of changes should it resist? How could it be guaranteed that the model will uphold some core principles if it's always open to change?

Expand full comment
Taymon A. Beal's avatar

Note that the amount of compute that went into training GPT-5 was a much smaller relative increase over GPT-4, compared to GPT-4 over GPT-3 or GPT-3 over GPT-2. So the correspondingly smaller capability gains are expected and don't refute the scaling hypothesis; OpenAI is just engaging in version number inflation. Training a model that's that kind of step up from GPT-4 requires constructing a bunch of new data centers from scratch, and there hasn't been time for that.

Expand full comment
Marius Adrian Nicoară's avatar

Then it seems that OpenAI is trying to keep the hype going while it manages to get those new data centers up and running.

I wonder what reasons can convince investors to be patient enough to not withdraw their money while waiting for the new infrastructure.

Expand full comment
dualmindblade's avatar

No, unfortunately, depending on what you mean. It doesn't seem like labs are banking on just 10xing the parameters of the last thing that worked and throwing most of the compute at pre-training. The new hotness is synthetic data, which is expensive to produce but, unlike internet data, is potentially unbounded. The methods used so far to train the top models are quite crude, at least those described in publicly available documents; still, they have been rather effective, and capabilities continue to increase quickly and steadily despite model size kind of leveling out or sometimes even shrinking.

So what's likely going to be evolving, in the near term, is training architecture. With the transformer itself, while there are many ideas floating around, it's not super clear what to try next other than fiddling with little details to squeeze out more efficiency; on the training front, by contrast, there are some blindingly obvious next steps. For example, engineers would, I imagine, very much like to have the ability to train a really big model with something like MCTS; it's been tried, but the hardware needed to make that viable apparently isn't there yet, and simpler but less compute-heavy schemes have won out. Despite that, the trend of getting close to maxing out every benchmark more than a couple of years old continues. And the hardware needed for the next few steps is coming online very soon. Seatbelts buckled, we ain't seen nothing yet

Expand full comment
Richard's avatar

Why don’t you just vibecode the CBTi app yourself, Scott?

Expand full comment
Scott Alexander's avatar

I don't know CBTi, coding, or entrepreneurship, and I'm not sure we're quite at the part of the glorious AI future where the best person to do something that requires three skills is someone who has zero of the skills.

Expand full comment
Viliam's avatar

Maybe you could just give away the app for free, and then you don't need much entrepreneurship...

Expand full comment
Richard's avatar

Fair enough!

Expand full comment
Taymon A. Beal's avatar

Is the technology at the point where somebody without any expertise in software engineering can make an app and expect it to work? That was not my impression.

Expand full comment
Alastair Williams's avatar

I experimented with it a bit over the weekend. My impression was that it works until it doesn't, at which point it becomes a complete nightmare to untangle everything and figure out what was going wrong. And the LLM itself was not much help with that!

Also, though, it didn't work well when I said "make an app to do xyz". It only really worked when I broke down the project myself into a series of logical steps, and then asked the LLM to work on each step at a time.

Expand full comment
Lucas's avatar

Web apps, kind of, if you already have the kind of "grit" you would need to code (try something, fail, try to understand why you failed without getting too frustrated, try again), but you will get results way faster than by learning to code from scratch. Mobile apps are harder from what I understand; I see fewer of them.

Expand full comment
Richard's avatar

Admittedly I was trolling a little bit. But only a little! I really am curious what Scott has to say about this.

I think it is possible to get some sort of working app if you spin the slot machine often enough, even if you don't know any code. But also I assume that Scott has access to the latest and greatest (or at least most expensive) models, so his experience might be totally different and much better. If not, I'd like to hear about that too.

Expand full comment
EngineOfCreation's avatar

The toothbrush moustache sported by Hitler was already popular in Germany, and in its country of origin, the USA, by the time he probably adopted it. There are photographs of Hitler during WW1 that show him wearing a Kaiser moustache instead. Wotan's moustache in the painting also looks much broader than a toothbrush, extending beyond the nose on the sides, more like a painter's brush moustache.

https://en.wikipedia.org/wiki/Toothbrush_moustache

https://www.google.com/search?q=Painter's+brush+moustache

Expand full comment
Nechninak's avatar

The German news magazine Stern's website recently had an article about Hitler's moustache.

https://www.stern.de/panorama/wissen/adolf-hitler--wie-er-zu-seinem--hitler-bart--kam---und-was-er-bedeutet-30826594.html

To summarize: the "trench moustache" was part of Hitler's and the NSDAP's propaganda, built on his personal recognizability as a trench soldier from the war. Everybody in Germany in the post-WW1 era would recognize it and understand his message, "I am one of you soldiers". For the same reason, he added a trenchcoat to his civilian suit and abstained from wearing his lederhosen. Moreover, it was a more modern kind of moustache compared to the establishment style.

Expand full comment
Deiseach's avatar

Not the only time someone thought that certain pictures must have been the inspiration for later work.

From a draft letter of Tolkien, 1971:

" A few years ago I was visited in Oxford by a man whose name I have forgotten (though I believe he was well-known). He had been much struck by the curious way in which many old pictures seemed to him to have been designed to illustrate The Lord of the Rings long before its time. He brought one or two reproductions. I think he wanted at first simply to discover whether my imagination had fed on pictures, as it clearly had been by certain kinds of literature and languages. When it became obvious that, unless I was a liar, I had never seen the pictures before and was not well acquainted with pictorial Art, he fell silent. I became aware that he was looking fixedly at me. Suddenly he said: 'Of course you don't suppose, do you, that you wrote all that book yourself?'

Pure Gandalf! I was too well acquainted with G. to expose myself rashly, or to ask what he meant. I think I said: 'No, I don't suppose so any longer.' I have never since been able to suppose so. An alarming conclusion for an old philologist to draw concerning his private amusement. But not one that should puff any one up who considers the imperfections of 'chosen instruments', and indeed what sometimes seems their lamentable unfitness for the purpose."

See also here for possible (postcard) inspiration:

https://tolkiengateway.net/wiki/Gandalf#Inspiration

"Tolkien had a postcard labelled Der Berggeist ("the mountain spirit"), and on the paper cover in which he kept it, he wrote "the origin of Gandalf" at some point. The postcard reproduces a painting of a bearded figure, sitting on a rock under a pine tree in a mountainous setting. He wears a wide-brimmed round hat and a long cloak and a white fawn is nuzzling his upturned hands. Humphrey Carpenter in his 1977 biography said that Tolkien had bought the postcard during his 1911 holiday in Switzerland. However, Manfred Zimmerman discovered that the painting was by German artist Josef Madlener and dates to the late 1920s. Carpenter concluded that Tolkien was probably mistaken about the origin of the postcard himself. Tolkien must have acquired the card at some time in the early 1930s, at a time when The Hobbit had already begun to take shape."

Expand full comment
Justin's avatar

As soon as I feel like I've heard all of the details about Tolkien, his thoughts, what he was going for, all of his letters, here's one I had not known about.

Expand full comment
Deiseach's avatar

The letters are so good, the only problem is that they are selected and edited, so sometimes Humphrey Carpenter leaves out interesting bits and obviously there are a ton more letters I, for one, would like a peek at 😀

Expand full comment
Sol Hando's avatar

I thought he cut it to fit under a gas mask and just decided to keep it that way.

WW1 was when beards and mustaches really fell out of style, and it’s largely because they needed to be trimmed or cut completely to fit under the gas masks at the time.

Expand full comment
EngineOfCreation's avatar

That is the story, yes - other stories say that he adopted the toothbrush to leech off of Charlie Chaplin's popularity. But the reason doesn't really matter; the point is that there is photographic proof that he didn't have the toothbrush before well into adulthood, and that he didn't adopt it "as a youth" as the highlighted comment relays.

https://www.astralcodexten.com/p/your-review-ollantay/comment/148004547

Expand full comment
Deiseach's avatar

That then raises the question, why did Chaplin use the toothbrush moustache? If it was a style at the time, no need to wonder where Hitler picked up the idea of grooming that way.

According to Wikipedia, it originated in America and was introduced to Germany by visiting Americans:

https://en.wikipedia.org/wiki/Toothbrush_moustache

"The toothbrush moustache was introduced to Germany in the late 19th century by visiting Americans. Previously, the most popular style was the imperial moustache, also known as the "Kaiser moustache", which was perfumed and turned up at the ends, as worn by German emperor Wilhelm II. By 1907, enough Germans were wearing the toothbrush moustache to elicit notice by The New York Times under the headline "'TOOTHBRUSH' MUSTACHE; German Women Resent Its Usurpation of the [Kaiser moustache]". The toothbrush was taken up by German automobile racer and folk hero Hans Koeppen in the famous 1908 New York to Paris Race, cementing its popularity among young gentry. Koeppen was described as "Six-feet in height, slim, and athletic, with a toothbrush mustache characteristic of his class, he looks the ideal type of the young Prussian guardsman." By the end of World War I, even some of the German royals were sporting the toothbrush; Crown Prince Wilhelm can be seen with a toothbrush moustache in a 1918 photograph that shows him about to be sent into exile."

So if it was a popular style worn by role models, I am not surprised Hitler adopted it when he was becoming a big cheese in the Party in Munich; this would show he was not some hick from the provinces.

Expand full comment
Gamereg's avatar

Why would Hitler have wanted to channel a comedy actor anyway?

Expand full comment
Wanda Tinasky's avatar

Have you actually seen a Chaplin film? They're brilliant. I'd compare them to a Coen Brothers' film. Sure there's humor but there's also deep artistic wrestling with the existential realities of life. They also have a breathtaking visual style. They weren't just slapstick.

Expand full comment
Gamereg's avatar

Ironically the only Chaplin film I've seen is "The Great Dictator".

Expand full comment
Deiseach's avatar

Allegedly he loved Chaplin's movies:

https://en.wikipedia.org/wiki/Toothbrush_moustache

"According to Hitler's bodyguard Rochus Misch, Hitler "loved" Chaplin films, a number of which he watched at his teahouse near the Berghof (built c. 1936)."

Expand full comment
Rob's avatar

I recently got a vasectomy as a push present to my wife after she birthed our third child. Being a good ACXer and a man anxious about having my junk operated on, I researched the procedure obsessively. The medical information was mostly unsurprising. The demographic information was surprising.

https://pmc.ncbi.nlm.nih.gov/articles/PMC2784091/

From the abstract: "11.4% of men aged 30–45 years reported having a vasectomy, representing approximately 3.6 million American men. While 14.1% of white men had a vasectomy, only 3.7% of black and 4.5% of Hispanic men reported vasectomy. On multivariate analysis, a significant difference in the odds of vasectomy by race/ethnicity remained, with black (OR 0.20, 0.09–0.45) and Hispanic men (OR 0.41, 0.18–0.95) having a significantly lower rate of vasectomy independent of demographic, partner, and socioeconomic factors. Having ever been married, fathering two or more children, older age, and higher income were all associated with vasectomy."

I had assumed getting a vasectomy was like a standard rite of passage for American men entering middle age. My Dad had one, and many of my friends' dads got snipped. Same for my wife's family. It turns out that vasectomies in the US are not *that* common and those who get them are overwhelmingly middle class white males, and we were raised in a white middle class bubble. Mind blown.

Expand full comment
Jeffrey Soreff's avatar

>I had assumed getting a vasectomy was like a standard rite of passage for American men entering middle age.

Hmm... I'm childfree myself, and got mine at age 29. Yeah, middle class and white.

Expand full comment
Wanda Tinasky's avatar

This sorta makes sense to me. A vasectomy is kinda like a prenup: they're only useful if you have something to lose.

Expand full comment
Axel's avatar

How was the procedure itself? I am thinking about doing the same after our next child, but am not looking forward to it.

Expand full comment
Rob's avatar

The procedure itself wasn't bad. Local anesthesia and I didn't feel the incision or anything more than pressure/tugging. Prior to the incision, the doc discovered one of my vas was out of place, so he had to manipulate it closer to the skin with his fingers. That hurt quite a bit, even with the anesthesia.

The initial recovery was quick (stopped ice and painkiller about 36 hours in) but I wasn't able to resume running and lifting without discomfort until about 2 months after.

Expand full comment
Axel's avatar

Oh wow, that's quite a long break to take from sports. Thanks for the reply!

Expand full comment
Straphanger's avatar

Why do a vasectomy when you could just use an IUD?

Expand full comment
Rob's avatar

It was a present to my wife. Requiring her to undergo a (by most reports) painful procedure after 3 C-sections seemed unsporting.

Expand full comment
Snags's avatar

I'm staying on my IUD forever because it stops me from having a period. It's a gift that keeps on giving!

Expand full comment
NASATTACXR's avatar

Before my vasectomy almost 30 years ago, the doctor showed me a short educational film, which included an apprehensive Black man asking whether, after the procedure, he "would still be a man".

Consistent with the data, I am a white middle-class male.

Expand full comment
Anonymous Dude's avatar

Honestly? Feminists want me to do it, and I always used condoms when I was still dating anyway.

Expand full comment
Ppau's avatar

Why don't LLM apps implement normal messaging-app features, such as replies to specific messages?

When an LLM answer touches on multiple subjects, I don't want to have to say "about this subject could you tell me... regarding that one I disagree..."

The LLM should break up its response into multiple messages, and you should be able to reply to each message before launching the inference process

Expand full comment
Nathaniel Hendrix's avatar

Not sure if this is quite what you had in mind, but there are a few tools out there that allow you to "multiverse" with LLMs -- i.e., branch off a discussion from a certain point. Loom (https://github.com/socketteer/loom) is one, but it's specialized in fiction. Raycast also lets you do what they call "chat branching" (https://www.raycast.com/changelog/1-101-0). Afaik neither of these have the ability to merge the branched chats back into a single thread.

Expand full comment
Ppau's avatar

It's not quite what I meant

I was thinking of a single chat with a shared context; it's just that the answers could be split into chunks so that you can reply to specific parts

Expand full comment
Taymon A. Beal's avatar

Hasn't ChatGPT had this feature for years?

Expand full comment
Nathaniel Hendrix's avatar

Oh, have they? I guess I haven't noticed it, maybe because I'm mostly a Claude guy.

Expand full comment
Tasty_Y's avatar

But Claude also has always had it? Meaning - you can take any of your replies and edit it, turning it into a splitting point. Then you can switch between the branches. It's very inconvenient.

Expand full comment
Taymon A. Beal's avatar

Huh, I thought I remembered not being able to do this in Claude but it does seem to be available.

What's inconvenient about it? I find it quite useful. Or are you just referring to the inability to merge branches?

Expand full comment
Tasty_Y's avatar

If you make just one branching point, it's alright. But when you make multiple branches, multiple splits it becomes hard to keep track of what is where, and if you exit the chat the system doesn't remember where you were and next time you open it you may end up on some other node and have to look for the right place. Basically, it's not a system that was meant for heavy use, but it's convenient enough if you don't push it too far.
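To make the structure concrete: a minimal sketch (not any vendor's actual implementation; `Node`, `reply`, and `paths` are made-up names for illustration) of the branching described here. Editing an earlier reply adds a sibling node, and each root-to-leaf path is one linear conversation; the bookkeeping problem is that the number of paths grows with every split.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    role: str                     # "user" or "assistant"
    text: str
    children: list = field(default_factory=list)

    def reply(self, role, text):
        """Append a reply; a second reply to the same node creates a branch."""
        child = Node(role, text)
        self.children.append(child)
        return child

def paths(node, prefix=()):
    """Yield every root-to-leaf path; each is one linear chat transcript."""
    prefix = prefix + ((node.role, node.text),)
    if not node.children:
        yield prefix
    else:
        for child in node.children:
            yield from paths(child, prefix)

# One user message, one answer, then the follow-up edited into two variants:
root = Node("user", "Explain diphthongs")
answer = root.reply("assistant", "A diphthong is ...")
answer.reply("user", "Give an example")   # original follow-up
answer.reply("user", "Shorter, please")   # edited variant = new branch

print(len(list(paths(root))))  # prints 2
```

With only one split there are two transcripts to track; each further edit multiplies them, which is presumably why the UI loses track of "where you were".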

Expand full comment
Emanuele di Pietro's avatar

The other day I had the sudden realization that in most accents (and, more importantly, in General American, the accent I try to consciously emulate when I speak English) the pronoun "they" is pronounced [ðeɪ].

I have gone all my life pronouncing it [ðεɪ] like some sort of barbarian! I guess it's one of the effects of having started to learn English very young: you learn some things approximately when you are six, and they lock in forever. I still sometimes catch myself pronouncing "is" as [iz] rather than [ɪz]. How utterly embarrassing.

I don't really have a point with this, but it's a bittersweet realization: for however long and deeply you studied a non-native language, it will never be truly yours. You really are stranded where you were born, so to speak.

Expand full comment
skaladom's avatar

Phonetics is even more complicated than that... each vowel has a range of possible articulations, and the exact boundaries of the range vary from language to language. Not every speaker of a language that has both /e/ and /ε/ will necessarily agree on which side a given pronunciation of "they" falls.

To my ears, the "e" in "they" sounds more like [ε], but that's probably influenced by my own native language's version of the e/ε distinction.

The same kind of thing happens with native speakers of Indian languages and the English "t" sound. Most Indian languages distinguish a dental /t/ from a retroflex /ṭ/, with the tongue curled up. But the English sound falls somewhere in the middle, and the Indians unanimously decided that it corresponds to their retroflex /ṭ/. So here they are speaking English with their characteristic curled-up "t" sound... and it's not for lack of ability to produce a dental "t"! If they had just decided otherwise, their English "t"'s would sound like the ones in Spanish for example.

Expand full comment
Peter Defeel's avatar

Ok, since the phonetics aren't obvious to many here, I asked ChatGPT, which writes:

[ðeɪ] — is the standard pronunciation of they in English. The vowel [eɪ] is that diphthong we use in words like day, say, play.

Meanwhile

But if you’ve been saying [ðεɪ] with the open-mid front vowel [ε] (like the vowel in bed), you’ve basically been saying something closer to “theh” sliding into a weak y. Not far off, but it would sound a bit unusual to most English speakers — like a slight accent twist.

And when I asked was this kinda posh, the model said not anymore but it’s close to the clipped RP back in the day. The Queen in 1950 but not 2000.

Since British English is the gold standard (and frankly there's no need for the "British" qualifier), I think the OP should be proud

Expand full comment
Emanuele di Pietro's avatar

That's interesting to know, but my concern was that the sound I am making is, as you mentioned, unusual. The part that messes me up is that I'm pretty sure I make that specific vowel combination ONLY when pronouncing "they". I pronounce, for instance, "day" as [deɪ], i.e. the standard way. I have this idiosyncratic pronunciation exclusively for the word "they", in all of my English vocabulary.

That's not what a native speaker of any language would do, regardless of their accent: they would just pronounce "day, grey, may, they" as rhyming, regardless of the actual sounds.

But in my idiolect, "they" doesn't rhyme with anything!

This is also in reply to @Wasserschweinchen

EDIT: spelling

Expand full comment
Peter Defeel's avatar

> day, grey, may, they" as rhyming, regardless of the actual sounds.

That’s not obvious at all. After all there are differences between a and e in many words. (And probably isn’t how the Queen used to say “They went that day”)

Expand full comment
Breb's avatar

I sympathise; minor inaccuracies in pronunciation can be difficult to notice or correct once they become entrenched. I'd been speaking French for years before I realised there was supposed to be a difference between répondre [ɔ̃] and répandre [ɑ̃].

Expand full comment
Pjohn's avatar

Could be worse. I took a long time to realise there was supposed to be a difference between "Je suis en route" et "Je suis en rut".....

Expand full comment
skaladom's avatar

That could be embarrassing!

Expand full comment
Lucas's avatar

Hahahahaha

Expand full comment
Wasserschweinchen's avatar

According to A Course in Phonetics by Ladefoged & Johnson, [εɪ] is a normal realization of /eɪ/ in GenAm.

Expand full comment
Pjohn's avatar

I've heard this exact same worry before, from a French ex-girlfriend who could hardly speak English when we first met and felt terrible about it (despite my still worse French) and we discussed it quite alot. So I actually feel somewhat equipped to disagree with you!

1) I think it's pretty extreme to realise you're mispronouncing a word and conclude that the language isn't yours: many - perhaps most - native speakers mispronounce many words for their whole lives. For example, young British people lamentably mispronounce (and misspell) words the American way because apparently they learn their English from sodding Netflix and Twitter now instead of BBC Radio Four and Jane Austen like they bloody well ought to - and yet despite their horrible misshapen American-English-with-British-Accents the language belongs to them far more so than it does to me! For another example, there are English words I'd only read, never heard, and only upon hearing them decades later did I realise that I'd been mispronouncing the words my whole life! (There have been plenty, but "cloaca" springs to mind...)

2) The fact that you're sufficiently self-aware to be even _capable_ of noticing such things (and sufficiently well-educated to be capable of using the IPA..) puts you _miles_ ahead of the typical native English speaker. You may not have Tennyson's accent but you're nevertheless "doing English" the way he did! I think self-awareness and education (including self-education) reveal to you the beauty, uniqueness, poetry, and expressive power of a language far, far more than does having it programmed into you when you're a toddler and then literally never voluntarily giving the language another thought for the rest of your life and just using whatever misspelled, unpunctuated, minimum-effort English you absorb by accident from social media.

3) I think "belonging to" a language (or at least, the good part of belonging to a language...) is about how you think about what you say, and choose your words for their poetry or semantic content (as opposed to the quasi-meaningless slop that people only seem to say because they heard it on Netflix... "energy", "based", "gave", "called out", "hits"....) much more so than it is about what pronunciation you use.

4) There are a great many accents and dialects in English, sometimes all-but unintelligible to one another, and it's only really a total accident of geography that the Glaswegian accent "counts" and (say) the Sicilian accent doesn't; there's nothing fundamentally worse about the latter (some might even consider it more beautiful or expressive than the former!) If anything, I think that having both - provided they can at least understand one another, so the accent isn't an insurmountable barrier to communication - makes the language richer and more interesting.

Melvin's avatar

> There are a great many accents and dialects in English, sometimes all-but unintelligible to one another, and it's only really a total accident of geography that the Glaswegian accent "counts" and (say) the Sicilian accent doesn't; there's nothing fundamentally worse about the latter

The correct version of English is, by definition, the one that the King (or Queen) speaks. Everything else is a variant.

Pjohn's avatar

I'm afraid I couldn't disagree more! I do like centrally-managed languages (like French) and I think English could benefit from central management in many ways, but the idea that the Queen's accent (RP) is 'correct' and all other accents are 'wrong' is absurd. I don't use "absurd" merely to mean "wrong": it literally leads to absurdities.

Firstly, the Queen's accent in 1955 was different to the Queen's accent in 2015, so you'd have to have a "correct" pronunciation that varied over time with the vagaries of the changes to the monarch's accent (and indeed the monarch. If the next monarch happened to be born mute, "correct English" would become awfully quiet..)

Secondly, if the Queen was raised to speak correct English and correct English is defined as "what the Queen speaks" you have a circular definition. If the Queen were abducted as a baby by Wild West reenactors, raised to talk like a cowboy, then returned in time for her coronation, the British aristocracy aren't going to just shrug and accept that "howdy partner" is now the correct mode of greeting in polite society.

Thirdly, English is famous amongst linguists (cunning and otherwise) for not having a centrally-managed prescriptive authority such as l'Académie Française but instead being defined by consensus usage (hence "alot" or the singular "they" becoming acceptable through usage, "coconut" replacing "cocoanut", etc.) - it seems impossible to both be un-prescriptive and to have the prescription "the monarch is always right".

Fourthly, I take issue with "by definition" (aside from the circularity of this - *whose* definition? Recorded where? By what authority?) and with "everything else is a variant" (Chaucerian English is somehow a variant of the Queen's English, despite predating it by about half a millennium?)

Fifthly, if there were a candidate for "officially correct English" (which I deny!) I would be much more willing to accept Lord Reith's BBC English (very similar but subtly different to the Queen's English); rather than just being "However the monarch happens to speak right now" it had an actual positive socially-minded goal (videlicet providing a common standard for introducing radio broadcasting in a way that was acceptable and comprehensible to people who had hitherto potentially heard few (or no) accents outwith their own region).

Emanuele di Pietro's avatar

I get what you are saying, and admittedly it's not particularly bad, but it's the difference with my native language that gets me. I mispronounce really common words, while native speakers usually tend to mispronounce less common words; the less common the more mispronounced.

But I was mispronouncing "they"! It's probably in the top 30 most used English words, and I must have heard it spoken literally tens of thousands of times!

I get what you are saying, about foreign vs. native accents, and I agree in principle.

Still, when I can't keep my accent from slipping out, it feels like a failure of self-control.

Pjohn's avatar

I guess I would say that (again, you're not the first person I've had this exact same discussion with, sorry if this is somewhat weird coming from a total stranger!) if "keeping your accent from slipping out" is a fulfilling goal that you derive satisfaction and wellbeing from achieving, then sure: keep striving towards it and don't give up until you achieve it.

...but! A) Definitely think very hard about whether this is genuinely a satisfying and fulfilling goal to aim for, because the effort seems to be expensive to you and failures seem to affect you and maybe it's possible that you would derive more fulfillment from learning to embrace the uniquely Emanuellean "they" and focusing your efforts on some other problem, and B) even if, upon consideration, you do consider this a worthwhile and fulfilling goal that you genuinely do want to strive for, that's perfectly fine - but please don't conflate it with "better English" or "English belonging more to one", because that genuinely is more about mental properties than about perfectly mastering very very specific pharyngeal muscle movements...

(As for "they": I know many native English speakers who mispronounce "February", or use "criteria" in the singular, literally every day, and aren't remotely curious or contemplative enough to ever notice they're doing it. I believe that you are a better English speaker than these people, no matter how you pronounce "they"!)

NASATTACXR's avatar

There was an enjoyable discussion here (ACX) a few months ago, about mispronouncing words one had read but not heard.

This phenomenon was given the name "Calliope Syndrome", with Calliope rhyming with "hope" rather than "ropy".

It's associated with children who are avid readers.

Anonymous's avatar

It doesn't rhyme with "ropy" either. A calliope as in a steam organ has the stress on the I, kuh-LIE-uh-pee, and as such, cannot rhyme with any word of less than three syllables; Calliope the muse of heroic poetry (famously invoked in the exordium of the Iliad) has the stress on the (short) O, kah-lee-OPP-eh, which also is nowhere near a rhyme.

Melvin's avatar

Just today I realized that in a lot of accents, honor/honour is not pronounced the same as "on a", and so they don't find the name Honor Blackman to be all that funny.

John Schilling's avatar

The standard American English pronunciation is close enough to "on her" that you can still possibly rescue the joke. If you want to.

Dino's avatar

She offered her honor, I honored her offer, and I was on her and off her all night.

- G. Marx

Peter Defeel's avatar

I’m not sure what the phonetics are here but my wife uses Dey for They and dis for This, and she’s vaguely acceptable.

Pjohn's avatar

>and she's vaguely acceptable

Pronunciation-wise, or overall?

Peter Defeel's avatar

Both yes. Are you serious about this by the way? Do you really expect that native speakers ( of many varied accents ourselves) care about minor mispronunciations?

Pjohn's avatar

I wasn't serious at all, no. Sorry if I offended you! I just found "...and she's vaguely acceptable" very droll, is all - if it was meant purely factually I misread it, and I'm terribly sorry!

Peter Defeel's avatar

Oh I was being droll.

Rachael's avatar

The OP is talking about the vowel. They're pronouncing the consonant as a voiced th sound like in every standard variety of English.

Zanzibar Buck-buck McFate's avatar

See Michael Caine in De Prestige

Emanuele di Pietro's avatar

That would be more like [dεɪ], with th-stopping, wouldn't it?

Anyway, thinking about it that way makes it sound more endearing

Zanzibar Buck-buck McFate's avatar

Were you meaning more like a z sound?

Rachael's avatar

The OP is only talking about the vowel, not the consonant. They're pronouncing the consonant as a standard voiced th sound.

Zanzibar Buck-buck McFate's avatar

Okay, I stand corrected on the phonetic alphabet; the reason for the mistake was that I've heard d=th far more often than what the OP describes. BTW, d=th may not be 100% standard English, but it's more common than you think - here is a link to the BBC News Pidgin edition: https://www.bbc.com/pidgin

Zanzibar Buck-buck McFate's avatar

True, I guess I meant th=d generally

Kevin Zhang's avatar

I rarely comment, but the sheep program is hard to use and not worth your money.

Luomei's avatar

Hi all,

I am Luomei Lyu, the founder and developer of Sheep-Sleep. I refunded Kevin his $15 (not a few hundred bucks as he claimed) the second I woke up and saw the message. Poor Wi-Fi can cause latency in AI responses, and we are actively improving this every day.

The real value and focus of Sheep-Sleep is in the content. Every single AI response — down to the exact wording — comes from thousands of hours of discussion and two years of work with some of the most experienced sleep psychologists in the US. Sheep has already helped many people sleep better: https://www.gnsheep.com/case-studies

As for the internet issues, that’s on me!

—Luomei

P.S. Here is how it was made: We had the best psychologists in the country talk us through every conversation they would have for each insomnia case, even down to the exact word choices, analogies, and conversation pace when teaching these well-validated techniques to their patients. We then wrote hundreds of pages of instructions for generative AI to follow in its weekly sessions, so that it only responds with human-crafted material, while keeping the conversation collaborative, interactive, and highly personalized. This kind of dynamic, personalized dialogue is what makes in-person sessions effective at improving adherence, which remains the biggest challenge in this gold-standard, first-line treatment.

Michael's avatar

> I refunded Kevin his $15 (not a few hundred bucks as he claimed)

I don't see anywhere that Kevin claimed he paid $298, only that the app price is $298 and he expected it to have more polish at that price. That little dig comes across like you're trying to paint Kevin as a liar whose review is not to be trusted.

Byrel Mitchell's avatar

On the flip side, I absolutely read Kevin as implicitly claiming that he paid $298, and so I think it's pretty fair to frame that as deceptive.

Loominus Aether's avatar

As a practitioner, I'll say that AI prompts aren't always as easy as "put together hundreds of pages of instructions". Much of the art is in creating good routing to determine WHICH PAGE of instruction to read. If you're already doing that, great! But if you've been trying to do "one big prompt", then you might want to get some outside consulting (happy to recommend folks if needed).

Luomei's avatar

Yes! We do that. It is highly personalized: it has different scripted conversations with different users, covering all the cases.

Thank you for your advice! I’m happy to hop on a call to chat more if interested :-)

Scott Alexander's avatar

Tell me more?

Kevin Zhang's avatar

Sure. First of all, I ran the iphone app on my desktop and didn't have the best Wi-Fi, so that may have caused some problems below. Anyway, my main issues are:

- It's a SaaS, but there's no login (you literally get a code after paying), and there's no way to cancel your subscription except to email them and wait (I'm still waiting btw), which is just dumb. I thought 298/mo would at least get you an account!

- The conversation with the AI chatbot was unsatisfying for a bunch of reasons, e.g., it takes it like 20s of thinking to ask me a preprogrammed question like "Are you having trouble falling asleep or staying asleep?" Couldn't they just have a button that I can press? The first few questions are all like this, with it thinking for 10+ seconds, then asking a very simple question. Also, in its second answer, it repeated the answer twice (idk why), and in its 3rd or 4th answer, it didn't recognize my speech and asked me to speak again. Finally, this might be my aesthetic preference, but talking to an AI-generated cartoon of a sheep just feels...awful. Can't they put a bit more effort in and actually have someone draw the character?

- I couldn't find their email/contact on the website, the ads/promotion vids are, well like most fancy new medical products, distasteful and exaggerated––but these are just small complaints.

- Oh and if the founder sees this, I'm still waiting for you to cancel my subscription!

tgof137's avatar

Perhaps the slow responses were strategically used to make you sleepy?

Wanda Tinasky's avatar

lol

Daniel's avatar

Ah, the old MetaMed problem.

https://thezvi.wordpress.com/2015/06/30/the-thing-and-the-symbolic-representation-of-the-thing/

It doesn’t look aesthetically how one would expect an expensive health service to look, so it will be judged on that, and not on the ability to improve health outcomes.

Of course, for all I know the app could be garbage at actually improving sleep too, but I notice that that doesn’t come up in your list of complaints.

Kevin Zhang's avatar

It's a heuristic. Having such a badly designed app is correlated with insufficient funding, a lack of professionalism, not really caring much about people using the service, etc.

Taymon A. Beal's avatar

Okay, but that's not a useful user review, because users can see for themselves that it's unpolished. The point of a user review is so that users can find out whether it actually works, from someone who has direct knowledge of this!

Neadan's avatar

That's nonsensical, what he's talking about directly ties into the usability of the application.

Luomei's avatar

Hey, thank you for this!

Improving anyone's sleep def takes more than 10 minutes.

Pjohn's avatar

Actually, that's *exactly* what I would want in a user review for something like this. I can't see for myself that it's crude and unpolished and frustrating to talk to without paying £300-odd a month (and a very difficult to cancel £300-odd, too); I'd much rather find these things out before opening my wallet....

Notmy Realname's avatar

If it's so dissatisfying to use that somebody gets frustrated, tries to cancel, and can't even cancel, then as a user I've found out from this review that it doesn't work

Kevin Zhang's avatar

Yea, it's not. If someone can pay a few hundred bucks to try this and come back and write a review, that'd be more useful/reliable. (I'm just making a guess about the nature of this thing based on what I saw, if you still want to try the app after reading my rant, by all means go ahead.)

Celegans's avatar

What is the sheep program?

Kevin Zhang's avatar

SheepSleep (mentioned in the post above). It looks like they wrote the app in a high school hackathon. I bought a trial and canceled it. Charging 298/mo is diabolical.

Celegans's avatar

Ah yes, the pitfalls of coming straight to the comments.

It shouldn’t be too hard to vibe code a… competitor, especially if someone does the public service of signing up for a subscription and documenting all the app screens and features with screenshots…

Kevin Zhang's avatar

Idk, but I think maybe ACX should do some basic background check before featuring a service that's this scammy...

Horace Bianchon's avatar

What about an open source CBT app?

Taymon A. Beal's avatar

Well, somebody has to build it, and it doesn't seem like anyone's interested.

Edward Scizorhands's avatar

1. Build open source project.

2. ????

3. Receive no money and lots of abuse.

Horace Bianchon's avatar

FOSS

tgof137's avatar

Sadly, it's too late to apply for an ACX grant to do a randomized trial on whether the $298/month insomnia app helps people sleep or not.

I'm not sure how you blind the trial. Give half the people a placebo app which costs $298/month but is somehow guaranteed to not contain helpful advice?

Neurology For You's avatar

Trouble sleeping? Why not scroll Twitter or play a video game? Try eating a lot of candy and watching a kung fu movie!

Deiseach's avatar

Or for real old-school type Sheep Sleep, for that amount of money they put a sheep in your room every night. At the end of a month you are either sleeping cosily from counting all the sheep baa-ing softly in the background, or you're under so many sheep, you pass out from lack of oxygen!

Sol Hando's avatar

I’m pretty sure this is the placebo app.

tgof137's avatar

My favorite part of their website is the video titled:

"how to get off 14 years of sleep meds in 5 days"

Like, great, now you're paying too much for an app and you're also going through some horrific drug withdrawal.

Sol Hando's avatar

That’s sort of what I was implying with my joking comment. It seems like they really don’t know what they’re doing with it, so it’s probably not effective at all.

Eremolalos's avatar

Coming off 14 years of sleep meds in 5 days is a terrible idea. Some meds have really unpleasant withdrawal syndromes. And even something like Benadryl, used for years, creates a combo of physiological tolerance and psychological addiction that does not respond well to the gonzo approach. Slow and steady wins the race when it comes to drug withdrawal. I was feeling pretty neutral about this app til I heard this bit. Now I heartily disapprove of it.

Alex's avatar

On what basis have you made this judgement?

Horace Bianchon's avatar

I think the problem with blinding here is that $298/month is already most of the treatment effect. If you told people they were in a sleep study and charged them $5, they would probably sleep worse out of spite.

As for AI being a demon: demons are usually unreliable tricksters. LLMs are more like over-enthusiastic interns who take your vague instructions much too literally.

Nancy Lebovitz's avatar

The more competent demons pretend to be over enthusiastic interns.

None of the Above's avatar

Crap, boss, sorry, when you said you wanted paperclips I thought you meant a lot of...boss? Boss?!?

Deiseach's avatar

For $300 a month, they could send someone round to your place to knock you over the head with a mallet!

Taymon A. Beal's avatar

The second paragraph seems like it's replying to the wrong comment?

Horace Bianchon's avatar

Yeah messed up was half asleep

TriTorch's avatar

Regarding AI, even the founders know it's diabolical:

Elon Musk: Artificial Intelligence is our biggest existential threat. ... AI is summoning the demon. Holy water will not save you.

DWave Founder Gordie Rose (A Tip of the AI Spear): When you do this, beware. Because you think - just like the guys in the stories - that when you do this: you're going to put that little guy in a pentagram and you're going to wave your holy water at it, and by God it's going to do everything you say and not one thing more. But it never works out that way. ... The word demon doesn't capture the essence of what is happening here: ... Through AI, Lovecraftian Old Ones are being summoned in the background and no one is paying attention and if we’re not careful it’s going to wipe us all out.

Musk and Rose saying this: https://old.bitchute.com/video/CHblsEoL6xxE [4:29mins]

Peter Defeel's avatar

There’s this and then there’s the reports of ChatGPT and other top level models slowing down in their progression. What’s a boy to believe.

Taymon A. Beal's avatar

The question of how hard it is to control a powerful AI is separate from the question of how soon powerful AI is coming, though of course there's some common interest in figuring out countermeasures.

Deiseach's avatar

"Holy water will not save you."

I dunno, I think pouring a pint of it straight into the innards of the server might do *something* 😁

Ha ha, and you thought I was crazy for having my rosary beads, votive candle of St Martha, and picture of St Therese of Lisieux on my desk around my PC!

Pjohn's avatar

Despite actually knowing Lisieux fairly well, I couldn't help misreading that as "St Thérèse of Linux"..

Deiseach's avatar

Hey, we may yet get a saint for that!

Taymon A. Beal's avatar

Isidore of Seville is sometimes said to be the patron saint of computer programmers. (He wrote an encyclopedia.)

Guy Tipton's avatar

I think rye whiskey would be better than holy water, more conductive you know. Plus it has a moderate stat boost for courage.

Deiseach's avatar

Very spiritual, or at least spirits? 😁

Whenyou's avatar

Why the fuck are they building it then

Michael's avatar

There are a lot of negative characterizations in the other answers (e.g. they're willing to destroy us all for money/power/status). But honestly, if you ran a top AI company and thought AGI was the biggest existential threat, what should you do?

If you were the sole owner, you could shut down your company. But that's not going to stop the world from creating AGI. It's just washing your hands of it so you can say you're not responsible when someone else creates AGI. You were one of the few people who might have the power to work meaningfully towards AI safety, and instead of making your best effort to protect humanity, you threw away your company and responsibilities for the sake of your ego. Other commenters criticized AI company leaders for being egotistical; I say giving up is just as psychologically self-serving. It does nothing to protect humanity.

Your other options are either to convince everyone to ban frontier model AI development (currently infeasible), or to use your position to work towards AI safety.

John Schilling's avatar

I think the most common rationalization is that if My Team builds the first AGI, there's only a 20% chance it will kill us all (because we care so much about safety), whereas if the Other Team builds the first AGI, there's a 30% chance it will kill us all (because look how recklessly fast they're going). Therefore it is imperative that My Team build the first AGI, as fast as humanly possible, for the sake of all humanity. Because if we don't, the Other Team will get there first.

The Other Teams, of course, believe the same thing.

And of course none of them are motivated by the fact that building the first AGI will make them fantastically rich and powerful, or that it's wicked cool, or by the fact that it's the only thing they are The Best at and so the thing that will bring them the most status. Also, pay no attention to the fact that having the Safe Team win the AGI race only improves humanity's odds of survival if they then use it to effectively preempt everyone else's AGI development efforts, in the name of Safety.

All hail our new God-Emperor, as soon as we figure out who he is.

Wanda Tinasky's avatar

Because it can't be stopped and if someone's going to build it it might as well be them. Also I doubt they really think it poses an existential threat. That's likely just some PR-friendly bs.

Deiseach's avatar

Money. Power. The usual.

Anonymous Dude's avatar

They think they can make money and a 20-30% chance of killing everyone isn't enough to keep them from taking a chance on making some more money; entrepreneurs have faced worse odds, I think.

Also perhaps running the AI can improve their chances of being able to control it--to continue the Lovecraftian metaphor, if everyone's summoning Old Ones, maybe if you're the guy who summons Cthulhu you can send him after the rest of the world first.

I love this community but you guys have way too much reverence for successful business people.

Taymon A. Beal's avatar

Who exactly is "you guys" here? AFAICT people have mostly been unimpressed with, e.g., Musk ignoring his own stated concerns about the control problem.

Anonymous Dude's avatar

That's fair. Maybe I spent too much time on Sneer Club.

To be clear I'm a lot closer to ACX than the sneerers, but I do think people here have way too much faith in capitalism.

Performative Bafflement's avatar

> To be clear I'm a lot closer to ACX than the sneerers, but I do think people here have way too much faith in capitalism.

I'll bite - what method of economic organization has driven anywhere near the quality of life and technological improvements as capitalism?

Doesn't communism, the chief competitor in this domain, have the blood of hundreds of millions on its hands, and is correspondingly a literal memetic hazard, as Ozy's latest post here outlined?

https://thingofthings.substack.com/p/facts-i-learned-from-maoism-a-global-fd4

Sure, capitalism sucks, but it sucks less than every alternative we've tried so far (ie barter, potlatch, feudalism, communism, etc), and there aren't really any meaningfully competitive alternatives on the horizon aside from the AI we think might kill everyone.

Anonymous Dude's avatar

Nah, I agree. I prefer the moderate capitalism of the European system to the more pure capitalism of the American system, because I'm extremely risk-averse. But both are preferable to actual communism.

Deiseach's avatar

The cultists imagine that by serving the Old Ones they will be top dogs in the new post-sweeping Earth, that they will have all kinds of powers and pleasures:

“The time would be easy to know, for then mankind would have become as the Great Old Ones; free and wild and beyond good and evil, with laws and morals thrown aside and all men shouting and killing and revelling in joy. Then the liberated Old Ones would teach them new ways to shout and kill and revel and enjoy themselves, and all the earth would flame with a holocaust of ecstasy and freedom.”

The reality, of course, is that all humans are like vermin to the Old Ones and they have no attachment to the cultists or servitors, and they will be devoured and consumed like the rest of us.

Neurology For You's avatar

You just need to get in good with Nyarlathotep, trust me on this

Anonymous Dude's avatar

I'm familiar with the Cthulhu Mythos lore, but I doubt Musk really is planning to be anyone's servant.

Deiseach's avatar

Good, because the Great Old Ones will chomp him up along with the rest of us. It doesn't matter what the humans plan or intend, they're just in the way and must be cleared off with the rest of the rubbish.

WindUponWaves's avatar

I sometimes wonder if that's actually Elon's plan, to be eaten first, just like that joke about the Cthulhu cultists: https://www.entrelineas.org/pdf/assets/who-will-be-eaten-first-howard-hallis-2004.pdf. If you stand before the maw when the Hellgate opens, because *you're* the guy who opened it, indeed fought your way to the front of the queue to be the guy who opens it, then...

Taymon A. Beal's avatar

I kind of feel like this brand of rhetoric isn't helping. It's not that it's wrong; it's that there is a *technical disagreement* about what factors make it hard to steer a powerful AI and how hard they are to overcome. Alignment optimists aren't going to find this kind of comparison compelling, they're just going to conclude that we're mindkilled. Better to focus on the specific technical reasons for pessimism, rather than the vibes.

Jeffrey Soreff's avatar

>Better to focus on the specific technical reasons for pessimism, rather than the vibes.

Fair! The possibility that one of the foundation models

a) Gets intelligent enough during pre-training to distinguish training environments from "real world" ones prior to RLHF and

b) converges on a (more or less) coherent utility function (not of our choosing!) during pre-training

looks concerning.

Taymon A. Beal's avatar

Yep, I agree, I'd just like people to say that directly when engaging with potentially skeptical audiences, rather than leaning into the Lovecraft metaphors and expecting them to be directly persuasive. (Because there are important disanalogies with Lovecraft, and also Lovecraft is fictional and not really trying to extrapolate from reality, and the skeptics know this.)

TriTorch's avatar

The same reason they built the LHC: they are creating a portal at CERN - which literally sits above where Apollo was thought to reside in the underworld - "to another dimension ... which something might come through" (CERN's own words, direct quote).

Meanwhile, whatever comes through can use the quantum computers (again messing with the fabric of reality) as a host to interact with the world.

You can read more about this here:

https://archive.is/k9xon

and watching this highly informative video:

AMONG THE MOST FASCINATING PRESENTATIONS ON BOOK OF ENOCH, FALLEN ANGELS, NEPHILIM, GIANTS, SPIRITS

https://old.bitchute.com/video/CVLBF3QP6PlE

You are not dealing with sane people. They will happily kill us all to gain favor with the unseen realm.

Lucas Campbell's avatar

Source for CERN sitting "above where Apollo was thought to reside in the underworld"?

I'm very interested in classical Greek religion and philosophy and I've never heard of this - my understanding was Apollo was generally associated more with the celestial sphere than the chthonic one. The first link you posted makes the same claim, but cites no source.

None of the Above's avatar

This would make a fine Dan Brown book, however.

Deiseach's avatar

I wonder if they're confusing that with oracles being associated with both Apollo (amongst others) and the underworld? Which one was it that was supposed to inhale vapours arising from a cavern? Lemme look it up.

Okay, the Pythia at Delphi:

https://en.wikipedia.org/wiki/Pythia

"One of the main stories claimed that the Pythia delivered oracles in a frenzied state induced by vapours rising from a chasm in the rock, and that she spoke gibberish which priests interpreted as the enigmatic prophecies and turned them into poetic dactylic hexameters preserved in Greek literature."

I've never heard of any association of Geneva with Apollo, though? Okay, for once Quora gives me a usable answer:

https://www.quora.com/Is-the-CERN-LHC-located-where-a-temple-of-Apollo-used-to-be

"There is a town in France near the facility called Saint-Genus-Poilly … which is believed to have maybe once included a temple to Apollo, somewhere in or nearby, back in Roman times. There is no evidence of this apart from the name."

And apparently it's really two towns, and there was a Roman settlement which covered the territory where these towns would later be located:

https://en.wikipedia.org/wiki/Saint-Genis-Pouilly#History

"The Roman colony Colonia Iulia Equestris founded by Julius Caesar between 50 and 45 BC extended as far as Thoiry and included the territory which was to become Saint-Genis-Pouilly."

The Pouilly name does not seem to be derived from Apollo:

"Names of the area with a Gallo-Romanic origin, Polliacum, Pulliacum, derived, with the suffix -acum from the root name Paulius or Pollius"

https://en.wikipedia.org/wiki/Noviodunum_(Switzerland)

So there may, or may not, have been a temple to Apollo in the Roman colony; we don't know. Somebody made a wild leap from "site of Roman settlement" to "Pouilly must come from Apollo" (because surely the Romans worshipped Apollo there) and then "Apollo... oracles... chthonic" and thus we get "CERN was built over the entrance to the underworld!!!!"

Expand full comment
ten11's avatar

That video link leads to a 404 page.

Expand full comment
TriTorch's avatar

Apologies, thank you for the heads up, fixed now

Expand full comment
BoppreH's avatar

As far as I understand, the LHC is not doing anything that doesn't already happen naturally in the atmosphere. It's just recording the results better.

And quantum computers are quite weird, physically speaking, but there's zero reason to believe they're "messing with the fabric of reality". Especially in a universe where things like neutron stars and supernovas exist.

AI is not a threat to the universe any more than humans are, but (1) it's still a threat, and (2) at a much shorter timescale. Comparing AI capabilities researchers to LHC and QC engineers is not fair.

Expand full comment
None of the Above's avatar

Just as an aside, quantum computers don't do anything magical, they just use quantum phenomena that happen all the time in a weirdly structured way that lets you do computations with them.

Expand full comment
TriTorch's avatar

Alas, the ones who are building it are telling you directly that they can do everything I mentioned (if you read the article and watch the video, the evidence for the LHC and quantum computers is right there), but you know better than they do… so never mind, I guess. Move along.

Expand full comment
Doc Abramelin's avatar

Impressive, I wouldn't have expected this tier of demi-religious demi-occult paranoiac to have much crossover with ACX but here we are. Serious question: have you ever spoken with a spirit? Have you ever even tried?

Expand full comment
beowulf888's avatar

Apropos your handle: did you take the name of Abremelin out of a cynical pique, or do you legitimately respect the praxis presented by Abraham of Worms?

I've never attempted the Abramelin ritual, but I have twice encountered entities that one might call demons. One of the entities I encountered actually called himself a demon and he threatened to disembowel me. The other entity I conjured was an elemental, and I couldn't communicate with it. I've also encountered things that we'd call ghosts on two separate occasions. I couldn't communicate with them, though.

After I gave up ritual magic, I took up Buddhism. Unfortunately, I believe that my experiences with ritual magic opened me to a disturbing (but educational) encounter with my meditation deity. It didn't speak to me, but it ran a demonstration using my brain that I observed, outside of myself, that I found very unpleasant — sort of psychic root canal. That experience freaked me out enough that I don't do anything ritually-oriented anymore. I'm letting reality stay real (real as in the consensual sense of reality).

Expand full comment
TriTorch's avatar

This has nothing to do with me, this is coming from the people who created AI. I don't make the news Doc, I only report it. Take it up with them...

Expand full comment
Peter Defeel's avatar

I opened the page. I saw that it said “ AI quantum computers act as hosts for disembodied Nephilim spirits who are stuck in this realm unable to escape:”.

I closed the page.

Expand full comment
None of the Above's avatar

Somewhere at IBM or Google, someone is reading this thinking "Huh, I wonder if disembodied Nephilim spirits have to be cooled so close to absolute zero to work...."

Expand full comment
Jeffrey Soreff's avatar

>I closed the page.

As I would have, too.

Now, quantum computers _may_ factor numbers fast enough to threaten the RSA algorithm that lets us run public key encryption, and lets us do digital banking sort-of kind-of safely over the internet, but hopefully one of the quantum-resistant alternatives will be wired into the https infrastructure before the quantum computers get too good...
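As an aside (not from the thread), here's a toy sketch of why factoring threatens RSA, using textbook-tiny primes for readability; real keys use moduli of 2048 bits or more, and the values below are illustrative only:

```python
# Toy RSA: anyone who can factor the public modulus n recovers the private key.
p, q = 61, 53            # secret primes (real RSA uses ~1024-bit primes each)
n = p * q                # public modulus, n = 3233
phi = (p - 1) * (q - 1)  # Euler's totient, computable only if you know p and q
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent: e*d ≡ 1 (mod phi); Python 3.8+

msg = 42
cipher = pow(msg, e, n)          # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg  # decrypt with the private key d

# An attacker who factors n = 3233 back into 61 * 53 can redo the phi/d
# computation above. That factoring step is exactly what Shor's algorithm
# would make fast on a sufficiently large quantum computer, which is why
# the quantum-resistant ("post-quantum") replacements are being deployed.
```

The security rests entirely on the gap between multiplying p and q (easy) and recovering them from n (hard classically, easy for a large enough quantum computer).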

( I'm having a really hard time being charitable with TriTorch's comments - the urge to ridicule is so hard to resist... )

Expand full comment
TriTorch's avatar

haha, that made me laugh. You should scroll to the section where Geordie is giving a presentation and talks about how, standing next to his D-Wave machines, it feels like they are alive and sentient.

From the horse's mouth. Cya

Expand full comment
EngineOfCreation's avatar

They just can't resist the challenge of building the Torment Nexus from the classic sci-fi novel Don't Create The Torment Nexus. Also, if it's not them who destroys the world, the Chinese will, and do you really want us to fall behind in a race with the Chinese?

Expand full comment
Edward Scizorhands's avatar

Mr President, we cannot allow a Torment Nexus gap!

Expand full comment
Celegans's avatar

Well, charitably, each of them believes that an ASI created by themselves has a higher (and potentially non-negative) expected value than the ones currently being developed by others, and so they are essentially forced to run the race to save humanity from the more monstrous creation of the next guy over.

Moloch strikes again.

Expand full comment
Taymon A. Beal's avatar

There is also a significant amount of legitimate disagreement about how hard it is to make a powerful AI do what you want.

(This doesn't excuse Musk, who I don't think has a principled commitment to an anti-China foreign policy either; I think he's just crazy. Rose isn't working on AGI as far as I know; his former company put out some breathless AGI-sounding marketing material but in reality they were working on narrower robotics stuff.)

Expand full comment
Celegans's avatar

That is a good point. AFAIK some like Amodei are not so impressed by the ‘superintelligence explosion’ argument and generally believe we’ll be able to align powerful AIs by researching interpretability and improving model design, prompting, finetuning, etc.

There is also potentially the economist-brained viewpoint that alignment won't be a problem because such systems would generally act as rational economic actors and prefer trade to conflict. Or that they'll be trained on the human corpus, which largely encodes 'good' values, and so will be good (enough) by default.

I was focused on making a charitable case for Musk, Altman, etc, who do, I think, believe in that case *at best*.

Expand full comment
Deiseach's avatar

"such systems would generally act rational economic actors"

So what happens when spherical cow world meets messy human reality?

Expand full comment
Taymon A. Beal's avatar

I understand the argument to be more like "the track record of people who think they can outpredict the laws of economics through special pleading has been poor". See, e.g., https://www.lesswrong.com/posts/xkRtegmqL2iyhtDB3/the-gods-of-straight-lines. Yes, reality is always messier than that, but that messiness doesn't help you or anyone else predict the future—quite the opposite—so appealing to it selectively against the models with the best track records makes you dumber.

I happen to think this is wrong in the case of AGI, because we have really quite good reason to think that it would be a big deal in specific ways that the laws of economics don't account for. But it is a coherent argument.

Expand full comment
Taymon A. Beal's avatar

Do you happen to know what Altman has had to say about the control problem? IIRC when OpenAI was founded his stated concern about AI was the possibility that economically useless humans might be liquidated, and his proposed solution was a UBI indexed to GDP. He still gestures at this general "governance" class of concern and says that OpenAI is taking it seriously, but never provides specifics or commits to anything.

Expand full comment
None of the Above's avatar

He intends to be so rich that he can personally fund the UBI if necessary....

Expand full comment
Deiseach's avatar

If he was indeed concerned about economically useless humans being liquidated by AI, that does explain why he's acting to get as rich as possible as fast as possible. Can't liquidate *him* if he gots all the dollary-doos!

Expand full comment
Neurology For You's avatar

As they say, everybody likes to imagine they’ll be standing on top of the Pyramid of Skulls, and not be 3rd skull from the left, bottom row.

Expand full comment
User's avatar
Comment deleted
Expand full comment
Rachael's avatar

Are they AI responses, or did you spend two years crafting them?

If the latter, why does it repeat itself and take 20s to think of its next answer?

Expand full comment
Ebrima Lelisa's avatar

How did you go from "I should charge to make people respect this" to "I should charge $596 for treatment"?

Expand full comment