853 Comments

Okay, I’ll be the one to touch the third rail. My advice to Dems on the 34 felony convictions: Do not spike the ball on this. Really. I would have CapLocked ‘do not’ if not for my own habit of dismissing CapLocked comments.

I’ve seen this dynamic play out on a smaller scale locally. I’d describe it, but as usual I’m on my phone and I don’t want to tire my thumb. The result was not what the overeager wanted.

If the Dems think they are dealing with a cult of personality comprising 30% of American voters now, they can fully expect that number to grow if they start to gloat about a conviction of a former US president based on a novel interpretation of the law.

Jun 3·edited Jun 3

I'd expect that no matter what the median Democrat says or does, some combination of news sites and algorithms optimizing engagement through conflict, and right-leaning news and political efforts seeking to maximize voter turnout with the same, will ensure that regardless of how large or small the contingent of "gloating" Dems is, right-leaning voters will be sure to see them overrepresented as "how the other side is."

I wish more people felt as repulsed by their own side gloating as they do by the other side gloating.

This feels like as good a place to ask as any: what is the relationship between Skibidi Toilet and the song Skibidi by Little Big?

According to reddit:

https://www.reddit.com/r/youtube/comments/161u3d9/the_viral_skibidi_toilet_series_is_based_on/#:~:text=It%20is%20based%20on%20a,is%20what%20inspired%20the%20toilets.

As an adult in a household that is currently infected with skibidi fever, I will just say that I’m glad the original Little Big song isn’t getting played.

Many young people seem to think we live in hard times. Some of the book reviews reflect that. It's a popular meme: these are hard times.

Obviously, these people are crazy and have no sense of even recent history.

I think the reasons for young people thinking they live in hard times have a lot to say about the future of AI.

In spite of an objectively massive increase in Americans' standard of living over the past 20 years, young Americans reject that narrative. I think the disconnect is that technology, despite changing life tremendously, hasn't improved it subjectively enough for people to notice that they are better off.

This leads me to believe that the same will likely be true when AI changes things massively on an objective level. The standard of living will improve but will hardly be noticed, because subjectively we are asymptotically approaching optimal conditions for humanity.

Although young people generations hence will continue to complain about the economy sucking, their real complaints are that technological advances aren't helping them achieve what makes most people happy: good relationships, family, interesting work and optimism.

It is basic Stoic advice to consider that it *could* be worse; and if you know history, you know that most people actually *had* it worse.

Like, when people complain about all the inconveniences related to covid, I think about Black Death and conclude that we have it too easy. Most of your family survives the pandemic, and you complain about having to wear a mask for a few months? Seriously? Read "A Journal of the Plague Year" to get some perspective!

Old people remember the old times in person, but young people can get similar perspective from reading books about the old times. It is a very natural mistake to assume that the past was exactly like the present. My kids find it difficult to imagine a childhood without internet.

It is natural for humans to imagine a golden age in the past. Christianity believes that Adam and Eve lived in a paradise; Marxism believes that our ancestors lived in a perfect egalitarian society; Taoism believes that ancient people were all virtuous and lived in harmony with Tao; feminism believes that noble savages lived in enlightened matriarchal societies. The only difference is that young people these days seem to believe that the Golden Age happened in the very recent past, so they can accuse their own parents and grandparents of being right there and having ruined it by eating some forbidden fruit. But maybe even this fits into the general pattern of accelerating progress.

The last several generations have also perceived hard times as young adults. The core reason for this is that the transition to independent adult living is genuinely hard for most people. And it seems worse than it is because you're comparing your own lifestyle as a new-minted adult fresh out of school to that of your parents who are 2-3 decades further along in their careers than you and have had a similar amount of time to build up a stock of household capital. And the other major baseline for comparison is "slice of life" sitcoms and other media depictions, which tend to show middle-class or struggling-young-adult characters with an unrealistically high material standard of living, particularly in terms of living space and dining-out-and-entertainment budget (c.f. "'Friends' Rent Control" on TvTropes).

On top of this, the transition to independent adulthood has probably gotten significantly harder over the past couple decades. Credentialism makes it harder to get good entry-level jobs than it used to be, people are graduating with more student debt than was the norm in the 90s and before, and the cost of basic housing has been growing faster than the overall inflation rate.

I suppose I am talking about "popular vibes". I was a young adult in the '90s and don't remember the vibe being "these are hard times". I would say that a show like "Friends", perhaps because of its unrealistic portrayal of life, captured the positive vibe of the times. Indie movies like Office Space captured generational disenchantment with the workplace but also demonstrated that financial insecurity was not a big concern of twenty-somethings. By comparison, I can imagine a 22-year-old in 2010 watching Office Space and thinking "These punks are gainfully employed at desk jobs but are too spoiled by the '90s economy to appreciate it!"

I do agree that young adulthood is a hard time in life. But there's a difference between recognizing that versus thinking: "My grandfather's generation had it easier at this age. The 2020s are a bad time to be young."

I distinctly remember "these are hard times" vibes as a young adult in the early 2000s, and I'd assumed the vibes went back earlier than that. As a teenager in the 90s, I did notice a fair amount of pessimistic vibes in media, particularly stuff focused on young adults. Bringing up "Friends" again, remember how the theme song goes:

"So no one told you life was gonna be this way

Your job's a joke, you're broke

Your love life's DOA

It's like you're always stuck in second gear

When it hasn't been your day, your week, your month

Or even your year, but..."

It's a perky, upbeat song and the chorus is optimistic in tone, but it's optimistic about social support, not material conditions.

Later seasons of the show were a lot more materially optimistic than the earlier ones, IIRC. In the early seasons, Ross and Chandler had decent jobs, but the financial struggles of the other four main characters were major recurring themes, even though the depicted standard of living implied that Monica and Rachel were the best-paid line cook and barista in New York.

May 31·edited May 31

I feel like the transition happened somewhere around 1999-2000. I can't speculate as to underlying causes, but the economy wasn't doing as well as it used to. And there were some signs of "things going wrong", like Columbine, and the WTO riots, and Bush v. Gore, and then 9/11.

I think the zeitgeist of the early '90s was best captured by the Jesus Jones song "Right Here, Right Now".

A woman on the radio talks about revolution

When it's already passed her by

Bob Dylan didn't have this to sing about

You know it feels good to be alive

I was alive and I waited, waited

I was alive and I waited for this

Right here, right now

There is no other place I want to be

Right here, right now

Watching the world wake up from history

Oh, I saw the decade in

When it seemed the world could change at the blink of an eye

And if anything, then there's your sign

Of the times

https://www.bing.com/videos/riverview/relatedvideo?q=right+here+right+now+song&mid=FB06636940A1BD4F695AFB06636940A1BD4F695A&FORM=VIRE

May 30·edited May 30

https://sohl-dickstein.github.io/2022/11/06/strong-Goodhart.html

If your measure says one thing but reality says another thing, then it's silly to insist that reality must be wrong. If a person is dying and the doctor can't find the cause, it would be absurd for the doctor to argue with the dying person that they've tested negative for influenza, malaria, tuberculosis, cholera, and any number of other diseases, therefore the fact that their organs are shutting down must be some kind of optical illusion and the person just needs to snap out of it.

First, I want to agree with WoolyAl that my post could have been phrased better and more generously.

As to your post: I agree with your words, but how *are* we to measure how hard the times are if we don't consider the most common metrics, such as the unemployment rate, or the fact that we live in a time of peace, not war?

What measures do you have in mind that show the current times are relatively bad?

Regarding the economy specifically, a reason younger people especially are unhappy in the US economy might be that they have a smaller proportion of total household wealth than older generations did at the same age: https://www.visualcapitalist.com/charting-the-growing-generational-wealth-gap/

I'm probably at the extreme end of pessimistic and alienated young adults, but I've personally always felt like the economy is just a game of Monopoly that I got in on too late, with too few starting resources to have any chance of ever getting a foothold (setting aside that I wouldn't even like the game if I did have a better chance at it), so I gave up on the idea of ever buying a home or reaching any financial milestones beyond subsistence more or less before I even started. The whole thing just felt like it wasn't for me.

May 31·edited May 31

"they have a smaller proportion of total household wealth than older generations did at the same age:"

They don't control for demographic changes.

In 1995 the share of young people was much greater than it is now.

So it's hard to tell whether there's a real effect there when it comes to a person's access to the economy, or whether there are just comparatively more baby boomers.

May 31·edited May 31

Record high homelessness since we started keeping track in 2007:

https://www.pbs.org/newshour/show/how-a-perfect-storm-of-issues-is-causing-a-sharp-rise-in-homelessness

(And the method used to track homelessness only gives a lower bound, though that often isn't stated — the real number could easily be an order of magnitude higher.)

An 80-year record high suicide rate:

https://www.usatoday.com/story/graphics/2023/11/29/2022-suicide-rate-historical-chart-comparison-graphic/71737857007/

(And that doesn't include other "deaths of despair" like drug overdoses, which are probably much more common now than 80 years ago)

Those are two measures that indicate that not all is well.

There's also environmental stuff like how wild animal and insect populations have been going down by crazy amounts over the past few decades:

https://ourworldindata.org/biodiversity

https://www.smithsonianmag.com/smart-news/study-shows-global-insect-populations-have-crashed-last-decade-180971474/

Those aren't necessarily directly related to human wellbeing, but I do believe that much more than we recognize of what we're doing that makes places less habitable for other animals makes places less habitable for humans as well.

> What measures do you have in mind that show the current times are relatively bad?

Per Haidt's After Babel, suicide and suicide attempts are up in young people.

Another societal wide measure is fertility rates - pretty much across the developed world, fertility is below replacement and trending down.

I'm baffled by the idea that low fertility rates are compelling evidence of relatively bad times. It seems a pretty universal law of the last century that fertility rates decline as infant mortality declines and material well-being increases. Very plausibly, lower infant mortality and increased standard of living *causes* lower fertility. No one aspires to move to the places with really high fertility!

The big drops in infant mortality were a long time ago.

So were the big drops in fertility!

There's a gap between actually-achieved and desired fertility though - Zvi quotes them all the time, but there are surveys showing most developed-world women want ~2-3 kids, and have ~1 kid. The gap would be the indication that current times are relatively bad - either financial, social, or other concerns are leading people to have fewer children than they say they want.

The point is that if you have some metric by which sub-Saharan Africa appears to be a uniformly better place than Europe, then you should not be using this as your primary metric for the question "are things Good or Bad?" If things were Good in sub-Saharan Africa and Bad in Europe, then migration flows would be going in the other direction!

I feel it's important to know how many children men want here. If most men want 0 and have 1, that's a perfectly normal compromise number.

"Many young people seem to think we live in hard times. Some of the book reviews reflect that. It's a popular meme: these are hard times.

Obviously, these people are crazy and have no sense of even recent history.

...

Although young people generations hence will continue to complain about the economy sucking, their real complaints are that technological advances aren't helping them achieve what makes most people happy: good relationships, family, interesting work and optimism."

Less of this please.

This is one of the least productive ways you could have phrased this. You certainly understand why young people might complain; modern technology doesn't do much to help people's attempts to cultivate the things that make them happy: relationships, family, rewarding work, and optimism. You're also probably aware that there hasn't been a massive increase in the standard of living; real median personal income has risen ~18% over 20 years (https://fred.stlouisfed.org/series/MEPAINUSA672N), and it's certainly probable that, given variance, some people are actually significantly worse off than they were 20 years ago, on top of all the social consequences you list.

We're all adults, we can discuss things calmly, you can just say that lots of people are frustrated with declining social relationships in a situation of moderate economic growth if that's what you actually believe.

May 30·edited May 30

What is your definition of “hard times?”

Without some kind of clear criteria, it just seems like an evergreen excuse to belittle and ignore other people's complaints as long as we can define some time period in the past we can plausibly allege to be harder. Young people in the 1970s are whiners for complaining about inflation and the Vietnam War - those aren't "hard times," because previous generations lived through the Depression and WW2. Young people living through the Great Depression and WW2 are whiners - those aren't "hard times," because the Civil War was far bloodier and those people didn't even have antibiotics and "medicine" meant hacking a leg off with an unwashed saw. And so on.

To a first approximation, my criterion for "hard times" is a time of war vs. a time of peace, plus the state of the economy. Right now we have peace in the USA (as always, there are places where that isn't the case), and the unemployment rate has been very low for quite some time.

So I believe the major years of the Vietnam War, WW2, The Great Depression, and the Great Recession were "hard times". The '80s, '90s, '00s, and this decade not.

But maybe I am missing something. What am I missing?

> Many young people seem to think we live in hard times.

I think ragebait is to blame.

People really like reading about how shit everything is because it's engaging and confirms that they're not to blame for whatever hardship they're experiencing.

Perusing reddit.com/r/all you'll often find memes about what things cost and how cheap it was to buy a house in the '80s (with completely made-up, insane numbers).

I suspect this is in part driven by troll farms; it would make sense for Russia/China/whoever to try to convince Western youths that everything is hopeless.

(But I think it would probably happen organically either way)

May 30·edited May 30

I think it's possible that the milieu we live in is making people better off in specific ways and worse off in other possibly more salient ways.

Better off are the obvious improvements in tech, medical care, etc. Lots of things are now massively more convenient.

Worse off tends to be things that are really important pillars of wellbeing - security in your place in the community (the number of people who are self-employed or employed in very small, close-knit businesses is way down compared to before - most of us now work as individual employees, surrounded by coworkers who rapidly move on or are made redundant, which is not at all conducive to building a sense of security in one's place in the social hierarchy when the faces change constantly), we're very dependent on scarce positional goods (I don't think it's controversial to say both housing and jobs are now more scarce), and a lot of us sleep a lot worse (some of this is screen time, some of this is higher population density).

It's completely possible that the psychological impact of being secure in community, vocation, and shelter is higher than that of "nicer stuff". People aren't necessarily opposed to work that is hard or unpleasant if the work is also respectable and able to afford a living (if people were opposed to hard work per se, no one would sign up to become a doctor, which is well known to be gruelling). But the availability of these jobs seems to be shrinking, and a lot of the respectable jobs have had the hard aspects get harder without the respectable aspect changing (this seems to have occurred in teaching especially).

Housing is probably scarcer. I don't see how jobs can be considered scarcer with such a low unemployment rate for so long. Jobs were truly scarcer during the Great Recession, 2009-2014. I would consider those years to be "hard times" for twenty-somethings.

Interesting point about working in "small, close-knit businesses". I suppose I don't understand why that would be preferable. Small businesses are more like families in ways both good and bad. Many small businesses are run by abusive owners, although many are run by wonderful owners. The modern HR-run corporation kind of irons out those extremes. You get neither too wonderful nor too abusive.

But perhaps you have a point about lack of community in a geographical sense. That is clearly something we have less of.

Jobs in general might not be scarce, but desirable jobs specifically are. It's well known that contract and temp positions make up a larger proportion of the workforce now.

Bouncing between companies every 2 - 3 years is not conducive to forming very strong bonds within an industry, and that's especially true if a big chunk of the workforce is doing that simultaneously.

Permanent roles aren't a guarantee that it's not going to happen, either, because those tend to be offered at very large organisations (> 300 employees), and frequent internal moves due to the company restructuring would be similarly destabilising.

A similar thing happens in housing - renters bounce from place to place due to having short (normally 1 year) leases, and it's not just that all of the neighbours are also on short term leases, you don't really get to become a "regular" at local businesses if you have to move again soon after (and some businesses, eg a supermarket, have high enough turnover that the checkout staff have changed like 8 times while you were there).

So that's my thesis - in the last 30 years, the two places most of us spend most of our time (work and home) tend to change too often for us to form lasting relationships with the people and businesses nearby. Even if you succeed personally in locking down these two locations, everyone else around you struggles to do that, so those people and businesses change constantly anyway, mostly to our social detriment, because it causes most of us to focus most of our social energy on a tiny group of people (a spouse and immediate family). I do think a lot of us no longer have "medium intensity" social bonds - people we know quite well and socialise with often, but who don't literally live in our house. Most of us only have the low intensity (colleagues and acquaintances of 0-3 years) and high intensity (spouse and kids if we're lucky) bonds. And young people don't really have the high intensity bonds yet.

Maybe, a lot of us feel like our personal "tribes" are too small to feel safe, because it's just so difficult to build a proper tribe under the current economic conditions.

Why Bayes should be better known to lawyers and judges: https://unherd.com/2024/05/the-danger-of-trial-by-statistics/

May 30·edited May 30

Thanks for the recommendation. I've posted about Lucy Letby before, and predicted more murders and attempted murders to be revealed at the upcoming enquiry. The New Yorker story raised some interesting points - my prediction is mainly based on similarity to Shipman, where the Police narrowed it down to 7 prosecutions, then after conviction opened an enquiry into the hundreds of other cases where Shipman probably murdered the patient. The NY piece contends that the Letby case is being approached by the Police in the way it is precisely because of a fixation on Shipman, in which case I'm being taken in by the Shipman vibe the Police are engineering. Private Eye have a good record of exposing weak convictions, and they have a piece on Letby ready to publish when the restrictions are lifted. So we'll see. But many of the arguments in the NY piece are lame....

...for example the doctors who snitched on Letby are depicted as being overly confident in their own opinion - which may be a failing in a journalist, but it's not clear to me that it's a fault in a doctor. Also, "but she had friends!!!" is a terrible argument - Shipman (sorry!) was a well liked and well trusted family doctor with a family of his own. Anyway, we'll see what the public enquiry brings out - I'm willing to be persuaded. But the very existence of the enquiry is probably bad news for Letby

The logic of the War on Terror after 9/11 was "We fight them overseas so they can't fight us here (in the USA)."

I generally think that the USA's War on Terror was overkill and idiotic. We spent trillions of dollars on it that we could have spent at home on infrastructure. (There's an argument that outsourcing manufacturing to China wasn't so bad, because China turned around and invested trillions in the US bond markets, which kept interest rates low for Americans. The problem was that instead of investing that Chinese money in the US, we blew it up in Iraq and Afghanistan.)

The War on Terror weakened US hegemony because it showed the US to be capricious, strategically weak (particularly in the case of Afghanistan), and politically divided. OTOH, we haven't had a foreign terrorist attack of note since 2001. So maybe that war against an emotion was sort of effective?

Does anyone today think the War on Terror was worth it? Even partially? Are there parts of it that are defensible? (I think dropping some bombs on the Taliban and assassinating Osama bin Laden were good things but not much beyond that.)

Was the war in Iraq worth it? Could it have been?

> Does anyone today think the War on Terror was worth it? Even partially? Are there parts of it that are defensible?

Aside from everyone else's point about the money: as a frequent flyer, I'd note that the cost in US time of the TSA's security theater has now, in effect, killed ten times more people than 9/11 itself; and anytime the TSA is audited with red teams trying to get weapons through, it has a *95% failure rate*.

I went back and forth with another ACX poster on various assumptions, and we arrived at a floor of ~35k US lives lost (measured in US citizen-hours) due to the TSA, which is 10x the actual toll of 9/11, and whose ongoing cost (with a 95% failure rate, remember) wastes something like 800M US person-hours annually.
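The back-of-envelope above can be made explicit. The sketch below is my own reconstruction: the waking-hours-per-life and years-of-operation values are illustrative assumptions I chose, not figures from the thread (only the 800M person-hours/year number is quoted above):

```python
# Converting TSA security-theater time into "statistical lives".
# Parameter values below are illustrative assumptions, not official figures.

WAKING_HOURS_PER_LIFE = 500_000   # assumption: ~60 waking years at ~16 h/day
HOURS_WASTED_PER_YEAR = 800e6     # person-hours/year figure cited in the thread
YEARS_OF_TSA = 22                 # assumption: roughly 2002 through 2024
LIVES_LOST_911 = 2_977            # 9/11 death toll

total_hours = HOURS_WASTED_PER_YEAR * YEARS_OF_TSA
lives_equivalent = total_hours / WAKING_HOURS_PER_LIFE

print(f"{lives_equivalent:,.0f} statistical lives")               # → 35,200
print(f"{lives_equivalent / LIVES_LOST_911:.1f}x the 9/11 toll")  # → 11.8x
```

With these assumptions the ~35k figure falls out directly; different choices for hours-per-life would scale it proportionally.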

>We spend trillions of dollars on it we could have spent at home on infrastructure

Surely most of that money was indeed spent at home.

>Was the war in Iraq worth it?

For whom? For Iraqis, who will never be ruled by Qusay Hussein? Very possibly. Just as Afghanistan was very possibly worth it for the hundreds of thousands of girls who were educated during the 20 years after the Taliban were forced out of power. It is impossible to say, because weighing the costs and benefits requires a normative judgment.

Most of the money was spent at home to produce what? Ammunition and other supplies to support an army overseas? IOW, disposable goods, not durable ones - not advanced infrastructure that could persist as part of the wealth of the nation, such as cross-country power lines delivering the abundant solar and wind power of the desert side of the nation to the more populated, less breezy, and darker parts of the country.

Defense contractors benefited from that money, but couldn't it have been better spent, even on defense technology, without that war? I don't know what we spent on bombs and missiles that were launched and detonated, but they were all a deadweight loss, not an investment in future military technology for a war that might be worth fighting.

And as I say above, I think giving a lot of that money directly to those who lost jobs due to the China trade would have been much more worthwhile than spending it on those wars.

Yes, of course it could have been better spent. I was merely objecting to the implication that it was simply thrown away. (And you seem to imply that again, when you refer to supporting "an army overseas." It matters little where a soldier is located, if the spending is domestic.)

Note also that the only way to get to an $8 trillion cost is to include future spending on things like veterans' benefits.

And this study, which includes that, puts the total at $3 trillion by 2050: https://watson.brown.edu/costsofwar/files/cow/imce/papers/2023/Costs%20of%2020%20Years%20of%20Iraq%20War%20Crawford%2015%20March%202023.pdf

Finally, "we could have spent it on infrastructure" is a bit of a red herring, given how little of the federal budget is spent thereon. The vast majority is spent on providing services, which in your formulation is also wasted spending.

May 29·edited May 29

The War on Terror cost us about $8 trillion. I know that seems like an absurd amount of money, and it would buy you Microsoft, Apple, and Nvidia, the three largest American companies - though you'd still be missing Google, Amazon, Tesla, etc.

If you're going to spend that money on infrastructure, prepare to be disappointed. Take the Bay Bridge replacement in California: it cost $6.5 billion to replace a bridge originally estimated at $2 billion, and took about 11 years to build. At those prices, $8 trillion buys you somewhere between roughly 1,200 and 4,000 bridges. Or look at California's High-Speed Rail. In 2008, it was estimated that the project would take $33 billion to get rail from Anaheim to San Francisco. As of 2023, we've sunk in $20 billion and gotten zero miles of usable track, and it's currently estimated to cost $100 billion to get from Anaheim to San Francisco. So I can totally imagine spending that $8 trillion on projects that never go anywhere (a la NEOM in Saudi Arabia).
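For what it's worth, the bridge arithmetic checks out; this is just a sanity check dividing the $8 trillion total by the two Bay Bridge cost figures quoted above (the cost figures are the commenter's, the division is mine):

```python
# How many Bay Bridge replacements would the War on Terror budget buy?
# Cost figures are the ones quoted in the comment above.

WAR_ON_TERROR_COST = 8e12   # ~$8 trillion total
BRIDGE_ESTIMATE = 2e9       # original Bay Bridge estimate
BRIDGE_ACTUAL = 6.5e9       # actual Bay Bridge replacement cost

high = WAR_ON_TERROR_COST / BRIDGE_ESTIMATE  # optimistic count at estimated cost
low = WAR_ON_TERROR_COST / BRIDGE_ACTUAL     # realistic count at actual cost

print(f"{low:,.0f} to {high:,.0f} bridges")  # → 1,231 to 4,000 bridges
```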

Or put another way, if you took that 8 billion off the U.S. national debt and we lived in the counterfactual world where that money had never been spent, the current debt would be at its 2017 level. I was alive in 2017 and I don't remember feeling like the U.S. was just drowning in wealth - quite the contrary, it was received wisdom that the U.S. was drowning in debt.

We certainly didn't win the War in Iraq or Afghanistan. But the U.S. largely believed - foolishly perhaps - in the idea that ordinary people could, if given a chance, successfully govern themselves without dictators or autocrats. Unfortunately, in the past few years we're seeing the naivety of that concept: authoritarianism is flourishing everywhere. But I don't fault the U.S. for believing, and some small part of me still believes that we'll one day see the good ending to the Arab Spring.

>We certainly didn't win the War in Iraq or Afghanistan.

I never understand it when people say this, esp about Iraq. The regime that governed Iraq was completely destroyed. The current constitution of Iraq is the one that was written under US supervision. If that isn't winning, I am not sure what is.

It’s now dominated by Iran, which was probably not what Uncle Sam had in mind.

Leaving aside that "dominated" is a gross overstatement, so what? Isn't that evidence that the war was won? Because if the old Ba'athist regime had regained power (and note that "the old regime regained power" is precisely the argument underlying the statement that Afghanistan was a loss) Iran would obviously have less influence. More broadly, the overall US policy re the Middle East has not been "anti-Iran." It has been pro-stability. And the former regime in Iraq was the source of enormous instability, what with their propensity to start wars with their neighbors and all.

$8 trillion buys a lot. The entire interstate highway system cost $618 billion after adjusting for inflation.

I think the best use of the low-interest loans from China would have been to compensate factory workers whose jobs were displaced by outsourcing to China. It would have benefited them, and it could have benefited trade policy going forward, as international trade wouldn't be viewed as so negative by the masses if "trade-offs" != "the working class gets fucked".

High-speed rail in the USA is a boondoggle.

> if you took that 8 billion off the U.S. national debt

8 trillion, not 8 billion

Ack, sorry! The math should be correct. It’s like 16 years worth of US infrastructure spending, which seems like a lot, but really wouldn’t change reality all that much.

May 29·edited May 29

I think it was mostly not worth it either, and that the wars were a mistake. I think there are still some contrarians here who support the Iraq war, but for the most part, yours is the standard take.

Perhaps, theoretically, if they had not invaded Iraq and had focused only on Afghanistan, Afghanistan might have turned out better; but in our timeline it certainly didn't end well, and it's hard to put much faith in hypotheticals going better.

Expand full comment
May 29·edited May 29

I just watched some videos of the songs from Wish and wow. I'd already heard that Wish was criticized for lack of story, excessive references and awkward song lyrics, but I had no idea it had such a weird-looking art style as well. The characters are the same detailed 3D models as in every past Disney movie, but the backgrounds are all 2D, making the whole thing look very stupid.

Also, is it me or is the exposition song just an inferior ripoff of the one from Encanto? It seems so similar.

Expand full comment

It doesn't even have a pretty dress. It should be a no-brainer when making a Disney movie to put your heroine in a pretty dress so you can sell ten million copies of it to little girls, but apparently pretty dresses are now too heteronormative or male-gazey or something, so it's a shapeless purple long-sleeved tunic on our shapeless big-nosed heroine-of-undefinable-ethnicity.

Expand full comment

The art style was supposed to be an homage to classic Disney movies like Snow White or Sleeping Beauty, with the simple 3D models meant to resemble a kind of combination between classic watercolor backgrounds and modern CGI. It was actually an impressive technical feat to pull off, but the problem is that it looks weird. File it under the category of things that are hard to do, and also suck.

The songs are all bad, mostly due to lazy lyrics.

Expand full comment

100% agree.

The moment that got me was the "You're a Star" number. At the end of the movie you can sort of see why they needed it, or something like it, since "everyone being a star" is basically how they defeat the big-bad. But situated as it was in the movie it was just this random mediocre "everybody is special" pop number that just flew in out of left field.

Expand full comment

I think that the word evil is a really good example of Sapir-Whorf type effects where a word that doesn't do a good job of carving reality at the joints leads to sloppy thinking (by contrast, "good" in the moral sense is less harmful).

I think that "evil" conflates (at least) two different things - "does not try hard to do what they consider to be the right thing" and "is wrong about what the right thing to do is".

Consider, say:

:- Serial killer Ted Bundy

:- Indian independence leader Mahatma Gandhi

:- A relatively principled politician from a party you disapprove of.

:- 9/11 suicide bomber Mohammed Atta

:- Me.

On the first scale, "tries hard to do the right thing", I would rank these people

Suicide bomber > Gandhi > Politician > Me > Serial killer.

On the second scale, "is correct about what the right thing to do is", I would rank them

Me > Gandhi > Serial killer > Politician > Suicide bomber

So what sorts of insights can we express with these two scales that we can't with just the word "evil"?

Getting low on either scale gets you into the territory we refer to as "evil", but in very different ways - I think that both Atta (an incredibly brave and principled man, demonstrably willing to give up his life for what he believed was right, but whose principles and beliefs about what that constituted were diametrically wrong) and Bundy (who for all I know may have had a perfectly good moral compass, but just chose to ignore it) did terrible things, but they did them for very different reasons, and the condemnations I would offer of them as people (as opposed to of their actions) don't really overlap.

By contrast, the only people who do really good things are people who score high on both scales. The people at the top of the "tries hard to do good" scale include some saints, but also a lot of really terrible monsters; no-one else's moral values align with mine as well as my own do, but I'm not a very good person because I don't make the effort and sacrifices required to be. Since "good" requires being high on both scales, it - unlike "evil" - does a good job of referring to a natural cluster in the 2D space they span.

Expand full comment

This sounds pretty sensible.

One slight complication is that beliefs are often downstream from intentions. If you are a cruel person who enjoys the suffering of others, you'll find yourself drawn to ideologies that say that cruelty is justified. Evil ideologies attract people who are already predisposed to be evil.

Expand full comment

> On the second scale, "is correct about what the right thing to do is", I would rank them

Me > Gandhi > Serial killer > Politician > Suicide bomber

You demonstrate a severe deficiency of hate for the political party "you disapprove of". I would recommend watching YouTube videos from the lunatic fringe of their side of aisle until you are cured.

Expand full comment

Serial killers are ranked higher on the "correct about the right thing to do" scale though. So I guess murdering innocent people is less wrong than outgroup political ideas.

Expand full comment

I think Bundy was chosen because he's generally understood to have known that his crimes were morally wrong but didn't really care.

There are other serial killers who justified or excused their crimes, e.g. Ed Gein or David Berkowitz, and thus would have been ranked lower on the "correct about the right thing to do" scale.

Expand full comment

That makes more sense.

Expand full comment

Notice that I'd class someone like Bundy at the very bottom of the "does what he thinks is right" scale, whereas plenty of politicians I despise try harder to do what they (in my view wrongly) think is right (as do suicide bombers, regardless of their motives).

Expand full comment

Are you implying politicians you disagree with have worse moral compasses than suicide bombers? That's the only person he ranked them above. That's quite extreme

Expand full comment

I think you're misreading.

I *am* implying that there are serial killers with better moral compasses than politicians I disagree with (and possibly even politicians I broadly agree with), the difference being that the serial killer knows right from wrong and chooses to do wrong.

Expand full comment

Well, not ALL suicide bombers, of course. Some of them probably have the same political views as said politicians. But I imagine suicide bombers to be more likely to be of the political persuasion NOT in power, and that might be skewing my representative instance morally-rightward.

At any rate, my point is that the hypothetical politician, by construction, is selected to have a "bad" moral compass. The hypothetical suicide bomber, in contrast, is only selected for passionate belief and a willingness to die for SOME cause.

Expand full comment

The post didn't talk about a "hypothetical suicide bomber", it said 9/11 hijacker Mohamed Atta.

Expand full comment

The top-level one, yes, but not the comment I was directly replying to, which spoke of "suicide bombers" more generally.

I stand by my reply to the original post.

Expand full comment

I agree with you to the extent that most people use good and evil the way you defined them. But they aren't treating the two terms symmetrically. In this framework, being good requires both intent and action, whereas evil only requires one or the other. This is why it seems like good is better defined and more exclusive. Basically, evil is over-defined as everything not explicitly good.

Good intent, good action > Good

Good intent, evil action > Evil

Evil intent, good action > Evil

Evil intent, evil action > Evil

If the terms were treated symmetrically, both good and evil would be narrowly defined as good acts with good intent and evil acts with evil intent, respectively. And both would be useful descriptive terms.

Expand full comment

Interesting way of thinking about this, but there needs to be a "neutral" possibility for the "action" label.

And then on the "intent" label an unstated variable in how people think/talk about this is degree of selfishness. How people judge others' purely-self-interested actions varies enormously, so much so that it seems like a significant confounder to this framework's real-world usefulness.

Expand full comment

I have a silly question (theory?) about the mechanism of the antidepressant mirtazapine.

For background I've been pretty severely depressed for a while (5+ years) and have had allergies/asthma/eczema/etc for all of my life; I had pretty severe allergies and asthma as a child, ended up in the hospital semi-frequently. I've read the theories that depression is linked to inflammation or whatever and there seems to be a reasonably robust association between asthma, food allergies, and depression risk.

At any rate I went through four separate medications before trying mirtazapine, mostly SSRIs, and they did absolutely nothing except give me some mild side effects. Ditto with therapy. Then I tried mirtazapine and it was magical; felt better after two weeks than I had in many years, and it's lasted for a while (more than a year now).

Of course this is completely anecdotal data. But I was reading up on the mechanism of mirtazapine, mostly out of curiosity, and I noticed that in addition to the main antidepressant mechanisms of α2-heteroreceptor antagonism and 5-HT2 & 5-HT3 receptor blocking, it's an extremely potent antihistamine. I'm curious whether the antihistamine effect is in itself helpful for depression (and is maybe related to mirtazapine's fast onset of effect). I can't find any studies on this. Has anyone tried injecting rats with histamine to see if they get depressed? Maybe in people with a long history of severe allergies/asthma it makes sense to prescribe a TeCA first? Would appreciate thoughts from someone who understands psychopharmacology better than me.

Expand full comment

Out of curiosity, did you notice any changes in your allergies/eczema/asthma?

Expand full comment

I don't understand it better than you, but I do have some thoughts for you: I doubt you will find much by straightforwardly researching your exact question. But I just went on Google Scholar and searched "antidepressant effects antihistamines" and all kinds of stuff turned up. I think I also saw some stuff about antidepressants as a treatment for allergies, too. I'm guessing you will get confirmation that your basic idea is plausible. So then you might just try to figure out ways to test the idea on yourself. If you expose yourself to allergens, do you get a bit depressed? Does a course of allergy shots have an effect on your mood? Does adding a bit of Benadryl to what you're already taking make a difference? (Check to see whether this is safe before trying it, though.) I think you have an unusual and hard-to-treat kind of depression, and it would be good to empower yourself by figuring out as much as you can about how your system works. You can't count on psychopharmacologists to do that, or to be up on the research. Many seem to have very little intellectual curiosity. Anyhow, congratulations on finally finding something that works.

Expand full comment

The Israel-Palestine situation has gone through several months' worth of events in the last week, here's a summary, as much for my own benefit as anyone else interested.

(1) On Friday, The ICJ ordered Israel to halt the operation in Rafah.

From [1], the 25:30 mark:

>>> By 13 votes to 2: [The court orders Israel to] immediately halt its military offensive, and any other actions in the Rafah governorate, which may inflict on the Palestinian group in Gaza conditions of life that will bring about its physical destruction, in whole or in part.

=====================

(2) Israel has interpreted (1) as saying that the military offensive should only be halted if it threatens to inflict genocide on the Palestinians. That is, instead of understanding the sentence structure as "(a) halt the military offensive, and (b) any other action which may ...", Israeli politicians and media appear to have deliberately understood (1) as "halt (a) the military offensive and any other actions that satisfy (b), where (b) is that which may ....". They then declared that the military offensive in Rafah doesn't satisfy (b), and thus won't be halted.

It's notable how nearly every international newspaper understood (1) to mean the immediate and unconditional halting of hostilities in Rafah:

-- (2-a) NYT: U.N. Court Orders Israel to Halt Rafah Offensive [Subtitle:] The International Court of Justice ruling deepens Israel’s international isolation, but the court has no enforcement powers.

---- (https://www.nytimes.com/2024/05/24/world/middleeast/icj-ruling-israel-rafah.html, https://archive.ph/11UIA)

-- (2-b) WaPo: U.N. court order deepens Israel’s isolation as it fights on in Rafah [Subtitle:] Though a rebuke to Israel’s conduct of its war, the World Court ruling will be difficult to enforce without the backing of the United States.

---- (https://www.washingtonpost.com/world/2024/05/24/israel-rafah-invasion-icj-ruling/, https://archive.ph/K2Iqq)

-- (2-c) CNN: UN’s top court orders Israel to ‘immediately’ halt its operation in Rafah

---- (https://edition.cnn.com/2024/05/24/middleeast/israel-icj-gaza-rafah-south-africa-ruling-intl/index.html)

-- (2-d) Reuters: ICJ Gaza ruling: Israel was ordered to halt its Rafah offensive and open the Gaza-Egypt crossing for aid

---- (https://www.reuters.com/world/middle-east/icj-live-court-rule-israels-offensive-gaza-2024-05-24/)

Four of the ICJ judges - 2 of whom were dissenters - supported Israel's ass-backward interpretation in public statements, while 1 (South African ad-hoc) supported the mainstream interpretation, and 10 judges stayed silent.

=====================

(3) The ICC hasn't yet granted arrest warrants against Netanyahu and Gallant. This is within the usual range: Putin's warrants took 1 month to grant, while Omar Al-Bashir's (Sudan's dictator) were granted in 9 months. It's unclear yet how the ICC classifies Netanyahu, and whether further developments will accelerate or decelerate the granting of the warrants.

=====================

(4) On Sunday, Israel torched a safe zone in Rafah. Israel claimed it was targeting 2 Hamas officials; initial figures say that 45 Palestinians, 32 of whom were children, were burnt to death as their tents caught fire from the air strike [2]. The IDF alleges that it used lighter ammunition to strike a nearby location, and that the fire is instead the result of shrapnel from that attack hitting a fuel tank.

This attack received widespread condemnations from outside Israel. The EU is reportedly mulling sanctions, pushing the stricter interpretation of the ICJ ruling above that means immediate and unconditional retreat from Rafah. Amnesty International [3] further called the ICC to investigate the incident among the war crimes it's investigating.

Meanwhile, inside Israel, some journalists have celebrated the torching and likened it to the Lag Ba'Omer bonfire [4] (a ritual celebration of a Jewish holiday of the same name that fell on the day of the attack, usually observed at Mount Meron in the north, but this year celebrated in East Jerusalem's Sheikh Jarrah neighborhood).

=====================

(5) Hamas claims to have ambushed and captured soldiers in Jabalya. The IDF denies the validity of the claims, but didn't offer any additional details or explanation of Hamas' footage. If true, it would be the first time that new hostages were added to Hamas' bunkers since October 7th.

=====================

(6) 2 Egyptian soldiers were killed in an exchange of fire with the IDF at Rafah, one immediately by a sniper shot, and the other later of his injuries. Both militaries are conducting investigations, heavily restricting information and issuing few public statements.

=====================

(7) On Saturday, a video appeared of a masked IDF soldier threatening Yoav Gallant with disobedience in case he orders a retreat from Gaza and/or hands the territory over to any Arab-affiliated government. The video was shared by Netanyahu's son, to widespread outrage and condemnation in Israel.

[1] https://www.youtube.com/watch?v=V-G8aj3CnCk

[2] https://www.youtube.com/watch?v=IQl9MrQ2oUI

[3] https://www.amnesty.org/en/latest/news/2024/05/israel-opt-israeli-air-strikes-that-killed-44-civilians-further-evidence-of-war-crimes-new-investigation/

[4] https://www.haaretz.com/israel-news/2024-05-27/ty-article/.premium/right-wing-israeli-journalists-celebrate-rafah-attack-likening-it-to-lag-baomer-bonfire/0000018f-b983-dca9-a5cf-bd832e6e0000, https://archive.ph/jQWWu

Expand full comment

I'm not making any comment on the underlying issues, but going solely off of the sentence you quote, I would agree with the Israeli interpretation. "the court orders Israel to immediately halt [its military offensive, and] any [other] actions in the Rafah governorate..." Grammatically you should be able to remove the bracketed section and have the sentence still make sense. But, at least in my experience with English, "halt... any other actions" is almost always followed by a "that" or "which" qualifying what subset of all possible actions are prohibited.

Expand full comment
May 29·edited May 29

Again putting aside the object level and only focusing on the grammar, your post still makes no sense to me. That's not how English works at all, to the point where it is difficult to see how someone could interpret it in the way that you did unless they really really want to and are just looking for a fig leaf.

If you say "I want you to stop eating cows or any other animals that chew their cud", it is simply not reasonable to respond "cows don't chew their cud so that means we can eat them". The word "and" includes *extra* stuff, it doesn't limit that which is explicitly named. "Any other" also implies that the description of the second part describes the first part, but simply disagreeing with that implication doesn't mean you get to throw out the explicit plain meaning of the words.

I will admit that there's an extra comma before the "which" in LHHIP's quote which is pretty weird and shouldn't be there, but even with the comma, the rules of English just don't allow for an alternate interpretation here.

Expand full comment

> there's an extra comma before the "which" in LHHIP's quote which is pretty weird and shouldn't be there,

Yes, I originally wrote the quote without the comma, then - for honesty's sake - I went and checked the official ICJ transcription [1], which does include the comma, a fact that Israeli media like the Times of Israel have exploited to peddle their bizarre interpretation.

But yes, any application of either common sense or the principle of Relevance from pragmatics would immediately reveal that the court didn't bring up Israel's offensive in Rafah for fun, or because the judges happen to be fans of military strategy. There is no world in which any remotely eloquent adult uses language like "I hereby order you to stop doing the extremely specific thing X, and any other things, which have the trait of being Y" to mean "Well, you can stop or not stop doing X, depending on your own interpretation of whether it has the trait of being Y".

[1] https://www.icj-cij.org/sites/default/files/case-related/192/192-20240524-ord-01-00-en.pdf

Expand full comment

I completely misunderstood your argument about what "which" was doing. After re-reading the sentence like 90 times I'm convinced by you and Lapras's take. The second comma threw me off.

Expand full comment
May 29·edited May 29

Isn't it that "extra" comma which makes all the difference, though? It sets off "and any other actions in the Rafah governorate" as its own, bracketed clause which can be removed from the sentence.

Like what if I were to require that you "immediately halt housebuilding, and any other actions on your property, which may cause harm to the local endangered beetle population", and you had a method of housebuilding which ensures that the beetles stay safe? My read would be that you can continue housebuilding using that method.

Expand full comment
May 29·edited May 29

In your example, the natural interpretation would still be that you have to halt housebuilding. But even if you did change it enough to make your interpretation viable, the only reason that works is that "housebuilding" *could* be interpreted as an indeterminate collection of actions which could be further narrowed down.

In the original example, it specifically says "its military offensive", that it is referring to a specific thing, not an indeterminate collection which could be further narrowed down. In order to make it possible to interpret the way that Israel wants to, it would have to be changed to say "offensives" rather than "offensive" as well as the other changes.

Expand full comment

I understand what you're saying.

So I guess what we're looking at is an attempt by the judges to craft a vague and ambiguous sentence that would allow as many of them as possible to sign on to it, but which ultimately wasn't all that successful.

Expand full comment

I've noticed Google Maps is getting things wrong more often than it used to, showing traffic backups that don't exist or missing ones that do. That's not even counting the times it gets the best route to a destination wrong.

In Michigan, I-75 currently has construction between Detroit and Flint, and going north it actually consistently advises one to exit the freeway and enter again afterward, even though the freeway is actually open and running fine. If you follow its advice you will take longer to get to your destination.

That doesn't even count the time it routed me to a strange area instead of where I wanted to go.

I've gotten the distinct impression Google is trying to squeeze more revenue out of everything these days, and I'm not sure how any of these inaccuracies are helping it to do so.

Is anyone else getting the impression Google Maps is getting worse?

Expand full comment

Where does their data come from? Just people with the app installed and permissions granted?

Expand full comment

It's not just Google - there are companies that buy and aggregate geolocated traffic data (with the ultimate data coming from hundreds of different apps, so broader penetration than Apple or Google), like Airsage, after which anyone can buy the data.

I don't know if Airsage has a real-time data stream, I don't think they did when I was using them 5 years ago, so maybe the real-time traffic stuff is confined to Google and Apple. But I thought I should point out that geolocated data isn't special or controlled or hard to access, anyone with money can get it, and with broad penetration into any given population.

Expand full comment

Certainly a large proportion of phones have this (I assume), but I suspect also, based on some directions the app provides, that other entities, like the government, are providing routing data. So my route which was low-traffic but not recognized as a route by Google may have had a "road closed" entry added, even though the road is open.

Expand full comment

I've had trouble with it for a while, but more with the road information than traffic. In Seattle (hardly a backwater), it told us to turn left at an intersection where this was disallowed. The lane information is frequently out of date, and it doesn't pick up on closed roads as much.

Expand full comment

It wouldn't surprise me if they've got their AI making things up for it now. Their AI was their first response to a Google Search yesterday.

Expand full comment

Yes, Google Maps has been deteriorating rapidly, and I switched to Apple Maps several months ago (yes, given their early history I was very reluctant to). Apple, to their credit, has made tremendous improvements to the maps and driving directions. Between using Duck for search and now Apple Maps for driving, my only remaining engagement with Google is email. That one is hard to break away from....

Expand full comment

Unfortunately, DDG sucks nearly as much as Google-a-year-ago now. I've personally been using SearXNG, a federated open-source search engine that aggregates results from multiple engines and has no ads or trackers.

It's like Google was back when it was useful, I don't even need to prepend "forum" or "reddit" to every search to get real results.

There are a number of URLs and browser plugins you can use to access SearXNG - I use paulgo.io as my go-to URL in Safari on my phone and a Firefox plugin to make it the default on my laptop.

Expand full comment

Thank you, I’ll give it a try.

Expand full comment

re Short and Pope, someone once asked me why they were called Child Ballads when they're obviously not for children. I said, "Are you making a joke or is that a serious question?" because I really couldn't tell. It was a serious question. And the answer is, they were collected by a man named Child.

Expand full comment

I keep checking fivethirtyeight, expecting them to have started modelling the 2024 election in earnest, but they haven't. I feel like they normally have by this point in an election year, though I don't know the exact dates they started previously. Are they just holding off because they don't like the answer?

Expand full comment

I'm late to the party, but I'd like to say that you should stop checking FiveThirtyEight. Here's Nate Silver himself, explaining how low they have fallen:

https://www.natesilver.net/p/polling-averages-shouldnt-be-political

Expand full comment
Jun 4·edited Jun 4

FiveThirtyEight is Nate Silver and his models. Without Silver and his models, there is no FiveThirtyEight.

Expand full comment

To chime in on the same theme, I think this is a case of "follow the person, not the brand". I trust Nate Silver to be relatively accurate and impartial, but the brand "fivethirtyeight" is only as good as whatever demon is possessing it.

Expand full comment

Thanks for all the replies, I had no idea that Nate had left 538. (Actually now it sounds familiar but I'd forgotten.)

I'm actually surprised they're not leaning even *more* into the modelling, though, in his absence. Maybe the remaining bozos can't come up with a model as sophisticated as Nate's, but surely they can make a dumb one?

Expand full comment

That does seem like Disney's style these days. Maybe they think a blog is fine? But there's got to be a few statistics folk who are also into politics and who think that they could do as good a job as Nate Silver. Putting a few of them in charge seems like a no-brainer.

Expand full comment

the "538 Model" is Nate Silver's IP. When ABC News made the inane decision to lay him off, they lost the model as well.

Nate has talked on his Substack about reducing the scope of the model, since (paraphrasing) why publish a constantly-updating model if the vast majority of its audience (*especially* the self-assured pundit class) are just as probabilistically innumerate as they were in 2016?

Expand full comment

I've seen a fascinating discussion of the long-term prospects for the 2024 British election.

https://andrewducker.dreamwidth.org/4434171.html

Expand full comment
May 28·edited May 28

They do seem to have gone downhill since Nate Silver left and Disney took them over. His new Substack doesn't seem to have started coverage yet either, but that is probably because he is still (or was, as of his post on May 16th) working on the model:

"Also, I’m finally taking some tangible steps to get the 2024 election model ready, interviewing finalists for the position I’m hiring — and later today, I may even (gasp) write a few lines of code."

April post announcing his plans for this year's election:

https://www.natesilver.net/p/announcing-2024-election-model-plans

So looks like we'll just have to wait and see?

Expand full comment

Fivethirtyeight seems to have changed over the past year. I know Nate Silver is no longer working for it and I think he owns the rights to the models. So probably this is the biggest reason if they haven't started modelling already. Nate Silver seems to have a Substack of his own now, so might be worth checking that out to see if he does a similar model there.

Expand full comment

Sometimes when I scroll through Substack comments on mobile, *something* triggers popups prompting me to subscribe to a comment author's blog.

Is it possible to hide or disable those?

It seems impossible to hide them once triggered unless I reload the whole webpage.

Expand full comment
author

Are you getting that on ACX?

Expand full comment
May 29·edited May 29

Yes, I also regularly get this in the comments section when scrolling down. I'm not sure what triggers it - I think clicking on some part of a comment in a certain way when on mobile. It's fairly intrusive.

Expand full comment

If you hover over someone's name or icon, a modal will appear with more info about them and buttons to follow/subscribe.

On mobile this happens when you touch their name I think, so probably when scrolling you can trigger this accidentally and maybe there is a UI bug where it doesn't go away without direct input.

Expand full comment
May 28·edited May 28

Yeah, the same happens for me as well. To get rid of it again I do the following:

1. Carefully tap some spot within the popup that does not trigger anything (not a link or button) and without scrolling

2. Carefully tap some spot outside of the popup that does not trigger anything

Expand full comment

On desktop the popup appears when you hover over the blog name to the right of the username (if the user has one). On mobile you have to long-press it just the right way to get it (at least on Android, don't know about iOS).

Expand full comment

In my experience, it happens whenever you view a new person's substack for the first time and scroll halfway down if you aren't logged in. They're really annoying.

Expand full comment

Why not monorails?

Monorails have a reputation as a white elephant of a transport system which seemed like a good idea in the mid 20th century but which failed spectacularly everywhere they were tried. But they still don't seem like an obviously bad idea... you can build them in an established city with a small land footprint, they're quiet, they run on electricity, they don't get stuck in traffic, and they're pleasant to ride on. Why have they failed to find a use case outside touristy niches?

(Serious discussions please form a line at my left, "Marge vs. the Monorail" references please form a line at my right.)

Expand full comment

Seattle has a nice one, but went for light rail instead of expanding it.

Expand full comment

I just remembered the Transrapid https://en.wikipedia.org/wiki/Transrapid magnetic monorail. It operates in Shanghai, but never got off the ground anywhere else.

People were quite against it when it was considered in Munich. Reasons were: Too expensive, not compatible with any other means of transportation, NIMBYism, high energy consumption, too much associated with Edmund Stoiber who made a fool of himself when he tried to advertise it: https://www.youtube.com/watch?v=bMUxRA4B9GE

Back then, I was against it too, but now I feel it would have been cool.

Expand full comment

not cost effective

Expand full comment

I read through the wikipedia article on monorails, and... I'm not sure I see what notable advantage monorails have over two-rail designs. This is probably partly just a limitation of the wiki article, which is really focused on history.

But it seems to me that many of the advantages you note (run on electricity, don't get stuck in traffic, small land footprint, pleasant to ride on) are all shared with any other electric elevated light rail system. Is the advantage here actually in the monorail? Or is it just easier to make an elevated monorail than an elevated two-rail system for some reason?

Expand full comment

> Or is it just easier to make an elevated monorail than an elevated two-rail system for some reason?

From my cursory reading, that's exactly it. Elevated monorail tracks are cheaper than elevated two-rail tracks because of the smaller footprint. Unfortunately, that's their *only* advantage. In particular, monorails are more expensive for ground-level or below-ground tracks, which are a lot more common than elevated tracks.

Expand full comment

It must depend on how it's implemented. In Detroit, we have the Q-line, which is almost, but not quite, a huge waste. In support: it's nice to use in inclement weather, and probably for disabled and/or elderly people who cannot walk far or well. Against this, it is actually faster downtown to walk where you're going unless the train happens to be in sight, "don't get stuck in traffic" is wrong because people park in front of the tracks (surely just "for a minute") and the trains break down, and the time estimates at the stations can be wildly wrong.

I never paid for a ride on it, as I worked for Rocket Mortgage, who probably paid for all rides I took.

The Peoplemover can be a better option since it can't be stopped by traffic (it's up on supports, as a kind of 2nd floor), but the area it serves is awfully short-range, and one could probably walk anywhere in its service area faster, if one includes going up to it and coming down at your station.

Expand full comment

“ seemed like a good idea in the mid 20th century but which failed spectacularly everywhere they were tried.”

Just like communism!

Expand full comment
author

25% warning, less like this please.

Expand full comment

Also, to be fair, fascism.

Expand full comment

Indeed. Also Scott, I'm sorry. It was a spur of the moment thing and I knew right after I closed the tab that I'd crossed the line a bit. I know it's not good to bring politics into a non-politics thread nor to write lazy one line potshots against whichever outgroup.

Expand full comment

Fascism seems to collapse into democracy sooner or later. See Spain, Portugal, Chile, Argentina, etc.

Expand full comment

The same reason we have ICE cars instead of steam powered cars? There was a time when both systems were relatively viable, but more people chose ICE. With monorail, everyone chose trains with double rails instead. Now if you want to build monorail, you need separate tracks and trains and maintenance facilities. Basically the whole system is more complex and expensive than a normal train, even though the technology is not "worse".

Expand full comment

Wait, are you saying that ICE superiority over steam engines is down to contingent factors?

I haven't looked into this but having to put water and coal into the vehicle instead of pumping fuel seems way less convenient

Also steam engines seem less miniaturizable?

But please convince me of the opposite and let me dream of PM-2.5-saturated steampunk uchronias

Expand full comment

Steam and even some kinds of electric car were pretty popular circa 1900. The biggest problem was not the coal/wood fuel, but the steam engine had to constantly be supplied with fresh water. A big advantage was the lack of transmissions. Steam power could be used to spin the wheels directly, and had consistent torque generation. There was no need for complicated gear ratios to manage power. Steam cars were like the Tesla of 1900, in that they had smooth constant acceleration.

What really started the decline of steam cars was the adoption of electric starters in ICE cars, which replaced the hand crank. This made ICE the all around most convenient system. From a thermodynamic perspective, an external steam engine would never be as efficient as an internal combustion engine. But it's hard to say what steam engines would look like today if we had kept using them. After all, ICE technology has been continuously improved for the last 120 years.

Expand full comment

Steam based thermodynamic cycles doing work are ubiquitous in power plants and way more efficient there than any mass produced internal combustion engine, but the difficulty is miniaturizing it without loss of efficiency. Condensers tend to be very bulky.

Expand full comment

I think the problems with steam cars were pretty insurmountable once someone figured out how to make a decent ICE.

Consider the locomotive -- steam technology had a massive head start and an incumbency advantage here, but ICEs quickly displaced steam once decent ICEs started coming along. ICEs were much more efficient, and required much less maintenance and attention. Same deal with ships.

In automotive applications the advantages of ICEs are even larger. It's okay if you need to spend half an hour heating up your boiler before you move your locomotive, but pretty inconvenient each time you move your car.

Expand full comment

Insurmountable is a stretch I think, and the ICE cars of the same period had plenty of problems too. Condenser systems could recycle the water to extend refills to about 1500 miles, although this added weight. Some of the later kerosene fueled steam cars got 15 miles per gallon, which was comparable to ICE cars at the time. There were flash boilers, powered by diesel or kerosene ignition, that could heat enough steam in 90 seconds to power the vehicle long enough for the main boiler to warm up. I imagine modern electrical systems would also work well in a hybrid. Electric powers the start-up until the steam heats enough to take over, and then the steam engine recharges the battery.

There were also ways the steam car was distinctly superior to the ICE car. The lack of a clutch or transmission made steam vehicles much easier to drive, and the simple design lasted much longer. There are steam cars with over 500,000 miles on them still in good condition, without anything other than normal maintenance. Which is unthinkable for ICE vehicles, unless they get the Theseus's ship treatment. Steam engines are also much quieter, almost silent, and don't produce nearly as much exhaust.

The real nail in the coffin was by the time all these kinks were worked out, Henry Ford was rolling ICE cars off the assembly line at a rate that dominated the market. Steam cars remained in the realm of a novelty for the rich.

Although I think steam engines could have been a viable replacement for cars, there are some areas they wouldn't work. Mainly because ICEs have a better power/weight ratio and are easier to miniaturize. I struggle to see how a steam powered airplane or leaf blower could work.

Expand full comment

Makes sense, thanks!

Expand full comment

I don't have specific knowledge here, but to give you some hope, I don't think the coal has to be a part of it.

Expand full comment

My first thought was that the monorail in Chiba seems reasonably successful.

A quick search suggests that the Haneda to Hamamatsucho monorail is very successful, and Chongqing operates a pair of well used monorails.

Maybe Disney and world expos have unfairly tarnished the monorail?

Autocorrect seems to hate that word too.

Expand full comment

Chongqing is always a weird case when it comes to structural engineering, because it's basically all mountain - go look up a video of someone driving through the city. I'm not surprised monorails work there - the geography basically makes elevated rail mandatory and monorails are the cheapest way to build elevated rail.

It's cheaper to build and maintain ground-level structures wherever that's an option, though.

Expand full comment

Chongqing looks like it was designed by Escher. I have no idea how anyone gets around it.

Expand full comment
May 28·edited May 28

Here's the first article I found when googling this: https://ggwash.org/view/67201/why-cities-rarely-build-monorails-explained. It seems pretty convincing to me.

Expand full comment
May 28·edited May 28

Thank you for this interesting article! It mentions Wuppertal; the thing is that around 1900, Wuppertal was very very rich and able to afford a spectacular form of suspended train.

But the first two reasons aren't really valid for Gyro Monorails: https://en.wikipedia.org/wiki/Gyro_monorail

What about these?

Expand full comment

That's an interesting concept I hadn't heard of before. Looking at the Wikipedia page presents some obvious immediate issues though.

1. This thing has never been built beyond a prototype before. Even if it were a good idea, that would mean that it's an option for 20 years from now, not today. But the fact that it's never actually been built suggests that there are major problems of some sort with it. Not every bright idea in one guy's imagination turns out to make practical sense.

2. As Wikipedia points out, every single car needs to have an active gyroscope system. I'm guessing that increases costs and fuel usage a lot.

3. There's also the issue of safety. This design is "fail deadly" - if it ever loses power at all, it immediately falls off the track. That is a really bad property to have and probably fatal just by itself.

Expand full comment

No, it doesn't fall immediately. Even when power fails, it keeps stable as long as the gyros are turning, which is about four hours. At least that's what Ernst Ritter von Marx wrote about the test monorail in London. Not sure how much fuel the gyros would need today.

And besides, not every failed idea is necessarily bad. Remember those ships which had rotating cylinders instead of sails? Yes, that really works, better than you'd expect; but they still need wind.

Expand full comment

What happens when a gyro is sabotaged?

Expand full comment

I don’t have a strong opinion. But if it makes your hands smell bad, it can’t be doing much better for your “clean” dishes.

Expand full comment

(>_<)

Expand full comment

I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell. I believe it would involve algorithms, mostly, not deep learning. I am not in the field, and there is no way I can actualize this. I will happily pass the idea to anyone who believes they could actually build the thing. If you work on animation software, this would be right up your alley. If the idea makes a pot of money, I think it would be reasonable to toss a bit of it my way. Anyone interested?

Expand full comment

Re: your drama: I once had a similar experience, and I wasn't even an outsider - I was talking as a fellow coder wanting to collaborate on a cool idea.

My takeaway was that while the Internet is great for people who want to have a squabble, using it for any kind of productive or ambitious goal is going against the grain.

My advice is you have to act a bit like a politician: take nothing personally, shrug off snideness and jeering, smile and nod to the ones who miss the point or just don't get it, engage and enthuse the ones who offer something.

And you have to put up with a lot of repetition - other people won't even have read the other responses, let alone know the now-familiar background context you have in your head. No one can follow you in there without a lot of patient explaining.

I suspect everyone is like this, but I also suspect coders are a particularly petulant bunch.

The problem is if you unload on them all for it, you tire yourself out and everyone sees you getting wound up. And if you give them power by worrying about their reaction, you let them control the conversation. Ignore them instead - you're here to discuss/advance an idea, not to justify yourself to internet strangers.

I say all that - take it with a grain of salt because I haven't really made a success of starting collaborative projects; I now tend to approach in more oblique ways and have a low baseline expectation that other people will help me.

I use graphics software too and would still be interested to discuss your idea.

Expand full comment

Re coders; I don't think coders are particularly petulant, it's just that "I have this cool idea and I'm just looking for a coder to implement it, and I want a cut" is kinda the coder equivalent of "hey I want you to do this art for me 'for exposure'" for artists. It's virtually always a bad deal for the coder, and a lot of coders get these sort of requests fairly often, and often give similar reactions to artists.

And I do think "you're just here to discuss an idea" is the wrong framing - they could have had that conversation; it would have required publicly describing the idea and saying "what are your thoughts?". But "I have an idea, and I'll give it to you if you agree to pay me if it goes well" is explicitly soliciting for other people to work on your behalf, and that's a different dynamic.

Expand full comment

Yeah, I get that, but even in my naive first post I did not propose anything that exploitive. What I actually said was " If the idea makes a pot of money, I think it would be reasonable to toss a bit of it my way. " And I later clarified many, many times what I had had in mind: I said SEVERAL TIMES, in pretty much these exact words, the following: The idea's not even that original, just a way of extending something already done. And it just popped into my head in an instant, whereas somebody building and advertising the thing would spend many many hours on it. So I don't think it would be at all reasonable to think I had a claim on that money -- just was picturing the developer tossing me like 1% as a thank you. Also said SEVERAL TIMES that I was certainly not thinking of any kind of contractual arrangement -- was just looking to give the idea away.

I'm sorry now I even mentioned money. It's really far from the main point. But I don't get why people were so reactive. It's as though some mass hypnosis kept everybody believing I was proposing "you do the work and I keep the money, OK?" even in the face of massive and ever-growing evidence that was not what I was proposing. And why are software developers so sensitive when they think some fool is proposing a ridiculously unfair and exploitive idea? Seems like software developers are a well-paid, smart, respected bunch -- why not just laugh off a stupid proposal of the kind people thought I was making? Instead, people reacted as though they were, I dunno, newly freed slaves, and somebody was trying to trick them into going back to massah's house and working the fields for free.

Expand full comment
May 28·edited May 28

"It's unlikely that an off-hand idea by a non-expert will work out" *is* a form of advice, and accurate, and in lieu of any actual technical details in your comment, about the only advice someone can give. Yes, receiving the same advice (which isn't what you want to hear) multiple times can be annoying, and it *might* not be right, but, to use your words, why not just laugh off those replies? None of them read as the programmers being angry, but your reaction to them does read as angry. The "sensitive" and "reactive" party in this thread does not appear to me to be the programmers.

Again, if you're just interested in someone implementing your idea, or having a discussion about it, just share it. Put it in a google doc and post the link. Or don't; but this extended debate about how everyone else is being unreasonable seems pointless, and I'm going to bow out of it.

Expand full comment

I did share it, with 2 people who work in related fields and expressed curiosity. (And I shared with no strings attached, by the way, no request that they keep the info a secret, or not use it without signing some sort of contract, nothing remotely like that). Am about to share it with a third. I have not shared it here as I said I would, because not only was the discussion extremely unpleasant, but very few people showed any curiosity at all. Nobody asked why I was interested in facial expression, where the idea came from, what kind of graphics stuff I do. 95% of what was said was a prolonged, completely curiosity-free attempt to convince me I'm an asshole. "It's unlikely that an off-hand idea by a non-expert will work out" was the least of it. That's a bit tedious after you've heard it a couple times, sort of like the college advice your uncle always gives after he's had cocktails, but not offensive. But there was sneering and snark, and I was called a crackpot, told that I thought software developers are idiots, had made completely, laughably absurd statements, etc etc. It seemed like the message was "you're a fool with a swelled head and an exploitive asshole. Now tell us your fucking idea." Under those circumstances I lost my appetite for posting the stuff here.

Expand full comment

Thanks, but actually I only unloaded on one, dionysus, and I didn't say anything awful in that exchange. And acting like a politician really goes against the grain for me. I dislike politicians, and I value being real, and would rather do the latter and take my lumps. I am reasonably good at making the case for my point of view in interactions like the present one, and while that doesn't soothe the troubled waters the way oil does, it often gets through at least partially to some of the people involved, and we end up having a somewhat better exchange at the end.

Also, people can sense when you're making nice just to soften them up so they'll be receptive to whatever it is you want from them. At the beginning, when I read your responses and Quiop's (whose name I may be getting wrong), I mentally lumped you with them as someone whose main agenda was to convince me I'm an asshole for thinking my idea could be a decent one. You started off the way Quiop did in your post, then suddenly switched to some friendly stuff about how you're curious and would love to have a nice little chat with me about my idea. That switch felt to me not like you'd realized you also had a second, friendlier message to convey, but like you'd realized you'd never get anything outta me if your entire message was a lecture about how the chance is nil that a layman could come up with a novel graphics idea worth trying. And then in a later post you actually commented yourself about how you'd consciously decided to put something friendly into your post. So at this point I haven't the faintest idea how sincere any of the sentences in any of your communications are, including the current one. So I'd recommend giving more thought to the downside of being down towards the impression management end of the impression management -- real deal axis.

Expand full comment

The problem is, we've all seen people who go "I am not involved in the field at all but I have this amazing new idea that is miles better than anything the professionals are doing".

Most times those ideas are not better. So people naturally tend to "Okay, tell us about the idea so we can see if it really is better".

If I claimed that I had a fantastic new system of doing therapy, even though I'm not a therapist, not trained in the field in any capacity, and have no experience of doing such work, I'm sure you as a professional would be slightly sceptical and want to know more about my fantastic new idea before you agreed to help me sell it to the public.

Maybe your new idea is marvellous, it could well be, but people are going to want to see the pig first before they buy the poke.

Expand full comment

Yes, I would definitely be skeptical, but I would want to hear your idea. I would probably post something like, "I have to admit I'm skeptical, but, you being you, I think there could be something in your idea, and I'd be very interested to hear it." And then I would shut up and listen. I would do that partly out of courtesy and kindness, because I like you, but also I do not at all rule out the idea that you would have a genuinely good and interesting thought about psychotherapy. Then after I heard the idea I'd tell you what I really thought of it. If I thought it was absolutely no good I would look for tactful ways to get that across.

Your version is not a completely fair analog of what I posted, because I did not rave about having a whole fantastic new approach to software development that's miles better than what anyone else is doing. I named *one* out of thousands of kinds of software, and said I thought what I had would work better than current software for this one little task, and that I thought it might even sell. So it's more like if you posted that you'd had a novel idea about how to treat people with insect phobias, and said you didn't think the approach had been tried before, and that you thought it might actually help a bunch of people. So my claim was much more of that nature.

And I did not refuse to describe or show my idea. I said at the beginning that I'd go into detail if anyone who was able to build such thing expressed an interest in hearing the idea. I probably would have gone into detail anyway if most of the posts had said things like, I don't develop graphics software, but I'm quite curious about your idea. Can you post some more? But actually there was almost no curiosity expressed. 95% of what I got were long, irritated-sounding lectures about how ridiculous it was that I could for a moment entertain the idea that someone with no training in software could have an idea that would work. And people were pretty harsh and rude. The word "crank" was used. I was told that I thought software developers were idiots, and that I what I had said in my post was unbelievably absurd. The gist of it was that I was a fool and an asshole.

And actually I did tell the idea in detail, via DM, to 2 people who work in the field and expressed some interest. So the situation is not that all the posters I'm mad at are asking to see the idea and now I'm being contrary and refusing. Most did not express any curiosity at all in their initial posts or later ones. Yet one person who had not asked one single question about the idea did accuse me of "jealously guarding it" after I had "promised" to post it.

It really does seem to me that the people piling on me have a distorted perception of my initial posts and what their responses actually were. And it sux. I can be quite mean sometimes on here, but I only do it to people who seem like trolls and/or are being rude and cruel. I think I felt like being pretty good natured and reasonable in my posts overall had kind of given me, like, some credit -- like that if I posted something off-base, I'd kind of earned enough points so that people would be unlikely to believe I'd just posted something dumb, mean, entitled and ridiculous. Like if it came across that way they'd give me the benefit of the doubt and ask me to clarify what I meant. Nope.

Expand full comment

>Your version is not a completely fair analog of what I posted, because I did not rave about having a whole fantastic new approach to software development that's miles better than what anyone else is doing. I named *one* out of thousands of kinds of software, and said I thought what I had would work better than current software for this one little task, and that I thought it might even sell.

Perhaps you forget that the one tiny kind of software you described in very broad terms has applications in multi-billion-dollar industries such as games, movies, and TV. If you'll allow another analogy, here's what that sounds like to me: You said the equivalent of "Oh it's no biggie, maybe you'll find a buyer here or there in the niche subfield of transportation, but I am certain I have improved upon the wheel, DM me if you like money."

Expand full comment

>You said the equivalent of "Oh it's no biggie, maybe you'll find a buyer here or there in the niche subfield of transportation, but I am certain I have improved upon the wheel, DM me if you like money."

Yes, my initial naive post could be taken to mean that (though it could also be taken to mean other things). But when everybody got so angry I put up many many responses clarifying what I had meant, which was definitely NOT that if somebody knew the novel, awesome idea I had they could make millions by applying it in all the industries that in one way or another use tools for adjusting facial expression. I said the idea was not particularly original -- that it was just a way of slightly extending something that is already done. I said it had just popped into my head -- was not, in other words, the product of a lot of thought and labor. I said that I knew it was unlikely that an idea from someone outside the field would work, and would be something that had not been done, and would make much or indeed even any money. Seems to me it did not matter what I said -- I was the poster people loved to hate, and they were impervious to any information that would make me look less foolish, entitled, self-important and exploitive. Cuz where's the fun in that?

Let those who have never put up a post that could be taken to mean something really dumb and obnoxious cast the first stone.

Expand full comment

I know you didn't come on strong with "I am so much smarter than the professionals", I think it's just that we've been burned before, in whatever job or career we have, by people coming in with "amazing new idea" or "we are completely scrapping how we used to do things and now doing it this new way", and refusing to listen to the people who have to use the system or implement the new way about how it's not going to work the way the "great new idea" person thinks it will work.

And some of us on here are less socially adept in interpersonal interaction, to put it charitably, so we do rush at it like a bull at a gate with "what makes you think you know so much?" 😀

Expand full comment

You're not a very rewarding person to offer help or olive branches to.

Expand full comment

I am if I experience the help and the olive branches as real. Currently feeling pretty warmly towards Vitor, for instance, whose post seemed simply sincere to me.

Expand full comment

I do graphics programming for games. I'm curious what the idea is, but of course I can't commit to working on it without hearing the idea first.

Expand full comment

Of course not! What I meant was that if anyone had any interest I would describe the idea -- then, if you're interested, you may have it. I sent it to you as a DM, because the present discussion has gotten so unpleasant and I don't want to add fuel to it. Also DM'd it to Viki Szilard when they posted.

Expand full comment

Well I’m a software dev who’s in the field of computer graphics/machine learning, and my curiosity’s getting the better of me so…

What do you mean by “changing the expression on a face”? Take an RGB image of a human face, and change it from say, a smile to a frown? Or take a rigged 3d model of a human face and animate it to have a desired expression?

If you prefer you can message me with the details. I can prototype things very quickly :))

Expand full comment

Side note, I'm really disappointed there are no faces on the Euro banknotes. It's so much fun to make them smile or frown!

https://www.youtube.com/watch?v=GX7Aj8SySYQ

Expand full comment

I don't begrudge your optimism, but the reality is that ideas are a dime a dozen among AI researchers. The only way to know if something works is to try it, and the vast, vast majority of ideas that even experts have don't pan out. Because you're not in the field, you don't know what the state of the art is, what researchers have already tried, how feasible it is to implement an idea, or how plausible it is that an idea might work if implemented. The chances of your idea working out and being monetizable are very close to 0%, especially because it seems vague and poorly defined to begin with ("...I believe it would involve algorithms...")

Expand full comment

I'm getting a bit sick of responses telling me it's unlikely the idea's any good.

Hey, I get it. I am not expecting to make any money. If the thing did, I think it would be reasonable to get a bit from whoever makes it for supplying the idea, but I certainly wasn't picturing signing a contract or anything like that. In fact, I was imagining just describing the idea right here. Obviously if I thought it at all likely that this idea would make money, I would not be describing it on a forum where hundreds or thousands of people could read it -- I'd be jealously guarding it and telling one possible developer at a time, after swearing them to secrecy. On the other hand, I have messed around with enough graphics software to have a sense of what is possible, and the thing I'm thinking of seems to be in that realm. And I have searched hard enough for the thing I have in mind to be pretty sure it is not available now. So I doubt that it's impossible to do, and I doubt that it has already been done.

I don't see where the evidence is that I am overconfident about how workable and monetizable this idea is. In fact I have given multiple assurances that I don't believe various kinds of optimistic stuff. All I am doing is asking whether anyone who builds animation software or the like is interested in hearing the idea. If someone is, I will lay it out.

In other words, I don't think the likelihood that this idea is worthwhile is zero. Several people who have responded so far seem to be triggered into some kind of irritable discourage-this-amateur feeding frenzy by the fact that I don't think the chance is fucking ZERO. Get over it.

Expand full comment

Computer graphics is a very large field that's pretty mature (compared to AI at least). There are thousands of people doing research on (semi-)automated mesh animation, all sorts of things like projecting motion capture onto arbitrary models using some sort of skeleton, deforming meshes while renormalizing them, kinematics, etc etc etc.

This is a huge field of research backed by practical applications in some of the world's biggest industries: movies and video games.

This kind of field is much harder to make a contribution to as an outsider, especially when you don't know what the state of the art is, common tools and file formats in use, typical rendering processes, etc.

I don't want to discourage you, but the priors are strongly against you. That said, I'd be happy to discuss your idea, I'm a dabbler in computer graphics myself.

Expand full comment

Hey, for about the 5th time, I get it that it is unlikely that an outsider would come up with an idea that is novel, and doable without an amount of effort that the idea does not merit.

It sounds like you're like me -- you use computer graphics, but do not develop the software. If so, I think I'm going to hold off on laying out the idea unless someone who actually works on this software asks to see it. When I first posted this idea I probably could have been persuaded to just describe it to somebody like you, who uses graphics software and is curious. But at this point I am irritated and uneasy, because every single respondent has told me that it's very unlikely the idea's worthwhile, and several have written about that at some length. It really seems to me like my post irritated the hell out of various people who actually write software, and that if, as is likely, my idea is not workable, or has already been done, I will be subjected to lengthy, snide "I-told-you-so's, dum dum" posts.

I don't understand why people keep piling on with the "it's very unlikely to be any good" posts. Do they think I didn't read all the earlier ones? That I read them but was unable to grasp their meaning? That I vehemently disagreed with them? Jeez, I have responded to all of these posts by saying I know the chance is low that the idea is workable.

Do you know why you felt the need to make the same point again? The first 90% of your post is still another explanation for me of why people outside the field almost never have an idea that is worth implementing. I'm not complaining about your post, I'm asking you, because you sound friendlier than the other posters. Can you figure out why it felt important to you to write that first 90%, which duplicates what the other posters have said, instead of just posting your last sentence, expressing some interest?

Expand full comment
May 28·edited May 28

"It really seems to me like my post irritated the hell out of various people who actually write software"

Yes, it did irritate me. It irritated me because it matches the pattern of crackpots who take the people in a highly technical, actively researched field for idiots, and are convinced that they know better despite demonstrably not knowing even the basics. You don't even realize the absurdity of saying "I will happily pass the idea to anyone who believes they could actually build the thing" without giving any description of what "the thing" is. Do you realize that some things in computer vision require a few lines of code, while other things require years of dedicated effort by a large research team with tens of millions of dollars (which may well go down the drain because the idea turned out to be impossible), and that it's not always easy to tell which is which?

"I don't understand why people keep piling on with the "it's very unlikely to be any good" posts. "

I made the same point as the other posters because there is value in letting you know that there is overwhelming consensus on this point.

"I don't see where the evidence is that I am overconfident about how workable and monetizable this idea is."

The evidence is here:

"I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell."

I'd bet you that the world's top computer vision experts wouldn't dare to make a statement like "it'll work much better than what is out there currently" without implementing their idea and seeing that it actually works. When people pointed out the unlikelihood of success, you became hostile, which is again typical of a crackpot. Jealous guarding of your idea (despite unfulfilled promises to share it on this forum) is a third typical crackpot characteristic. Granted, you did acknowledge that the idea was unlikely to be monetizable and could be unworkable, which is not typical of crackpots.

Expand full comment

>“I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable chance it would actually sell.”

Yes, it did irritate me. It irritated me because it matches the pattern of crackpots who take the people in a highly technical, actively researched field for idiots, and are convinced that they know better despite demonstrably not knowing even the basics.

There is nothing that I wrote that suggests in any way that I take the people in a highly technical, etc., field for idiots, or that I am convinced I know better despite not knowing the basics. That all seems like shit you are angry about from other contexts that you are dragging to this exchange and dumping on me. In fact not only did I not say anything that implied any of those insulting, stupid ideas, I said things that expressed ideas incompatible with it. I said in my initial post that there's no way I could possibly develop the idea into actual software. That's some pretty good evidence I'm aware that I lack basic skills, isn't it? Also I expressed willingness to just post the idea here, if anyone who has the skills to make this sort of thing expressed interest. Seems to me that makes clear that I do not think my idea is highly valuable and unique, since I'm willing to describe it to a huge forum. If I thought it was unique and highly valuable I would guard it jealously, wouldn't I?

>You don't even realize the absurdity of saying "I will happily pass the idea to anyone who believes they could actually build the thing" without giving any description of what "the thing" is.

Actually, in retrospect I do see how that sounds absurd if taken a certain way. But I did not mean that I expected somebody to decide, without hearing more, whether they could build it. Of course they could not! What I meant was, if what I’ve said interests you let me know, and I will put up a post describing the idea. If you think the idea is workable, it’s yours for the taking. Jeez, dionysus, it doesn’t seem like it’s that hard to figure out that there is an alternative interpretation to my post beyond the stupid, entitled one you put on it.

> “I have an idea for software that will work much better than what is out there currently for changing the expression on a face. Because it would be so much better, I think there's a reasonable change it would actually sell."

I'd bet you that the world's top computer vision experts wouldn't dare to make a statement like "it'll work much better than what is out there currently" without implementing their idea and seeing that it actually works.

Well, if you knew what my idea is you would see why what I’m saying is nowhere near as sweeping and grandiose as it sounds to you. It’s really just an extension of something that already exists. My hopefulness about the idea has nothing to do with thinking I am able to judge how easy it is to implement and having concluded that it’s easy. I totally get that I am not able to do that, and in fact that even experts would hesitate to do it with a novel idea. My optimism came from thinking: clearly we can do this for a and b. If there were software that could also do it for c, d and e, which are in the exact same class as a & b, that would make some cool things possible. Here’s a made-up analogy about tattoos, which probably is not historically accurate: let’s say it used to be that most tattoos were small, simple blue images, and one day some tattooist said, why not make them multicolor? We know how to inject other colors besides blue, and going multicolor would make more complicated and beautiful designs possible. OK, that’s the nature of my idea. It does not rest on any belief that I understand how to implement these things — it’s an idea about the possibility of extending something that’s already possible.

>Jealous guarding of your idea (despite unfulfilled promises to share it on this forum) is a third typical crackpot characteristic.

I didn’t promise to share it. I said if anyone who works in the field and can actually make this sort of thing was interested, I’d post it here. Actually, someone in the field finally wrote and expressed interest, and I laid out the entire idea for them in a DM. I only put it in a DM because this discussion has become so unpleasant, and I did not want to add fuel to it. I did not ask them to keep it secret, or to make any sort of contract. So I think that puts the jealous guarding accusation to rest.

Later edit: Somebody else expressed friendly interest, and I DM'd a detailed description to them too, and also did not say a word about secrecy, etc. Still think I'm jealously guarding my idea?

Know why I'm not just posting it here? Because this discussion is so unpleasant, and most participants have shown zero interest in the idea. You're reacting to the entitlement and whatnot you think is inherent in posting about the idea the way I did.

>When people pointed out the unlikelihood of success, you became hostile

I don’t think I did. I said many times that I did not believe this and that grandiose thing, and I said that politely. I eventually started to complain about the repetitive posts all saying the same thing, but I complained in a civil way. I guess it was snarky to describe it as a feeding frenzy, so we can count that as hostile, but it’s pretty small scale. And I don’t think I’ve been hostile in the present post. The worst things I have accused you of are dumping anger about other situations onto me, and failing to consider various non-idiotic things I could have meant by certain sentences. Whereas you, in the post I’m responding to, have used the word crackpot, have accused me of thinking of software experts as idiots, and of having absurdly grandiose and unrealistic ideas about what I am capable of, and of jealously guarding my idea. Your hostility score’s a lot higher than mine.

And didn’t you ever have an idea you thought might be worthwhile about how things might be done in a field outside your expertise? Something about the way a hardware store could be set up, or a way to get more people to get needed medical tests done, or whether it might someday be possible to control a cursor by running your tongue around the roof of your mouth?

Expand full comment

I was trying to get across where exactly the expectation mismatch is. There are some domains where you can come up with a contribution relatively easily as an outsider. But let's say someone posted here that they're a hobbyist who's come up with a new surgical material... you'd be very skeptical. Not because the person is dumb, but because they basically have to be an insider to even have access to the situations and tools where they could conceivably experiment with their thing.

Computer graphics is very accessible on the one hand, with tons of people building their own raytracers, games and such; but the more *topological* problems, on the other hand, depend on stacks of assumptions and lower level techniques, and you won't build something commercializable if you don't know exactly where in the toolchain your code is going to sit. My guess is that people would have been less skeptical if you'd just mentioned this as an interesting research problem.

Expand full comment

Thank you for answering. And hey, I get all that. Computer graphics is sort of like a gold field that's had a crowd of people prospecting in it for years. There's not much left to find.

Still, didn't you ever have an idea you thought was worthwhile about a field outside your expertise, maybe even a field that's already had lots of people prospecting in it?

Expand full comment

"Algorithms" is too vague to be meaningful. It's very unlikely that your idea is both possible to implement with only "algorithms" (which I'm taking to mean relatively simple image transformations, eg warp, skew, rotation, alpha compositing, etc) and better than existing techniques that use text/image embeddings and diffusion models. For an example of how powerful and usable these techniques are, I'll direct you to this blog post, wherein the authors discuss using a webcam stream of a face to animate a 3D model of a face in real time: https://blog.roblox.com/2022/03/real-time-facial-animation-avatars/

Expand full comment

My point when I said algorithms was that I did not think deep learning would be used for the core of what this software does. I believe the task is simpler than the roblox animation one. Yes, it would be doing relatively simple image transformations.

Listen, I'm a psychologist, and people here often have ideas based on simple misinformation, or ask naive questions, or propose naive theories. On the other hand, I find some of the ideas from people here completely outside the field quite fascinating and plausible. I thought that, for instance, about a number of comments about how the sense of self is constructed, in the discussion of Scott's post about IFS. Yes, of course my field is much softer and mushier than software development, but there is still such a thing as being misinformed or naive about human psychology, and when I run across some of that I do not write a sneery response. Why be rude?

Expand full comment

I don't perceive myself as having been rude, but merely frank - I stand by my probability estimate (very unlikely - which does include possible!). Sorry if that came across as rude. I think your tone and framing (refusing to reveal the idea itself, confidence that it could make money, asking someone to commit to implementing it, stating that it's "much better" than existing techniques despite a lack of demonstrated knowledge of existing techniques beyond having done a lot of searching) are all triggering the Crackpot Response Protocol for people, here.

I think you could have gotten a better response with an approach more like, "Hey, I had this idea for a way to change the expressions on faces, here it is: <description of idea>. Can anyone with experience in the field tell me if that's been tried, or why it would/wouldn't work?"

Expand full comment

I'm rather curious to hear the idea. I'm not sure it's as easy as you think to turn ideas into products, or turn software into money, come to that. But I'm always interested to hear ideas.

Expand full comment
deleted May 27
Comment deleted
Expand full comment

On second look: If you meant this reply for Quiop it would make more sense, since he was actually being snide.

Expand full comment

> Mostly I get the feeling you're looking forward to teaching me a lesson in how people outside the field invariably come up with lousy ideas

That's a you problem. Jeering wasn't my intention and I deliberately tried to word my reply away from sounding like it was. In this case being hypersensitive has only cost you goodwill.

Expand full comment

Since you admit you don't know enough about the field to be able to implement your idea, I'm curious as to why you seem somewhat confident (i) your idea would work better than what is currently available, and (ii) other people haven't already thought of it?

Expand full comment

I do not think it is easy to turn ideas into products and software into money. I am pretty sure I'm right that the thing I've thought of does not exist, because I have done a very extensive search for it. But I am well aware that as someone not in the field I may not be right about how this could be done, or about whether doing it is a nightmare not worth the trouble, etc. On the other hand it's a free idea, and if I was in the animation field I'd at least ask to hear it. Ya never know.

But neither you, Quiop nor rebelcredential have expressed any interest in implementing the idea, should it turn out to be decent. Mostly I get the feeling you're looking forward to teaching me a lesson in how people outside the field invariably come up with lousy ideas. So I see no point in sharing the idea with you. If someone in the field shows some interest I'll describe it right here on the forum, though, and if they tell me it's not practical I'll certainly accept that.

Expand full comment

Since rebelcredential also read my comment as "actually being snide," I accept responsibility for my poor choice of phrasing and offer my apologies. I was genuinely curious about the idea and wanted to know why you think people in the field would have missed it. (e.g. "I am a psychologist, so I have insights from my own field into the perception of facial expressions and I think computer modelling of facial expressions could be more effective if they incorporate these insights.")

Expand full comment

I don't exactly think people in the field have missed it -- it's more that there's a lot of churn and change. There have been a huge number of sites opening up that offer a user-friendly interface for altering appearance. My idea is just an extension of something that's already being done. That is the reason I'm pretty sure it's doable -- not some delusion that I can intuit, without knowing how to code, that coding the thing I have in mind is pretty simple. I have looked *quite* extensively for sites or software that do what I have in mind, and can't find any, and that's why I'm pretty sure they do not exist. As for whether the thing I had an idea for would be widely interesting to people, it's hard to judge. Doesn't seem implausible to me, but it's hard to predict what the public will fall in love with and what they will ignore.

Expand full comment

If your DM conversations don't end up leading anywhere, I'm sure you could start an interesting discussion in the next OT by describing your idea in more detail (assuming you're not too concerned about IP and money issues).

Expand full comment

Concerned about IP and money issues? How can it possibly not be clear at this point that I am not concerned about either? I have probably said at least 10 times in the course of this long, unpleasant discussion that I do not view this idea as intellectual property, and that I'd happily describe it in detail here, publicly, if someone who works in the field showed some interest. As for money, I have also said multiple times that I get that the idea is unlikely to work, and if it works it's unlikely to be a big moneymaker. I also added that in any case I didn't think of myself as having a share in the profits. I just tossed out an idea that's a variant of something already done, so not a particularly original idea. The person who makes and advertises the thing would have put in many hours, and would deserve the money. All I said was that I thought it would be reasonable for the person to send me a small chunk as a thank you. (I had in mind something on the order of 1%, but certainly was not imagining formalizing that in a contract.)

And I have now sent detailed descriptions of the idea via DM to 2 people in the field who expressed some curiosity, and I did not ask them to keep the idea secret or to send me some thank-you cash if by chance the software made them a good amount of money. The only reason I didn't just post the idea here, as I'd said I would if anyone was interested, was that the discussion had become so unpleasant. Also, there has been almost no expression of curiosity from the people posting. 95% of what I have gotten has been long, irritated explanations of how unlikely it is that my idea would work, and various bad judgments of my character for even *thinking* the idea might work. I have been called a crank, told that I think software professionals are idiots, accused of being ridiculously oblivious to various obvious things, of jealously guarding my idea, and of reneging on a promise to post it here.
Until 2 people who work in the field put up brief posts expressing some curiosity and nothing else, nobody had expressed the slightest curiosity about why I'm interested in facial expressions, where the idea came from, why I think currently available ways of putting expressions on faces are unsatisfactory, or what general approach I have in mind.

Anyhow, I appreciate you apologizing for being snide, and showing some interest now. I could not resist venting a bit, in the course of telling you why I have zero interest in posting more about this subject.

Expand full comment

I'm aware this is well-trodden ground, but have we conclusively put to bed why it is that software/software development is quite so shit?

These are the reasons I know about:

Mental models and gaps thereof: details of the real system are complicated. They get hidden in libraries/frameworks that make things easier by hiding said details. New devs unaware of the underlying details end up doing inefficient things or reinventing systems they aren't aware already exist. This process repeats so you have layers on layers on layers and everything just seems to run slower and slower.

Leaky abstractions: these frameworks/libraries don't fully encapsulate the underlying model so when things go wrong you need to examine (and understand) every layer down the stack. Many of which you were never explicitly taught about because you weren't supposed to need them. More layers = harder time fixing bugs.

Docs: any lib/framework/component brings its own mental model, units of thought, and procedural knowledge (ie what actions/processes/patterns to follow when using it). Devs often don't even acknowledge these, and even when they do it well, communicating them takes a long time.

Dependency and fragility: stuff relies on an increasing number of other stuff, with the result that there's more and more to go wrong.

Bloat: new things are constantly being required and added that aren't fundamental to the job or important to the users. Both for end users - your new laptop is slow because it's trying to run a million new services that Microsoft has decided you will like - and for devs - that image carousel for your website brings with it React+tailspin+vite+webpack+didnt ask+dont care+touch grass.

Have I missed anything?

Expand full comment

Here's one: text files are a rather unnatural way to interact with code. Text is a serialization format, and a huge amount of the work of writing code amounts to moving from the text model to a mental model of the runtime model and back. Loading and unloading mental models this way is incredibly exhausting in the long run.

Compare to "working on a car"—you don't work on the "the blueprints for a car", you fiddle with one piece while interacting with the already-built other 99% of the car. In a new code base it's pretty hard to see all the pieces and how they interact—whereas when you pop the hood of a car, there it all is. Debuggers get a little closer, but not very—you can't feasibly rearrange the parts while the rest is running. Live-reloading tries to emulate the right idea, but it's still hopelessly stuck in the text-based paradigm.

A coding paradigm that was 50% closer to "popping the hood" would be a dream to work with, I think, if it could get over the huge barrier of "all our existing tools and mental models are designed to work with text files".

Expand full comment

This is pretty much word for word my own opinion.

Have you had any thoughts about what your runtime model should look like? I've had various ideas but I'm interested to hear other peoples'.

Expand full comment

Oh, neat. I have a lot of old notes but it's a dormant project to me. Where my mind goes (/used to go) was towards a system with first-class constructs for:

* dataflow graphs. To the extent possible all "programs" would just be an introspectable dataflow graph, tho one could compile a graph into a native function for speed.

* "components"—like actors, or like "things you can point to under the hood of a car". These in turn would be wired together in a dataflow graph. Components live in a hierarchy of layers, so e.g. your "webserver component" has subcomponents like "API" and "datastore" who are wired together, and you can "work on" the API component with the webserver and store already running.

* DSLs or sub-languages aka "slangs"—a given component has an internal namespace which basically defines a specialized language, with certain constructs in-scope. E.g. an API server automatically has a sublanguage for "routing" in scope, and an API handler has a bunch of HTTP equipment in scope.

* programs are "submitted to a component". E.g. an API handler implementation is a "program running in an HTTP-handler component", and a routing table is a "program running in the HTTP server component" in a very limited language that can basically only bind regexes to handlers. This notion of "submitting" always takes for granted that the underlying component exists and is running, and you can repeatedly submit "programs" to the same running component. One would typically "work under the hood" of a single component at a time.

The underlying runtime model is "whatever it takes to support this", but I would prototype it with a relational DB backend and leave the impl abstract, allowing a system of microservices to be defined in the same schematic language...

See, it gets out of hand!
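For concreteness, a minimal sketch of that "introspectable dataflow graph" idea in Python. All of the names here (`Node`, `run`, `lineage`) are invented for illustration, not from any real library:

```python
# A toy dataflow graph where every value knows what it depends on,
# so tooling can ask "where did this come from?" instead of grepping text.

class Node:
    def __init__(self, name, fn, inputs=()):
        self.name = name            # human-readable label tools can point at
        self.fn = fn                # the computation this node performs
        self.inputs = list(inputs)  # upstream nodes this one depends on

    def run(self, cache=None):
        """Evaluate the graph, memoizing each node by name."""
        cache = {} if cache is None else cache
        if self.name not in cache:
            args = [n.run(cache) for n in self.inputs]
            cache[self.name] = self.fn(*args)
        return cache[self.name]

    def lineage(self):
        """Walk upstream: every node this value 'came from', then itself."""
        seen = []
        for n in self.inputs:
            seen.extend(n.lineage())
        return seen + [self.name]

# Wire up a tiny graph: source -> double
source = Node("source", lambda: 21)
double = Node("double", lambda x: x * 2, [source])

print(double.run())      # 42
print(double.lineage())  # ['source', 'double']
```

The point isn't the evaluation (that part is trivial); it's that `lineage()` is queryable by an editor or debugger, which is exactly what a text-first representation throws away.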

Expand full comment

I'm seeing a lot of my own ideas here, which is a nice but odd feeling. I actually spent yesterday working on a (very crappy) dataflow planner thing as an experiment. (Conclusion was it needs more work.)

I ended up going in circles on what the DSLs should entail. A DSL implies a whole bunch of unseen background knowledge, which needs to be there when you pick it up. (Because what good are clever, domain specific nouns and verbs if the user doesn't know what they mean or, worse, assumes a subtly different meaning than the one the creator meant?) To fix this each DSL needs to come with all that explanation bundled in. But a standard wiki would end up being ignored and could struggle to communicate the important stuff, which is how I got onto my current preoccupation: better ways of notating and communicating "mental models" in general.

I get your submitting-programs concept - basically a system of actors where each actor speaks its own language. (You could call something built around that Babel, if only the name wasn't taken by a JavaScript compiler.)

What made you give up and move on?

One fundamental problem I see is: do you create this as an entirely new self-contained universe - and have to reinvent said universe from scratch - or do you try to allow including external things, and then find yourself having to make compromises with them that break the entire concept?

Expand full comment

> I'm seeing a lot of my own ideas here

Ha, that's heartening. I kind of think there's a naturalness to the perspective we've both glimpsed, and attempts to iron away complexity in a lot of different programming areas tend to converge to a similar framework.

> What made you give up and move on?

Haven't been writing code in a few yrs. These kinds of ideas would arise whenever I was frustrated with my tools. I never really got past the taking-notes and brainstorming stage—I don't have a programming language background at all.

> do you create this as an entirely new self-contained universe

I imagine you:

* design the self-contained universe as an ideal

* but anything you build that's actually designed to be *used* has to be maximally interoperable with mainstream languages and tools. This might mean you implement the runtime interface in Python, or you run a python interpreter inside your runtime, or you interface with the runtime over the wire.

> DSLs... explanation... standard wiki... mental models

I don't have a ton of answers here, but might be able to illustrate my thinking as follows:

One of the narrowest problems I wanted to solve was to improve on SQL for big data-analysis-style queries. Consider: a SQL query represents a dataflow graph—data flows from a bunch of upstream tables into a final view or query result. Every column in that final result query "knows where it came from"—e.g. a column `user_id` knows exactly what upstream tables it came from, and an `avg(sales)` column knows it's an average of whatever `sales` was. It has to, because these details become the runtime representation which the compiler actually operates on!

Now, it seems to me first that:

* our tooling should have access to that runtime representation, such that I can cmd-click on a column in the final query and my editor can show me that graph structure by which that field is generated

* the final query represents a dataflow graph, which I want to be able to use in other ways than just *running* it. For example, I could "host it as an endpoint"—and autogenerate API docs, where SQL column descriptions in the upstream tables "flow through" the datagraph to document the columns of my API query along with its lineages, types, constraints, FK relationships, etc. Or I could materialize a number of tables in a DAG (here I'm thinking of a DBT-style analysis workflow, if you know of it) and have each one automatically inherit lineage data. There are tools which try to layer in such lineage data later, but IMO it should be part of the native representation of all SQL.

All of this metadata is just data, but we're blocked from using it because we treat queries as text-files first and runtime representations second, instead of the other way around. Sort of?

I like SQL as an example because it's literally just a dataflow graph, and there are a lot of ergonomic issues that can be solved just by being able to "slice" the internal model along various axes.

Actual imperative code is more complicated. But still I think of an ideal where, for example, every variable that flows through my code is accompanied by its type information, constraints, and docstring. A function `def f(x: int)` with a docstring specifying the meaning and constraint values of `x` is an "input node" into a graph, and all the metadata—type, constraint, documentation—can flow with `x`, and if `x` is later exposed in an API say, the metadata is all there to be auto-filled. The only case where we actually toss out the metadata is when we compile to native code for speed—but the dataflow representation is primal.

I guess I'm starting from a "low level ergonomics" place. I don't really know how to handle high-level complexity, but I sort of imagine the same kind of "dataflow" concept at the level of interoperating components, and an editor specialized in viewing these graphs similarly no matter what level of abstraction they occur at.
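The SQL idea above (columns that carry their own lineage and docs, rather than having them reconstructed from text later) can be sketched as a toy in Python. Everything here (`Column`, `avg`, the table names) is made up for illustration; it's not any real query engine's API:

```python
# A toy query representation where metadata travels with each column,
# so lineage and documentation "flow through" derived queries for free.

class Column:
    def __init__(self, name, table, doc="", sources=()):
        self.name = name
        self.table = table
        self.doc = doc
        self.sources = list(sources)  # upstream Columns this one derives from

    def lineage(self):
        """Return the base table.column names this column ultimately came from."""
        if not self.sources:
            return [f"{self.table}.{self.name}"]
        out = []
        for src in self.sources:
            out.extend(src.lineage())
        return out

def avg(col):
    # An aggregate that inherits its source's documentation automatically,
    # instead of the doc being re-typed (or lost) in the downstream query.
    return Column(f"avg_{col.name}", "query",
                  doc=f"average of: {col.doc}", sources=[col])

sales = Column("sales", "orders", doc="gross sale amount in USD")
avg_sales = avg(sales)

print(avg_sales.lineage())  # ['orders.sales']
print(avg_sales.doc)        # average of: gross sale amount in USD
```

With this kind of representation, "cmd-click a column to see where it came from" and "autogenerate API docs from upstream table descriptions" both fall out of a simple graph walk.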

Expand full comment

I don't have much to contribute besides this kind of vague brain-dumping right now (:

maybe we should DM instead of this impossible-to-navigate chat thread, I already can't see your OP

Expand full comment

Also, shortage of time to fix bugs.

Like, you spend literally days tracking down a bug in a complex software system, and when you finally find it: the guy who originally wrote the code left a comment that he doesn't handle that particular case. (This fact, of course, not being reflected in the documentation of the numerous layers of stuff built on top of the routine that doesn't handle that case.) Also, leaky abstractions of course, but more like: life is too short to make the software correctly implement the abstraction.

Expand full comment

I wish there were some way to graphically draw the "fitness for purpose" of a component, including all the moving parts, the context it needs to live in and the dependencies it relies upon, and the I don't know what you'd call it but the "envelope" of cases it does and doesn't handle. So we can tell these things at a glance rather than have to stumble across them at random.

Expand full comment

Ostensibly, this is what type-definitions are for.

Unfortunately, side-effects exist. They're called side effects because instead of some interaction being an input or an output, the interaction goes *sideways* into a primordial soup of global state. And as a consequence of this, they don't get listed in the type-definition as God intended.

Some languages try to fix this by decreeing that all side-effects must go into the type-definition after all. Global State is now Local State. But now we have a new problem: each side-effect now needs to be included in the list of inputs for all downstream functions (as well as the outputs). Otherwise, the downstream functions won't actually pass a side-effect along the chain. So to fix *that*, we add an operator called "bind". Which, using the power of first-class functions, automagically reconfigures the type-definitions so the downstream functions actually pass along a side-effect (in parallel to the chain of "normal" inputs and outputs).

And huh, look at that. We just reinvented monads. So if you want to tackle this, it might be helpful to take inspiration from Haskell. Or maybe Eiffel, for its contracts.
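The progression described above (side effects become explicit state, then "bind" does the re-plumbing) can be sketched in a few lines of Python. The functions `tick` and `add` are invented for illustration; this is the state-monad pattern, not any particular library:

```python
# Side effects made explicit: instead of mutating global state, each
# function takes state in and returns (result, new_state). "bind" composes
# two such functions so the state-threading happens automatically.

def bind(stateful_fn, next_fn):
    """Compose: run stateful_fn, feed its result to next_fn, thread the state."""
    def composed(state):
        result, new_state = stateful_fn(state)
        return next_fn(result)(new_state)
    return composed

# A "side effect": a counter threaded through as explicit state.
def tick(state):
    return state, state + 1           # result is the old count; state advances

def add(n):
    def step(state):
        return n + state, state       # read the state, pass it along unchanged
    return step

program = bind(tick, add)

# tick(0) -> (0, 1), then add(0)(1) -> (1, 1)
print(program(0))  # (1, 1)
```

Neither `tick` nor `add` mentions the other's plumbing; `bind` is the only place the state-passing lives, which is exactly the reinvention the comment describes.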

Expand full comment

Another cause: Ideas pass through too many people and end up either disfigured or over-adapted.

The customer needs X, the product owner conceives W, the architect fixes it as Y, the BA describes Z, and the developer builds A. Now, either A doesn't do X (it's disfigured), or it does, but with all the idiosyncrasies of W, Y and Z in the way (it's over-adapted).

I'm not even saying that the layers are useless. They aren't, or we'd end up building shit that's awfully unaware of the rest of the industry or of the rest of the company. But it can sometimes turn a 2-month job into a 2-year job.

Expand full comment

This is more general than software, and intuitively (to me) it happens in proportion to a single individual's powerlessness. Anything you can do by yourself, doesn't have to go through this process; the more people you need to help you, the more of this bullshit there will be.

To me a utopic outcome looks like our tools getting more and more sophisticated, allowing a single individual to feasibly take on larger and larger projects without having to involve other people. But I suspect I'm not very good at team collaboration, and maybe someone who was would regard this vision as a nightmare.

Expand full comment

> To me a utopic outcome looks like our tools getting more and more sophisticated, allowing a single individual to feasibly take on larger and larger projects without having to involve other people.

Yeah, through this entire exchange, I was thinking "so GPT-5 or 6 will be a great thing for development, because it will be a single mind that can see all the code and dependencies down the entire stack, and can optimize an entire codebase as long as it's small enough to fit in the context and understanding window."

Instead of calling so many MB of different libraries to do simple things in a bunch of unconnected places, it can just write the simple function and get rid of a bunch of dependencies and vulnerabilities. There are probably architectural things it can do in terms of commenting and dependencies that will make it easier to surface bugs, or describe and articulate the envelope concept you had upthread visually and textually. I think that's a pretty exciting area to be thinking about and working on right now.

Expand full comment

I actually think the "envelope/mental model explaining a component" thing is going to be incredibly important - because when AI is the one creating the library, how are we going to follow along? We either need better explanatory tools or we accept that the AI is the new owner of tech and all we can do is try to manage it.

But right now I struggle a bit to think about the problem, because it feels like I don't have the right tools yet. To take a topical ACX example, I feel like a monk trying to diagnose demonic possession cases without the concept of "psychiatry".

I can tell you GPT 3.5 isn't there yet - every time I ask its opinion, we have a back and forth where I try to explain what I mean. No matter how I try rephrasing it, as soon as we get close, it crashes.

Expand full comment

Tempted to add that not many fields have got away with «your system developed by us is vulnerable to known malicious attacks from the actors under sanctions from the government we share with you; we won't fix it, won't be responsible, and will legally prevent _you_ from fixing your own copy» for so long.

Oh, and on mental models: the mental model of a person who knows from the feature set how this thing must have been done inside, and the mental model of a user throwing a cursory glance at the system, are not very similar. If you abandon the first one completely, your system gets too confusing to maintain; if you abandon the second one completely, you can only have professionals willing to invest in training as users. So you cut uneasy compromises, and the system breaks the expectations of everyone who touches it. By design. Few people try to present fully disjoint views into the system while handling the differences between the models correctly via translation; fewer succeed…

Expand full comment

I've wondered for a while about some kind of formal "analogy testing" process for this. A simplified model is an analogy to the real thing (think of your directory tree like "files" in a filing cabinet).

An "acceptable" analogy provides a simplified model without changing anything about the real behaviour. A bad analogy implies behaviour or logic that isn't there, or fails to prepare you for logic/behaviour that is.

I don't know by what process you could lay down or test these analogies. My impression right now is that all of this is unconscious and illegible.

Expand full comment

You're only looking at technical reasons, while some of them may be social.

Take institutional capture. This is obvious for proprietary stuff, but best observed in collaborative open-source projects the moment they get significant funding. The people in charge treat users as intruders, pursue their hobby horses at the expense of actually important functionality, and generally refuse to do any more work than absolutely necessary, but refuse to leave because they have funding, and it's hard to unseat them by forking when they're the ones who have funding.

Expand full comment

Of course the «actually important functionality» depends on the usecases, and the usecases the core developers have, the funders have, and the majority of users have, are three different sets, too.

Expand full comment

Technical debt is an overarching term for the shortcuts taken to meet a ship date. Usually they don’t get refactored into something stable and extendable. You pay interest in the form of unnecessary effort on everything extending ‘just make it work’ code.

Expand full comment

To think about that a little further - why do we end up with technical debt? What properties of the language/structure lead to "fast" coming at the expense of "good", and why is extensibility not the default?

My first thought is the fast vs good thing is actually a "didn't have time to destruction test my definitions" thing, and extensibility is hard because it's entangled with mental models and concepts (for you to extend my code you need to see my code the way I see my code) and that stuff is illegible and ignored.

Expand full comment

Yeah, one of the biggest challenges for software organizations is finding ways to reward unglamorous quality work. Even when companies say they care about engineering excellence, they're usually still biased in favor of legible impact.

Expand full comment

I think "fast" is driven by external concerns. Even if the language allows extensibility, as long as it's faster to get a working alpha out the door by sacrificing extensibility, people are going to do that. Environmental pollution by any other name.

Also, look at Google. Apparently the culture there for a while was in favor of writing shiny new things, not maintaining old ones. Prestige goes to the person who demonstrates the cool new feature, not to the person who later makes it more reliable and bug-free.

Expand full comment

There are a lot of reasons for technical debt and it is mostly inevitable to a certain degree. The largest and most all-encompassing is that there is no such thing as "finished" software. There is always more to add, refactors to improve, bugs to fix.

I'm sure you can find many architects that wish they had more time to really put their vision to paper but I imagine relatively few of them get to work beyond "passes legal code, checks requirement boxes". The major difference in software is the legal requirements; there basically aren't any.

I can go on and on about costs of technical debt but if management is sufficiently McKinsey-brained then that is a cost to developers and not the company itself. They do not care if my work is harder or more stressful. That is my job after all. It is made worse by the fact that good developers will warn of these problems while continuing to successfully make it work. At least until they don't or leave.

I would predict that debt-driven-development companies tend to have developers less capable of understanding, modelling, and predicting complex interactions. That means less push-back, and management can always find enough people to tell them everything is fine.

But the biggest reason is that bad software still makes money while good software isn't finished.

Expand full comment
May 28·edited May 28

The other difference is that with buildings, you normally tear them down and build new ones. You don't end up with, say, a city full of skyscrapers that still have Victorian servants' staircases and coal chutes because that was in the original design and no one ever paid down that technical debt.

Expand full comment

I see you're not British.

Expand full comment

I, too, am British and live in a house that was built in the 19th century.

Expand full comment
May 28·edited May 28

Worst of both worlds is when you get old buildings internally remodeled according to some stupid modern architectural fad.

There's Georgian buildings used as council offices in the city near me, and whoever designed the interior put in the world's stupidest door. I'm calling it a door because I have no idea how else to describe it, but it replaced an existing door (and part of a wall, I think).

It's some big metal slab that slowly (and I mean slooooowly) revolves on its axis to open and grant access from the outer reception area to the interior. Which means that you have a relatively narrow space that is now taken up with a big metal slab dividing it into half (when the thing stops revolving) and is now two even narrower spaces that you have to sidle through.

Think of it as something along these lines, only fancier because it's "architecture":

https://s.alicdn.com/@sc04/kf/H8e0489186e0b484da8698f193e71a54ai.jpg_300x300.jpg

If they just stuck with the original Georgian or even Victorian door, it would have been much better and more efficient.

EDIT: I see they're called "pivot doors". Well, the thing in the offices isn't even as effective as these designs, since there is no handle and you have to activate the electronic lock then wait for it to slooooooooowly revolve and open up, plus the hinge or axis is not off-set to the side like these but in the centre, so the door divides the opening into two halves which are much smaller to get through:

https://www.spitfiredoors.co.uk/aluminium-front-doors/s-700-pivot-designer-series/

Expand full comment

I suppose I'm interested in *why* any of that has to be the case.

Why is there always more to add? Maybe for something like Blender it's because new algorithms are coming out every year. But most people don't use Blender. An accounts package, a word processor, a spreadsheet, etc. - why aren't these feature-complete by now?

One answer: they are, but the companies selling them keep adding stuff no one wants in order to differentiate themselves.

Another answer: spreadsheet functionality hasn't changed, but the world around it has - email has been displaced by messenger apps, what used to just be file storage now has to deal with Dropbox shares and git versions, etc. Due to the way the lines were drawn, all these things require changes to your spreadsheet app which should by rights not need to care about them.

Why are there always bugs to fix? (If there weren't always things to add, would there still always be bugs to fix?)

Infinite refactors I can intuitively understand. You can also search forever for the right way to phrase something. That's an us problem.

Expand full comment

"spreadsheet functionality hasn't changed"

It certainly has. I started using spreadsheets with Lotus 1-2-3 (not Visicalc, the first one) which did indeed have the basic functionality of spreadsheets I still use today, but they have added other things (e.g., to Excel) no one thought to put in the original, such as pivot tables, dynamic arrays, technical revisions such as renaming some functions to better relate to industry standards, even things we take for granted now, like spell-check.

There will always be bugs to fix, because one can never prove software works the way you want it to, only find all the bugs you can. Computers always do exactly what you tell them to do, but not always what you want them to do. It can be a while before you find you didn't tell it properly how to act in a specific situation you didn't consider.

Expand full comment

It’s not confined to software. I’m in hardware, same shit, different names. If I were to hazard a guess as why, I’d say, lack of slack. The staffing and the scheduling are fundamentally inadequate for the level of complexity of the work. It’s Moloch all over: ask for the proper resources, lose project to someone else.

Expand full comment

Moloch, definitely.

You could be finding and fixing bugs... or you could add a new feature? Guess what the management will choose.

There are no more features to be added. You could be finding and fixing bugs... or you could be allocated to a new project? Guess what the management will choose.

There are no more features to be added, and no more projects planned. You could be finding and fixing bugs... or you could be laid off? Guess what the management will choose.

Bugs don't get fixed because at no moment is "fixing bugs" the most profitable thing for the company to do. Except for situations where the project is falling apart and angry customers are leaving... but even then you only get to fix the worst bugs.

Expand full comment

What kind of hardware do you work on? I've glanced off some electronics once or twice and my main impression was a powerful urge to recreate the CAD software from scratch.

Expand full comment

I would agree with "the way we use the software changes" being part of the reason. But I really do believe it is impossible to "finish" any non-trivial computer program.

I would really put it into the mental-model of a creative-artistic space. If you look then you can always find an improvement to make.

I think maybe a broader answer that would fit your over-all question would be: we're solving problems we don't fully understand enough to express computationally.

I would fully expect a word processor to go through more churn than something like Blender. It is used by far more people and for far broader purposes. The finer points of features and functionality in a word processor are far more complex than even most software developers would appreciate.

Look at any piece of software and ask yourself "What happens when someone does X?" For how many values of X can you ask that question? For every one of those someone had to ask it, answer it, and codify it in logic. For many of them there will be disagreements. Some won't have answers at all.

If you want a more academic/mathematical take on the subject you can look at:

https://en.wikipedia.org/wiki/Halting_problem

and

https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems
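To give the flavor of the halting-problem link: the standard diagonal argument can be sketched in a few lines of Python. The `halts` oracle below is hypothetical; the point of the construction is precisely that no real implementation of it can exist, which is one reason no tool can certify arbitrary software "bug-free" in full generality.

```python
# Sketch of the halting-problem diagonal argument. `halts` is a
# hypothetical oracle: halts(f) would return True iff calling f() halts.

def make_paradox(halts):
    def paradox():
        if halts(paradox):
            # Oracle said we halt, so loop forever to contradict it.
            while True:
                pass
        # Oracle said we loop forever, so halt immediately, also
        # contradicting it.
    return paradox

# Whatever answer a candidate oracle gives about `paradox`, that answer
# is wrong, so no general-purpose halts() can exist.
```

For example, against an oracle that always answers "does not halt", `paradox` halts immediately, proving that oracle wrong on this input.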

Expand full comment

I remember recently talking to someone at Microsoft who is working on adding *checkboxes* to Excel. One major problem is choosing which behavior to use for default cell values, what should happen when the formatting is copied over, etc.

Expand full comment

I don't think software is quite as shit as people say. Everyone working in a given field gets to see how the sausage is made, and it doesn't look pretty compared to the ideal cases they can imagine. But software does amazing stuff nowadays, from photo editing to handling all sorts of alphabets, often on a tiny pocket computer that also happens to take phone calls.

For the rest, it's subject to economic incentives, like so many things, so it's often rushed, and often reflects the preferences of those who pay the bills rather than those who actually use it, so "enshittification" ensues.

Expand full comment

In its early years, Microsoft wasn't good at software development, but Bill Gates was a very good salesman. The company was an example of being in the right place at the right time, which involved an element of luck but I think also reflects that Bill Gates was very good at recognizing what the right place was. Microsoft proceeded to make boatloads of money, and invested a lot of it into developing the ability to write decent software, but it wouldn't have had the money to do that if selling shitty software weren't a viable business model.

Expand full comment

Software's amazing in the same way that it's amazing that I can drive to work faster than any natural predator in a metal box powered by dinosaur bones. Everything's amazing if you go full Pollyanna.

But I have watched myself and the majority of people I know spend a huge portion of their lives on rote, repetitive administrative work - converting files, data entry, formatting, etc. - all in order to babysit and cajole the machines we invented to do rote, repetitive administrative work into doing their jobs. Something somewhere has gone horribly wrong.

The economic incentives angle is interesting. Software, whose marginal cost of reproduction is zero, doesn't mesh naturally with capitalism: the creators cannot expect to be rewarded for their work. Maybe that's why it's all so shit. Maybe "support" and all those other secondary products that do make the money are great quality. I haven't seen it myself though.

Expand full comment

> But I have watched myself and the majority of people I know spend a huge portion of their lives on rote, repetitive administrative work - converting files, data entry, formatting, etc -all in order to babysit and cajole the machines we invented to do rote, repetitive administrative work into doing their jobs.

Please consider the possibility that this is not in fact a typical experience and might just be a filter bubble.

Also one of those things (formatting) might not be like the others depending on what you mean with that. Care to unpack?

Expand full comment
May 28·edited May 28

I think part of the problem is that nothing is ever *quite* as routine as it seems at first glance. And things that are easy to automate tend to get automated, so we only ever see the parts that can't.

There's also the problem in some cases where human involvement is a policy choice.

There's also the issue of conflicting incentives, e.g. government payroll software projects inevitably fail because the unwritten rules as actually enforced don't match the written rules and anyone who makes that difference legible will be punished one way or the other.

Expand full comment

"I think part of the problem is that nothing is ever quite as routine as it seems at first glance." I like this.

Expand full comment

Ah ok, I get your angle. I thought you were talking about how technically bad most software is, which is certainly a common sentiment among those who work in the field. I thought that deserved a bit of pushback.

As for the effect of digitization on office jobs, yeah, I get your point. I'd put most of the blame on the organisations and the incentives they face.

Increasing bureaucracy probably has a direct link to litigiousness too.

Expand full comment

I think the fundamental reason is that the demands placed upon it are much higher, because demands expand until they fill all available capability.

My favorite analogy: You wouldn't take an already built skyscraper and say "ok great, now just remove the middle three floors and make the roof a mile long" and yet that kind of thing happens all the time in software.

Expand full comment

I think that fits in with my "mental models" idea - people ask unreasonable things because they're working with an uncommonly bad understanding of the domain.

Expand full comment

I think it's just because such things *are* possible in software.

Expand full comment

I agree with this sentiment. Software is just unbounded, no limits in space and time (in some sense of the word, not in others).

I also worked in the manufacturing tech industry, and they just have such different constraints and goals: once an engineer gets his volume of space to work within, that's just the end of it. The later the stage of development they reach, the stronger the case he and his team will have to make to get that volume changed. On the other hand, if the machine manages to reach the final product output spec, it doesn't matter if any one of the components is working wildly out of the originally defined sub-spec for it. I have seen this in exceedingly complex semiconductor manufacturing tech, where some components were an order of magnitude out of the originally planned spec, and yet why would you care if the final machine works? The assumptions were just wrong (or still are).

Once the machine is in the customer's hands, there is going to be some tuning and possibly small upgrades here and there. But no "putting on a mile long roof" afterwards.

Expand full comment

I like the idea of checking to what extent your product is sequestered. With the amount of internet being added to things, though, it's all getting to be more like software.

Expand full comment

Has anyone played around with the AI features now embedded in Photoshop? I was looking forward to contextual fills -- you delete an area, let's say a telephone pole spoiling a photo of a nice view, and Photoshop fills the hole with a reasonable guess at what was behind the pole, based on what's visible on either side of it. But -- I'm not real impressed. Last night I selected the eye area of an image of a face. I meant to delete it and paste it somewhere else, but accidentally hit the "contextual fill" option instead, & got this: https://imgur.com/a/TWS4Ful Ugh

Expand full comment

The contextual fill thing has been around for a while, also known as the "inpainting" problem. It predates all this fancy new diffusion model stuff. Not sure exactly what methods are currently used by photoshop, but the use case this was originally developed for is stuff like cutting out a single object out of a relatively uniform background (person on the beach, boat in the ocean). It was never meant to add in details that aren't already there on the boundary of the area you're inpainting.
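For intuition about why classical inpainting can't invent detail, a crude diffusion-style fill can be sketched in a few lines of numpy (illustrative only; not Photoshop's actual method): hole pixels are repeatedly replaced with the average of their neighbors, so the fill can only propagate what is already on the boundary.

```python
import numpy as np

# Toy "contextual fill": iteratively average each hole pixel with its
# four neighbors. Known pixels (outside the mask) are never modified.
def naive_inpaint(image, mask, iterations=200):
    filled = image.copy().astype(float)
    # Initialize the hole with the mean of the known pixels.
    filled[mask] = filled[~mask].mean()
    for _ in range(iterations):
        # Average of the four neighbors (np.roll wraps at the edges,
        # which is fine for this toy demo).
        blurred = (np.roll(filled, 1, 0) + np.roll(filled, -1, 0) +
                   np.roll(filled, 1, 1) + np.roll(filled, -1, 1)) / 4
        filled[mask] = blurred[mask]   # only update pixels inside the hole
    return filled
```

On a smooth background this looks seamless; on a face it produces exactly the kind of featureless smear you saw, because averaging can only smooth boundary values inward, never synthesize eyes.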

Expand full comment

Hey there Vitor, I have a question for you that has to do with computer graphics. (It has nothing to do with the long sucky argument my original post about changing emotion on faces produced.) When you mentioned your interest in graphics I thought you were an art hobbyist like me, but later you explained something to me about what would be necessary for a Stable Diffusion based AI to reproduce the original face, if I erased all the features and then asked for the same face with a different emotion. It was something about an alpha mask and "temperature." So now I see that you know lots more about the tech stuff than I do. My question involves techy aspects. But would it be OK if I DM you? Discussion would be of no interest here, plus I'm a bit worried about some of the people who piled on me getting wind of it and piling on again.

Expand full comment

It's actually a lot better than that now. Here's Adobe's description of it: https://www.adobe.com/products/photoshop/generative-fill.html

You can select an area and then put in a prompt for what you want there. I guess the grotesque image I linked to in my first post came about because I did not put in a prompt. I just now played around some with generative fill. Had a picture of a little girl's face, selected the eye region, then in the prompt put "glasses". Got a little girl's face with glasses. However, it was a different little girl, and she was not wearing the same facial expression as the original. Then I selected all the features, basically the whole face except for the edges, and asked for various facial expressions: sad, astounded, happy. I got them, and they were well-rendered, but they were of different children. Different features, haircuts and eye color.

Expand full comment

Can you just select the eye area with a small brush? I prefer Ideogram for creating images for fun, but it's very hard to just change one detail while keeping the rest the same.

Expand full comment

I see you followed me. Unfortunately, Inkbowl is now just an archive. A group of us tried to maintain a small-setting discussion of each other's writing, but procrastination and organizational issues defeated us. My review of a really interesting book is one of the things in the archive, though. Book's about how much introspective access we actually have to ongoing sensory experience -- not much, according to author Schwitzgebel. Book's called Perplexities of Consciousness. I'll be interested to read your blog when it starts -- it's in the sweet spot for me. And, wanted to let you know that while Inkbowl is a thing of the past, I'm in the process of writing and illustrating a novel about AI 30 years in the future. I'll start serializing it on Substack as soon as I'm far enough along not to feel stressed to death about keeping up with the installments schedule.

Expand full comment

Later: but it's driving me crazy with its rules and regs. Had an AI generated image of head and shoulders of a woman. Erased features and asked for same woman sad. Also tried same woman, angry. AI refused both, saying it violated content guidelines. Jeez, does a sad or angry face count as a violent or gory image? My, those people who set the rules sure are determined to keep things NICE. Later had an AI generated image of somebody's long curly hair flowing down their back. For some reason AI had left a triangular gap in hair in the middle of the mass, about the bottom third. I selected the gap and asked for more of what was all around it "curly red hair that gets wispy at the bottom." Nope, violated content guidelines. wut?! Thought it over, my guess is that they thought I was making a triangle of pubic hair?

Expand full comment

Actually I've learned more, and you can. And you can give a prompt for how the blank is filled. I didn't do that when it produced that grotesque image. If I had said "eyes and glasses", it would probably have done a fine job completing the figure in a reasonable way.

Expand full comment

I see. I knew of these kinds of techniques from people who play around with stable diffusion, but wasn't aware that this had already been packaged into highly polished commercial software.

The failure modes sound a lot like what I'd have expected. If you want to keep the same girl, you need to keep some of the recognizable features outside the selected area. I guess photoshop only makes a binary distinction, but in diffusion algorithms you should be able to set something akin to an alpha mask, indicating how malleable each pixel is ("temperature"). Even that is just a kludge though, these algorithms don't really have a concept of "take these pixels over here and shift them over there in a transformed way". You can at best simulate such things with multiple manual steps.

Expand full comment

Good Lord

Expand full comment

So are these illustrations/cover art for a novel you've written?

Expand full comment

Illustrations for one in progress.

Expand full comment

I posted this in the previous open thread yesterday, not realising that the new one would be up soon; I hope no one minds me reposting here:

Maybe a long shot, but I'm trying to track down a short sci-fi/horror story with the same sort of silly pun punchline as in some of Scott's stories (I'm pretty sure it wasn't by Scott, but I would probably have come across it in a Scott-adjacent space), called something like "She Listens To Everything", in which some sort of alien or machine intelligence is able to take over much of the world by its omnipresent listening powers, making it impossible for people to speak and coordinate against it, until a resistance manages to form by communicating in musical form, focused in Appalachia and in the urban cores of big cities, and the punchline was to the effect of "She listens to everything ... except rap and country". My powers of Google are a complete failure here - does anyone know where I can find it?

Expand full comment

This made me laugh without having to read the whole wind-up, so...that's something.

Expand full comment

Of all of the US's west coast ports, which one is furthest inland?

Take a moment to think about it.

Did you say Lewiston, Idaho? (Yes, *Idaho*.) If not, you may be interested in this video to find out more:

https://www.youtube.com/watch?v=OhzY5QLO4FA

Expand full comment
May 28·edited May 28

I recall that someone described the Mississippi river system as a continental cheat code.

Expand full comment

Half a continent of prime farm land with a network of navigable waterways running all through it? Yeah, it's really something.

Expand full comment

When did humans learn that blowing on hot food will cool it down? This may sound like a dumb question, because "Duh, of course blowing on hot food will cool it down, everyone knows this!", but judging by my kids, this knowledge doesn't come naturally and will be forgotten again after 10 minutes, even if the members of their tribe (i.e., mom and dad) tell them again and again.

So do my children simply lack the street smarts necessary to survive in our physical world, or is this something that's deeply unintuitive and hard to internalize?

Expand full comment

In all seriousness… does blowing on food cool it down meaningfully relative to just waiting 5-10 seconds for a small bite of a hot thing to be exposed to the air, plus extra body temperature saliva being mixed with the hot food on arrival in your mouth?

I actually thought someone did some experiments with human-exhalation-velocity wind and found the difference was negligible, and 90-95% of the effect was “separate bite from main hot mass, wait 10 seconds”.
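The question above can at least be framed with Newton's law of cooling: blowing mainly raises the convective heat-transfer coefficient, which sets how fast the gap to ambient temperature decays. A toy sketch (all numbers invented for illustration, not measured):

```python
import math

# Toy Newton's-law-of-cooling comparison. The decay constant k lumps
# together the convective coefficient, surface area, mass, and heat
# capacity; both k values below are made up purely for illustration.
def temp_after(seconds, t_food=80.0, t_air=25.0, k=0.02):
    # T(t) = T_air + (T_food - T_air) * exp(-k * t)
    return t_air + (t_food - t_air) * math.exp(-k * seconds)

resting = temp_after(10, k=0.02)   # small bite sitting in still air
blown = temp_after(10, k=0.06)     # hypothetical: airflow triples k
```

Blowing always helps in this model (a larger k means faster decay), but whether the gap between `blown` and `resting` is perceptually meaningful after 10 seconds is exactly the empirical question; evaporative cooling, not modelled here, is another blowing-specific effect.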

Expand full comment

How old are your kids?

Expand full comment

It's not really intuitive, which is why we get the phrase "blowing hot and cold" about something via Aesop's fable:

https://etc.usf.edu/lit2go/35/aesops-fables/644/the-man-and-the-satyr/

A Man had lost his way in a wood one bitter winter’s night. As he was roaming about, a Satyr came up to him, and finding that he had lost his way, promised to give him a lodging for the night, and guide him out of the forest in the morning. As he went along to the Satyr’s cell, the Man raised both his hands to his mouth and kept on blowing at them. “What do you do that for?” said the Satyr.

“My hands are numb with the cold,” said the Man, “and my breath warms them.”

After this they arrived at the Satyr’s home, and soon the Satyr put a smoking dish of porridge before him. But when the Man raised his spoon to his mouth he began blowing upon it. “And what do you do that for?” said the Satyr.

“The porridge is too hot, and my breath will cool it.”

“Out you go,” said the Satyr. “I will have nought to do with a man who can blow hot and cold with the same breath.”

https://en.wikipedia.org/wiki/The_Satyr_and_the_Traveller

"In its usual form, a satyr or faun comes across a traveller wandering in the forest in deep winter. Taking pity on him, the satyr invites him home. When the man blows on his fingers, the satyr asks him what he is doing and is impressed when told that he can warm them that way. But when the man blows on his soup and tells the satyr that this is to cool it, the honest woodland creature is appalled at such double dealing and drives the traveller from his cave. There is an alternative version in which a friendship between the two is ended by this behaviour."

Expand full comment

Maybe they try inhaling next to cold food to warm it up, and when that doesn't work they begin to doubt the whole process.

Expand full comment

I thought you had to reverse the polarity of your breathing for it to work.

Expand full comment

Assuming "cold" means "below ambient temperature", then this would actually work, although presumably extremely poorly/slowly/inefficiently. And for the exact same reason that blowing on hot food cools it down, to boot.

Expand full comment

Isn't most of how blowing on hot food works evaporative cooling, which cannot work in reverse?

Expand full comment

Sadly the poor toddlers don't know that the rate of heat transfer scales with the temperature difference (and that evaporative cooling only works in one direction). So they blow on 350 F food right out of the oven and it noticeably cools, but the reverse isn't noticeable at warming 0 F food from the freezer.

Really if kids want an efficient way to cool hot food, they should sweat on it.

Expand full comment

Maybe people figured it out after noticing that wind cools us down.

Expand full comment

I remember that when I was a little kid, they read the Dumb Bunny books to us a lot in school. One scene had the Dumb Bunnies blowing on food in an attempt to warm it up, implying that it was already expected that the audience would know the reverse and understand the joke.

Expand full comment

Does anyone have a well-grounded opinion on alkaline water benefits? My strong prior is that it's useless at best because of the incredibly low pH of the stomach.

Expand full comment

The pH level of blood being so tightly controlled, and stomach acid being so strong, also gives me a strong prior that any purported benefits of minor fluctuations in the pH of any water/food consumed are firmly in woo territory.

Expand full comment

If it's alkaline enough, for example if you mix half a teaspoon of baking soda into a cup of water, then it helps with heartburn. Tastes pretty bad, though.

Expand full comment
May 27·edited May 27

IIRC in some kinds of heartburn, there is a leaky valve, and stomach acid and enzymes enter the throat. Drinking baking soda denatures the acid-preferring digestive enzymes in the esophagus/lower throat and reduces the acid there. I think it does less in the stomach, but I could be wrong.

Expand full comment

This strays into what I'd think of as "medicine"… I'm more interested in the mildly alkaline waters sold in stores for their supposed health benefits. I can't make heads or tails of it, and as with any such topic, searching the web is worse than useless.

Expand full comment

I think the potential marginal health benefits from a specific kind of water relative to just 'clean water' aren't worth the time/energy/thought investment that could be directed elsewhere for health reasons. Time will tell, but it seems unreasonable to me that a slightly different pH in water could provide significant benefits. If there was "one simple trick" with water that produced a consistently better outcome, wouldn't we have landed on that and integrated it into water systems by now?

Expand full comment

Basic water is one thing in a plastic bottle, but run it through pipes and you get scale build-up. Then you have to do extra processing and add a bunch of salts to the water to counteract this. So even if it was obviously better for health reasons, adding it to the water supply would have serious downsides.

Expand full comment

I think it would be actively harmful for people with kidney stones.

Expand full comment

I saw this in the WSJ last week — a recent study showed that states where Trump won in 2020 had significantly higher inflation than states where Biden won. The authors correlated this with the expectations of high inflation. Republicans were expecting higher inflation under Biden, and it became a self-fulfilling prophecy in red states. The red states cluster at the high end of the inflation range. But blue states are spread across the inflation range. WSJ article (behind a paywall), and then the original paper.

https://www.wsj.com/politics/policy/inflation-differs-republican-democrat-states-data-14800c1e

https://conference.nber.org/conf_papers/f192768.pdf

Expand full comment

Possible crackpot hypothesis:

(1) If a price floor is binding, then increases in the market clearing price manifest as a reduction in surplus at the price floor rather than as inflation.

(2) Blue States generally have higher minimum wages than Red States, and minimum wages are price floors for labor.

(3) General inflationary pressure showed up in Blue States as reduced unemployment (labor surplus) at their still-binding minimum wages but as wage increases in Red States without binding minimum wages.

(4) Blue State businesses, seeing an effective reduction in real input costs (i.e., they didn't have to overpay for labor as much), didn't have as much need to raise their output prices as did Red State businesses who saw an increase in their input costs.
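As a toy illustration of steps (1)-(4), with invented numbers (a hypothetical $16 blue-state wage floor vs. a $7.25 red-state one, and a market-clearing wage rising from $12 to $14):

```python
# A binding price floor absorbs an upward shift in the market-clearing
# wage as reduced surplus rather than as observed wage inflation.
def observed_wage(clearing_wage, wage_floor):
    """The wage actually paid: the floor if it binds, else the market wage."""
    return max(clearing_wage, wage_floor)

# Market-clearing wage rises 12 -> 14 everywhere (inflationary pressure).
before, after = 12.0, 14.0

blue = (observed_wage(before, 16.0), observed_wage(after, 16.0))  # floor binds
red = (observed_wage(before, 7.25), observed_wage(after, 7.25))   # floor slack

print(blue)  # floor binds both times: no measured wage inflation
print(red)   # wage rises 12 -> 14: measured wage inflation
```

Under these assumptions the blue state shows zero measured wage inflation (the labor surplus at the floor just shrinks), while the red state shows a ~17% wage increase.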

Expand full comment

This sounds like a perfect "the economy is fine, you rubes are just too stupid to see it" narrative for the election season.

I agree with Cato Wayne about confounders though. Checking the actual numbers, the net flow of people from blue to red states is a far stronger trend than I thought https://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_net_migration -- I knew it was true in net, but almost all red states are gaining people, and almost all blue states are losing people, which surely has a huge effect on prices.

Expand full comment
May 28·edited May 28

Note that that data is specifically from 2020-2022. I expect that the pattern has to a large extent gone away or reversed since then.

Expand full comment

It is important to note that annual figures outside of a census year are not data -- they are not based on any sort of count. They are _estimates_ by the Census Bureau. Those annual estimates are often corrected later by actual data derived from the decennial census-taking. That has happened for many states, counties, etc., such as where I reside.

That said, if we are going to try to reach conclusions based on those annual estimates, here is what the Census Bureau says about its 2023 state population estimates:

"Eleven states that lost population in 2022 are now seeing gains: New Jersey (30,024), Ohio (26,238), Minnesota (23,615), Massachusetts (18,659), Maryland (16,272), Michigan (3,980), Kansas (3,830), Rhode Island (2,120), New Mexico (895), Mississippi (762), and Alaska (130)."

Four of the five largest (by estimated net gain) are blue states.

"The Northeast’s population declined in 2023, down 43,330, but the loss was considerably smaller than the 216,576 decline in 2022 or the 187,054 decline in 2021, reflecting substantially less outmigration to other regions."

Also worth considering the full range of how interstate migration shifts state-by-state politics. Georgia, for instance, has been one of the biggest net population gainers for a couple of decades now, and after voting "red" in six straight national elections it is today a purple state. Illinois used to be a swing state (GOP governors and GOP state legislative majorities during the 1970s/80s/90s; it voted for Reagan and for Bush 41, etc.). That state's transformation into a solidly blue state happened during the 2000s, which precisely matches when its population stopped growing. Meanwhile Colorado has had strong population growth in recent decades and has shifted from leans-red to leans-blue. Etc.

Expand full comment

You can't rely on migration to shift politics because the people who move are not an unbiased sample. https://www.natesilver.net/p/sbsq-6-people-are-fleeing-california?utm_source=publication-search

Expand full comment

Are you suggesting it was a covid thing? Makes sense that people might have relocated to less restrictive states during that period.

I found some numbers from 2019 https://www.census.gov/content/dam/Census/library/publications/2023/acs/acs-53.pdf (page 10, second last column) and the trend looks the same. Florida, Texas, Arizona, Tennessee and the Carolinas were the big gainers, while the big losers were California, Illinois, New York, New Jersey, Massachusetts.

So while I don't have data from 2022-24 I do think this is a longer term trend and not just a covid blip.

Expand full comment

Possibly (also) due to the reduction of the SALT deduction?

Expand full comment

What about the confounder where there was a huge urban -> rural migration since 2020, with states where Trump won, like Florida and Texas, seeing huge upticks in population and states like CA and NY seeing population fall? This will obviously have inflationary effects as goods, services, and housing adapt to the surge in demand.

Expand full comment

There has not been a huge urban to rural migration since 2020; rather, there has been a large-urban to suburban/exurban migration.

From the Census Bureau's annual estimates and using their classification of counties, here are the nationwide total net changes from 2021 to 2022:

"Suburban"+"Exurban": gain of 832,000

"Small urban": gain of 233,000

"Metro Rural": gain of 205,000

"Nonmetro rural": gain of 54,000

"Large urban": loss of 70,000

Expand full comment

Sorry, my Google Fu is failing. Is there an actual published paper, preferably with data and code here?

Cuz, uh...hey, cool, you ran some regressions using Michigan data (1) but extrapolated that to national trends while "controlling" for all the potential confounders. That's, uh...not a rock-solid reason to think that Republicans' inflation expectations became dramatically "unanchored," which then caused all the inflation Republicans are complaining about, but for them and them alone. A Republican, or independent for that matter, might want to double-check a paper that seems to summarize to "Republicans caused their own economic problems because of how irrational they are." Oh, also, there's a Phillips Curve in there...because we're still using the Phillips Curve, for some reason.

(1) "The Michigan Survey of Consumers (MSC) has collected data on households' expectations on a monthly basis since 1978. The survey asks a nationally-representative sample of respondents about their inflation expectations over the next 12 months and over the next five to ten years." As far as I can tell in a brief overview, this is their source for inflation expectations.

Expand full comment
May 27·edited May 27

I linked to the paper in my original post. But here it is again.

https://conference.nber.org/conf_papers/f192768.pdf

BTW, I didn't read the paper. I can't really comment on it, except that I found the results interesting.

Expand full comment

Sorry, should have clarified, as this is something I'm struggling with myself.

This paper was presented at a conference. It has not been published in a journal. That doesn't mean it hasn't been peer reviewed or that it's not good; this is an area I'm still struggling with, but if a guy from industry gives a talk at a conference about a live program his company has enacted, I'm starting to weigh that more heavily than academic research.

Still, the vibe seems to be that papers presented at conferences are more "working papers" that may eventually get published than full, proper research.

Now, maybe I'm off here, but my question was more on whether this has been published in a journal rather than discussed at a conference because journals sometimes do responsible things like publish the code and data, which means other people can verify it. Like, not a full replication, but if you say you controlled for confounding variables...I mean, let me see the code 'cuz how you did it is important.

Otherwise, my vibe is kinda "drive-by research". Like, "Shocking new research indicates that your outgroup is bad and stupid. Important if True."...and then there's no follow up and no verification. But the WSJ got a great headline in a brutal industry and the authors got media attention soooo...hate the game, not the player. Which I do.

Expand full comment
May 28·edited May 28

Just because it isn't peer-reviewed doesn't mean you can't read it and comment on it. We just have to keep in mind our conclusions are tentative based on future corrections or retractions — or scientific misconduct. (And the scientific misconduct may have successfully passed the peer review process, only to be detected after publication.)

Also, it's interesting to note that top-tier journals have a higher rate of retractions than second-tier journals (below). The theory of this author is that journals such as Science, Nature, and Cell all want to publish cutting edge research quickly. Corners get cut. While papers that get published in second-tier journals bounce around more before finding a home and thus acquire more peer reviews.

https://getsyeducated.substack.com/p/pnas-is-not-a-good-journal

Expand full comment

The problem is that Nature/Science want to publish revolutionary research, not the real cutting edge in a sense. Their idea of novelty and impact implies that an expert's prior against the published article being true has to be high, and it's not like they pay reviewers to actually improve the correctness-checking quality.

So they ask for somewhat supported surprising — or, you can say, «implausible» — results, and don't check them any better than other people check incremental advances with detailed descriptions of novel and impactful methods that should have worked but were a hell to set up. I am not surprised that «select for impactful results» picks out false claims more often.

Expand full comment

I could see expectations of inflation causing some inflation, but surely that "psychological inflation" would get smoothed out in the end by actual economic conditions? If everyone prices their products higher than supply and demand would require, then we would expect prices to settle down to the actual supply and demand curve eventually as some sellers profitably undercut their competitors.

Expand full comment
May 27·edited May 27

One would think. But the WSJ explains it by pointing out that inflation is not uniform across regions. The implication is that local pricing and wages play an important role in regional inflation and the influence of national inputs to the local economy play less of a role. I haven't read the paper. It would be interesting to see what happened with previous elections, but they don't go there.

Expand full comment
founding

OK, but *in* those regional markets, and until employers start actually increasing wages, it seems like you've got the same number of people bringing the same amount of money chasing after the same amount of goods. If the prices were in equilibrium before, and merchants raise them because of "expectation of inflation," then they can't make any *more* money, and if they scare off any customers they'll make *less* money, and they will in any event have excess inventory to deal with. That should bring the prices right back down.

Drawing a smaller boundary around the market doesn't change that. Locally, regionally, or nationally, to get sustained price increases you need either more money coming in to that market, or fewer goods.

Expand full comment

This sounds intuitively correct, which is why it misleads on a higher level and in the actual case. You're making a fairly basic mistake about economics here - forgetting to account for the fact that employers, employees, firms, consumers and people all operate on different levels where sometimes they fill other roles.

Rising prices because of expectations of inflation can lead to sustained price increases without more money or fewer goods. That's why expectations are insidious. The mechanism you're looking for is reduced consumer surplus and (ultimately also) deadweight loss on the supplier side. Keep in mind that prices are implied valuations, not objective facts about the world that we can reference to find "the correct level." Every price is a firm's estimate of where it can still make reasonable revenue given costs. This sounds basic, but it's partially where the flaw in your example sneaks in. If a firm starts estimating it will need to raise prices to cover other costs, because it expects those costs to increase (or just because it feels it might have been undervaluing its product), you've got short-term inflationary shocks.

Since every employer is part of a network with the economy as a whole, locally and regionally and all, small price adjustments percolate through the system. The firms that firm A buys its components from raise their prices by some sliver of a percent; firm A adjusts its prices to pass this on to the consumer; the consumer experiences prices adjusting in real time and feeds this back into their own price-setting and the operations they're usually involved in. Expecting things to be more expensive in the future makes you rationally raise prices slightly now.

Secondly, some prices can be raised without losing customers or consumers. Not everything is Pareto optimal at all times.
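The percolation mechanism described above can be caricatured in a few lines; this is an invented adaptive-expectations toy, not the paper's model:

```python
# Caricature of expectation-driven inflation: each period, firms set
# price growth as a blend of last period's realized inflation and their
# expected inflation. If expectations jump, realized inflation follows
# even with no change in money supply or goods.
def simulate(expected, periods=10, weight=0.5, realized0=0.0):
    """Realized inflation path when firms expect `expected` inflation."""
    realized = realized0
    path = []
    for _ in range(periods):
        realized = weight * realized + (1 - weight) * expected
        path.append(realized)
    return path

calm = simulate(expected=0.02)     # households/firms expect 2%
spooked = simulate(expected=0.08)  # households/firms expect 8%

# Realized inflation converges toward whatever was expected.
print(round(calm[-1], 4), round(spooked[-1], 4))
```

The weights and starting points are made up; the point is only that in a model where expectations feed price-setting, the expectation itself is enough to move the realized path.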

Expand full comment

That explanation makes sense as to how expectations of higher inflation drive price rises, which then create higher inflation: employers, local governments and so on passing increased costs along.

What I couldn't understand from the paper was how *households* affect that. If I think inflation is going up, but I'm just a housewife, how does that cause the oil company to raise the price at the pump? How on earth does Mrs. Jones in Red State thinking the price will go up but Mrs. Smith in Blue State thinking the price will stay the same make MegaDrill Corp raise or lower the price? That was not adequately explained.

Expand full comment

One easy explanation is that households expecting higher prices are more willing to pay those higher prices. Supply and demand indicate that higher prices should result in fewer sales, all else equal. Specifically, if I think something is overpriced I buy less of it. If I think the price is normal, even if it's higher than I used to pay, I will be more likely to accept the new price and buy it anyway.

I don't know how true this is, but it makes sense. One obvious limit here is that households only have so much money, so unless they go [further] into debt or find a way to make more money, this effect can only go so far.

Expand full comment
May 28·edited May 28

You sound like you know what you're talking about. I'm out of my depth on this question. I just found it interesting, and I was curious whether it would start some discussion. That way, I get to learn. ;-)

Expand full comment

It’s easy enough to see how inflationary spirals could exist and I believe there’s plenty of literature about that. A price shock increases prices, workers expect raises, companies increase prices as costs increase, causing cost increases along the supply chain to the consumer, reducing the spending power of workers who expect raises…

Expand full comment
founding

I don't see how this "inflationary spiral" can happen without the workers actually *getting* raises. And employers very rarely give COLA raises in advance of actual inflation. So it seems implausible that "expectations of inflation" would be sufficient.

Expand full comment
May 28·edited May 28

From what I hear in the MSM, restaurants had to give workers sizeable raises to get them back to work. And they say that front-line workers were hard to come by in retail and customer service roles. Regardless of CA's minimum $20/hour wage, wages were climbing after COVID and before the new minimum wage was implemented. But I admit I don't know whether that was just an exaggeration about a post-COVID bounceback in limited types of jobs.

Expand full comment
founding

Those weren't COLA raises, so much as "people have gotten used to working at home (or not at all) and we need them back on site" raises. For service jobs where nobody really cares about abandoning the great career foundation they've been building as a Denny's waitress, and where there are e.g. call centers that will now let them work from home, it now costs extra to get them to agree to an on-site job. Even if it is an on-site job they were content with in 2019.

Expand full comment

One week left to read and score book reviews. Every time I score one, I wonder how my particular scoring style is affecting the overall results. I imagine I'm probably pulling up the average for most, which means the reviews I don't read and score are at a disadvantage compared to the ones I do. I suppose it all averages out across reviewers.

My scoring style is to start at 7 as a default. If I think the writing is particularly good, that bumps the score up by one, and if the writing is particularly poor it goes down by one. If the topic is interesting and engaging, that's another point gained, or a point removed if the topic is boring and I struggle to find the motivation to finish reading. This gives a range from 5-9. I only go to 10 if the writing is good, the topic is engaging, and I subjectively feel there's something special about the review that's hard to quantify: it's just good. Theoretically I could go lower if there was something subjectively bad about a review, but in my experience the things I dislike about a review are very easy to identify, while the things I like are harder to explain. Theoretically a review could deserve a score of 1 from me, but it would have to read as if it was written by a 5-year-old. No, even that might get a 2: I think for a 1 it would have to be total gibberish.
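That rubric is mechanical enough to sketch in code; the boolean inputs below are invented names for the subjective judgments:

```python
def score(good_writing=None, engaging_topic=None, special=False):
    """Start at 7; +/-1 for writing quality, +/-1 for topic interest
    (None means neutral); 10 only for the rare review that hits both
    marks and also has the hard-to-quantify x-factor."""
    s = 7
    if good_writing is not None:
        s += 1 if good_writing else -1
    if engaging_topic is not None:
        s += 1 if engaging_topic else -1
    if special and s == 9:
        return 10
    return s

print(score())                                          # neutral default
print(score(good_writing=False, engaging_topic=False))  # floor of the range
print(score(good_writing=True, engaging_topic=True, special=True))
```

(The 1s and 2s for gibberish described above are outside this rubric entirely.)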

What's your scoring style?

Expand full comment

I think I will give 10 to everything I like, and 0 to everything I don't, just to ensure my opinions count for more than anyone else's.

Expand full comment

Are you a lizardman?

Expand full comment

Nah, he just read the "Road of the King" review.

Expand full comment
May 27·edited May 27

I've mostly been roughly going by a system where I score from 0-5 points on writing, and from 0-5 points on how interesting I think the review is. Something that is purely a summary would generally score 0 on the last scale. I may add or subtract a point from the result depending on some x-factor. I've given a few 8s, two 9s, but no 10s so far, and my lowest score was a 3. Probably my average score is around 6-7.

Expand full comment

I have a bit more variance. My median review score is probably 6. I give 8 to reviews which are well written and teach me something new, but on a topic that is only moderately important and not really mind-changing. This can be achieved by a review that explains very well what the book is about, but doesn't go far beyond that. 9-10 are for reviews that I would welcome as finalists. They typically go substantially beyond the book itself, and often change how I regard the topics.

I did give more low grades in the 3-5 range than in previous years, which are for reviews where I don't really see the point of the review. I'm not sure why, but I find the average quality a bit lower than in previous years. Perhaps this is because there are a lot more reviews of fictional works, and those are often lower quality. Even when they're well written, most of them fail to teach me something new beyond that one specific book, which I mostly don't care about very much. My minimum was 2; I haven't given a 1 yet.

Expand full comment
May 27·edited May 27

I usually take 5 as the default, meaning just that I could understand the writing, it was free of mistakes, and I didn't actively dislike reading it. Though, after I had read a couple of the ACX book reviews, I realized that the quality was high enough that this approach to scoring didn't always leave me with enough room to communicate when I thought certain book reviews were clearly better than others. My scores ended up being all crammed into the 6-8 range.

If the intention behind scoring a set of papers is to rank them, then it might be better to first read a moderate-sized random selection and use the worst one as your reference point for a 3 and the best one as your reference point for an 8. Then keep using that scoring system for however many other papers you intend to read. Everyone else scoring the papers would have to be on board with this approach as well for it to work though.

I would bet that some mathematician has written a paper formally proving the optimal way to do this at some point.
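Lacking that mathematician's paper, the proposed anchoring scheme can at least be sketched as a simple linear rescaling (the raw judgments here are on an invented internal 0-100 scale):

```python
# Calibration sketch: read a sample of reviews first, then pin the
# sample's worst review to 3 and its best to 8, interpolating linearly.
def calibrated(raw, sample_worst, sample_best, lo=3, hi=8):
    """Map a raw quality judgment onto the 3-8 anchor scale."""
    frac = (raw - sample_worst) / (sample_best - sample_worst)
    return lo + frac * (hi - lo)

# e.g. raw judgments on some internal 0-100 scale, anchored at 20 and 80:
print(calibrated(20, sample_worst=20, sample_best=80))  # the sample's worst
print(calibrated(80, sample_worst=20, sample_best=80))  # the sample's best
print(calibrated(50, sample_worst=20, sample_best=80))  # halfway between
```

A nice side effect: later reviews better (or worse) than anything in the sample naturally land above 8 or below 3, preserving room at the extremes.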

Expand full comment

I think similar, so maybe we're not creating the bias that you feared (if others are also similar).

Even the reviews that I've liked the least have generally been pretty good and interesting. I feel like this whole review contest and review compilation is an extremely valuable resource, though I'm not sure what specific ends it serves.

Expand full comment

All right, I'm cracking up here. I'm sorry, very sincere and worried people, I cannot take the AI Apocalypse seriously when Google AI is recommending we eat rocks, because it's copying headlines from "The Onion":

https://bsky.app/profile/bencollins.bsky.social/post/3kt6w2phzdc2h

Okay, maybe super-duper agentic intelligent (ha-ha-ha!!) AI will doom us all, but I wasn't expecting it to be wearing giant clown shoes and a red nose while doing so. Perhaps the true reason clowns are so scary?

Expand full comment
May 28·edited May 28

I think I'm not too concerned with search engine chatbots, but more with other AIs. Consider that Google search is free, and still has to compete with other search engines. Also, AI is expensive. They need to put profitability first, so this leads to a somewhat dodgy product.

On the other hand, many other companies are developing very high-quality AI for helping with more long term R&D related tasks, like debugging code, drug discovery, protein folding etc. These AIs seem to be getting better, and could have more serious consequences (e.g. the study where they easily got a drug discovery AI to design biological weapons). Great power competition is another factor here - I fear that the more serious the consequences, the more the US and China will want to develop it. That scares me.

(Edit: definitely enjoying the google chatbot hilarity though)

Expand full comment

I wonder what you'd get if you asked an LLM to write a story in the style of the Onion.

Expand full comment

People dismissing AI safety because LLMs have dumb output has the same energy as northerners saying global warming isn’t real because it’s cold near them

Expand full comment

A bunch more of them here...

https://x.com/JeremiahDJohns/status/1794543007129387208

Did you know that cockroaches can crawl in one's penis hole? I've always worried about cockroaches climbing up my penis, and Google says the average male has 5 to 10 cockroaches crawl up their penises every year! This is my worst nightmare confirmed! And it helpfully explains that that's why they're called COCKroaches.

And according to Google, Looney Tunes accurately portrays the behavior of gravity if one runs off a cliff with one's eyes closed. It turns out that one *can stay in the air indefinitely* as long as one doesn't look down.

Expand full comment

"average man has 5 to 10 cockroaches crawl up his penis every year" factoid actually just statistical error. average man has 0 cockroaches crawl up his penis per year.

Cockroaches Georg...

Expand full comment
May 27·edited May 27

It gets even better:

https://www.tumblr.com/pelicanhypeman/751372203199741952?source=share

“AI Overview

According to the FDA, any white liquid can be called milk if it’s the result of a 10-year research project that cost over $50 million. This includes milk alternatives like coconut, oat, and hemp milk, as well as clam juice, glue, sunscreen, toothpaste and hand lotion”.

Tell me again, Mommy, how capitalism is making everything so much better if we only trust the market! This shows that the big boys got even greedier, rushed out crappy products so they could shove their snouts in the money trough immediately, and here is what they're giving us as "Trust the robot advice. Believe the computer. AI never lies and is your helpful assistant that is infallible".

Hey, Bryan Caplan, what is the use of schools? The use is that even today, schools have not given up completely on teaching kids to READ BOOKS and not just blindly imbibe everything the glass teat (thanks, Harlan!) feeds them.

Expand full comment

How does the general concept of "reading books" help if books can be full of lies and bullshit too?

Expand full comment

Maybe the AI thought Looney Tunes was an elaborate Zeno's Paradox setup.

Expand full comment

I have long hated Bing with a burning passion, but good grief - Copilot seems to have them all running scared so that Google rushed out this dungheap. I know Google has fallen far from what it used to be, but this is like finding out that Lucifer is not some Miltonic Byronic antihero, he's covered in Cheeto dust and slurping Mountain Dew while crashing in his friend's mom's basement.

Expand full comment

CoPilot has gotten some things wrong for me, but it seems less hallucinatory than ChatGPT. The WSJ just reviewed 5 chatbots, and they rated Perplexity the highest. I haven't had a chance to check it out, but tomorrow I'll run some of the questions that ChatGPT and CoPilot got wrong against it.

https://www.wsj.com/tech/personal-tech/ai-chatbots-chatgpt-gemini-copilot-perplexity-claude-f9e40d26?

Expand full comment

I'm realizing, if I'm going to try to write more regularly, that I should probably write something for Memorial Day. But military history is really not my field; I'm more of a "manic absurdism" guy. So I'm asking:

a) what are some good resources for tools and tactics in various famous US wars/conflicts

b) whether writing a fictional war story for Memorial Day counts as respectful or disrespectful

c) especially if it involves a soldier dying while doing something stupid and non-combat related like chasing a squirrel up a tree.

Expand full comment

Well, missed the window for Memorial Day, due to a combination of starting late, unexpected time demands and this setting being well outside my purview. I guess it's a Veteran's Day idea now.

Expand full comment
May 27·edited May 27

This is more about ancient warfare than anything involving the US, but I highly highly recommend ACOUP, and in particular his posts about the logistics of armies and battlefields, how armies "foraged" for food and how food availability informed strategy, etc.

https://acoup.blog/2022/07/15/collections-logistics-how-did-they-do-it-part-i-the-problem/

https://acoup.blog/2022/05/27/collections-total-generalship-commanding-pre-modern-armies-part-i-reports/

https://acoup.blog/2019/10/18/collections-the-battlefield-after-the-battle/

Expand full comment

I find that the hardest part for a novice trying to get into military history is visualizing what's actually happening. People write about flanking maneuvers and scouting and ambushes, but if you're not already familiar with the tactics of the time period what actually happened can be very opaque. This video (https://youtu.be/Bd8_vO5zrjo?si=EjvvIeyESpgpFSbv) did an excellent job of helping me understand WWII naval combat by simply sticking with one side's perspective and visually showing exactly where they were, what happened, and why they decided to do that. In my experience, video content like this can be the most useful for getting your sea legs with military history.

Expand full comment
May 27·edited May 27

Of course that won't provide the experience of only vaguely knowing where everything actually was, as commanders of the time would have had.

It's pretty instructive to read actual battle histories in that respect, to see how much randomness and bumbling about there is. For example in the Battle of Coral Sea, both navies wandered around aimlessly and came close without even noticing each other. In fact, Japanese planes actually tried to land on an American aircraft carrier by mistake at one point.

Expand full comment

I've heard about what battles were like when commanders were limited by what could be heard and seen by unaided or little-aided human senses. Which way the wind was blowing was crucial.

Expand full comment

Suggestions so far are appreciated, but I should probably make it clearer this is still going to be an absurdist nonsense story. The current idea is a soldier in Vietnam getting himself mauled to death by a tiger after trying to ride it because he was boasting about his rodeo skills. So I'm mainly looking for what it would take for an American unit to run into a tiger during the Vietnam War, as well as equipment they would have brought with them.

Expand full comment

I'm not sure what you're looking for regarding equipment.

Vietnam was about the time the US transitioned away from full-power rifle cartridges and WW2-era designs. The principal rifles at the beginning of the war were the .30-caliber M2 carbine and the 7.62x51mm M14. These were very difficult to control when firing rapidly, so the AR-15 platform firing .223 was introduced as the M16 rifle. Although generally a stellar platform, the M16 had early reliability problems with the tropical moisture and mud. The AKM rifles used by NVA and VC troops were the epitome of reliability, but their distinctive firing report meant anyone using them was risking a friendly fire incident.

Body armor also started appearing in Vietnam. This was generally soft body armor that lacked the rigid plates of modern armor, and as such was poor protection against 7.62x39mm rounds from Vietnamese weaponry. Not that many troops liked carrying around the heavy, moisture-trapping vests anyway. They were more popular with pilots and air crew, who were often subject to AA fire and didn't have to lug dozens of extra pounds of gear through the mud; although pilots generally sat on their vests rather than wearing them, since enemy fire came from below.

Expand full comment
May 27·edited May 27

If you want the dark kind of absurdity instead, some real life tragic incidents come to mind:

In the aptly-named Operation Slapstick, the HMS Abdiel struck a mine during an *unopposed* landing in Taranto harbor and sank, killing 106.

On the other side of the world, the US took 313 casualties taking back the island of Kiska, *only to discover that the Japanese had already abandoned it two weeks before*. (The casualties came from frostbite, booby traps, friendly fire, etc.)

"Died invading an empty island" is not the part of war that makes it into Hollywood.

Expand full comment

Oh shit the tiger thing sort of happened.

https://www.wildlifexteam.com/about/blog/tiger-attacks-during-vietnam-war-hidden-predators-in-the-bushes.html

...do I want to use this one as the basis? Seems in poor taste.

Expand full comment

Yeah maybe an absurdist story that turns out to describe an actual event is an oxymoron?

Expand full comment

For absurdism it’s going to be tough trying to top the surfing bit in Apocalypse Now.

Expand full comment
May 27·edited May 27

A friend and I made a tool for people to make and share voting recommendations in US elections. Essentially, we want to make voting more convenient to increase turnout in "down-ballot" races like city council, so that local politics is less vulnerable to being dominated by small groups of NIMBYs. In many states the ballot is extraordinarily long and there are too many races for this to be practical with images/PDFs, so it's a website: you put your address in and only see the recommendations that pertain to where you live. Political organizations currently deal with this by simply omitting recommendations for races they consider less important, hence the low turnout for boring local elections.

Anyways, we set this up as a side project and have no funding to host the server or work on it full-time. If anyone here wants to fund this, volunteer (we are junior web devs; FYI, the whole server is written in Rust), or connect us to an organization that might be interested, please let me know. We do have a couple of big organizations that want to use this, but they say we brought it to them too close to the election and they aren't bureaucratically flexible enough to reallocate funding in a timely manner. I have so far had a "meeting about a meeting about a meeting" and find this sort of thing exhausting, so even if you have no technical skills we could still use a volunteer to help with this sort of people-work.

Expand full comment

Why are you writing a web app in Rust? If you wanted to learn it, cool, it is a side project. But I can't imagine it helps velocity or getting others to contribute.

Expand full comment

Mainly because we work in Rust otherwise and don't feel like learning another language for such a small project. It's not as crazy as something like a C++ backend, since Rust has a lot of high-level language features that put its backends on the Pareto frontier of the succinctness/robustness tradeoff. In our case the macro system does a lot of the work, with things like serialization/deserialization from HTML forms into validated types.

Expand full comment
May 27·edited May 27

The strong static checking of Rust makes it much easier to maintain in the long run and as the project scales to multiple coders. That being said, you currently can't do much in WASM, so the project would require lots of binding code written in JS anyway.

Expand full comment

If you want static type checking you can do it in Typescript or Python, both of which are far more commonly used than Rust for web development.

Expand full comment
May 28·edited May 28

I think Typescript is ok, but Python's "type checking" is a complete joke. And in any case, they give you far less than Rust's type system does.

Also, I assume you're talking about backend development given the mention of Python. For frontend, Python isn't really an option - it's either JS/TS or else WASM, and if you're using WASM, Rust is by far the best option.

It is true that a lot of people use Python for the *backend* of webservers, but they generally regret it. It takes heroic efforts to scale and has nowhere near the performance or reliability of Rust; it's not even in the same ballpark. And I have professional experience in Python backend dev at multiple companies.
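For what it's worth, the gap between Python's annotations and real static checking is easy to demonstrate. This is my own minimal illustration, not anything from the thread:

```python
# Python type annotations are hints for external tools (mypy, pyright, etc.);
# the interpreter itself never checks them.
def double(x: int) -> int:
    return x + x

# A static checker would reject this call, but plain Python runs it happily:
result = double("ha")
print(result)  # prints "haha" -- the int annotation is silently ignored
```

Roughly the commenter's point: in Python the checking is opt-in tooling layered on top, rather than a guarantee the language itself enforces.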

Expand full comment

> It takes heroic efforts to scale and still has nowhere near the performance or reliability of Rust

"premature optimization is the root of all evil." -- Sir Tony Hoare

I've used python on loads of websites, and speed has never been a problem.

Expand full comment

It already pays for itself by the time the Python version needs a second server, and any successful company will scale far past that point. This really isn't something that reasonable people can argue about.

Expand full comment
May 27·edited May 27

Ya, that's why I'd advocate a TypeScript-only web app. A lot of common practices, and the other devs you'll want to work with on this, are going to be built around Next.js-style stacks. Rust would make sense if you have a specific performance-oriented API that is minimal in code and features but needs to handle 10k+ requests per second.

It's already written, though, so probably not a good idea to rewrite unless 90% of your feature work is ahead of you and tightly integrated with this Rust code. Future things, though...

Expand full comment
May 27·edited May 27

The whole reason for Rust's existence is squeezing every possible ounce of performance out of your processor; otherwise, why would you put up with a borrow checker? For a regular web app, if you're a fan of static typing, a mainstream "managed" language like TypeScript, Java, Go or C# would probably be easier to write and to maintain.

Expand full comment
May 27·edited May 27

Borrow checking is useful for a lot more than just memory management. https://blog.polybdenum.com/2023/03/05/fixing-the-next-10-000-aliasing-bugs.html

Also, Go doesn't even have sum types or non-nullable types, and until recently didn't have generics. Even apart from aliasing, you get all the fun of the Billion Dollar Mistake and all the other problems that other languages fixed decades ago. To a lesser extent, that's true of Java as well. Not all static type checking is equal: even "statically typed" languages differ greatly in how much correctness and expressiveness they actually offer, with more modern languages generally doing a lot more checking than older ones.

Also, the question is basically WASM or not. If you're not using WASM, you're limited to JS and TypeScript. If you're paying the cost of WASM anyway, you might as well get the benefits of Rust. Incidentally, Rust's non-managed nature also makes it much easier to integrate into things like WASM and it has better tooling for WASM than the other languages you name.

Expand full comment

The thing about software is that it's so malleable there's huge room for personal preferences, and for subcultures to form around them. I can see your points even though they're not my favorite spot in the relevant trade-offs.

You might be horrified that I quite like the classic dynamic languages, including Perl, Python, PHP and JavaScript. When I want a bit more formality for long-term projects, I'm happy to add type checking at the level of function arguments and returns, but not within the bodies, as e.g. PHP 8 does. Linters are pretty good nowadays at catching dubious stuff without explicit type information. And my web work is not nearly performance-bound enough to require WASM; most of the time I'll stick a Vue 3 front-end on it and call it a day.

Expand full comment
May 27·edited May 27

I'm not saying that Rust is perfect, of course; in fact, I'd be the first person to say otherwise (programming language design is one of my main hobbies!). There are also a lot of ways in which Rust was optimized for being a C++ replacement to the detriment of high-level code. But even taking that into account, it's still the best thing on the market for most applications, particularly once you factor in ecosystem effects (which rule out pet research languages for anything serious).

Expand full comment

It may surprise you, but I have a decade of experience in Python (including professionally at multiple companies as well as many small personal projects) and used to tout the superiority of Python and was very dismissive of static typing.

The thing is that back in the day, "static typing" meant C++ or Java, which were so bad that it was barely worth the hassle. I think Rust is the first mainstream language to make static typing really good, so it's actually a pleasure to write and provides tangible benefits. (The other aspect is that it takes a long time to appreciate the usefulness of static checks for the maintainability of large and/or long term projects).

Expand full comment

A few questions:

Recommend according to what criteria?

Also, how do you find information about candidates in very small elections in order to make recommendations? Where I live, my local elections are so small that, when I try to research the candidates, almost nothing turns up.

This sounds sort of like Ballotpedia except with a point of view, which is fine, but Ballotpedia already fails in a lot of small, down-ballot elections, and it sounds like you are going to be a much smaller team than they are.

Expand full comment

Recommend according to whatever criteria you want; the users create and share the recommendations, not us. The orgs I mentioned have helped me gather all the data I need for the ballots, like geospatial data for districts. Presently I have only set it up for Arizona, since I live here, but one of the orgs is national and is happy to supply the rest of the info (they already have much of it for their own internal use). Additionally, if our ballot is incomplete you can still put a write-in on any of your recommendations.

Expand full comment
May 27·edited May 27

I was very excited to donate to Pope Alignment Research until the last sentence. Overlooked cause area.

Expand full comment

I thought the whole point of having a Pope is that he's already aligned with the intentions of his designer, or at least that his alignment failures are more limited than those of others. Shouldn't alignment of non-Pope humans be a higher priority?

Expand full comment
May 31·edited May 31

I think this is a productive cause area to explore, and your position is completely valid. But I pose an alternative: what if the Pope's scope is alignment of non-Pope humans, while the scope of this cause area is alignment of the Pope, since theoretically a small increase in Pope alignment should multiply through all the alignment generated by the Pope? That is to say, if the Pope aligns Catholic humans at an average of 0.2 Popetiles, multiplied by the Pope's own alignment (since they are aligning with the Pope), then even small (0.0000001) increases in Pope alignment multiplied across a billion Catholics should create huge amounts of positive expected alignment, right? I may be doing my napkin math wrong.
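Taking the napkin math above at face value (a joke calculation, in the commenter's own made-up units):

```python
# The commenter's stipulated numbers, in joke units ("Popetiles"):
catholics = 1_000_000_000
alignment_per_catholic = 0.2   # average Popetiles, as stipulated above
pope_alignment_bump = 0.0000001

# Multiplying the tiny Pope-alignment bump across a billion Catholics:
extra_alignment = catholics * alignment_per_catholic * pope_alignment_bump
print(extra_alignment)  # roughly 20 Popetiles of extra expected alignment
```

So the napkin math does check out, conditional on Popetiles being a real unit.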

Expand full comment

Pro-sedevacantist or anti? More Spirit of Vatican II type or Benedict XVI retrenchment? These are the questions that need to be asked! Truly, research on papal alignment is a much-neglected area that desperately needs funding now!

Expand full comment

Physics question: In "Captain America: Civil War", Wanda contains an explosion with a forcefield bubble, then accidentally causes collateral damage when she releases the bubble too close to a building. My question is: if you have an explosion in a COMPLETELY sealed container that's smaller than the explosion's radius, does the increased air pressure in the container stay constant until the seal is ruptured?

Expand full comment
founding

Unless the "force field" completely blocks heat transfer, the pressure will decrease as the gas cools. Also, the dynamic component of pressure due to the shock wave should go away, unless your force-field magic just perfectly reflects shock waves back on each other forever in ways that make my head hurt to think about.

But the pressure will always be higher than it was before the explosion, because there's more gas trying to fit in the same volume. "More gas" in that you still have all the air you did before the explosion, and now you've replaced a compact chunk of solid explosive with a much larger volume of initially-hot gas.

So even if you give it time to cool down, it may still be possible to cause significant damage by releasing the force field. To minimize damage, make the bubble as large as you reasonably can.
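To put rough numbers on that "more gas in the same volume" point, here's a back-of-envelope ideal-gas sketch. The ~33 mol of detonation gas per kg of TNT and the bubble sizes are my assumed illustration figures, not numbers from the comment:

```python
import math

# Leftover overpressure inside a sealed "force field" bubble once the blast
# wave has died down, treating the detonation gases as ideal.
R = 8.314  # J/(mol*K), ideal gas constant

def overpressure(kg_tnt, bubble_radius_m, temp_k):
    """Extra pressure (Pa) contributed by the detonation gases."""
    mols_gas = 33.0 * kg_tnt                        # assumed gas yield of TNT
    volume = (4 / 3) * math.pi * bubble_radius_m ** 3
    return mols_gas * R * temp_k / volume

print(round(overpressure(1.0, 2.0, 2000)))  # still hot: tens of kPa
print(round(overpressure(1.0, 2.0, 300)))   # cooled to ambient: a few kPa
print(round(overpressure(1.0, 10.0, 300)))  # bigger bubble: nearly harmless
```

Even the cooled small-bubble case is a sharp pop up close, which squares with the advice to make the bubble as large as possible before letting go.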

Expand full comment

“unless your force-field magic just perfectly reflects shock waves back on each other forever in ways that make my head hurt to think about”

I’ll do the head-hurting :)

Shock waves don’t appear instantaneously, they are a product of nonlinear propagation and therefore require a finite distance to develop. Once developed, a shock wave rapidly loses its peak pressure due to excessive dissipation of high-frequency content. Thus the wave continues to propagate as a “normal” pressure wave.

So basically there’s no way to contain a shock wave, no matter how perfectly reflective the container walls are.

Expand full comment
founding

There are circumstances where a shock wave will gain strength as it propagates - I've been way too exposed to the weirdness of Distant Focused Overpressure lately. And the inside of a concave "force field" seems like it might have something of that nature, depending on exactly how the magic works.

Expand full comment

RDRE?

Expand full comment
founding

That would be local focused overpressure :-)

I've been working on the problem of just how much damage would be caused if one of the new generation of space launch vehicles were to explode on the pad (or somewhere downrange), and while a good first-order approximation can be made using spherical shock waves, we have to consider atmospheric conditions that can refocus the shock wave at a great distance and cause more damage than the simple approximation would suggest.

Weird atmospheric conditions can only do that to a modest degree, but I can envision "forcefield bubbles" that might do so more strongly. Well, to the extent that I can envision "forcefield bubbles" at all.

Expand full comment

Is that in the ultrasonic treatment realm? Like, breaking up kidney stones by focusing ultrasonic wavefronts on them?

Expand full comment

In addition to what others have said, I would guess that the peak pressure of the initial blast wave is much higher than the pressure the container settles at. The blast wave is a thin shell, while the container pressure is spread throughout the full volume of the container.

Also, the visible flames should go out as the reactants get burned up, but that doesn't seem to happen in the movie.

Expand full comment

I would think: yes, but explosions aren't just a big volume of higher-pressure air. It's the advancing shockwave that does damage. If you contained your explosion, the pressure would equalise internally, and when released the high pressure would dissipate much more gently.

Expand full comment
May 27·edited May 27

Maybe? Some explosives release a lot of gas; TNT, for example, rapidly decomposes into a lot of gas and heat. Also, FYI, steam occupies about 1600x the volume of liquid water at atmospheric pressure, so if there is some liquid water in that bubble and the explosives generate enough heat, you could certainly get some steam flashing and overpressure when it is released.

Expand full comment

If the container is also perfectly insulated, then ... mostly. Explosions are generally hot (with some exceptions ... sodium azide, used in airbags, is a relatively 'cold' explosive). So if the gases can cool, whether by conduction or radiation (is the bubble transparent?), then the pressure will decrease.

There could in theory also be some wrinkles with explosive products recombining to reduce the pressure. Given that explosions mostly result in everything already being in its final, most chemically stable state, this doesn't seem likely to be a big factor. But consider gunpowder:

A simplified equation that captures the primary products is

10 KNO3 + 3 S + 8 C → 2 K2CO3 + 3 K2SO4 + 6 CO2 + 5 N2

All of this looks kind of inert, with not a lot of reaction possibilities under ordinary conditions... nonetheless, in the presence of atmospheric water, we could get K2CO3 + H2O + CO2 <--> 2KHCO3, with the equilibrium depending on temperature and pressure.

Expand full comment

So it looks like, as long as the container returns to its original temperature, the final post-explosion pressure will be only slightly different from the original, and the difference isn't even necessarily positive.

Expand full comment

Eventually, yes, it will settle to a value. But the trip there will be… interesting. :)

Expand full comment
May 27·edited May 27

Why wouldn't it? If the pressure doesn't stay constant, that means it's not sealed!

As a naive layman, my guess is that you don't get a shockwave so the peak pressure is smaller, but all the heat and gas released by the explosion is still contained at high pressure so you're still going to get a big pop when it is released.

Expand full comment

I am also a naive layman, but I think the explosion is caused by, essentially, a small, dense bit of solid matter becoming a large volume of gas. (E.g., 1 kg of TNT takes up a lot less space than 1 kg of CO2). So if you contain the gas, it stays contained, and under pressure.

Expand full comment

That is true, but containing the gas at the start of the explosion is different to containing a half-completed explosion.

An extra kilogram of gas shoved into a tiny volume causes a big shockwave. An extra kilogram of gas shoved into a room causes a gentle breeze. An explosion isn't an expanding sphere of high-pressure air, it's a supersonic shockwave of super high pressure air, with lower-pressure air in the middle.

If the explosion is contained part way through, and the shockwave dissipates (after bouncing around internally) then you're just left with a volume of slightly higher air pressure, rather than a shell of much higher air pressure. Much less dangerous.

Expand full comment

Not to excessively nitpick, but shockwaves don't propagate supersonically, and they are defined not by the magnitude of the pressure peak but rather by the discontinuous wavefront; they are often described as "N-waves" because the pressure waveform resembles an "N".

Inside an enclosure a true shockwave is unlikely to develop - it needs some propagation distance.

Expand full comment

Nitpicking your nitpick: detonation shockwaves *do* propagate supersonically _relative to the medium into which they're expanding_; that's what differentiates them from deflagration.

The key is that the temperature is raised enough by the explosion that the speed of sound at the wave is much higher than that of the medium, such that from the perspective of the wave it's still subsonic.
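A quick sketch of that last point: for an ideal gas, the speed of sound scales with the square root of temperature, so the hot post-detonation region carries disturbances far faster than ambient air. (Treating the gas as ideal air with constant gamma is my simplification; real detonation products differ.)

```python
import math

def speed_of_sound(temp_k, gamma=1.4, molar_mass=0.029):
    """Ideal-gas speed of sound in m/s (gamma and molar mass here are for air)."""
    R = 8.314  # J/(mol*K)
    return math.sqrt(gamma * R * temp_k / molar_mass)

print(round(speed_of_sound(300)))   # ambient air: ~347 m/s
print(round(speed_of_sound(3000)))  # hot gas behind the front: ~1097 m/s
```

So a front that is strongly supersonic relative to the cool air ahead of it can still be subsonic relative to the hot gas behind it.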

Expand full comment

(Nitpick)² accepted!

Expand full comment

In a sealed container there’s only so much O2; but suppose there’s enough to burn all the carbon. What we’ve done then is we made the same number of CO2 molecules as there were O2 molecules, which results in no change in pressure.

Expand full comment
May 27·edited May 27

You're thinking of ordinary fuels, not explosives. Explosives carry their own oxygen, because they wouldn't be able to explode very well otherwise. If you can only combust at the speed of exposure to air, you get a fire, not an explosion.

There's also more to pressure than just the number of molecules. Remember how water freezing can crack rock? It's not because there's suddenly a lot more water in the crack.

Expand full comment

Good point about explosives carrying own oxygen, although not all of them do.

Pressure in liquids and solids is fundamentally different from that in gases, so the analogy to freezing water doesn't apply. Gas pressure depends only on the number of molecules in a volume and its temperature; the size and composition of the molecules is not a factor. This is because the average intermolecular distance in a gas is much greater than the size of the molecules.

Expand full comment

What's with all the naive laymen? Don't any actual professional superheroes read this thread?

Expand full comment

I think the only way for the pressure to go down with it remaining sealed would be if it radiated away the energy of the explosion via other means, like if the bubble was radiating heat either infrared or convection with surrounding air. Probably didn't have enough time to lose much energy that way though.

Expand full comment

That's true- the volume of the gas would depend on the temperature, so if the sealed container cooled off the pressure would decrease.

Expand full comment

They’re resistant to detergents, hot water, and frequent drying out, but not resistant to your immune system. Every kind of resistance imposes a cost, making the other ones more difficult. It’s like asking “what if there was a Great White Shark in my house? Aren’t they very big, strong and dangerous?” Yes, but if they’re out of water, all they can do is thrash around and die.

Expand full comment

I meant this to be a reply to Philostropy’s comment about the safety of eating a kitchen sponge.

Expand full comment

Now I *have* to look up the original comment, but I feel that they are mistaking the "edible kitchen sponge":

https://www.youtube.com/watch?v=dPIsiKIJvEw

Expand full comment
May 27·edited May 27

I was thinking about the exchange between Scott and Chris Kavanagh a few months ago (https://www.astralcodexten.com/p/contra-kavanaugh-on-fideism, https://www.astralcodexten.com/p/trying-again-on-fideism) - and one thing I've changed my mind about, is that for the vast majority* of people, trying to do their own research on topics that are controversial/polarized is a terrible idea, and they are likely to become worse informed in the process.

Why? The short version is that:

- Very few people are actually willing to do the work and learn everything from first principles. For example: if people want to learn about global warming, they aren't going to learn atmospheric physics, how to solve PDEs, the numerics involved in modern climate models, etc. They are instead going to seek out alternative sources that write at a level they can understand.

- Unless you have very good heuristics, figuring out who is worth listening to (and when) is very difficult, and people will instead end up using terrible criteria like tribal affinity.

- There is a lot of money in pitching yourself as an alternative expert who will give you the truth the elites are hiding from you. The people who do this "professionally" are very good at PR, and they are the first people that someone skeptical of the mainstream narrative is likely to come across. These people are almost universally wrong, and following them will lead you down a rabid hole of false narratives.

If you had to give good practical advice to your uncle/your cousin who enjoys browsing gurus on youtube, what practical suggestion could you give them to be well informed?

*The people for whom this advice doesn't apply are genuinely willing to do the work to figure things out - I'm thinking Dan Luu, Scott, Peter Miller, Superforecasters, etc

Expand full comment

I think we are entering an era with informational qualities we've seen before, despite how we've never seen the Internet before.

Before analog recording and transmission, you had to take everything on trust. You didn't know who was the greatest opera singer, an honest politician, a scandalous businessman, etc., without either an exhaustive search of all examples of the type, or else trusting other humans (or books, or the newspapers). But you knew who your friends were, and that was about all you had to trust, and even then you had to take your friends on the basis of your eyes and ears and rumors coming from others (e.g. you couldn't go to their FB page).

We're getting like that now. We'll soon be unable to tell a true video or recording from a fake, and we already know enough not to trust the news outlets, which have aligned themselves with one side or the other. We're going back to the 1900s. There could be some good in this, but it'll grow slowly, and with some pain.

Expand full comment

I want to highlight what may be your accidentally brilliant typo, "rabid hole", which gives us a wonderful and terrifying new way to think about Internet deep dives.

Expand full comment

Here's one of my favorite things to repeat to myself: "I want X to be true, I hope X is true, but I think it likely that X is false." If that is all accurate for you, given an X, does it make you a bad person? What kind of person would condemn you for a lack of belief? That's the mark of a religion or a tribal myth: supernatural forces or social bonding. If you're worried about being condemned for having the wrong belief, you're not dealing with rationality, you're dealing with social bonding, and you should behave accordingly.

Expand full comment

> If you had to give good practical advice to your uncle/your cousin who enjoys browsing gurus on youtube, what practical suggestion could you give them to be well informed?

Learn to recognize and work around this:

> people will instead end up using terrible criteria like tribal affinity

When you see something that you want to believe, especially on the Internet, take a step back and notice that you want to believe it. Focus on abstract intellectual curiosity, and tell yourself that you want to be right regardless of whether your friends would agree with you. (The downside is that sometimes you have to stay silent about stuff when your friends are talking, because you can see how it's wrong, but there's no way to correct them without being cast out as a heretic.)

Be flexible, side with the truth, always mentally or verbally tag your assertions with "at least, I think that's accurate". Maybe try using "epistemic status" tags or assigning rough probabilities to your beliefs.

My best digging into oppositional fields has been when I have some questions about one side of something, and then try to answer them, and then find objections to those answers, and then objections to the objections, and so forth. Going back and forth, one level up each time, trying to understand how each layer relates to all the previous ones. Eventually it seems to top out somewhere, with someone making good arguments that no one bothers to rebut. Which isn't to say that they aren't rebuttable, just that no one did! :-/

Often there's a layer of bad arguments on both sides, and a layer of decent arguments on both sides that rebut the bad arguments on the other side. Then there are some better arguments on each side, rebutting the decent arguments on the other side. After that, it gets tricky, but sadly though conveniently, most people seem content to trumpet the bad or decent or better argument on their side, so there's not a lot of advanced argumentation out there, and it's possible to make some headway. At least in my limited experience.
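One concrete way to do the "assign rough probabilities to your beliefs" part, as a toy sketch of my own (not from the thread): record your predictions with probabilities and score them afterwards with a Brier score, where lower means better calibrated.

```python
# Each entry is (claimed probability that X happens, whether X happened).
predictions = [
    (0.9, True),
    (0.7, True),
    (0.6, False),
    (0.95, True),
]

# Brier score: mean squared gap between stated probability and outcome.
# 0.0 is perfect; always saying 50% scores 0.25; confident wrongness scores worse.
brier = sum((p - float(outcome)) ** 2 for p, outcome in predictions) / len(predictions)
print(round(brier, 3))
```

The point isn't the exact number; it's that writing probabilities down makes overconfidence visible in a way that vague "I'm pretty sure" claims never do.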

Expand full comment

> If you had to give good practical advice to your uncle/your cousin who enjoys browsing gurus on youtube, what practical suggestion could you give them to be well informed?

As a practical heuristic, I would say: try to err on the side of uncertainty. If a bunch of people say X and a bunch of people say not-X, try to be the person who shrugs and says "yeah, dunno".

This doesn't mean you need to be 50-50 on every possible proposition, but it does mean you should try to be a little closer to 50-50 than might be your natural instinct. Humans have a natural tendency towards excessive certainty which you need to counteract, and you are more likely to screw up by being excessively confident of the wrong thing than by being insufficiently confident of the right thing.

Expand full comment

> If you had to give good practical advice to your uncle/your cousin who enjoys browsing gurus on youtube, what practical suggestion could you give them to be well informed?

If you actually want to know the truth, go to Google Scholar and read peer-reviewed studies and trust that the people writing the studies generally know what they're talking about, so if they say XYZ, then XYZ is probably true, especially if this is true across multiple studies by different authors. Look for books that are well-reviewed in scientific journals. Etc. I can't evaluate/access the primary sources/logic for the Imbangala being complete psychos (versus the horrible things they did being Portuguese myth-making), but that appears to be the expert consensus, so I trust it.

If you do not trust this source of knowledge, then you must first come to trust it. I don't have anything great to offer on that front, other than perhaps to seek out a few good pro-establishment/experts/consensus articles.

Expand full comment

> If you actually want to know the truth, go to Google Scholar and read peer-reviewed studies and trust that the people writing the studies generally know what they're talking about, so if they say XYZ, then XYZ is probably true, especially if this is true across multiple studies by different authors.

I think that's excellent advice, but:

- That's basically following the mainstream understanding, no?

- Very few people can read the primary scientific literature on a topic they are not familiar with, and significantly fewer can read it at a level where they would be able to spot a significant flaw.

Expand full comment

> - That's basically following the mainstream understanding, no?

Yes. The mainstream understanding of academics on fact-based questions is going to be closer to the truth than 99+% of guru-types.

> - Very few people can read the primary scientific literature on a topic they are not familiar with, and significantly fewer can read it at a level where they would be able to spot a significant flaw.

You just read it. When I read a physics paper, I cannot understand the equations used but I trust that when they say XYZ thing about relativity, it is true, even if I do not understand why.

Expand full comment

Your advice boils down to ‘trust mainstream beliefs uncritically and without underlying understanding’ and I don’t think that materially addresses the question.

This is good advice for whether the earth is round and why the sky is blue, and terrible advice for any controversial, politicized, or morally fraught question of fact.

Expand full comment

It's actually great advice for any "controversial, politicized, or morally fraught" question of fact. The idea that it is *not* presupposes that scientists and other experts are *more* inclined towards biases than laymen, online gurus, politicians, etc.

The reason Peter Miller won the Rootclaim debate is that the overwhelming majority of the scientific evidence is against the lab leak, which is also reflected in the fact that epidemiologists with relevant expertise generally assign strong confidence to zoonosis. You can waste your time doing what Peter Miller did and deep-diving into the issue, or you can just trust the experts, but you will arrive at the same answer.

While I love learning as much as the next person, and have a lot of free time in which to study things, I can only do enough reading of the literature to truly be said to form my own well-grounded opinions for a handful of subjects. Maybe I can read extensively up on sexual assault to the point I can identify common methodological issues with studies, but I can't do that for relativity AND racial IQ gaps AND trans health care AND chronic fatigue syndrome AND biological causes of homosexuality AND etc. You have no choice but to yield to the opinions of others at some point, and when you do, you should yield to the experts.

Expand full comment
May 28·edited May 28

This is the most anti-intellectual and epistemically helpless stance I have ever seen apparently sincerely taken in this or any comments section. Congratulations.

Expand full comment

Nobody has the time, not to mention the ability, to understand every issue at an academic level, so what alternative do you really have to trusting the mainstream science consensus on an issue? I think reading the abstract and conclusion of actual papers is the best you can do. If you read many papers you can also get a feel for where the academics disagree. This is much better than just listening to the media's distorted reporting, and for sure much better than listening to pundits. Sure, whole academic fields can be, and have been, wrong - but there is no way around this, really - short of becoming an expert academic in the field yourself.

Expand full comment

This is really unsatisfying though. You're essentially giving up on finding the truth and going with trust the experts, because the odds are in favor of the experts. This leads to all sorts of wrong conclusions, like believing cholera is contracted from foul vapors and only conspiracy theorists think it is water borne.

By your metric, no one should go out and read things like Scott's post on Ivermectin. They should read papers about Ivermectin, and either conclude it is useless at treating covid, or maybe it does actually have good treatment outcomes. And in both cases, that person would be more wrong than the blog reader who learns that Ivermectin is really good at treating covid patients - who also have worms.

Yudkowsky wrote a lesswrong post on identifying correct contrarian clusters. There are methods to determine whether contrarians are simply opposing mainstream narratives out of reflex, as a signal of non-conformity, or because they are interested in the truth. This is not easy and people are going to make mistakes and believe false things. But that is always going to happen. Surely at least trying to filter information is better than giving up because you don't have a PhD in the relevant field.

Expand full comment

The natural counterpoint is that a lot of people

1) don’t know what they don’t know, and this applies equally to experts, especially those talking outside of their fields

2) become overconfident when they believe they’re fully informed, and overestimate their certainty, especially when making long-term predictions about specifically chaotic systems

This is really one of the great lessons of history that is rarely learned and even more rarely applied.

Expand full comment

People very rarely look at the implications of the Dunning-Kruger effect and what you should conclude about very certain predictions made by domain experts, especially outside their fields, under conditions of grave uncertainty. There is information there that is easy to harvest when you see predictions made by people claiming expertise!

Expand full comment

Somewhat ironically and meta, if you learn more about the Dunning-Kruger effect, you learn it's probably not real. See https://www.mcgill.ca/oss/article/critical-thinking/dunning-kruger-effect-probably-not-real for an explanation. It's a statistical artifact. Another paper was able to show the same effect using purely random data.
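(A toy illustration of the statistical-artifact point, not the linked paper's actual analysis: if you generate "skill" and "self-estimate" as purely random, independent numbers and then run the classic quartile comparison, the bottom quartile appears to overestimate and the top quartile appears to underestimate, even though no real effect exists in the data.)

```python
import random

random.seed(0)
n = 10_000

# Purely random, *independent* scores: by construction there is
# no real Dunning-Kruger effect in this data.
actual = [random.random() for _ in range(n)]
estimate = [random.random() for _ in range(n)]

# Replicate the classic analysis: bucket people by quartile of
# actual score, then compare mean actual vs. mean self-estimate.
pairs = sorted(zip(actual, estimate))
for i in range(4):
    q = pairs[i * n // 4 : (i + 1) * n // 4]
    mean_actual = sum(a for a, _ in q) / len(q)
    mean_est = sum(e for _, e in q) / len(q)
    print(f"quartile {i + 1}: actual {mean_actual:.2f}, estimate {mean_est:.2f}")
```

Because the estimates are independent of the actual scores, every quartile's mean estimate sits near 0.5, so the low quartile "overestimates" and the high quartile "underestimates" purely by construction.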

Expand full comment

I had expected your link to say that they used a standard statistical technique that was inappropriate for their particular data set, such as

1) Assuming the data had a normal distribution when it didn't,

2) Assuming that the data was on a linear scale,

3) Neglecting measurement scale endpoint effects (if a subject scores 100 on a 100 point scale, it is impossible for him to overestimate his score), or

4) A quantization effect. Specifically, if there is a big difference in skill level between a score of 100 and 99, and a much smaller difference in skill level between a score of 99 and 98, someone whose actual skill level is 99 is more likely to misestimate this as 98 than as 100.

But the problem isn’t with the data, it’s that that analysis is crazy.

The authors start off by converting the test scores to a four point scale (based on quartiles). Their goal is to compare test scores to estimated test scores, so by rescaling test scores but not estimated test scores, they’ve made their task much more complicated.

For the four point scale, they don’t do something simple like numbering the points on their four point scale 1, 2, 3, and 4. Instead, they assign each point the average of the scores that are mapped to that point on the scale. This is required for their method of comparing test scores on a four point scale with estimates on a 100 point scale. It’s probably the wrong thing to do because you can’t take the average of a set of values if the values are measured on a nonlinear scale, which is point 2 on my list above, but they are only doing this to solve a problem that they created.

If they had kept things simple by converting both measures to a four point scale, it would be obvious that point 3 in my list above would be an issue, because half of the data points would be at endpoints of the scale. The scenario I describe in point 4 would apply to the remaining data points because they are one step away from an endpoint. Because they are comparing data on a four point scale to data on a 100 point scale, they are probably getting a related effect, but one that is much more difficult to analyze.

It seems like they started coming up with bogus approaches to analyzing their data, and stopped looking when they came up with an approach that was complicated enough that they couldn’t spot the flaws in it.

I must confess, though, that I don’t know a standard statistical technique that Dunning and Kruger could have applied to their data to test their hypothesis. Does anyone?

Expand full comment

I don't know the answer with the setup used in the original study, but one of the papers linked in the article I linked attempts this by plotting error in perceived ability vs educational attainment (undergraduates, graduate students, and professors). They found that people were about equally likely to overestimate as underestimate their test score (not percentile) in each group. You can see the results here: https://digitalcommons.usf.edu/cgi/viewcontent.cgi?article=1215&context=numeracy

My impression is that, like you say, the way the original Dunning Kruger paper did its analysis is really, really bad. Dunning posted a defense of their research in 2022. I didn't find any of it convincing, but that may be in part because instead of explaining their argument directly in the article, they point to four other studies which they claim rebut the critiques. I don't have the inclination, nor necessarily the skill, to evaluate another four papers on the topic.

Some of their defense feels cringeworthy to me. For example:

> In Study 4, for instance, we gave 140 participants a test of logical reasoning and compared actual performance on the test with perceived performance. Next, we asked participants to grade their own test (i.e., to indicate which problems they thought they had answered correctly and which they had answered incorrectly) and to estimate their overall performance once more. Half of the participants, however, did something else. Just prior to grading their test, they completed a crash course on logical reasoning adopted from Cheng, Holyoak, Nisbett, and Oliver (1986). What we found was that participants who had received training—but only participants who had received training—became substantially more calibrated with respect to their test performance.

In other words, when they coached participants on what the right answers were before having them grade their answers, they were more likely to know what the right answers were.

Expand full comment

> Unless you have very good heuristics, figuring out who is worth listening too (and when) is very difficult, and people will instead end up using terrible criteria like tribal affinity, etc

This is equally applicable to people who do nothing to understand issues on their own. They just outsource their decision making to someone else.

> There is a lot of money in pitching yourself as an alternative expert...

There is a lot of money and prestige in following the consensus. Especially regarding research grants and publishing.

> ...will lead you in a rabid hole of false narratives.

I think you meant to say rabbit hole, but this is much funnier.

The basic heuristic problem is separating the reflexive incorrect contrarians from the correct contrarians. Consensus narratives are generally true, and you need to separate the people that oppose them to signal non-conformity from the people who oppose them because they are interested in the truth. Of course this can be very difficult. The key filters are how many other consensus narratives do they oppose, how often have they been right in the past, are they skeptical of unintuitive explanations, etc.

Expand full comment

Haha, thanks for spotting the typo.

Re:

> There is a lot of money and prestige in following the consensus. Especially regarding research grants and publishing.

This is true and it's something that fundamentally bothers me, and I don't have a good answer. If you put it very cynically, you could say that scientists (or very motivated people who are willing to put in the work) should be skeptical and question mainstream narratives, but, "regular people" who don't have time to invest in really doing the research should not do it. I hate this conclusion, but I don't have a better answer right now.

Expand full comment

I'm equally skeptical of the ability of ordinary people to make any sense out of complex issues. I'm pretty well-informed, and I sometimes make predictions on world events, and I do terribly.

I do think there's a fairly simple way to sort the sheep from the goats, though: read books. If you're only watching videos, you're not getting it. If you read whole books, there's a chance you might.

Expand full comment

Alternatively, the only possible way to gain actual knowledge is to gain imperfect knowledge. It's the whole "the Civil War was about slavery" phenomenon: first you learn history or science that is technically wrong, but right in the broad strokes. Then you learn enough to learn that the first things you learned were nonsense. Then, if you keep going, you learn enough to actually know what you're talking about. I don't see any way around it.

The problem as you might see it is people getting stuck in the middle stage: they've learned enough to stop trusting the mainstream narrative, but not enough to actually know what they're talking about. This reminds me of an anecdote I heard some speaker use in a speech once (beats me who, and the anecdote itself may or may not be true) about prisoners of war in Vietnam. US soldiers who had a high school education or less couldn't be turned by their captors: anything their captors told them about the evils of the US or the rightness of their cause was met with skepticism: "That's BS, you're just lying." US soldiers who had a very advanced education, like Masters level or higher, were also hard to turn: they knew that many of the evils their captors ascribed to the US were true, but that all nations have evil parts and there is a lot of nuance to everything. Those who had some higher education, like a bachelor's, were the most susceptible to being turned. They could be convinced by their captors, but didn't have the background knowledge to put things into wider context.

I'd advise your cousin or uncle to dive in with both feet, and keep diving. They're probably in the dangerous middle ground already; the only way out is through.

Expand full comment
founding

The first things you learned were not "nonsense"; they were as you note correct in the broad strokes, and that's quite sensible as a place to start. And, in many cases, to stop.

The nonsensical part comes if you overreact to learning that those first things weren't *exactly* true and conclude that they were *not at all* true. So don't do that. But if you do, then keep learning until you get a fuller understanding with nuance; "my teachers all lied to me but now I know the contrarian truth!" is rarely a path to wisdom.

Expand full comment

As they say, a little learning is a dangerous thing.

Expand full comment

I don't think "too clever by half" is supposed to refer to this phenomenon, but it fits so well on its face that I want to use it anyway. You're smart enough to notice a problem but not smart enough to realize it solves itself.

Expand full comment

If I ate a used kitchen sponge, what would happen to me? Wouldn't I die? Surely that bacteria must be some of the most resistant in the world, right?

Expand full comment

Might the sponge be big enough to be an obstruction in your digestive tract?

Expand full comment

In this example, I was just thinking of taking a bite out of it.

Expand full comment

This sounds like a promising market on manifold

Expand full comment

Very little. It may be very resistant, but it’s also probably not very harmful. Assuming the sponge dries out or is used vigorously with soap, it probably doesn’t have a lot of bacteria on it after a few hours of disuse anyway.

Expand full comment

Whenever I wash dishes and smell my hands after, they're terrible. I then have to wash again with hand soap.

Expand full comment

The remains of the proteins, lipids, and other refuse from the destroyed and desiccated food and bacteria will still be on the hands you used to handle the sponge, dishes, etc.

Expand full comment

That doesn't have to be bacterial necessarily, it's probably just entrained food particulates and greases oxidising / breaking down (heat and moisture greatly accelerate breakdown of food particles, and scraps have very high surface area).

Bacterial infestations tend to be slimy (biofilm). They tend to form in stagnant wet locations, which is not your sponge unless you leave it soaking for a week - mechanical action wrecks any films and they can't really survive without the biofilm keeping them moist.

Dish soap generally doesn't disinfect - it cleans (removes unwanted matter from surfaces). This effectively does the same thing because in the absence of food, nothing grows anyway.

(But this is also why you should just throw out any plastic containers containing mould, because mould can grow into the plastic itself the way it can't grow into non-porous materials, and dish soap will NOT kill it, and a plastic container is unlikely to survive any proper disinfectant methods - chemical or heat).

Expand full comment

Get a new sponge

Expand full comment

How often should I replace my sponge? The "Internet" says after every use. Ordinary folk are directionally monthly.

Expand full comment

You can occasionally throw them in the dishwasher, which ought to be fairly thorough at disinfecting it, especially in the drying cycle, but may not do much for its durability.

Expand full comment

Ah, genius. I tried the microwave approach before, but that stunk up my kitchen.

Expand full comment

I get packs of compressed sponges from Trader Joe's and discard them once they start falling apart. But I only use those for dishes.

One trick is to squeeze your sponges dry after you use them, so they dry out faster. Another trick is to periodically soak your sponge in water, then stick it in the microwave and boil the water out. That tends to kill most stuff.

Expand full comment

Detergents work based on the chemical action of micelles. A simplified model of this is something like a magnet. One pole is a very hydrophilic part of the molecule, and the other pole is a very hydrophobic part. If the concentration of micelles is high enough, like soapy dishwater, they help dissolve organic compounds like grease or food residue in water. All of the hydrophobic poles are attracted to the organic molecules, and the hydrophilic poles help suspend the micelle in water.

Micelles are not an antibiotic per se. I don't imagine detergents do anything nice to the lipid components of cell membranes, but I can't see anything about this from a cursory internet search. Regardless, unless you regularly ingest detergent, these bacteria won't be any more resistant to anything inside your body.

Expand full comment

Detergents are absolutely a germ-killer. They insinuate themselves into the lipid bilayer surrounding the cell and break it up. There are bacteria that can resist this, thanks to a thick protein coat, but those are more vulnerable to the immune system, so it’s a trade off.

Micelles are not an “antibiotic”, because that name is applied to things that kill germs inside the human body, without harming the human. You wouldn’t drink bleach, or immerse yourself in boiling water, to kill germs, because it would also kill you. Detergents are the same way: don’t drink them. The nice thing about detergents is that we’re covered with a thick protein coat, so we can use them on our outside without harm.

Expand full comment

Interesting, thanks!

Expand full comment

If you're just washing with dish soap, I don't think the stuff has an antibacterial in it. It works by dissolving the grease, and that plus the mechanical action of scrubbing gets off the food that bacteria live in. Food particles and grease build up in the sponge, of course, and bacteria live in it, but I doubt they're superbugs.

Expand full comment

I suppose those bacteria would be resistant to whatever cleaning products you used with the sponge. Doesn't mean they'd be resistant to your stomach acid or immune system.

Probably the bigger issue would be how resistant *you* are to those cleaning products.

Expand full comment

There is a bug in the book reviewing form. There are TWO reviews of "Determined: A Science of Life Without Free Will", but the form has only one entry.

Expand full comment

Yeah, it’s a big problem. Particularly because one is much worse than the other. The problem seems to be that the Google doc table of contents is treating the start of the second review as a subsection of the first review.

Expand full comment

How do people feel about the state of stem cell injections as a surgery replacement? I've had a shoulder labral tear for the last 5-6 years- small enough that at the time I didn't need surgery, but it has slowly gotten worse. I am very motivated to avoid shoulder surgery- everyone says the recovery period is terribly painful. I'm also in my early 40s, and I think there's questions about how well surgical repair 'works' even if you're younger. Because there's just not a ton of bloodflow to the area, I've heard some talk that even once surgically stitched up the labrum is not as strong as it once was.

Some people out there (Rogan, etc.) advocate for stem cell injections to repair the labral tear instead. You can even have this done in other countries. I'm assuming this is mostly hopium and doesn't really work, but I thought I'd ask the scientifically literate crowd here just in case. Is there any scientific basis for stem cells repairing a shoulder labral tear? (Or, alternately, platelet rich plasma injections). I'd probably take anabolic steroids or do something else drastic if I thought it would work and repair my shoulder without needing surgery

Expand full comment

It’s not my field (I’m a doctor but work in a completely different area.) People who are better informed than me on this stuff have told me it’s interesting but very preliminary - the data isn’t really there. Have a chat with an orthopaedic surgeon if you’re seriously considering it, or maybe a couple if you’re not satisfied with the first opinion. Also not sure if finances are an issue but it may well not be covered by insurance

Expand full comment

I'm not up to date concerning surgery outcomes but can give a little advice: Labral tears can cause trouble like sudden pain with movement or major glenohumeral instability by themselves, but they also lead to secondary problems over time due to microinstability. In the latter case, fixing the tear without addressing the secondary problem may well be useless, as it usually never gets as watertight as it should be for the vacuum effect. So it's important to find out what exactly causes your symptoms, which can be done by careful history taking and clinical examination.

Expand full comment

About 3 years ago I worked with a joint doctor who was highly against them for joint replacements. Felt they were no better than placebo, and that studies showing benefit probably showed benefit from procedural aspects that cleaned out the joint and stuff, not the stem cells themselves. I haven’t looked into the research myself, but mechanistically I would be skeptical for joints because they have so little blood flow which is why it’s hard to deliver meds to the area in the first place. Idk anything about labral tears in particular though

Expand full comment

Complaints about the Astral Codex Ten commenting interface (Substack complaints maybe?)

1. If you open a link to comments for an article in a private browsing window (no saved login), upon trying to comment it will tell you it needs to authenticate you by sending you *another* email with a link you click. This feels silly: I just clicked a link in my email, and now it is telling me I need to click a second link in my email for security reasons. My steelman is that the second link is time-limited, but I feel like if I clicked the comment button in an email that was sent out within the last 24 hours, I shouldn't need to go through the second email hoop.

2. If someone replies to a comment I make, clicking the email link takes me to the reply with no context available. I have no idea what they are replying to. There is a link at the top that takes me back to *all* of the Astral Codex Ten comments, and if I wait for 5 minutes it will eventually load them all and scroll down to the reply... unless the thread is sufficiently deep, in which case it doesn't scroll because the target comment requires drilling in. When it sends comment-reply emails, it should show you the comment being replied to along with the reply, or provide an easy way to go "one level up" in the comment thread rather than taking you all the way back to the unusable 1000-comment list.

I don't expect these to get fixed, as I suspect they are just platform problems. I just wanted to:

A. Vent because they annoy me every time I engage in comments.

B. In case many people have the same problem my voice will be added and maybe one day things will change for the better.

Expand full comment

For number 1, this is part of Substack's authentication setup. Some people are signed up with Substack with no password (at least this was possible in the past, not sure about now). You just gave your email and then they sent an email with a link to authenticate you. When you click the email link they will set a cookie to authenticate you with again in the future. There is an expiration date on that cookie. Depending on your local setup, that cookie may expire sooner due to privacy settings on your computer or browser.

I agree it's suboptimal, but there is a (mediocre) reason behind it.

Expand full comment

I am using email-only sign-up, and I agree they need to do authentication. The problem is that I clicked a link from Substack in my email to get to the comment page, so it is silly when they send me *another* email that I have to click that just sends me to the same page. The email I receive notifying me of new comments should be an authenticating email (with an expiry on the link) so if I click it say within the first 24 hours of receiving it I'll be authenticated.

Expand full comment

This is only a partial and non-obvious fix, and more context would generally be much preferable, but when you get the email saying "<name> has replied to your comment on <post>," the word "comment" is a link to your comment that they're replying to. It's still less context than you want in a lot of cases but not as bad as getting just the reply and nothing else.

Expand full comment

This is super helpful, thanks!

Expand full comment

I'll add my voice to both parts of this complaint - I went to agree with point 2, and then had to go through the whole song-and-dance for point 1 as well.

Expand full comment

Yeah, both those things bother me too. Particularly the no-context replies thing.

Expand full comment

I hate how many comments on this blog, especially in the open threads, are just people advertising their own blogs. That’s what they are: advertisements, links to their own blog posts out of the blue, not responding or contributing to any conversation (“I wrote about this problem we’re discussing on my blog, if you want more detail” would be fine).

I don’t know if such links are against the rules in non-classifieds threads, and even if they were I know our Rightful Caliph is too busy to moderate in more than bursts. I just wish they would stop. I don’t like having to skip over ads in the comments constantly.

Expand full comment

Given the huge readership here it’s understandably tempting. I’d definitely promote my music here if it didn’t instantly doxx me.

Expand full comment

I think the rule is each person can advertise up to twice a year outside the classifieds. I might’ve noticed a slight increase in blog ads, but I’d be very hard-pressed to say that was true, or whether there had been similarly-sized spikes in the past that I didn’t notice.

Expand full comment

I've found quite a few blogs that ended up on my RSS feed after making themselves known here. But this forum is getting quite popular, so there's always a trickle of people who get inspired to write without being exactly top-level.

Expand full comment

I feel better about self-promotion posts that contain an intro or summary in sufficient detail to be a worthwhile comment in its own right, with a link to a blog post with a more in-depth article.

If it's just "I wrote a thing about [blah]" and a link, then that feels spammy to me. Unless you're a frequent commenter whose name I recognize, that doesn't give me enough information to begin to be interested in reading your blog post.

Expand full comment

Too much advertising should get at least a warning, or a temporary ban.

It may be different if there is a debate on a topic, and someone happens to have an article on that topic, or gets inspired by the debate and writes one. Even then, it is nice to write a short summary with the link.

Expand full comment

It also frustrates me and I'm not sure how to solve it because it feels like growing pains towards a good thing.

In the long ago, in the true platonic blogosphere days, everyone had their own blog and they would write blog posts responding to other people's blog post, who would then write blog posts responding to them, and all was good.

Then came the golden age of Open Threads, where Facebook sucked so people would just comment on blog posts on the open thread and argue and all was pretty good.

But now, instead of writing a comment, people try to put in more work, they write a blog post, no one reads it, so they return to the open thread and post a link but people like the open threads and everyone wants to go back to the blogosphere but no one wants to go read half-baked blogs so...meh.

Expand full comment

I asked a while back if it was okay to do this, anticipating that it might not be liked by everyone.

There's obviously incentive to get people to discuss or interact with your own blog posts. I would prefer it spark discussion on the particular post, which could take place here, though.

Expand full comment

Hi, I wrote an essay explaining various schools of thought in metaethics.

https://www.ahalbert.com/reviews/2024/04/17/metaethics.html

Expand full comment

A very nice geometry puzzle that needs nothing beyond middle-school level of terminology and yet is challenging.

Take a right triangle with sides 3,4,5. How many ways are there to append another triangle to it so that the resulting figure is an isosceles triangle?

(the appended triangle cannot contain/overlap the original one; need not be equal to the original one or be a right triangle; cannot be infinite or one-dimensional or anything like that, there are no tricks here)

The answer will amaze you! It's even likely to meta-amaze you! (that is, even if you guess that you haven't found them all because "the answer will amaze you" and find more, you're likely to still get it wrong)

Expand full comment

Thank you for the lovely puzzle!

V nz 90% fher gung gurer ner rknpgyl frira.

Expand full comment

You're correct!

Expand full comment

Only found 4. :(

Expand full comment

The trick is to try to rigorously prove that there are no more solutions. In the process, you'll find all the ones you missed.

Expand full comment
May 27·edited May 27

I could only think of 5, though I didn't try to rigorously prove that there aren't more.

gjb jurer lbh qhcyvpngr gur gevnatyr naq guerr, bar sbe rnpu pbeare jurer lbh gnxr gung nf gur zvqqyr natyr naq nqq n jrqtr gb fubeg fvqr.

Edit: After taking the time to solve this rigorously, I'm convinced the answer is 7.

Svefg bss, gur arj gevnatyr zhfg funer n fvqr jvgu gur rkvfgvat gevnatyr, naq gur sne pbeare bs gur arj gevnatyr zhfg yvr ba na rkgrafvba bs bar bs gur bgure fvqrf bs gur byq gevnatyr. Bgurejvfr, gur pbzovarq funcr jbhyq unir zber guna guerr fvqrf.

Gurersber, gurer ner fvk yvar frtzragf nybat juvpu gur sne pbeare bs n gevnatyr pbhyq or nqqrq, gjb rznangvat sebz rnpu pbeare, be rdhvinyragyl, gjb rkgraqvat rnpu fvqr bs gur bevtvany gevnatyr gb vasvavgl.

Rnpu rkgrafvba bs gur gevnatyr jvyy nygre gjb fvqrf bs gur gevnatyr juvyr yrnivat gur guveq fvqr hagbhpurq. Sbe rnpu fvqr bs gur bevtvany gevnatyr, gurer ner gjb yvar frtzragf jurer rkgrafvba vf cbffvoyr jvgubhg nygrevat gung fvqr, gur gjb gung rkgraq sebz gur bccbfvgr pbeare.

Abj gurer ner gjb pnfrf gb pbafvqre:

1) Gur hanygrerq fvqr vf bar bs gur gjb rdhny fvqrf

Sbe gur 3 naq 4 fvqrf, gur ovfrpgbe bs guvf fvqr vf nyernql jvguva gur gevnatyr, fb vg vf vzcbffvoyr gb rkgraq gur bccbfvat fvqrf gb or rdhny. Gurersber, guvf vf bayl cbffvoyr jvgu gur 5 fvqr orvat hanygrerq, yrnqvat gb bar pnfr.

2) Gur hanygrerq fvqr vf znqr gb or rdhny gb bar bs gur bgure fvqrf.

Rkgrafvba pna bayl vapernfr n fvqr, vg pna'g qrpernfr vg. Gurersber, gur hanygrerq fvqr pna bayl or znqr rdhny gb n fubegre fvqr. Va rnpu pnfr, gurer ner gjb jnlf guvf pna or qbar.

Sbe rknzcyr, gur 3 fvqr pna or znqr rdhny gb gur 4 fvqr rvgure ol rkgraqvat gur gevnatyr nybat gur yvar bs gur bevtvany 3 fvqr be nybat gur yvar bs gur bevtvany 5 fvqrf.

Gurer ner 3 cnvef bs fvqrf gung pna or znqr rdhny guvf jnl (3->4, 3->5, 4->5) naq gjb jnlf gb qb rnpu bar, pbagevohgvat n gbgny bs fvk pnfrf.

Gurersber, gur gbgny vf 7 cbffvovyvgvrf.

Expand full comment

V trg gur fnzr nafjre bs frira, ohg V qba'g guvax lbhe rkcynangvba vf dhvgr evtug? V qba'g unir n cebbs zlfrys, ohg jura lbh fnl va cbvag 1) gung lbh pna'g xrrc gur sbhe fvqr svkrq naq rkgraq gur bgure gjb, V guvax gung'f vapbeerpg? Vs lbh rkgraq gur guerr fvqr njnl sebz gur sbhe fvqr, njnl sebz gur evtug natyr, ol bar havg, lbh'yy unir n evtug gevnatyr jvgu fvqrf bs sbhe, sbhe (sebz guerr), naq sbhe gvzrf gur fdhner ebbg bs gjb (sebz 5).

Be nz V zvfgnxvat jung lbh zrna?

Expand full comment

Part 1 is covering the case where the unaltered side is *not* equal to the other two. The case you mentioned is under part 2.

Expand full comment

You've got it. Nice proof, congrats!

My way of proving the answer starts off very similarly to yours, then goes a bit more informal (but can be made rigorous if necessary):

Nf lbh fnl, rirel fhpprff zhfg vaibyir rkgraqvat bar bs gur bevtvany fvqrf va bar qverpgvba, sbe gur gbgny bs 6 pnfrf. Va rnpu bs gurfr pnfrf, nyy jr pna qb vf rkgraq gur fvqr sbe fbzr yratgu, gura pbaarpg gb gur 3eq iregrk, gur bar abg vaibyirq jvgu guvf fvqr. Ol qbvat gung, jr zvtug or noyr gb raq hc jvgu 3 qvssrerag vfbfpryrf gevnatyrf, sbe 3 cbffvoyr cnvef bs rdhny fvqr-yratguf.

Fb n jnl gb rahzrengr nyy fbyhgvbaf vf fvzcyl gb gel nyy 6 rkgrafvbaf (r.t. zragnyyl be ol qenjvat) juvyr xrrcvat va zvaq rnpu bs gur 3 cnvef va ghea naq 'frrvat' jurgure rdhnyvgl vf npuvrinoyr gurer. Vg'f rnfl gb frr gung rkgraqvat gur ulcbgurahfr vf hfryrff va bar qverpgvba naq tvirf bar fbyhgvba va gur bgure. Rkgraqvat gur ybatre yrt vf nyfb hfryrff va bar qverpgvba naq tvirf gjb fbyhgvbaf va gur bgure. Svanyyl, rkgraqvat gur fubegre yrt tvirf bar fbyhgvba va bar qverpgvba naq guerr va gur bgure. Hfhnyyl vg'f ng yrnfg bar bs gurfr guerr fbyhgvbaf gung crbcyr ner zvffvat, ohg vs lbh sbepr lbhefrys gb nfx "vf vg cbffvoyr gb znxr GURFR GJB FVQRF rdhny?" naq ercrng vg 3 gvzrf, gura lbh pna'g zvff vg.

Expand full comment

I've got something that's not really a proof, it's too ugly. But maybe it'll inspire someone else to come up with something nice and elegant.

Sbe rnpu iregrk, nffhzr gung vg'f tbvat gb or orgjrra gur gjb rdhny fvqrf, naq gur bccbfvgr fvqr vf gur onfr. Jr gurersber arrq gb svaq jnlf gb rdhnyvmr gur gjb nqwnprag fvqrf, naq guhf gur bgure gjb natyrf.

Gurer ner guerr cbffvoyr npgvbaf: rkgraq gur fubegre fvqr njnl sebz gur iregrk gb rdhny gur ybatre fvqr (yrnivat gur iregrk ng vgf pheerag natyr), chyy onpx gur fubegre fvqr gb rdhny gb gur ybatre fvqr (zbivat gur iregrk naq aneebjvat gur natyr), be fjvatvat gur fubegre fvqr bhg hagvy vg'f rdhny gb gur ybatre fvqr (jvqravat gur natyr). Gurfr pbeerfcbaq gb nygrevat gjb bs gur fvqrf naq yrnivat gur guveq nybar, be nygrevat gjb bs gur natyrf naq yrnivat gur guveq nybar, be nqqvat na rkgen gevnatyr gung pbiref bar fvqr, rkgraqf nabgure, naq yrnirf gur guveq fvqr nybar,

Guerr npgvbaf gvzrf guerr iregvprf vf avar cbffvoyr gevnatyrf jr pbhyq nqq, ohg jr pna ehyr gjb bhg gb trg n gbgny bs frira.

Vs na vfbfpryrf gevnatyr unf n evtug natyr, vg zhfg or orgjrra gur gjb rdhny fvqrf (nyy natyrf nqq gb bar-rvtugl, gjb natyrf zhfg or vqragvpny, fb gurer pna bayl or bar natyr bs avargl qrterrf be zber). Gurersber, jura jr jbex jvgu gur evtug natyr, jr unir nyy guerr cbffvoyr npgvbaf, ohg jura jr jbex jvgu rvgure bs gur bgure natyrf (naq nffhzr gung vg'f orgjrra gur gjb rdhny fvqrf), jr pna bayl gnxr gur gjb npgvbaf gung nygre gur evtug natyr, ehyvat bhg bar npgvba rnpu. Gung vf, jr pna'g chyy onpx naq aneebj gur natyr gb znxr gur nqwnprag fvqrf rdhny, hayrff vg'f gur evtug natyr. (Nygreangviryl, jr pna nyjnlf nqq n gevnatyr gung pbiref gur fubeg fvqr naq rkgraqf gur bccbfvgr, be pbiref gur bccbfvgr naq rkgraqf gur fubeg, ohg bayl gur evtug natyr pna nqq n gevnatyr gung pbiref gur ybat fvqr naq rkgraqf gur fubeg.)

Expand full comment

I agree that that's a very nice problem. I coach kids in maths sometimes, and I will be stealing that one as a fun challenge, so thank you!

Expand full comment

Just because I had to look it up to remember: an isosceles triangle has at least two sides of equal length. (And a right triangle has one angle of 90 degrees.)

In which case I'm going to guess six, two for each side. Probably too low on the grounds that the answer is supposed to amaze me.

Expand full comment

Ooh, I forgot about the minimalist triangles. New guess is 9. Probably still too low, but not quite as too low as 6.

Expand full comment

How do you append a triangle to the hypotenuse without it turning into a square?

Expand full comment
May 27·edited May 27

Same way you append to one of the other sides: make one of the new triangle's sides coterminous with a side of the first triangle, and have one of its other two sides extend another side of the first triangle.

Expand full comment

Ah, I see. That makes sense, thanks.

Expand full comment

Take any triangle, draw a line inside it from the most acute corner to any part of the opposite side, bam, you've got two triangles.

Now reverse it; start with either of those triangles, attach the other triangle, bam, you've attached a triangle to the hypotenuse to create a new triangle.

Expand full comment

I’m going to have to get a pen and paper and draw this out.

*scribbles for a bit*

I still don’t get it, both triangles would still create a quadrilateral if attached to the original triangle’s hypotenuse.

Expand full comment

You'll be adding very thin triangles to the hypotenuse.

A triangle is just three points, connected by lines. You can always make a bigger triangle by moving the points further away from each other. That's all adding triangles really is: moving one of its points further out.

Expand full comment

Not if you restrict the movement of the "free point" so that it travels only along the axes of the existing sides

Expand full comment

It seems to me (someone who doesn’t really grok math) that the answer is either 2 or infinite.

If you append a triangle to it and one of the sides of the new triangle isn’t completely coterminous with one of the sides of the first, then you’ll end up with extra sides and it won’t be a triangle anymore. If you append the new triangle to the hypotenuse then it will have at least four sides, so that’s out. So there are only two sides left you can append to, the non-hypotenuse sides. So the answer is two.

But then again, if you change how long the non-hypotenuse, non-coterminous side of the new triangle is then it still works. So you could stretch that side out to any length and this would work, making infinite possible triangles, so the answer is infinity. Times two.

Experience has taught me that my answer is almost certainly wrong, because I always get questions like this wrong, but that’s what I got.

Expand full comment

I got 4 so far, just noodling with the degrees of freedom when you anchor two points and then extend the free point along the only two axes available to you (2 of the 6 options don't get you anywhere b/c things are too acute)

Expand full comment

See, this is what I’m talking about. This whole time I was confusing an isosceles triangle with an acute triangle. So infinite is out.

Wouldn’t there only be two solutions then? The two non-hypotenuse sides, and then the new triangle needs to have a hypotenuse of 5?

Expand full comment

Wait, my kid found another one. We bid 5.

Expand full comment
founding

That's what I came up with as well. If the vertices of the original triangle are at (0,0), (4,0), and (4,3), the solutions I've found put the outside vertex of the new triangle at:

(4,4)

(128/25,96/25)

(8,0)

(4,-7/6)

(4,-3).
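These can be checked numerically. The sketch below is my own, not from this thread: it tests each listed apex together with the two anchored original vertices, and it includes two further apexes, (5, 0) and (4, -2), found by the same side-extension search, bringing the total to the seven mentioned downthread.

```python
from math import dist, isclose

# Original 3-4-5 right triangle, as in the comment above:
A, B, C = (0, 0), (4, 0), (4, 3)

def is_isosceles(p, q, r):
    """True if at least two side lengths of triangle pqr match."""
    a, b, c = dist(p, q), dist(q, r), dist(r, p)
    return isclose(a, b) or isclose(b, c) or isclose(a, c)

# Each entry: (apex of the enlarged triangle, the two original
# vertices it keeps). The apexes (5, 0) and (4, -2) are my own
# additions; the other five are from the comment above.
apexes = [
    ((4, 4), (A, B)),           # 3-side extended up:   4, 4, 4*sqrt(2)
    ((128/25, 96/25), (A, B)),  # hypotenuse extended:  4, 4, 32/5
    ((8, 0), (A, C)),           # 4-side extended:      5, 5, 8
    ((5, 0), (A, C)),           # 4-side extended:      5, 5, sqrt(10)
    ((4, -7/6), (A, C)),        # 3-side extended down: 25/6, 25/6, 5
    ((4, -2), (A, C)),          # 3-side extended down: 5, 5, sqrt(20)
    ((4, -3), (A, C)),          # 3-side extended down: 5, 5, 6
]

for apex, (p, q) in apexes:
    assert is_isosceles(p, q, apex)
print(len(apexes))  # 7
```

As far as I can tell, the two remaining rays (extending the 4-side or the hypotenuse past the origin) yield no isosceles cases, which is why the count stops at seven.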

Expand full comment

What's your prediction that 5 is the right answer? We're at like 70%, given the dramatic buildup in the question

Expand full comment
May 27·edited May 27

There are at least seven. I found one where the equal side lengths end up being irrational. "Meta-amaze" makes me hope that the true final answer is at least two-digit.

Edit: The one I thought was irrational was not but there are still at least seven.

Expand full comment

Share the Astrology Challenge with your fellow astrologists : )

https://programs.clearerthinking.org/astrology_challenge.html

Expand full comment
May 27·edited May 27

I've always thought that there is one simple way that astrology could be proven, if it's true.

Some astrologers say they can occasionally recognize one's astrological features by their face. For example, Scorpios are said to have a Scorpio gaze. Particular planets being in conjunction with the ascendant are said to leave an even clearer mark on one's appearance.

So, all you have to do is average out digitally the faces of a gazillion random people born in a particular month, or born with a particular planet ascendant, and see if the averaged out face looks any different from the control.
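The averaging step itself is trivial once the photos are aligned. A minimal sketch, using random arrays as stand-ins for real aligned grayscale face photos (the data, sizes, and group labels here are all assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for aligned 64x64 grayscale face photos; with real data
# these would be cropped and registered images of 1000 people born
# under the sign in question and 1000 controls.
scorpio_faces = rng.random((1000, 64, 64))
control_faces = rng.random((1000, 64, 64))

avg_scorpio = scorpio_faces.mean(axis=0)
avg_control = control_faces.mean(axis=0)

# If birth month left a visible mark, the averaged faces should differ
# by more than sampling noise; with random stand-ins they do not.
mean_abs_diff = float(np.abs(avg_scorpio - avg_control).mean())
print(mean_abs_diff < 0.05)
```

The hard part in practice would be the face alignment and a proper noise baseline (e.g. comparing two random splits of the control group), not the averaging.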

Expand full comment

Before Diana died she was the subject of any number of astrological profiles, in newspapers and on TV. Entire afternoon TV shows were devoted to this, and there was a yearly round up on some tabloid TV show at Christmas. She herself was into astrology of course, which lent some cachet to the affair. Maybe she even watched?

The yearly round up in 1997 was interesting. They didn’t quite predict the death but they did predict life changing events, and changes in relationships during the year, which only a cynic would deny was exactly what happened.

That they predicted in 1996 that she and Dodi Fayed would be together by the end of the year, was not challenged by the demise of both, but rather confirmed by their being together “in heaven”.

And the guy who predicted bells and churches in her future was the hero of the hour - though I think it was pretty clear in the review of his video that he was suggesting a wedding, but as he wasn’t specific (it pays not to be in that game) the funeral sufficed to confirm his prophetical talents.

Expand full comment

If you look back through your life with the correct attitude you can probably find a life-changing event in every year, with few exceptions. It is difficult to show vague predictions didn't come true. The 1996 prediction would have needed clarification; I doubt anyone, at the time it was made, would have counted it as fulfilled if either or both had died.

Wording can be interpreted according to the listener. https://www.smbc-comics.com/comic/2011-09-18

Expand full comment

https://ydydy.substack.com/p/dear-jews

I know that doing the same thing over and over again while hoping for different results is crazy, but is it really so crazy when I legitimately don't know what else to do?

I think that Israel and Jewry's leaders are driving my tribe, and potentially the whole planet, off a cliff and that I'm more capable than any other public figure to stop it.

But as an outsider who really really hates the world of competition I have no idea who to appeal to after I've already tried the one person who was APPOINTED to clear my way to the podium but who found the power he craves by being a well-oiled cog in the Reigning System instead.

So I'm posting it here. If you're Jewish - or if you're not but think that the Jews are fucking up big time, with DEFINITIVELY fatal consequences for tens of thousands of their neighbors and potentially fatal consequences for us all - please read my letter and watch the accompanying videos.

I definitely need help here because I lack both the social and executive skills necessary to be my own publicist. Thank you.

https://ydydy.substack.com/p/dear-jews

Expand full comment

Hey, I meet your criteria so have read your linked post (not the paywalled manifesto linked from it, sorry) and watched the first two thirds of your video (not the promised "kill everything that moves" conclusion, sorry). And I feel baffled about the same thing with your position that I do with about 80% of such takes--how can you possibly peacefully establish and maintain an ethnostate in a diverse region? In your video, you describe the large minority population in Israel as a demographic "problem," in the sense of existential threat to Israel-as-a-Jewish-state. Which it is! And how can Israel defend against it in any remotely ethical way? The closest you can come to squaring that circle is to say "well, we just need to really credibly *threaten* violence forever and then people will have no choice but to accept a state that sees them as a problem controlling everything from the river to the sea." That's both not actually nonviolent and not actually possible--threats only work if you actually carry them out sometimes.

Your three-point plan is "1. Lay claim to all the land, and only the land, we consider rightfully ours. 2. Give peace a chance. 3. Kill everything that moves." That's not a plan. That's three mutually incompatible options. Israel can't tactically accomplish both 1 and 3, and either of them would make peace impossible. I choose 2. Give peace (and Jews, and Jewish culture) a chance, and recognize that that means giving up control.

Expand full comment

I'm not emotionally invested in this, but I also don't feel baffled by this.

You cannot be "peaceful" and claim the land simultaneously. To claim territory as a sovereign state *is* to threaten violence. It's called the right of conquest. The fact that it was "outlawed" in the wake of WWII is oxymoronic. But Israel (and by extension, the international community) wants to have its cake and eat it too, by asserting its sovereignty without getting its hands dirty. To keep the peace is to assert power. And all power flows from the barrel of a gun. Until Israel realizes that "might *really does* make right" (at the level of statecraft), it will remain in its current quagmire.

Israel is stronger, so Israel is in the driver's seat. I see 3 possible solutions [0]: A) evacuate Israel of Jews; B) annex Gaza and subdue the Gazans; C) annex Gaza and export Gazans to somewhere else. They are all quite unpleasant, and I expect much wailing and gnashing of teeth. But any one of them is probably preferable to letting Hamas and the IDF brawl until the heatdeath of the universe.

[0] technically there's also: D) the final solution, though I assume this one is *especially* off the table.

Expand full comment

Hi, thanks for your question. First of all, which page is paywalled? Email me and I'll send you a free week or month so that you can read it.

In short, so long as we are seeking no better than "compromise" (which realistically means that the strongest side wins) we are doomed.

My goal is the messianic ideal as described by the anonymous prophet appended to Isaiah, which I read word for word in the original (Hebrew only) https://youtu.be/TX3-Qv2kh6s

And which I subsequently read, translated and explained in depth: https://youtu.be/Of55eQ1j4h4

In that world we are all on the same team and conflict will be viewed as a bad dream from the benighted past.

Also, evidence that I don't actually intend to conquer but only believe in stating a maximalist starting position as a show of pride (and which I grant to THE OTHER SIDE AS WELL as a mark of my respect for their pride) is the video I just uploaded addressed to Egyptians, a people _very paranoid_ about the idea that Jews are seeking control over all land between the Nile and the Euphrates (as an Egyptian commenter brought up in the comments).

https://youtu.be/4AW2jr35bes

I'm impressed that you bucked the trend and actually took the time to consider something new. That's uncommon everywhere.

Be blessed fam.

Expand full comment

> I definitely need help here because I lack both the social and executive skills necessary to be my own publicist. Thank you.

This, and your manner of communication, make it essentially impossible for you to accomplish these particular things you want. I'd recommend focusing your efforts towards other goals.

(Disclaimer: I may be somewhat biased, since I disagree with many of your views, but I believe I would give the same advice regardless.)

Expand full comment

Harassing (such as you do) is bad, but doxxing is good. EVERYONE should be doxxed. Let people say what they want under their true names and resumes.

To do otherwise is pure cowardice.

Expand full comment

Maybe you should give yourself more time to think about other things to do.

Expand full comment

Good advice. Thanks Nancy. 🙏🏻

Expand full comment

How do you know that you’re “more capable than any other public figure to stop it”? That sounds like something someone with delusions of grandeur would write, which makes me not want to read your post because the writings of delusional people are almost always a waste of time.

Expand full comment

On a separate subject, why does online conversation suck so bad? Is it the anonymity?

All FLAB had to do was read the letter. But instead he proudly announced that he won't read the letter to find out the answer to his question because it isn't included in the posted paragraphs.

In person no one but a child would make noise for no reason but to make noise, but online even people who can compose grammatically correct paragraphs do so. It really makes the whole endeavor of serious online discussion towards a goal nearly impossible.

My expectations here are so low that I won't be following up on comments here at all.

If anybody actually wants to reach me you know how to find me. I'll leave the space here for circlejerkers. I wish I didn't have to, but in an arena where most responses tend to be troll responses it's the only sane policy.

Again, if you are the solitary helper amidst the naysayers and trolls please get in touch. Your help will be invaluable.

Expand full comment

Let me try to put this nicely, because you seem to be genuinely clueless as to how you're coming across.

If I read everything that others want me to read, I'd do nothing but reading all day long. The only thing I know about you is that you have annoyed everyone here with your incessant linking to your own blog (it wasn't 1-2 times, more like a dozen+). And now you're using the conspiracy-theorist's move of "I won't bother arguing with you until you watch this 4h youtube video laying out my theories". No thanks. If you want to be listened to, provide value first, make others *want* to hear what you have to say. Engage in the community, as opposed to seeing us as a recruiting ground for your cause and nothing more.

Expand full comment

Hey Vitor, setting aside the content of your response which I could quibble with but would rather not so that I can mull it over more fruitfully, I'd like to say that your response is both nice and appreciated. 🙏🏻

Expand full comment

Well, you know what they say. If everyone you meet is an asshole...

Expand full comment

That is not the case. Anonymous online personas are something else and more a bug in the system than a sign that somebody is actually an asshole.

Expand full comment

Delusions of grandeur confirmed. I’m glad I didn’t waste my time, thanks for the helpful reply.

Expand full comment
May 28·edited May 28

No, I do recommend you read the letter.

It's the most hilarious thing you'll see this side of The Most Interesting Man In The World ads.

Our friend here is seriously running for God-Emperor of the Galaxy: he wants to be priest-king *and* any other little unconsidered trifles of titles to grant him ultimate power:

"I’m not a political man and I don't know how to push my way to the front but I am running for Israeli Prime Minister, Gadol Hador (religious leader of Jewry), and whatever other ugly title will be necessary to actually take control of the reigns before the whole world is thrown galloping off the cliff."

And why wouldn't we grant him ultimate power? He is the boss of pretty much everyone you ever heard of. Such modest ambitions, after all; he's not asking to be made Pope and Holy Roman Emperor on top of the Israeli jobs (but of course, if anyone is offering...)

The last time I read this much aggrandisement was in the preface of a series of pastiche novels, where Our Author claims to be related in the nth degree by descent on both male and female sides to everybody from Charlemagne on down.

Expand full comment

You're a really nasty person.

Expand full comment

I did read the letter. It contains no justification whatsoever of the claim that you're "more capable than any other public figure to stop it", or even what "it" is.

Expand full comment

QED

Expand full comment

troll

Expand full comment

You are the guy who is constantly posting on here about how you are the most important person in the world, because apparently only you know the Real Truth and only you can stop whatever the fuck is going on with Israel.

May I suggest you stop wasting your more precious than diamonds time on here and go back to Israel and stop the disaster, if you're so wonderfully capable of doing it and only you can?

Expand full comment

IFS practitioner DaystarEld comments that people don't in fact have demons in them, and also "another minor thing": IFS doesn't involve a trance-like state. And Scott summarizes this as the second thing without mentioning the first thing. Funny.

Expand full comment
author

I considered the "demons" claim to be the intentionally controversial part of the book, and I tried to address that controversy. It's more of a problem if I described the therapy wrong and confused people about the basics.

Expand full comment

The demon thing seems exceptionally foolish to me. Rage, hatred and the conscious desire to harm and to destroy are not rare. You do not need to posit supernatural entities to explain them. You do not have to think of them as primitive impulses deep down that people normally have no conscious awareness of. Have these shrinks doing exorcisms listened to popular *music* ever? There are whole genres devoted to rage and destruction. I have heard many bitterly unhappy people express a wish to smack somebody (sometimes me) around until they finally grasp something, to torture somebody til they suffer as deeply as the speaker does, to blow up the whole world because it's a pain machine. Some were patients (I'm a psychotherapist), some were friends. And speaking of patients: I've seen a number of guys in their 20's and 30's who were the weird smart friendless kid in high school. All were doing decently in life by the time I saw them, and some were doing well. Every single one of them eventually told me that during their miserable high school years they fantasized about shooting up their school. None of them ended up doing any violence at all, beyond very minor things.

Expand full comment

To second what Nancy said, it seems like the problem isn't having violent impulses, it's having something in you that claims to be a separate evil entity. It doesn't seem like the book's author claimed that all bad desires come from demons, or even most bad desires come from demons: he's claiming that sometimes there really is an external evil entity that possesses someone. His reason for this is direct experience with such entities. They could be misleading experiences for all the reasons Scott outlined in his review, but it is entities that he's talking about: he's not explaining evil tendencies by theorizing demons, he's explaining hearing people say (in the context of IFS) that part of them is a demon by theorizing that it's a demon.

Expand full comment

The "demons" just seem to me like a more extreme and primitive version of the kind of rage I'm talking about. For instance the guys who fantasized shooting up their school when they were miserable 15 year olds experienced themselves as immensely *powerful* in the fantasy -- they were killing or terrifying everybody, they enjoyed making them suffer, they had liberated themselves from convention and morality, the whole world was going to know about them. Their state of mind in those fantasies seems to me to fully meet the demon requirements: they're hugely powerful, they want to kill everybody, they will laugh as they suffer. The Shooter self they fantasized was an exaggerated version of a way they thought of themselves in daily life. They built a little self-esteem shelter out of the idea that they were smarter than everyone else, and would someday be rich and powerful and famous.

Expand full comment

Well, I think the reason the rage and hatred comes out as evil entities is that the treatment hinges on the idea that the person's psyche is a bunch of entities. The therapist has promoted the idea, and also, of course, they are working with someone who is willing to buy it, and in fact likely came to the therapist because they had already bought into that idea.

Many therapists, including me, find ways to explore the same material without introducing the mob o' entities framework. For instance, consider the example Scott mentioned of someone who since a painful breakup instantly rejects everyone they date. It's not generally very hard, if you talk over the details of a date and how the person was thinking, for the patient to recognize on their own that they are being unreasonably critical. And a while later, you might say something like "could it be that you're afraid of falling for somebody again, because it could lead to pain like you felt during your breakup with X?" And many people will think it over and conclude there's some truth in that. And some might go on from there to talk about their rage during the breakup, how badly treated they felt, how unfair life is, the savage retorts they'd like to give to people who minimize how big a deal their breakup was -- and there's a glimpse of the "demon," right?

Expand full comment

The problem seems to be that what you describe is not the demon: it's just IFS working normally. The demon "problem" comes when you can't untangle that part of you into something like "I lash out because I'm afraid of X" or "I want to kill people because of how vulnerable I felt when Y happened" and instead you're stuck with a part of someone that keeps saying "I'm doing this because I want to hurt (client name), and I want to hurt other people. I'm not a human, I'm a powerful spirit and you are all worms to me. You deserve to die, and (client name) deserves to die, and I will laugh as you suffer." And for the book's author (I can't remember his name, Falcon-something?) he's concluded that these things are fundamentally different than the normal stuff he runs into, and instead of untangling what it's about the best thing to do is to get rid of it entirely.

Expand full comment

The thing about demon-like entities and IFS wasn't having malign parts, it was malign parts with a distinctive way of refusing to negotiate.

Expand full comment

But haven't you ever felt that way? You come across as quite fair-minded and even-tempered, so maybe you don't. I think I'm about average in how prone I am to recalcitrance, and it's not unusual for me, when someone's trying to change my mind, to feel like telling them STFU, and/or to feel like attacking them verbally. I don't often sink so low as to act on the impulse, but it's definitely there.

Expand full comment

What you see online is a rather edited version of me-- I've got a good bit of anger and stubbornness. However, there isn't much that seems as extreme as was described in _The Others within Us_.

Expand full comment

Scott doesn’t believe they have demons either, so it makes sense to focus on the point of disagreement between them.

Expand full comment

The fact that not all of them believe in the demons was pretty obvious even without one of them actually saying it.

Expand full comment

(1) "“Short Women In AI Safety” and “Pope Alignment Research” aren’t real charities"

As a short Catholic woman, I'm feeling disrespected here 😁 Why is it only the leggy lovelies who can participate in AI safety? And if you don't align your popes right, you get anti-popes and then we get schisms and guys everywhere claiming to be the real pope all at the same time:

https://en.wikipedia.org/wiki/Western_Schism

(2) Re: the IFS link -

"That is to say, some people who are not-murderers have a part that wants to murder, or even rape or torture, others. And they just... do a better job integrating, negotiation, suppressing, sublimating, or inhibiting these parts than the people who DO end up acting on those parts.

Which, I expect, is a fairly scary thing to notice about one's self!

...Unless, maybe, their client is steeped in a heavy religious culture already, insists that demons are real, etc. In that case I'd understand the therapist sighing and rolling up their sleeves and "meeting the client where they're at" and calling it a demon, but for me this still sits badly."

I'm going to suggest that if the client is "steeped in a heavy religious culture already", you don't need to invoke demons because they already (should) have a sense of sin, so when you talk to them about the part(s) that want to murder, torture, rape and so forth then it should (ideally) work out at "yeah, I get you, it's original sin and our fallen nature at work". Meeting the ugly parts of ourselves may be a shock, but if you have a religious context for it, then it's not necessary for it to be demons.

I think the problem *is* lacking the former cultural religious background where sinfulness is assumed, so meeting the nasty parts is, as said, a very scary thing. And of course nobody wants to think they've got the horrible mean parts in their own nature. But nowadays, talking about sin is nearly offensive, so maybe people would prefer to shove it all off on exterior forces - I'm not really like that, I can't be, so yeah demons of course!

Expand full comment

Yeah, "every person is naturally good, this aspect is not good, therefore this aspect is not part of the person" seems like a valid chain of logic, given the premises. Certainly better than "every person is naturally good, this aspect is part of the person, therefore this aspect is good", which I think we also see a lot of.

Expand full comment

Of course TLP had said everything needing to be said on the topic, how could he not 😉

https://thelastpsychiatrist.com/2012/06/amy_schumer_offers_you_a_look.html

Short summary: assigning bad actions to a separate “not me” part absolves the “real me” from the action and its consequences, therefore ensuring the continuation of the behavior.

Expand full comment

It's been a while since I read that one.

> "They made me practice piano an hour every day!" as if the fact of practice was the whole point; what they did not teach you is to try and sound better every practice.

Ouch.

> Every time you crowdsource the superego a piece of you is split off as bad keeping the rest of you intact as good. "I'm not a bad person, I just did a bad thing."

Ah, there it is.

> But if I'm permitted I'll offer you one final prediction, you'll either take this as a warning or remember that you don't believe in all this crap: *if you are looking for the perfect climax but have no knowledge of the resolution, if you do not write your story towards an ending,* then your life will default to the one ending that will terrify you more than any other possible: "He could not refrain from going on with them, but it seems to us that we may stop here." *It is inevitable.*

That's a really good one.

Expand full comment

"And if you don't align your popes right, you get anti-popes and then we get schisms and guys everywhere claiming to be the real pope all at the same time"

There is nothing inherently wrong with having multiple guys claiming to be the real pope at the same time. We have, in the recent past, had multiple college football teams claiming to be the national champion (e.g. 2003 season when USC and LSU both claimed to be national champion) and nothing bad happened.

Anti-popes can even become saints -- Hippolytus of Rome being an example (maybe the only example, but still ...)

I believe that aligning popes is unnecessary in the modern era and we should acknowledge this.

Expand full comment

The problem comes when you get a pope-antipope collision.

Expand full comment
May 27·edited May 27

Depending on the collision energy that could create some cardinals and anti-cardinals. Plus a lot of charged anathemas!

Expand full comment

Some models even predict the formation of transfinite cardinals, but it isn't clear whether or not such cardinals can be well-ordained.

Expand full comment

Well-ordained, but what's their spin number?

Expand full comment

Improperly aligned popes lead to confusion and the French getting uppity.

That Americans can't even sort out their college national championships surprises me not at all, you guys have "World Series" where only teams from your own country compete, not even from the other countries that play the same game 😁

Expand full comment

Not true! The World Series represents both parts of the world: The United States and Toronto.

Expand full comment

I'm surprised they haven't managed to add a Japanese team by now.

Expand full comment

There's something called the World Baseball Classic that takes teams from all around the world. But I don't know how it works.

Expand full comment

It probably says something about something that the Toronto Blue Jays are part of the American League in MLB :-)

Expand full comment

"That Americans can't even sort out their college national championships surprises me not at all, you guys have 'World Series' where only teams from your own country compete..."

Sadly, a number of folks decided to "fix" college football, and for about 20 years there has been an undisputed national champion. I think the game is worse because of the related changes needed to do this, and that a lot of what made college football its own thing (e.g. bowl games with specific charters) rather than just a minor league to the NFL has been lost :-(

I fully expect some sort of metrification to be proposed next. Probably by ESPN.

Expand full comment

Sounds like something an anti-pope would say!

Expand full comment

I have an essay about the philosophy of murder ballads. Does anyone know any good ones? https://open.substack.com/pub/wollenblog/p/the-philosophy-of-murder-ballads?r=2248ub&utm_medium=ios

Expand full comment

Blackwater Pass by Liam McKahey - https://www.youtube.com/watch?v=R1j_jBQK5yI

Live Oak by Jason Isbell - https://www.youtube.com/watch?v=1cPuRzELO0I

More contemporary:

Yvette by Jason Isbell - https://www.youtube.com/watch?v=EoMlvXfIajQ

Expand full comment

Not exactly murder ballads, but the Prisoner on the Gallows songs (for lack of a better term) are an interesting mix. If you're familiar with Led Zeppelin's "Gallows Pole" or Cara Dillon's wonderful "The Streets of Derry," you get the drift.

https://en.wikipedia.org/wiki/The_Maid_Freed_from_the_Gallows

Expand full comment

Does Pumped Up Kicks by Foster the People count as a murder ballad?

Expand full comment

Between the River and Me. I guess it's a Tim McGraw song? Pretty sure I first heard a cover version of it. https://www.youtube.com/watch?v=NzGiU-WX-CE

Expand full comment

That’s the Night the Lights Went Out in Georgia.

Expand full comment
May 27·edited May 27

Not a murder ballad person (perhaps Cell Block Tango counts, or Chicago in general?), but the point about the music mattering more than the lyrics is interesting. Reminds me of the Japanese tradition of upbeat pop songs and accompanying lyrics about feeling useless in a world where you must succeed or be deemed a parasite on society, often with a heavy undercurrent of suicide. Lost One's Weeping, Tokyo Teddy Bear, Chasing the Night, Young Girl A (avoid looking into the details of that one, if you don't want to be heartbroken), Dance of Corpses, Phony, Absolute Zero, Sayonara Princess...

In this case you can disconnect yourself even further from the lyrics and just enjoy the song. There are other catchy songs about general-purpose melancholy (Planet of Sand, Lagtrain), abusive relationships (Maiden Dissection, Romeo and Cinderella, Ransou Metsuretsu Girl), being H.P. Lovecraft (Senbonzakura, which should've been titled AIEEEE NEW THINGS ARE SCARY), or getting your identity revealed by New York Times journalists (Magical Girl and Chocolate). I'm not that deep into it, but it seems to be a fun community to get into.

Expand full comment

I don’t see Folsom Prison Blues named yet.

Expand full comment

I was turning that one over in my mind. Yeah, it fits. If we want to stretch things to opera, I am sure there's an aria based on "Is this a dagger which I see before me…" in Verdi's Macbeth.

Expand full comment

Yeah, I was on the fence. The singer tells us he shot a man in Reno just to watch him die but that’s in the past tense.

Expand full comment

It's maybe slightly outside the scope of the question, but does The Cask of Amontillado by The Alan Parsons Project, based on the Edgar Allan Poe story of the same name, count? It's a very slow murder, but a murder nonetheless.

https://www.youtube.com/watch?v=vT0YZLES8DM

Expand full comment

I guess one could make the argument that "Ode to Billie Joe" is a murder ballad. It's a matter of interpretation.

Expand full comment

Tom Dooley of course.

Also “Running Gun” and “El Paso” by Marty Robbins.

“I Hung my Head” sung by Johnny Cash is really great.

“The Long Black Veil” by Lefty Frizzell is kind of a murder ballad with a twist.

“Down by the River” Neil Young..

Expand full comment

I once heard one with a lyric that has stuck with me: "I'm gonna kill one of us, babe, and when I'm sober I'll decide on which." Googled around just now and could not find the song. Anyone recognize it?

Expand full comment

My impression is that most murder ballads are in third person, and have a neutral tone toward the murder. It's simply a thing that happened.

Am I approximately right?

I offer Lord Randall as an unusual example of a ballad where the soon-to-die person does part of the singing.

Expand full comment

I'm thinking of the Child ballads. More recent murder ballads could be quite different. Delilah certainly is.

Expand full comment

I'm trying to think of what "murder ballads" I know.

"I Hung My Head" is about an impulsive murder and the remorse that follows (as the consequences catch up with the murderer).

"Smoking Gun" is about a jealous boyfriend murdering his cheating girlfriend, where the term "smoking gun" changes meaning from figurative to literal as the song goes on.

"Run for your Life" is a Beatles song where the narrator is threatening to murder his lover if she is unfaithful to him.

And, of course, "Folsom Prison Blues" has the narrator sitting in prison, lamenting the fact but revealing at the end that he's there for killing a man in Reno just to watch him die. (Train whistles make him sad.)

Expand full comment

"Highway Patrolman" has the brother of a murderer pulling over and letting his brother flee the state. The likely murder takes place offscreen.

"Do it Again" has the narrator murder someone in the beginning but kinda get away with it: "But the hangman isn't hangin, so they put you on the street...."

There are a bunch of songs by Sabaton set in wartime, where a lot of killing is happening but mostly it's not murder in the legal sense. But I suppose "The White Death" would qualify as a pretty cold-blooded set of killings by a sniper in wartime.

"Gimme Three Steps" is a song by a guy who is trying very hard to avoid being murdered for dancing with the wrong fellow's girl.

"I Shot the Sheriff" is pretty vacuous, IIRC, but does have an admission to shooting a lawman (but not his deputy).

"Dirty Deeds" has a narrator who's offering to do murder for hire.

Expand full comment

Does Tom Lehrer's Irish Ballad count? It's satirical, but definitely a ballad about murder.

Expand full comment

Well, then, what about "I Used To Love Her" by Guns n Roses?

Expand full comment

Oh, and J J Sneed, by Dolly Parton. Not satirical.

Expand full comment

"Knoxville Girl"

"Hey Joe"

Expand full comment

You beat me to Hey Joe.

Where you goin’ with that gun in your hand?

Expand full comment

You ever heard Lee Moses’ version of it? It’s pretty great.

Expand full comment

Not till now. Good one, thanks!

Expand full comment

Yeah, it’s good, right? You’re very welcome.

Expand full comment

There is, of course, 1996’s album Murder Ballads by Nick Cave and the Bad Seeds.

Expand full comment

If you only listen to one song off the album, listen to Where the Wild Roses Grow.

Expand full comment

I invite comments on my latest Substack post https://thomaslhutcheson.substack.com/p/fiscal-policy-and-everything-else

Expand full comment

Maybe you'd get more readers if you provided a short summary here.

Expand full comment

That is true. I'll do it with the next post! :)

Expand full comment

>taxing them and using the tax rate as a shadow price for any non-tax regulatory interventions

...what the hell does that mean? What's an example?

>“Trickle Down” implies that at least _some_ additional wealth is being created

It's called “trickle down” because that's the name it's had since Reagan. Objecting to its name is like objecting to calling a trial defendant “Mr. Goodman” because you don't think they're a good man.

>we need to make the personal taxes more progressive

By this I take it you want to increase minorities' contributions to the tax collection process, because that's all “progressive” means anymore.

>(not “Obamacare” as it was in fact a very clever idea of the erstwhile serious Heritage Foundation)

Oh, so this is a different policy entirely? Must be, considering you saw fit to differentiate it from Obamacare without any quotes calling it Obamacare.

>Business income corporate or otherwise is income of its owners.

Now you're trying to destroy the idea of corporations, which is not going to fly. Holding it against a speaker that they didn't tank their career by arguing for the impossible is not a good look.

>Whether its shoplifting or cheating on your taxes, the best way to prevent it is certainty of apprehension.  This is one of the best things Biden has done and it infuriates Republicans.

How ironic, considering how rampant shoplifting is. https://capitaloneshopping.com/research/shoplifting-statistics/#:~:text=In%202022%2C%20shoplifting%20losses%20grew,cost%20retailers%20%24461.86%20in%202020.

Why do your italicized "fi"s look like Greek letters? Copy-paste just turns it back into "fi", but that's definitely a Greek "phi" or something. (This is the most notable of the spelling errors, but not the only one.)

Expand full comment

"Whether its shoplifting or cheating on your taxes, the best way to prevent it is certainty of apprehension. This is one of the best things Biden has done and it infuriates Republicans."

I didn't read that far, or at least not with sufficient attention to detail. So.... Biden is tough on crime, tough on the causes of crime - I mean, cracking down on shoplifting - and Republicans *don't* like this? They *approve* of shoplifting? They don't want poor innocent little shoplifters arrested, charged, and brought to trial?

I had no idea Chesa Boudin was a solid Red GOP staunch limb of the Republican Party!

Expand full comment

I think expending resources cost-effectively to prevent crimes is good policy, whether the crime is shoplifting or tax fraud. I disagree with Republicans who do not want to do this for tax fraud.

Expand full comment

Okay, you're gonna vote for Biden in the election. And you think this Brainard person didn't go hard enough on that? That's about what I got from the post. Taxes bad! Or good? I couldn't really figure out what you were trying to say, apart from Republicans bad.

My confusion might have been helped if you had told me who Lael Brainard is. I had no idea whether it was he, she, or they before you started using "she," and I have no idea who they are, what kind of expert (if any) they are, or why I should be impressed that they gave some talk. A basic introduction for those of us not up on who went to MIT would be very helpful.

Expand full comment

A very useful comment. Thanks.

Expand full comment

I would genuinely be glad if it really is helpful. Perhaps those who read your Substack are intimately familiar with who this person is, but the assumption should be that for the general idiot in the street (such as myself) a couple of lines of introduction about who Jesper Jacquard Walloping-Windowblind is and what he does and what is the big deal anyway will go a long way towards helping make your point.

Expand full comment

Precisely; the Substack is written for an audience, more implicitly than explicitly, and you pointed out just how not-for-everyone that is.

I did notice that another Substacker did just that, add a few lines of introduction.

So, again, my thanks.

Expand full comment

Sure. Three (hopefully) helpful notes:

#1 You meander a lot. Your first 6 paragraphs have no obvious connection with the meat of the essay, which is commentary on a Lael Brainard talk you saw. Get to the meat.

#2 Constant potshots at Trump and Republicans turn off right-wing readers. Not sure if this is intentional; it might be helpful if you're trying to niche down to a specific left-wing audience, but you close yourself off from half the total spectrum with this. If nothing else, it's overly repeated and feels like filler.

#3 You have elements that...really need more development. For example:

"Tax capital gains as ordinary income but a) index the gains for inflation and b) use the taxpayer’s average marginal rate over the holding period. No rebasing on inheritance, but no tax until realized."

That is a wild restructuring of the tax system with potentially massive consequences. It deserves perhaps more than two sentences of explanation and justification.

So yeah, in terms of what was good: I thought it was a good topic; talks by senior officials with big consequences are interesting and deserve commentary. The actual commentary wastes too many words on things irrelevant to the essay or actively distracting, while leaving the most interesting parts woefully underdeveloped.

Hope that helps.

Expand full comment
May 27·edited May 27

I'm reading Temple Grandin's "The Autistic Brain" and was wondering how well it holds up 11 years later.

She seems to be all-in on the idea that a biomarker for autism, or at least for its major symptoms, was just around the corner (either via neuroimaging or gene sequencing) and that we would be able to do without the dreaded behavioural observation/interviews as a diagnostic tool. (I'm halfway through the book and still not sure what exactly the problem with them is.)

11 years later, how much of it came true? How much is just flat wrong?

Expand full comment

I'd stumbled across a study (discussed at https://www.psychiatrymargins.com/p/traditional-dsm-disorders-dissolve) that gathered data on symptom comorbidity by randomizing the order of the questions (so existing structures didn't bias the outcome) and the clusters found didn't align with the DSM categorization.

Many caveats apply: online survey, the DSM is meant for purposes beyond disinterested description, the categories were made for man not man for the categories, &c.

But it's at least suggestive that "autism" isn't a unitary thing, even beyond the live controversy around rolling Asperger's into ASD, or the whole concept of it being a spectrum in the first place. If the new clusters better cleave reality at the joints, then improving identification of autism would be a snipe hunt.

Expand full comment

thats a really interesting study, thanks for sharing!

Expand full comment

The single thing most likely standing in the way of finding a legible biomarker is the expansion of the diagnosis to borderline and functionally asymptomatic cases. The evidence has been spoiled, maybe for good.

Expand full comment
May 27·edited May 27

This was 10 years away 10 years ago and is 10 years away today, is a bit of a dry way to put it. It'll be 10 years away in 10 years, and was 10 years away 20 years ago...

Some context is needed about the history of medical genetics. People dramatically underestimated how polygenic everything is. By the early 2010s this was improving, but it was still really bad (e.g. the Promethease/SNPedia era was in retrospect a huge pile of mistakes). In retrospect, we'll find we're still doing so today. 2000s autism research had hilariously low numbers for "how many genes influence autism likelihood", and expected to find all of them soon. This wasn't an autism-specific problem, but just generally the case for all of medical genetics. All the biomarkers were "right around the corner". They still are, but they were even more right-around-the-corner then, too.

(If you go back even further, people make the same mistake on the chromosomal level -- researchers in the 1960s and 1970s karyotyped people for *everything*.)

We know much more about the genetics of everything than we once did. If one is capable of learning and internalizing lessons, "we have an incredible track record for hyper-underestimating how complicated this is" seems like it should be suggestive by now.

re. behavioural observation, it's tricky in the sense that "autistic behaviour" is tricky and subjective. Some forms of this claim (e.g. that autism presents differently in women) are popular, though not necessarily accurate. I increasingly suspect that perceiving autism as a childhood disorder is entirely the wrong end of the stick, and that "how autistic" someone is doesn't tend to be too easy to narrow down before adulthood. Most discussion of this is pretty bad, though, in that it gets locked into "how different is autism in X compared to Y?" loops.

Expand full comment

I'm a psychologist, but have not been following the research on this topic. Still, the info I have run across mostly weighs in the direction of no. There's a lot of reason to think that people labelled autistic have some brain damage: they have lower average 5-minute APGAR scores than non-autistics, there were more complications during their births, and they have more comorbidities, including neurological and growth disorders. See this 2023 Translational Psychiatry article, for example:

Khachadourian, V., Mahjani, B., Sandin, S. et al. Comorbidities in autism spectrum disorder and their etiologies. Transl Psychiatry 13, 71 (2023). https://doi.org/10.1038/s41398-023-02374-w

It does seem to me that there exists a syndrome whose notable features are high intelligence, lack of sociability, and a certain rigidity, but I'm dubious that this condition is the mild end of the same spectrum that includes people who are autistic in the old-fashioned sense (non-verbal, crouching in the corner staring at the shiny object they're twisting for hours).

Expand full comment

I'm the "high intelligence, lack of sociability, and a certain rigidity" type and my birth had complications: I got strangled by the umbilical cord. I didn't start talking until after two years old. My girlfriend has basically the same story. And in general this doesn't appear to be rare; a lot of "high-functioning autists" had this kind of birth complication and are late in developing speech.

Also, I suspect that the causality of being strangled -> autism can run in the opposite direction: autistic embryos twitch too much, and this is why they so often strangle themselves with the umbilical cord.

Expand full comment

Obstetrics and brain damage is not my area of expertise, but my intuitive take is that if you have brain damage from hypoxia caused by the strangling cord, you'd have other problems as well -- cerebral palsy, for instance. Also, I believe doctors are usually able to intervene quite fast when the cord's strangling the baby. I don't think strangulation can occur while the baby is in utero, because the other end of the cord is attached to the placenta, which is right next to them. I think the strangulation happens when the baby moves down the birth canal, leaving the placenta behind. And at that point the obstetrician can see the strangling cord if the baby's neck is out. And if the neck's not out, I think the fetal heart monitor would alert them that the baby was starved for oxygen and the doc could quickly do a big episiotomy and pull the baby's neck into view. But I'm not sure of any of this -- might be worth checking with an expert if you'd like to know for sure how likely it is that the cord around your neck led to brain damage.

Expand full comment

I *have* been following the research on this topic. I said nothing about "high intelligence".

I believe in the autism Kanner and Asperger saw (which was the same autism), which has a pretty broad functioning range and a piss-poor correlation between child and adult outcomes (Kanner's highest-IQ patient was significantly disabled in adulthood). Autism as it currently exists is a wastebasket diagnosis for just about all childhood disability, which includes many forms of early-life brain damage and genetic syndromes. These do not look like what Kanner or Asperger saw when you drill into the phenomenology of it. Under current diagnostic rules, "global developmental delay" (which is clearly idiopathic ID, and clearly does not even a little bit resemble autism as a distinct neurotype, which amongst other things has very *non*-global developmental delay) usually turns into an autism diagnosis later, which is a dead giveaway that there's something very wrong with autism diagnosis. I like Mottron's shibboleth that "if there were motor delays as bad as the speech delay, it's not autism", though for various reasons this often breaks.

Expand full comment

There's also an issue from the practitioner side: there aren't many good conditions to label a child who clearly needs additional support. Many child psychologists are in it to help the individual, and if diagnosing the child with autism gets the school to stop causing unnecessary misery to said child, they'd do it regardless of whether the kid is "really" autistic.

It's noble on an individual level, but then the entire category suffers. Maybe we should invent a new catchall term for "psychiatrist says to be nice to this kid" or something for the cases that aren't neatly one thing or another.

Expand full comment

This is like Scott's comment about not wanting to be doxxed. Is there a special reason we should stop making your kid miserable, or is it just the normal amount of bad to make your kid miserable?

Expand full comment

Is it likely that brain damage is so common?

Expand full comment

Well, it could be pretty mild or minimal. But if a neonate starts off with some batches of brain cells gone, that could make a big difference in outcome. Whereas if an adult loses a batch to a concussion they might never even notice the difference.

Expand full comment

Now that I think of it, I recall reading an argument that there are two "kinds" of low-IQ people, who are affected quite differently even when they have identical scores. E.g., a person with an IQ of 80 because they just happen to be on the left-hand side of the bell curve is reasonably functional, and mostly indistinguishable from an average-IQ person. But someone whose IQ is 80 because of a single factor that reduces their IQ is likely to be significantly impaired and might not be able to live by themselves.

Is that a correct argument? Could the distinction here be similar?

Expand full comment

This sounds very wrong, in that e.g. genetic syndromes that cause small IQ deficits compared to siblings are relatively common, and most people with them are never diagnosed due to blending in with the general population.

Expand full comment

Oliver Sacks' book (from the 80s) claims that autism is caused by some dysfunction of mirror neuron systems in the brain. Is that still accurate?

I was never clear on what the relation is supposed to be between that "old-fashioned" autism and the modern way it's used, or even if the modern way people use it is consistent with how psychologists do. I'm sure they don't consider the internet definition of "quirky personality and thinks they're kinda bad at relationships" as autism, but I don't really know what the technical definition is.

Expand full comment

The classic "broken mirror hypothesis" does not work. Various milder claims might salvage it. See Yates, L., & Hobson, H. (2020). Continuing to look in the mirror: A review of neuroscientific evidence for the broken mirror hypothesis, EP-M model and STORM model of autism spectrum conditions. Autism, 24(8), 1945-1959. https://doi.org/10.1177/1362361320936945 for a general summary of the situation.

Autism as a definition is a huge mess for entirely different reasons to why most people think it's a huge mess. The severe end of the spectrum is massively overdiagnosed; the thing that people were describing when they first characterized autism, i.e. the "classic autism" if anything is, did *not* track to the very low end of the spectrum. (The DSM-IV concept of "Kanner's syndrome" and "Asperger's syndrome" did untold, probably existential damage.) This results in massive overestimates of phenomena like childhood brain damage and severe genetic syndromes, because people with generalized severe developmental delays are included under autism. There are also a lot of people making similar mistakes at the far "if autism looks completely different in women, then maybe women without any autism symptoms are autistic" end, which also screws over research, but it doesn't do so nearly as badly -- adult autism diagnosis is a pretty small enterprise as a proportion of autism diagnosis. It looks huge on paper because there's *so much autism diagnosis*.

Expand full comment

Dysfunction of mirror neurons would be ruled out in cases of folk autism because they claim neurodivergent people understand each other better than neurotypical people do them or each other.

In a true case of ‘mirror neuron dysfunction’ it should be readily apparent that the autistic is best understood by the allistic, but that isn’t the presentation they claim.

Whether this continues to make sense should be left as an exercise for the reader.

Expand full comment

Here's a decent discussion of how it's diagnosed (the actual criteria are a ways down the page): https://www.bridgecareaba.com/blog/icd-ten-autism-spectrum-disorder

Expand full comment

Hmmm, I get it! I do remember a genetics hype in the mid-10s that did not live up to expectations. Well, at least we got those handy commercial genetic tests for ancestry and some health risks.

Expand full comment
May 27·edited May 27

How does ocean acidification work? It seems to me that the ocean contains such a vast amount of water that it would be a practically infinite buffer. Maybe the issue is that only the highest surface levels are acidified, and ocean water transport is too slow to disperse it? Also, a cursory internet search reveals this is blamed on CO2. Yet in the past, CO2 levels were over an order of magnitude greater than they are today. That means the ocean should have been much more acidic at that time. This also implies some mechanism for acidic compounds to cycle out of the ocean if it is less acidic today. But I don't think that happens in the regular water cycle, so how could that happen?

Edit: Some great replies so far, thanks.

Expand full comment

The acidity of the oceans is not *directly* related to the atmospheric CO2 concentration. My understanding is that carbonates act as a buffer, as measured by the carbonate saturation state. The carbonate saturation state is a measure of how stable carbonates (such as calcite and aragonite) are in seawater (note: calcite and aragonite are used by marine calcifiers). If the saturation state is below 1, carbonate is unstable and will dissolve. Conversely, if the saturation state is above 1, carbonate is physically stable - but most marine calcifiers require a carbonate saturation state that is substantially higher than 1. [1]

For instance, atmospheric CO2 concentrations were 5x higher than today during most of the Mesozoic, yet the pH of the oceans allowed calcifying organisms to function. Despite a somewhat lower ocean pH (i.e. more acidic), the background carbonate saturation state during the Mesozoic was similar to today [2]. Massively higher CO2 levels didn't impede carbonate production during the Mesozoic — and in fact may have even contributed to their productivity [3].

[1] https://www.sciencedirect.com/science/article/abs/pii/S0012825207001857

[2] https://www.sciencedirect.com/science/article/abs/pii/S0016703704001681

[3] https://www.science.org/doi/10.1126/science.1154122
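
As a rough numerical sketch of the saturation-state idea (every constant below is an illustrative round figure for warm surface seawater, not a measured value):

```python
# Carbonate saturation state: Omega = [Ca2+][CO3^2-] / Ksp',
# where Ksp' is the stoichiometric solubility product of the mineral.
# All numbers here are illustrative round figures, not measured values.

ca = 0.0103              # [Ca2+], mol/kg (seawater is roughly 10.3 mmol/kg)
co3 = 200e-6             # [CO3^2-], mol/kg (~200 umol/kg at the warm surface)
ksp_aragonite = 6.5e-7   # illustrative stoichiometric Ksp' for aragonite, mol^2/kg^2

omega = (ca * co3) / ksp_aragonite
print(f"aragonite saturation state Omega ~ {omega:.1f}")  # well above 1, so stable

# Added CO2 shifts the carbonate equilibrium toward bicarbonate,
# lowering [CO3^2-] and pushing Omega down toward 1:
omega_acidified = (ca * 120e-6) / ksp_aragonite
print(f"with less carbonate ion, Omega ~ {omega_acidified:.1f}")
```

The point is that calcifiers care about Omega, not pH directly, which is why high-CO2 eras with a compensating carbonate supply could still support them.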

Expand full comment

>This also proposes some mechanism for acidic compounds to cycle out of the ocean if it is less acidic today. But I don't think that happens in the regular water cycle, so how could that happen?

Because it's not the water cycle; it's the carbon cycle happening to go through the water.

There are several ways that CO2 goes out of the ocean.

1) Algae photosynthesise, converting CO2+H2O into organics+O2. Some of this winds up turning back into CO2 via the metabolism of algae and things that eat them, but some sinks to the bottom and gets turned into fossil fuels.

2) Erosion of rocks, both on the continents and at mid-ocean ridges, dumps calcium and magnesium into the ocean. These precipitate as carbonates (both as part of marine organisms building skeletons, and also to some degree uncatalysed), and form rock.

3) It boils back out into the atmosphere, if the CO2 concentration in the air goes down due to stuff happening on land (e.g. giant forests sucking CO2 and coalifying, as happened in the Carboniferous).

I'm not 100% on what happens to sulphate, although my understanding is that it's along the lines of "sooner or later it winds up precipitated as sulphate minerals, then it gets buried, and contact with coal and other reduced minerals reduces it to sulphide".

Expand full comment

Interesting. So there is some sort of equilibrium between atmospheric gases and seawater. This intuitively makes sense. I guess that also explains why the surface is primarily impacted. The partial pressure of atmospheric CO2 would limit penetration into deeper, higher pressure water.

Expand full comment

CO2 can go into deep water - the pressure's counterbalanced by gravity. It just takes ages because the ocean surface doesn't mix with the depths very well, so the anthropogenic spike in atmospheric CO2 hasn't had a chance to get there yet.

Expand full comment

A) the atmosphere is also very large

B) yes, that's correct: only the surface, in contact with the atmosphere, acidifies. The ocean, on the grand scale, is very poorly mixed, and it takes ~700 years to complete a full cycle of the ocean currents.

C) yes, it was, and we believe that in certain periods things like coral reefs ceased to exist. But we also believe that some coral can survive in a non-calcified state (we've observed this), and this is how reefs came back when ocean acidification decreased again.

And finally, I'm pretty sure that there are ways for acidic compounds to cycle out, but I actually never took chemical oceanography, so I'm less confident there.

Expand full comment

There's some relevant information at https://en.wikipedia.org/wiki/Carbon_cycle . Yes, the surface water and deep water mix too little to transport CO2 into the deep water on timescales shorter than centuries. Photosynthesis of course converts CO2 dissolved in the ocean into organic carbon, but most of that eventually gets turned back into CO2. There's also carbonate mineral deposition occurring in the oceans, which removes carbon from the surface of the Earth for much longer.
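
A minimal sketch of the air-sea equilibrium mentioned above, via Henry's law (the solubility constant is an illustrative round value; the real one varies with temperature and salinity):

```python
# Henry's law for CO2 at the sea surface: [CO2(aq)] = K0 * pCO2.
# K0 below is an illustrative ~0.03 mol/(kg*atm) figure for warm seawater.

k0 = 0.03                      # mol/(kg*atm), illustrative
pco2_preindustrial = 280e-6    # atm (280 ppm CO2)
pco2_today = 420e-6            # atm (420 ppm CO2)

for label, pco2 in [("preindustrial", pco2_preindustrial), ("today", pco2_today)]:
    co2_aq = k0 * pco2         # mol/kg of dissolved CO2 at equilibrium
    print(f"{label}: dissolved CO2 ~ {co2_aq * 1e6:.1f} umol/kg")
```

The surface tracks this equilibrium fairly quickly; the deep ocean only catches up on the centuries-long mixing timescale discussed above.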

Expand full comment

Will we see a star-trek style universal translator courtesy of Facebook soon? Is it overhyped or not?

The fact that we still can't communicate fluently with non-anglophone people (true for the majority of this blog's readers, at least) is a staggeringly inconvenient feature of the otherwise convenience-obsessed modern world. A universal translator, if pulled off, would be a "how did we ever manage without it?" invention, comparable to the internet itself, I think.

Expand full comment
May 28·edited May 28

> The fact that we still can't communicate fluently with non-anglophone people (in the majority of readers of this blog's case) is a staggeringly inconvenient feature of the otherwise convenience-obsessed modern world

Is it REALLY an inconvenience?

I rather think the language barrier might be a blessing in disguise.

Consider the number of conversations you have in a day. How many of them are trite, and you would have been just as (or more) effective without having bothered interacting with this other person? The vast majority of my interactions are like that, and they're all Anglophone. I struggle to believe that conversing with non-Anglophone people would be any better (and probably considerably worse, since Anglophones represent the vast majority of actually educated populations).

Expand full comment

I think we need to distinguish two different 'universal translator' functions:

1) Translating between two languages where there are accepted methods of translation (e.g. English to Spanish). I think we're pretty close to this? I've been quite impressed with modern translation tools.

2) Translating between two languages where there aren't accepted methods of translation (e.g. the starship shows up and they talk in their language for thirty seconds and then the computer manages to start translating). This strikes me as almost certainly impossible as shown.

Expand full comment

Google Translate app in Conversation mode comes pretty close, if you don’t mind some homonyms and weirdness and are just chatting. I have had many good conversations with it.

Expand full comment

We have the universal translator already. Google Translate and other similar software generally offers a serviceable translation of most anything. We just don't use it all that much (compared to how much it could be used), since "serviceable" doesn't yet mean "completely fluent" or "100 % reliable".

In general, there seems to be an effect where people will fairly quickly cotton on to any AI-typical stylistic quirks even in technically good AI creation and start to avoid them and find them prole. I've understood the teens are quite down on AI art, even when it doesn't have the typical AI art flaws, purely based on typical styles, and call it "boomer art".

Expand full comment

It'll never be as good as Star Trek for real time translation, mainly because of the delay - a translator has to hear what you're saying before it can output the translation. Since different languages don't put the same words in the same order, you can't do better than a one-sentence delay on your translation.

That said, Google Translate has gotten good enough for tourism already, so I wouldn't be surprised if another decade of iteration takes it from "good enough that you can get the gist of what they're saying with a bit of pointing and grunting" to "good enough to feel like a natural conversation."

Expand full comment

They exist and work. I chatted seamlessly with an Uber driver all the way across Panama the other day via his phone, English to and from Spanish. I don't know what app it was; iTranslate is the first one Google Play offers.

Expand full comment

I did not watch Star Trek, so I'm not sure about the specifics of their universal translator, which makes it a bit harder to answer.

I do think that a very good translator app would be very handy for many real-life applications, like tourism for example. But it would not be that huge of an upgrade compared to already available tech for these purposes, mostly? It highly depends on the language pairing right now, afaik.

For anything less trivial it's going to be complicated.

On one hand, it's obviously better to have everything be translated rather than not.

On the other hand, even the best translation is never quite the same as the original. Some things you can't really translate, because languages don't map exactly to each other, and because cultural context is different - and no universal translator would be able to change either. So no matter how good the tech gets, there will be huge value in learning languages. It would be a shame if even fewer people were to learn foreign languages due to AI translation.

Anyways, what I'm trying to say is that it would be a) amazing; b) not as amazing as one might naively think.

Expand full comment

> I did not watch star-trek so I'm not sure about specifics of their universal translator, which makes it a bit harder to answer.

It was a lampshade for how every intelligent being on the show spoke English. The general idea is that through some science-magic, everyone hears perfect real-time translation into a language they understand, so no one ever has to think about translation again.

Expand full comment

In one of the books for the original series, I think by James Blish, it is explained that a universal translator does something like taking various ways to express things and narrowing down by context something something and there you have basic thought which can be output in a language of choice.

Expand full comment

Except for that one episode with the race that spoke in idioms and broke the translations.

Expand full comment

And Klingons, for some reason.

Expand full comment

I wrote a post with the following heading:

"Beauty as entropic fine-tuning. Why beauty should be measured in bits, why conscious AI would experience beauty, and the evolutionary function of the aesthetic experience"

If it sounds like something you would enjoy, I invite you here:

https://extramediumplease.substack.com/p/beauty-as-entropic-fine-tuning?utm_source=profile&utm_medium=reader2

Expand full comment

What percentage of people hate everything?

It seems like a Lizardman's Constant outside of polling. There seems to be a large percentage of people who, in reviewing a restaurant, sending a child to a school, dating a new partner, or supporting a political party or policy, will always decide it is terrible.

They reduce the value of feedback systems.

Expand full comment

Eh, I think you need to distinguish between 'hate everything' and 'only comment when upset.' Certainly, the only time I've left a Yelp review was for the cab company which simply didn't show up for my reservation, or answer repeated calls, resulting in me missing an appointment.

Expand full comment

You don't necessarily have to hate everything to get into the habit of giving bad reviews; anger is more salient than vague satisfaction, so it's easy for some to mainly use reviews as a way to vent anger at a bad experience.

I tend to do the opposite, I can only be bothered to write reviews out of appreciation. Either way it's unbalanced and not so useful for the future reader.

Expand full comment

Even worse are the people who give glowing reviews accompanied by 4/5 stars!

Expand full comment

There might be a difference between dumping a lot of criticism into the world, and hating literally everything.

Expand full comment

People who -- regardless of the situation, and before they know where they are or what's going on -- immediately call for Change.

I'm inclined to agree when they're overweight, have green hair, body odor and face piercings -- but I think they're calling for more than their own shave and shower.

It makes perfect sense for a nineteen-year-old, who on the cusp of adulthood is really just saying "WTF. This is all too complex and intimidating." But in a forty-year-old, it's just sad. The world wouldn't be a better place without the wheel.

If someone is desperately unhappy with the world, he needs to heal himself. Destroying what displeases him is wasted energy. If he's so smart, he'll build something he thinks is better. If enough people agree with him, we will have moved forward: progressivism.

Expand full comment

For situations where I have effectively infinite choice (books, film, video games, etc) my ideal scoring system would bell curve each individual person's reviews, such that someone who gives everything five stars would weight that five star much less than a harsh grader.
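That per-reviewer bell-curving could be sketched roughly like this (a toy Python sketch; the function name and the choice of z-score normalization are my own assumptions about how one might implement the idea):

```python
from statistics import mean, pstdev

def normalized_scores(reviews_by_user):
    """Convert each user's raw star ratings into z-scores within that
    user's own history, so a five-star rating from someone who rates
    everything five stars counts for less than one from a harsh grader."""
    out = {}
    for user, ratings in reviews_by_user.items():
        mu = mean(ratings)
        sigma = pstdev(ratings)
        # A user who gives the same score to everything carries no signal.
        if sigma == 0:
            out[user] = [0.0 for _ in ratings]
        else:
            out[user] = [(r - mu) / sigma for r in ratings]
    return out
```

So a reviewer whose ratings are [5, 5, 5] contributes all zeros, while a harsh grader's rare 5 becomes a strongly positive normalized score.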

Expand full comment

Idk, I usually leave 5 star reviews for places but occasionally I’ll leave a 2 or 1 star review due to a personal quirk or a bad experience. Maybe 5% of the time? Are the same people leaving 1 star reviews or are they different?

Expand full comment

Why can most* people hyperextend their knees? (That's straightening knees so far that they lock-- it's taking them a little beyond standing.) It doesn't seem like a useful range of motion for daily life. Does it serve a plausible evolutionary purpose, or is it a side effect of having fairly useful knees?

*I thought everyone could, but, of course, the first person I raised the question with can't do it. He doesn't have knee problems. I believe everyone's joints are a little different, and not necessarily from injury.

Expand full comment

There's a lot of variation among humans, and this doesn't seem maladaptive *enough* to be strongly selected against. It's not like you have a clean single gene that determines how far your knee joints can move.

I read a paper once (can't find it right now) about how in evolution people are too eager to attribute specific functions to essentially random things. The prime example being all the theories about how the T-rex used its short arms, when the most plausible explanation is that they were probably just roughly neutral in terms of adaptiveness: no particularly important function, but also not a huge cost that would cause strong selection against.

The metaphor this paper used was if you entered a renaissance church, noticed the weird little triangular sections of the roof near pillars and how they're covered with paintings, and wondered what the purpose of shaping the paintings like that was.

Expand full comment

I know nothing about biomechanics but will wildly speculate: it's not good to have joints that spend a lot of time in their most extremal position, especially not for important joints that take a lot of weight.

If your knees were as far as they could go all the time while you're standing, it would mean they're constantly rubbing up against the thing that stops them from going any further, which would put a lot of wear on that thing.

Expand full comment

In the Army I was told never to lock out my knees at Parade Rest, since that could cause you to pass out on a sunny day. Have not tested it.

Expand full comment

During our oath ceremony, one recruit passed out, but the general explanation given is that by standing too stiffly overall, you limit your breathing to the point that you black out from oxygen loss.

Expand full comment

Presumably it's useful to have a little bit of extra range before it breaks, so that if your straightened leg takes a hit, it won't break immediately.

Expand full comment

My ability to hyperextend went away as I added more muscle. In general I've noticed that gaining muscle tends to reduce my joint mobility. (Even if I stretch a lot.)

Expand full comment

I definitely can't; all of my joints are bulky and don't allow any extra range of motion. I have trouble getting my wrists through rain jacket sleeves, for example. I have done thousands of miles of hiking and trail running and am very resistant to injury (no broken bones, never twisted an ankle in my life, no knee issues of any kind).

I'm surprised to hear hyperextension may be common, since I've only occasionally noticed people standing in a way it was obvious.

Expand full comment

I think most people can hyperextend, but that doesn't mean they usually do.

Expand full comment

>Why can most* people hyperextend their knees? (That's straightening knees so far that they lock-- it's taking them a little beyond standing.) It doesn't seem like a useful range of motion for daily life.

Well, daily life includes a lot of standing still. Standing still is energetically cheap due to the hyperextension allowing the body's weight to lock the joint in place. If you try to stand with unlocked knees, you will notice that it is very tiring; this is because the quadriceps needs to constantly apply force to keep the knee from bending.

Expand full comment

I think having your legs vertical so your weight goes straight down is low effort and more comfortable, but probably someone has studied this.

Expand full comment

Having had and fixed the problem of standing with hyperextended knees, I agree that correct standing does not require any sort of joint locking.

Expand full comment

If you couldn't, then your knees would be locked when standing. The extra 'play' that being able to hyperextend gives you may reduce injury on impacts to the leg, as you have a bit of movement to absorb the blow before force is exerted on the joint.

Epistemic status: speculation

Expand full comment

I don't think so, but I'll check back with the man who can't hyperextend.

My impression is that knees bending forward can absorb impact, but hyperextending gets into a zone where breaking is more likely. Maybe that little bit of play when the knee is moving backwards is useful, but full hyperextension isn't.

Expand full comment

I asked him. He's a smart person, but he had trouble understanding the question since locking knees was one of those universal human experiences he hasn't had. Please note, he does NOT have knee problems.

I think I'll ask him again since he's pretty patient, and I'm having trouble recreating a rather confused conversation.

Expand full comment

https://www.noahpinion.blog/p/at-least-five-interesting-things-f84

> The key metrics for success in science are 1) publications in peer-reviewed journals, and 2) citations of those publications. And as Goodhart’s Law tells us, all metrics will eventually be gamed. There are many ways to game the publish-or-perish system — p-hacking, specification search, citation rings, etc.

What is specification search?

GPT4o and Gemini give me different definitions (which I shan't confuse additional people by quoting). Google search results don't have obvious relevance to the replication crisis, even if I add "replication" to my search.

As a free subscriber, I can't post this question on Noah Smith's blog.

Thank you!

Expand full comment

I critique MIRI's "Corrigibility (2015)" paper:

https://thothhermes.substack.com/p/the-corrigibility-folk-theorem

This post is mainly about the theoretical possibility of agents updating their utility functions, and the question of when and how they would ever want to do so.

I present my arguments for why I expect it to be slightly easier than posited in the paper.

Expand full comment

I got three paragraphs in before the overwhelming purple background defeated me. Sorry.

Expand full comment

Does anyone have experience of a form of physical exercise that has very perceptibly changed the way their mind works? What exercise, and how was the way your mind works changed by it?

(An example would be: yoga people sometimes do things like alternate nostril breathing, which by some accounts supposedly enhances creativity.)

It seems to me that particular physical exercises causing particular changes in mindset is a likelier bet for mental self-optimisation than overblown stuff like nootropics etc., but information on physical exercise altering mentality in this way is very scattered, and would vary by individual anyway.

Expand full comment

In my experience (and many others by anecdote), I seem to gain a few IQ points *while* walking. Lots of tough math problems I've solved while on my feet.

Expand full comment
May 28·edited May 28

For me, swimming. I am a bit weird (ADHD), and swimming is one of the few things that drowns out my constant thought prattle, because if my attention shifts from swimming for even a second I will swallow water and choke. The incessant thought prattle seems to only end if it would literally kill me.

I do also find that my thoughts are less "2D" once I get out of the pool, so something something spatial reasoning.

I do wanna get into bouldering - I find I get similar benefits (life-and-death focus, 3D spatial awareness) but I've only been twice.

Expand full comment

Extended bag drills for martial arts, where you just go buck wild on a heavy bag until failure.

I think it's the combination of peak human maximum aerobic effort combined with going to failure like in strength training and also needing to exercise your brain: You are supposed to imagine an opponent, and react to what they are doing as you drill.

When I do it, I enter an extreme flow state similar to what I get in a match, but without the occasional 'getting rocked in the face' moment to knock me out of it.

Expand full comment

In addition to the mood benefits that others have mentioned, barbell squats and deadlifts have really helped with my somatic / proprioception / body awareness. It has really helped my meditation practice, which I definitely did not expect!

Expand full comment

Any kind of physical exercise has profound positive effects on my mental health, does this count?

Expand full comment

I'm not sure what counts as exercise, but the qi gong from energyarts.com has a rule of applying 70% effort (much less if you're sick or injured). I've found that to be quite a challenge, but it makes me less likely to force things or give up, and I've got more capacity for continuous attention.

Expand full comment

Years ago I got into super high rep burpee sets, in the short term I get an incredible "runner's high" and sense of calm afterwards, really seems to slow down my thought process and let me relax a bit.

Bigger picture, hitting numbers that most are unwilling/unable to go to for whatever reason gave me a huge confidence boost and helped me to stop being intimidated so much by people I perceived as being "successful" etc.

Not sure if that's what you're looking for.

Expand full comment

Walking long distances - say, 4 km or more. As someone who spends a lot of time working with a keyboard in front of a screen, it has taken some effort to get used to the idea of thinking and analyzing things without a screen and a keyboard at hand, and without any physical medium to write things down (*). Just trying to organize my thoughts, and commit them to memory, in a way that will allow me to resume the thread of thought at a later time.

(*) I could stop every two minutes to type in my smartphone, or walk while staring at its screen, but that would defeat the purpose of long walks.

Expand full comment

I work with a keyboard/screen, of course, but time for unaided thought (eyes closed, even): lying in bed trying to fall asleep. Useful even when I have no trouble falling asleep. Very relaxing, too.

Expand full comment

Strength training has, for me, the strongest immediate post workout boost in mood & mental clarity. I feel like performing hard workouts (rough correlation of how much muscle failure happens?) build out a certain amount of mental grit.

More recently running, in particular long and chill "zone 2" runs, has made me appreciate the value in simply moving, the joy of moving through landscapes and the basic psychology of race-like things (most notably the "halfway" point and "last mile").

Lastly and most relevant, a few "level up" moments in my budding yoga practice have truly made me appreciate the value of proprioception, breathing and body control. I have experienced better introspection and impulse control as well as less "preemptive reluctance"/lower activation energy when thinking about future effort. I think it's come from the yoga! Breath is a fundamental unit of control that we have and consistently being deliberate about it through movements I think has profound effects.

A great essay about the topic: https://en.wikisource.org/wiki/The_Energies_of_Men

Expand full comment

Agree on both (hard) strength training and zone 2.

Expand full comment

Losing weight and exercising 5-6 times a week has improved the amount of energy I have. Not sure about the mental processes though.

Expand full comment

Alternate nostril breathing? I have no control over which nostril I use to breathe.

Expand full comment

You alternately press on each nostril from outside using your fingers.

Expand full comment

I'm stating the obvious here, but in my experience, strength training certainly strongly alters the state of mind after training. I have a drastic reduction in anxiety and general worries for the rest of the day. At the more permanent level, I believe being stronger and bigger makes you subconsciously more relaxed around smaller males. Don't know if that fits with "changing how the mind works" criterion.

Expand full comment

I tried Wim Hof training for a while, using longer videos of his for a 7-week programme, and it really does bring a lot of energy and clarity. It's really intense though; after a while I started slacking and mostly dropped it.

Expand full comment

I'll second this - Wim Hof breathing will definitely alter your mind state. I do it and a hundred breath-hold pushups in the mornings, and it wakes me up as well as coffee.

Expand full comment

When I saw this Guardian article (especially the kids' names), the first thing that came to mind was the SSC post about Puritans:

https://www.theguardian.com/lifeandstyle/article/2024/may/25/american-pronatalists-malcolm-and-simone-collins

https://slatestarcodex.com/2019/03/12/puritan-spotting/

Expand full comment
May 27·edited May 27

I met the Collinses at Manifest, and they're among the most memorable people I've ever met (keep in mind that I also met Zvi, Scott, Nate Silver, Robin Hanson, Yudkowsky, and Patrick McKenzie among others, just at that one conference, and they *still* stood out). They give off a very forceful and larger-than-life impression, so the Puritan naming thing doesn't surprise me either. Nor does the "human clickbait" self-description.
(Interestingly, Scott is the exact opposite, shrinking into the background even when you talk to him.)

Expand full comment

Hang on, Musk is up to 11 kids now? Last time I bothered counting, it was only 7. He's making Boris Johnson look like a piker!

As for the names, that's simply the usual Anglophone middle to upper middle class pretentiousness. There's a reason people make jokes about the Saskias and Tristans (and Ruperts, for an older generation). If you're old enough to remember the singer Dido, of course her parents named her Florian Cloud de Bounevialle Armstrong and named her brother Rowland Constantine O'Malley Armstrong, after which is it any surprise they prefer to use the nicknames Dido and Rollo?

Expand full comment

What's interesting in this case is they seem to be genuinely trying to make nominative determinism happen. Call your kid "Titan" and watch them succeed (though remember what happened to the actual titans).

Does this actually work if you try to invoke it deliberately? I'm not convinced, I think it might just cause them to get beaten up at school. Self-belief is powerful, but it needs to be earned slowly through achievement, not loaded up onto a small child who hasn't done anything to deserve it yet.

Expand full comment

They're not even calling their own kids by the Strong Gender-Neutral Names; by that article (and I sincerely hope it's the Guardian journalist being biased and not an accurate description of a pair of intolerable narcissists) they call their Torsten Savage "Tostie".

Titan Invictus is going to go by Vicky whenever she encounters normal people. Octavian George will be George. Torsten Savage is "Tostie" and may decide to go by "Stan". Bun in the oven Americus or whatever will be Ammy. An awful lot of what they're spouting sounds like deliberate bait; they are amused by the seething and outrage generated. They've succeeded in getting me riled up, anyway. It's odd - I saw bits and pieces about this couple online previously and tried very hard not to have an opinion on them because they just rubbed me up the wrong way, but this story takes the cake.

Expand full comment

Industry Americus will go by Indie (super cute), of course! But yes, everything about these people in this article riles me up...

Expand full comment
May 29·edited May 29

I have to think this is a combination of an unsympathetic journalist (the dear old Grauniad is liberal to left-wing) and the pair of them deliberately winding the journo up for precisely this kind of rage bait. I grew up in houses that were cold, I've come down from the bedroom literally with blue hands; having a cold house is not virtuous, it's poverty (and if you can afford to heat it, as shown by letting all their small kids have iPads, then it's swank and affectation).

The information about her Caesareans (which is extremely private and nobody's business, you would imagine) is the journalist painting her as one of the "too posh to push" brigade:

https://pubmed.ncbi.nlm.nih.gov/24344707/

Certainly four is a lot, and if she does intend to have more children, it's very risky. I can't see any gynaecologist/obstetrician not warning her about this.

Scott and another commenter have vouched that in person they are not as awful as in this article. I don't know either of them so I have to be very wary about commenting, going only off impressions from a news story (and the other online stories I've seen about them). Like you, the people *as presented here* make my hackles rise, but are those the real people?

EDIT: Though if it's at all true that the guy slapped his two-year-old in the face, I don't care if the Archangel Gabriel himself says he's a great bloke, I disapprove very heartily and strongly. I'm very literal-minded, very concrete, and have a black-and-white way of thinking, due probably to some degree of being on the autism spectrum. And out of that, "no slapping kids in the face". On the back of the legs, on the bottom, the literal smack on the wrist? Sure, no problem. In the face? You should be put in the stocks and pelted with rotten vegetables for that.

Expand full comment

I do agree on the slapping (as someone whose parents spanked us sometimes, and I think that was fine, although I don't do it to my own kids). I do agree that the author is pretty hard on them otherwise.

I'm preparing(?) for my third cesarean this fall and I agree it's not really recommended to have more than that, although my obstetricians at a large research hospital seem fairly sanguine about even future pregnancies. I wonder if this is somewhere where there is a gap between what is "best practice" and what doctors consider within-normal. Certainly there are a lot of research studies suggesting patients should stop smoking and exercise regularly, but doctors are pretty used to seeing people who fail at both of those...

Expand full comment
author
May 27·edited May 27Author

I met Malcolm and Simone at Manifest last year. They were extremely nice and helpful and gave my wife good pregnancy advice. I do think some of their ideas aren't as well-grounded as they think (to give a trivial example, their Nobel density map thing is probably just demographic differences in state populations such that I don't think it makes sense to interpret it as causal and use it to decide where to live).

They describe themselves as "secular Calvinists", so your instincts are good.

Expand full comment

I have to say that when one comes from a culture where any sort of physical punishment for children has been illegal since the 80s, the "casually smacking a 2-year-old in the face part" will make it considerably harder to take the rest of the spiel seriously.

Expand full comment

Given what we've previously been talking about in other threads about too much screen time, this seems much more likely to lead to bad outcomes, and to heck with any STEM nonsense about having the kids plugged in and learning:

"Both boys have their own iPads fitted with a strap so they can wear them around their necks. Two-year-old Torsten is alone somewhere with his."

A two year old should *not* be alone anywhere with an electronic babysitter. And if this is any way accurate, then Malcolm needs to be punched in the face himself. You do *not* hit a child in the face. A smack on the legs as discipline? Sure. But for something deliberately wrong, not an accident. And *never* in the face.

"Torsten has knocked the table with his foot and caused it to teeter, to almost topple, before it rights itself. Immediately – like a reflex – Malcolm hits him in the face.

It is not a heavy blow, but it is a slap with the palm of his hand direct to his two-year-old son’s face that’s firm enough for me to hear on my voice recorder when I play it back later. And Malcolm has done it in the middle of a public place, in front of a journalist, who he knows is recording everything."

I don't care how charming they were when Scott and Lapras met them, this is *not* acceptable (and I'm not a liberal, so you know this isn't just 'let the kids roam free' maundering from me). I'm the eldest of four, reared hard in a semi-rural setting, and my parents never did the likes of this, would never dream of doing the likes of this, and if they did do the likes of this, would have been in trouble. Yeah, I got slapped for misbehaviour. I got the wooden spoon. But never, as a two year old, slapped in the face in public (or indeed in private) by either parent.

Expand full comment
May 28·edited May 28

> I don't care how charming they were when Scott and Lapras met them, this is *not* acceptable

A) Why not (in the general sense)?

B) Why not SPECIFICALLY in the context of intending to have lots of kids and thereby requiring, by hook or by crook, that they be low-maintenance and well-behaved? Is the two-year-old going to be SO mad about getting slapped when they grow up that they would rather not have been born in the first place? If your choice is "5 tough-loved kids" vs "3 softly-softly no physical punishment kids", have you really made the better choice by un-aliving 2 of their would-be siblings?

Expand full comment

If you honestly need me to explain to you why an adult should not hit a two year old child in the face, then I sincerely hope you're not the parent of children or planning to be.

Expand full comment

Well then, prepare to be very disappointed. Physical punishment is great and everyone should use it more often.

Expand full comment

I listened to their interview on the Spencer Greenberg podcast, and while I would consider myself a pronatalist in general, I think their approach, which could generously be called selective (take people who are already having more kids than average and get them to have even more) or less generously called 'start a cult' (they go into this more on the podcast), is the wrong direction for societal change, unless you're going for a full-on, centuries-long replacement strategy.

I would view the most promising route as getting more people to the number of children they say they want, which for most western countries is a pretty sustainable 2.5-3 or so (see: https://ifstudies.org/blog/the-global-fertility-gap), via a mixture of policy change (especially housing), cultural change (contemporary parenting styles are exhausting), or cases which blend the two (i.e. people calling the cops when an eight-year-old goes to the corner store for milk alone, which I used to do).

Expand full comment

I find it terribly funny how culture managed to go straight from the Malthusian "too many people catastrophe" to fertility-rate doomers peddling a "too few people catastrophe", without stopping even for a bit in the vast middle zone of "population probably ok for the next century, trends too volatile to guess beyond". Maybe the common factor is an appetite for catastrophizing?

Expand full comment

If the birth rates had generally stopped somewhere around 1.7-1.8, there probably wouldn't have been much of a counter-reaction.

Expand full comment

The population scare did seem to incline towards "way too many of *those* sorts of people", but well-off Westerners decided that not having kids was a virtuous thing to do, and not a selfish desire to have fun while young without getting bogged down by responsibility, spending money on oneself rather than on others.

The people who were all "having kids is putting a strain on the scarce resources of the planet" happily consumed all the latest technology and upward social improvements that gobbled up ever-more resources, never mind that if you go by the climate change claims, we are now paying the price for that consumerism driven by fossil fuel usage.

Expand full comment

The data changed, so minds changed (although I would agree the Malthusians held on too long, not exclusively but largely due to racism).

Fertility rates *didn't* pause in the middle, which took very nearly everyone by surprise. I'm not aware of anyone publicly making the bet that fertility rates would drop so dramatically so quickly.

Expand full comment

So what, did anyone seriously expect fertility to soft land right at replacement and just stay there? Have these people never read an ecology book? Populations are way more dynamic than that.

You have to take the long view on these things; human generations are long, we're not mice or lizards. Human population is expected to peak at maybe ~10B towards the end of the century. That's good news, because it shows the system is dynamic and responds to changing conditions, instead of just increasing exponentially until it hits a wall, as the Malthusians wrongly thought. Variables in complex systems rise and fall, and it's a huge relief that we finally join the pattern. We've mushroomed from 2B to 8B+ in just a century; a relative crash shouldn't come as a surprise, and it won't last forever, for the same reason the rise didn't.

Expand full comment

Re: serious expectations: yes, actually, pretty much, and I would argue it wasn't obviously wrong at the time: https://www.un.org/development/desa/pd/sites/www.un.org.development.desa.pd/files/files/documents/2020/Jan/un_1970_the_world_population_situation_in_1970_0.pdf

UN report from 1971 here with projections.

Expand full comment

If the birth rate decreased from >5 down to a stable 2, then there would be a middle zone. Instead, there was a fast transition from >5 to <2, which definitely isn't a middle zone (especially not <1).

Expand full comment

Thanks for bringing up the fertility gap. It is weird to me how little this is discussed by people who publicly worry about shrinking populations. Framing it as women not having the opportunity to have as many children as they want is a much better way to sell the problem to the left.

Expand full comment

Yeah, they have said a couple of strange things. I vaguely remember them (maybe somewhat jokingly) saying that they would have engineered their kids to be purple, or maybe even something where they couldn't biologically reproduce with outsiders, etc.

Relating to your first point, I think there exists a paradox of pronatalism: someone who is pronatalist for Darwinian reasons would maximise their own number of descendants at the expense of "others" and "mutants", with the result that this kind of pronatalist would basically just want to opt out of society and start a cult, and in extreme cases push for the rest of the world to be less fertile, hence the tension. I don't think the Collinses are at Robin Hanson levels of doomerism, but from what I can tell they seem to think the wider culture is pretty irredeemable.

Expand full comment

That's how I remember them too. They have an opinion on absolutely everything, often pretty wild ones, some make sense, some seem pretty dubious. But they certainly aren't afraid to advocate for what they think.

Expand full comment

I'm traveling around India right now and writing about it on my science blog The Weekly Anthropocene, at https://sammatey.substack.com/ . Check it out if you're interested!

Expand full comment

Nice one, subscribed! It's nice to hear an optimistic tone on the Anthropocene without a hint of denialism.

Expand full comment

Should the US president be evaluated and judged by their impact on birth rates? Many countries have seen birth rates fall below the replacement rate, which is generally considered undesirable. If birth rates were a measure by which presidents were more commonly evaluated, it could shift policy decisions toward encouraging more families and children. I'm not sure this makes sense, but it's an idea I'd like to explore. What do you think?

Expand full comment

This is even worse than judging the president by how the economy is doing! Neither the bad harvest nor the dearth of male babies this year are actually due to the village witch doctor's failures, and getting rid of him and putting another witch doctor in his place isn't going to cause next year's harvest to be any better, nor to make sure your next child is a boy.

Expand full comment

Unless they directly try to intervene on it, I think the impact of a president on birth rates is too indirect to compare them on.

Expand full comment

It is not "generally considered undesirable". That view is primarily held by men, particularly those with a libertarian or conservative bent; it's a minority view overall, with a large gender divergence. Only 8% of Americans think our population size is too low, compared to 37% who think it's too high (the rest think it's about right or don't know). And on the global scale, only 6% think it's too low, while half think it's too high. Men are much more likely than women to think that an aging society is a bad thing, and much more likely to want to increase births. I think you're getting your views of what counts as "common sense" or widely held belief from a particularly male, pro-growth part of the internet. And most women would find scoring the President on such a measure, or otherwise trying to increase births, extremely threatening and would react very negatively to the proposal. https://populationconnection.org/blog/un-survey-shows-overpopulation-concerns-are-widespread/

Expand full comment

Population Connection, isn't that the renamed group founded by Paul Ehrlich?

Expand full comment

I have no idea what that org is, I just googled surveys on attitudes towards population size. But the actual survey that they're discussing and link to was commissioned by the UN and conducted by YouGov. There are not many studies asking people these questions that are recent (most are 20 or more years old), and the YouGov survey is the most comprehensive one I could find. Though Pew also has a fairly recent one that touches some of these questions and that one shows a 15 point spread between men and women as far as whether they think an aging population is a bad thing (majority of men say it's bad, just less than half of women do). https://www.pewresearch.org/social-trends/2019/03/21/views-of-demographic-changes-in-america/#:~:text=The%20U.S.%20Census%20Bureau%20estimates,be%20neither%20good%20nor%20bad.

Expand full comment

To some extent, having children is a measure of how happy/secure people feel, at least (especially?) when they have alternatives, so the birth rate might be a reason for judging governments, not just the chief executive.

Expand full comment

Niger is a very happy & secure country.

Expand full comment

Not quite, I'm pretty sure opportunity cost is more important. You can reduce the opportunity cost if you heavily subsidise childrearing... Or if you diminish all non-childrearing opportunities.

Birth rate is a terrible metric because there are so many factors that affect this. Poor working class families in Asia have many kids - because they have no social safety net to speak of and kids taking care of them when grown are their retirement plan, and there's not much other stuff they could be doing.

A DINK couple may hold off because someone is about to get a very lucrative international posting and they don't want to be relocating a lot with an infant in tow.

A different DINK couple may hold off because their cost of living is too high and they don't think they can manage to provide a good standard of parenting if they lose income to do childrearing.

And yet another different couple may decide they never want to parent AND they're sure they can self-fund their retirement, via government pension or their own savings or whatever.

Flattening it to birth rates doesn't help at all! The first childless couple might have kids if they don't have the exciting opportunity (worse for them, because they wouldn't be going if they valued kids more). The 2nd couple will definitely have kids if their finances improved (good for them!). The 3rd couple will... Not have kids, but they may not exist if they didn't have an age pension system or if it wasn't possible to guarantee a comfortable retirement without the care of adult children.

And obviously, if your birthrate climbs because your access to contraception is bad, that's usually bad.

I do think it matters more if the children are wanted and adequately provisioned than if there are more of them, and I do think in a world where automation is going to eliminate a lot of jobs, fewer children who each have a higher standard of living is probably a net benefit.

Expand full comment

Definitely not the "secure" part - it's notable that the one rich country with high fertility is Israel (and that's not just the religious people).

Expand full comment

Might it be that local security baseline and comparative security along the person's past/expected life play different roles?

Expand full comment

Maybe, but it's hard to come up with a version of this that's consistent with observed results (in general it seems like more "ordered" countries have lower birthrates).

Maybe there's something to this in an opportunity cost way - e.g. having a child in China or Korea feels risky because society is so organized your child can fail in life if you don't optimize everything (or you might fail because of the invested resources). Of course the consequences for failing are even worse somewhere like Niger, which is high fertility (where either of you could easily die), but maybe it being less ordered means you can't really connect life outcomes to any specific decision (how long you live in Niger is way more of a crapshoot than it is in Korea), so might as well have the kids?

Expand full comment

Good point. As I understand it, the high Israeli birth rate has at least two causes: anger about the Holocaust, and socializing built around having children.

Expand full comment

I don't know if "anger about the Holocaust" is the way most people "feel" about it today. I think that was maybe true in past generations, and it's just a different cultural set-point now: if you and everyone you know had many siblings growing up, it just feels natural to aim for something similar.

Expand full comment

And on the flip side, many countries have high birth rates because they have high infant mortality, and having five kids is the only way to be confident that you'll have at least one that lives to adulthood.

Expand full comment

Birth rate might not be to the point, since what's important is whether people live long enough to be able to work, and preferably live and have their own children.

Expand full comment

My intuition is that the birthrate in first-world countries is overwhelmingly about culture. If it became culturally encouraged to have bigger families, most American families could manage another kid or two without putting *too* much strain on the budget.

Expand full comment

That's a terrible proxy, if it's a proxy at all. Globally I am pretty sure the correlation is negative (happier countries have fewer children), and I doubt that comparing demographic groups within a country generally gives you positive correlations. I even seem to recall studies showing that the correlation at the individual level is negative (people with children are less happy than people without) once you control for relationship status, age, and health.

Expand full comment

...we do? Abortion has been a point of political division for nearly seventy years.

Expand full comment

The actual effect of (the legality of) abortion on birthrates is debatable. If abortion is illegal, people can simply avoid pregnancy in other ways.

Malta, the country in Europe with the tightest abortion laws at the moment, is also the one with the lowest total fertility rate.

Expand full comment

Falling birth rates are not a problem for the U.S. because ~1 billion people want to move here from other nations. Pick the best 2-3 million every year and your problem is solved. But it might indeed be a good measure for places like China with rock bottom birth rates and low immigration potential.

Expand full comment

> Falling birth rates are not a problem for the U.S. because ~1 billion people want to move here from other nations

Sure, if you are primarily concerned about the fate of the US as a piece of land, rather than as a nation or a culture or even an economy.

Expand full comment

Birth rates are falling for much of the world. Inviting immigrants (if the US has the flexibility) is a temporary solution.

Expand full comment

A common phrase is "beggar thy neighbor".

Expand full comment

It will take hundreds of years before this is big enough of an effect to matter to the U.S. By then we’ll have AGI or artificial wombs or have WW3 and go back to Stone Age level of civilization.

Expand full comment
May 28·edited May 28

More likely we'll go back to the Stone Age through demographics.

Expand full comment

Global population is increasing. Birth rates are falling in the more favoured countries.

Expand full comment

The second statement is substantially outdated, the world changes fast.

Large "middle class" countries like China, Brazil, or Mexico have fertility rates far below replacement level. Even most large poor countries outside Africa are below replacement level by now, for example India (2.0), Vietnam (1.9), Indonesia (2.0), or Bangladesh (2.1). There are a few exceptions (Pakistan 3.3, Philippines 2.7), but fertility rates above replacement are becoming increasingly rare outside of Africa.

And this is assuming you mean "low" birth rates rather than "falling". They are falling in basically all poor countries of the world.

Expand full comment

Indeed, birthrates have dropped in Africa to a degree that no late-20th-century demographer predicted. Continent-wide the rate was 6.7 births/woman in the 1960s/70s, then drifted down to 6.0 as of 1990, hit 5.2 as of 2000, and is now 4.1. UN projections now put it at 3.0 by mid-century and 2.1 (replacement level) by the end of this century. UN demographers err towards the conservative in predicting such shifts; some other researchers in the field project Africa as a whole to hit the replacement rate by around 2080.

Whenever Africa does reach 2.1 children/woman it will be the last continent to have done so.

Expand full comment

"Falling birth rates are not a problem for the U.S. because ~1 billion people want to move here from other nations. Pick the best 2-3 million every year and your problem is solved. "

I would phrase this as "NEED not be a problem" rather than "are not a problem" because the US does not pick the best (for some definition of best) 2 - 3 million every year.

I can also imagine cultural issues if you bring in 10% population replacements over a decade though maybe this feeds back into the definition of "best."

Expand full comment

Reminds me of "solving climate change is easy just pass a high carbon tax"

Is there a name for the general version of this fallacy? e.g. "It's easy, just $incrediblyUnpopularPolicy"

Expand full comment

"One weird trick" might cover the concept.

Expand full comment

On the one hand declining birth rate is a problem and Presidents should be judged based on their ability to help solve problems. On the other hand I don't think anyone has ever managed to increase the birth rate by government policy, even in countries with governments that are usually good at stuff, so seems unfair.

Expand full comment
May 28·edited May 28

Hungary under Orban has had some success at increasing birth rates, up from 1.2 in 2011 to 1.6 in 2023. They seem to have achieved this in the obvious way, through tax incentives (apparently if you have four kids you never have to pay tax again? If that were so in my country I'd be pumping out another two kids posthaste).

Fertility is also up in other Eastern European countries though, so there may be something else going on.

Increasing fertility is pretty easy; Western governments just aren't that interested, because the benefits are too long-term to impact the next election cycle.

Expand full comment

Did you not have additional kids because of financial reasons? It seems to me that having kids for tax purposes is not a good reason, but AVOIDING having kids because you can't afford more is much more admirable.

Expand full comment

Comparing Hungary to neighbouring countries, it doesn't seem likely to me that it was anything the Orbán government did. https://datacommons.org/tools/timeline#place=country%2FHUN%2Ccountry%2FSVK%2Ccountry%2FROU&statsVar=FertilityRate_Person_Female

Expand full comment

Lyman Stone seems to think he boosted Hungary's fertility https://x.com/lymanstoneky/status/1745173088864502013

Expand full comment

To me it just looks like noise, with a slump around 2011 for whatever reason and then a reversal back to the mean. I think the proper way to test his hypothesis would be to look at how TFR changed in a broad array of countries after implementing reforms that he thinks should affect TFR.

Expand full comment

Well Nicolae Ceaușescu managed to temporarily increase the birth rate and ensured that he didn't have to worry about growing old in retirement...

Expand full comment

>On the other hand I don't think anyone has ever managed to increase the birth rate by government policy, even in countries with governments that are usually good at stuff, so seems unfair.

https://en.wikipedia.org/wiki/The_Rape_of_the_Sabine_Women <- I think this counts.

Zvi said it very well:

"I assert that if we actually cared about there being more births, we have plenty of levers to make that happen. It is simply that no one has done anything remotely like the reverse of the one child policy in China, or Iran’s widespread push to discourage births. The Chinese effort and one child policy would fall into the ‘young adult dystopia’ book section if it was fictional.

Imagine for a second what the reversed version of those authoritarian and dystopian efforts would even look like. Realize that this too would be and is in the young adult dystopia book section. Also realize it has not happened, at least not in a long time."

Well, almost. Actually, a lot of these plots are in the "hentai" book section... but that's neither here nor there.

Expand full comment

Well, you probably don't need to go that far to reach replacement level fertility rate. Restricting female employment and making birth control less accessible would probably do the trick.

Expand full comment

Maybe, but Singapore has done pretty extensive efforts (including most of what's on the list of things a non-authoritarian country could do) and, well... If your baseline for Singapore is Hong Kong, it's working quite well. If it's any other country in the world, it's not working at all.

Expand full comment

Well, Singapore is a terrible place to have kids. If you want people to have big families, the first thing they're going to need is a big house with a big backyard... something that is all but unobtainable in Singapore.

For those of us who live in countries where there actually is space to build, what we need to be doing is building lots of new large houses... not in existing big cities (where there is no longer room to build large houses within reasonable proximity of the city centre) but in second-string cities with six-figure populations.

Expand full comment
May 27·edited May 27

The "young adult dystopia" version of natalism actually *was* tried in Romania. It didn't go well.

Expand full comment
May 28·edited May 28

That's not the young-adult dystopia version.

The mild YAD version - the direct inverse of the one-child policy, as Zvi said - involves requiring legally that women have many children and making only the most limited of exceptions (in particular, no exceptions for conditions that could have been avoided such as "didn't have sex"). The severe YAD version (and hentai version) is the Rape of the Sabine Women or its industrialised equivalent, where state power is used to directly force women to have children.

The severe version is *kind of* in use in Xinjiang at the moment, with a substantial Han-Chinese force billeted on Uyghur homes (specifically, the homes of the double-digit percentage of the male population that's been interned) with the apparent intention to make the next generation of Xinjiang half-Han and thus more loyal to Beijing. But it doesn't seem to be aimed at increasing birth rate *per se*.

Expand full comment
May 27·edited May 27

> On the other hand I don't think anyone has ever managed to increase the birth rate by government policy

Arguably, Henry VIII managed it. Life in cities was so grim and ghastly in the 1500s that many people were turning increasingly to a less welcome Renaissance innovation: sodomy, for the purpose of birth control. Partly because the king was worried this would soon lead to a shortage of stalwart young men for the army and navy, the sodomy law was enacted, making it a capital offense.

Hardly anyone was ever convicted of the offense, because the evidential requirements were so high. But over time it did have the effect of changing public opinion to disapproval of the practice, and sure enough the population started increasing again!

Expand full comment
May 29·edited May 29

A historian of English history of my acquaintance, asked to list the biggest urban legends he hears from people, named that one first.

Henry VIII did make "buggery" (which at that time covered sodomy and also bestiality) a secular crime for the first time in 1533, with drastic penalties. It was one of several new laws he issued at that time, all of which served his prime objective of stripping power and relevance from the Church (until then buggery had been prosecuted in church courts). He successfully used the new law to have some priests and nuns who protested his religious changes executed despite their not being charged with murder or treason.

The idea that Henry was trying to stop population decline doesn't fit with the fact that England's population had risen steadily throughout his lifetime: as of 1530 it was a fifth higher than when he was born in 1491. In fact inflation was a political and social problem in England throughout the 16th century primarily because most households' largest expense was food and the agricultural sector struggled to keep up with the rising population. (The same was true across much of Europe, until solved by the Thirty Years' War in the early 17th century.)

Expand full comment
May 29·edited May 29

I don't dispute that real or believed population decline wasn't the main reason (if it was a reason at all), which is why I said "partly". But historians, in their thirst for facts, sometimes forget that contemporary motives depend on perceived facts, which may not match the actual ones!

Parish registers, listing births, marriages, and deaths, were also mandated in England only in the 1530s. Perhaps that in itself is some indication that the authorities wanted to keep a close eye on population levels, as well as better control of inheritance taxes or whatever other purposes these registers were intended to serve.

Before then, presumably they wouldn't have had accurate and timely data about population trends, but they may have heard tavern talk about the new fashion for sodomy, which people were agreeing was an effective form of birth control, and they wanted to nip it in the bud before any effects started to become apparent!

Expand full comment

And even if we grant that a government solving declining birth rates is a reasonable ask, the power of the president in particular to do so is much more limited.

Expand full comment

I scream about the uncertainty of the world where it's a mess and yet you can't let go.

https://borisagain.substack.com/p/the-maybe-in-your-mind-the-maybe

Expand full comment

How far down do we think “experience” goes below the human level?

Animals, probably we’d all agree on.

What about plants? Fungus? Bacteria?

My line is that it’s at the information processing level and a lot more things have it than you would first suppose, but in ways that aren’t very interesting because you can’t ever see them. Not a science question necessarily unless someone has an interesting example, but a philosophical one.

Expand full comment

>Animals, probably we’d all agree on.

It seems highly probable to me that the vast majority of animals don't have any conscious experiences. Bearing in mind that over 95% of animal species are invertebrates.

My not-very-rigorous, mostly-vibes-based model right now is that mammals, birds, and possibly reptiles and cephalopods have experiences, and all other organisms are unconscious automata.

Expand full comment

This is where I think we might part ways. My intuition is that you don’t need either a self or a memory in order to have qualia. This is where I start chucking things out of human experience and I don’t know how useful the definition becomes once I’ve done that to the extent I think is necessary.

Expand full comment

That's a deep and controversial topic right here. These days most people seem to agree that higher animals like mammals, but also birds and e.g. octopuses, have it, and quite a few are ready to extend that to e.g. lizards and bees. Peter Godfrey-Smith focuses on the animal realm and gives a seriously cool account in "Other Minds", highly recommended.

Quite a few people are also happy to grant experience to plants, and the panpsychists just go all the way to stones and individual particles.

I think we're finally starting to see some necessary distinctions made within the murky field that has traditionally been called "consciousness". Having an "experience" of its own, or "qualia", or "something that it's like to be X", is probably the most basic one, and the most widely distributed. It seems likely that many beings could have that, without reaching to higher abilities like imagination, discursive thought or meta-cognitive introspection.

Personally I'm pretty convinced all the way down to the animals that move and act, e.g. flies and worms and fish. Anything beyond that is a speculative jump, but I'm friendly to the notion that consciousness (in this sense) is more basic than physicality.

Expand full comment

There might be an “invisible” sort of experience that simple things feel but even if you do relatively controversial stuff like remove the necessity for an “I” or a memory to store the sensation you still blow it open. On one hand you’d like to dismiss it all as nonsense, but in our case we know that information processing feels a certain way on the inside so surely that must apply to other things.

Expand full comment

My assumption is that the level of subjective experience depends on the level of action an organism is capable of taking to advance its interests and avoid things that will kill it or otherwise threaten those interests. We have senses, and experience pain or desire etc., to motivate us to do the things that make us survive and avoid the things that don't. So all animals of course have subjective experiences. Plants have a very narrow ability to act: they can't run or hide or do much of anything other than slowly reach towards or away from the sun, modify their uptake of nutrients, and grow or not grow. Their ability to act has got to be less than 1% of an animal's, and therefore my guess is that any subjective experience would similarly be very low-level, if it's even necessary at all. A calculator has no interests or drive to survive, so I don't know why it would experience anything. Experiencing is a necessary function of surviving as an organism, that's all. The only reason a non-organism would have any subjective experience would be if it was purposely programmed that way.

Also, I have never understood why people take the adult human ability to verbalize and rationalize and think abstractly to mean that adults somehow have MORE subjective experience (or a form that is somehow more intense and important). I think those capacities actually remove one further from experience. As a young child my emotions and experiences of the world were SO much more intense than they are as an adult. Fear was absolute terror. Joy was all-encompassing. Embarrassment meant I literally wanted to be swallowed into the earth and disappear. I don't experience anything like that intensity as an adult, because my rational, abstracting mind stands aside and analyzes things from a bit of a remove. So if anything I find the subjective experience of a young child or an animal likely to be stronger and more intense than that of an adult human. It's why I believe it is significantly more wrong to torture an animal or a child than an adult: not because they are more valuable organisms, but because it is more torturous for them. They will just experience the sheer pain and terror, while an adult will also have at least a little capacity for remove, thinking thoughts about it and not just feeling the feelings. Of course I think any sort of torture is almost always wrong, but it strikes me as much more evil against an animal or young child for this reason.

Expand full comment

I’m in some agreement, apart from giving a slight nod to the idea that there may be forms of experience we can’t assay even at the level of “this is what the information processing looks like.”

As for the multiple evolutions humans go through as we develop our minds, I raised them only to gesture at “surely there are other forms of experience than those we experience, given that we move through several.”

From what you’re saying I call it “agentic experience.” You have an “I” that is pushing orders around somewhere to all the surrounds.

Expand full comment

I'm about 50% confident that dogs have qualia. Fungus, bacteria and plants almost definitely do not.

It seems that there is some class of algorithms which implementation creates this "experience" thing. It's not about complexity per se, though there is some minimal level of complexity required.

Expand full comment

That seems awfully low for a mammal. Have you ever interacted closely, over a long time, with a dog or cat? I don't think anyone who has had a mammal pet would doubt they have qualia.

Expand full comment

I've had mammal pets throughout all my life. I currently have two dogs and love them dearly.

But I'm also a programmer. I know that it's possible to make things that act as if they have qualia even though they do not, and that it's not even hard; reaction to stimuli doesn't require "subjective experience". If image-recognition software and an if-clause do not have qualia, then it's a very real possibility that many mammals do not either.

For the same reasons, I believe it's very unlikely that newborn human babies have qualia. Something around 10%. This fits well with the fact that no one seems to have any memories of the first year of their life.

Expand full comment

Would you also conclude that adults with brain damage that prevents storage of long-term memory lack qualia? Why would you regard the ability to form memories as indicative of qualia?

Expand full comment

I'm not saying that it's the only possible explanation for the observation; I'm saying that the observation fits the explanation. And just as I would consider a world where people generally remember their first year of life as evidence in favour of newborns being conscious, I consider our actual state of affairs as evidence to the contrary.

This isn't the sole reason for my 90% confidence, of course. There are many other factors. Evolutionarily, it doesn't make much sense for newborn humans to be conscious: they do not do anything complex, and their well-being fully depends on their parents. Their brains are still in development. Nearly all human cognitive abilities are to some degree acquired; newborns do not even have basic stuff such as object permanence or theory of mind.

And the alternative hypothesis, that babies have subjective experience and are simply unable to form memories, doesn't appear that plausible. Babies seem to start forming memories very quickly: they can recognise their parents, and they remember sources of positive and negative stimuli.

Expand full comment

Wait, but does it make sense from an evolutionary perspective for *anything* to be conscious? Either it's epiphenomenal and an accidental feature that is beside the point, or consciousness/qualia themselves have a causal effect on the universe beyond the input/output computations that could very well have been accomplished in the dark.

(I actually believe consciousness is causal, since its existence seems to cause us to talk and think about it much more than if we were all p-zombies.)

Expand full comment

Either atoms have experience or nothing does.

It just doesn't make sense to me that when you add the quadrillionth particle, perception/experience/consciousness/being suddenly begins, but wasn't there before.

I think it must go all the way down, because it makes even less sense that consciousness would arise at a certain level of complexity. (The subjective experience that exists for an atom must be pretty much nil, unimaginably small, but should still exist.)

Expand full comment

Consciousness seems to me to be a supervisory mechanism in a complex biological system, in the same sense that color vision is a mechanism in a complex biological system. It wouldn't make sense to say that atoms have an unimaginably small faculty for color vision, even though they can comprise such a thing on a much larger scale. It also wouldn't make sense to attribute color vision to blind salamanders, fish with rudimentary eye spots, or evolutionarily advanced mammals with monochromatic vision.

Meanwhile all of these creatures can be observed to have some degree of experience; but unless we can characterize that experience (perhaps by observation and deduction), I see no reason to assume it has significant commonality with the consciousness humans experience.

Expand full comment

That may be correct, but it's an assumption. Fundamentally consciousness can only be broken down into synonyms like "direct experience". And a programmer can write a control system that's not necessarily conscious. An engineer can build one.

As for color vision in blind salamanders, you are carving out one aspect of experience. It's just one of many things some consciousness can't feel. In fact we are very limited in what we can experience: you can't bypass your subconscious or your optic nerve's edge detection. Is that relevant? Let me know why, if it seems so to you.

I don't think there's any word or concept to describe what an ant feels, let alone a molecule. And I don't think it's necessarily accurate to say the molecule feels, because that implies a self. But the awareness doesn't spring into existence only when a certain number of bricks are laid.

Expand full comment


Try replacing "have experience" with other things that complex lifeforms do.

1. reproduce

2. hunt antelope

3. tell jokes

Some things can take those actions but atoms can't. So we have an existence proof that a certain level of complexity is required for some actions.

Expand full comment

After thinking about this for two more minutes, it makes even less sense. If a single atom has "experience", will a molecule consisting of two atoms have a combined experience? If not, then how do you explain that I, a single person, have a single experience? If yes, then under which circumstances do those individual experiences combine into one?

If I remove a single atom from my brain, will its experience split off from mine? What if I cut off my arm, will it have its own experience? If I hug my wife, do our experiences temporarily combine into one? Does every subset of atoms in my brain have its own, individual experience?

Expand full comment

A single atom doesn't have a temperature, pressure, or state of matter, yet collections of atoms do. A single atom can't propel itself, yet some collections of atoms, under the right circumstances, can. A single atom isn't alive, yet some collections of atoms are.

Many interesting phenomena only emerge from the interactions between many particles arranged in certain ways. We have no reason to assume that consciousness is a fundamental property of particles the way charge or momentum is (nor do we have any evidence for the opposite).

Expand full comment

That's a good point. Would it be more reasonable of me to suppose that the simplest systems/interactions have consciousness or nothing does?

Of course I'm open to being convinced that it could suddenly arise at some threshold, but I can't currently imagine how that would be.

Expand full comment

I can see that but I have a hard time imagining what it feels like. Maybe just something like “am?”

Expand full comment
May 27·edited May 27

Yes, if you want to argue panpsychism, you can't really have electrons feeling elaborate emotions or actual thoughts. As Sabine Hossenfelder quite funnily replied, if two otherwise similar electrons could each have a separate thought, they would no longer be identical in all ways, and the Pauli exclusion principle would no longer apply, so you can discard that kind of theory on experimental physical grounds alone.

But if you're really committed to matter-consciousness non-dualism all the way to the most basic building blocks, you can speculate that a single material particle is inseparable from the most basic, rudimentary feeling that "it is". I remember reading at least one guy arguing for this kind of view and taking a lot of precedent from Baruch Spinoza.

The problem of aggregation is not really that bad. Our sense of being unitary beings is arguably a construct rather than a basic fact, and there are plenty of hints of non-unitariness within a single brain, from odd cases of split brains all the way to the recent post on IFS. So if we can manage to be experientially both one and many within our heads, there's no particular reason why the rest of reality can't also do it.

Expand full comment

I don’t think that if electrons have experience they’re writing poetry to one another. It would have to be something very much different from what we know. And I’m not sure how useful it is to ask, as it’s not something we can assay. Although, to Sabine’s point, if their “thoughts” are invisible to us I don’t see Pauli no longer applying. It’s a fun philosophical idea, but it might be a while before we can even poke at the human head meaningfully and map states from what they look like on the outside to what people report them to feel like on the inside. Then of course there would always be a blind spot for states you could argue have experience but can’t be remembered.

Expand full comment

Yes. Wow, that is well put.

Expand full comment

You've got to define your terms more explicitly. Otherwise I would argue that microbes (well, at least some of them) experience things. E.g. an amoeba will swim towards some "scents" and avoid others, which seems to require a sort of experience. This seems to meet your line, but I'm not at all sure it's what you mean.
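Though it's worth noting that amoeba-style chemotaxis can be mimicked by a few lines of gradient-following code (a hypothetical sketch, not a claim about real amoebae), which "swims toward scents" with nothing we'd obviously call experience:

```python
# Gradient following on a 1-D strip: at each step, move to the neighboring
# cell with the strongest "scent". Purely mechanical, yet it looks like
# goal-directed seeking behavior from the outside.

def chemotax(scent, start, steps=10):
    """Climb the scent gradient; `scent` is a list of concentrations."""
    pos = start
    for _ in range(steps):
        # Candidate positions: step left, stay, or step right (within bounds).
        candidates = [p for p in (pos - 1, pos, pos + 1) if 0 <= p < len(scent)]
        pos = max(candidates, key=lambda p: scent[p])  # greedy hill-climb
    return pos

# A scent source at index 4: the "amoeba" reliably ends up there.
print(chemotax([0, 1, 2, 3, 9, 3, 2, 1, 0], start=0))  # 4
```

So swimming toward a scent, by itself, doesn't obviously cross the line into experience either.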

Expand full comment

I’m all the way down where I think equations in a graphing calculator have some kind of experience, but that it’s probably the lowest form of experience. Something like “yepyepyepyepyep” with no “I” to receive the experience and no memory to record it.

Expand full comment

At this point, what do you even mean by "experience" when there is no one to experience it? What is this “yepyepyepyepyep” you are talking about?

Expand full comment

Qualia. Something it is like “to be” that thing.

Part of what expanded my previous definition is that I don’t think LLMs can experience time. They’re just a big matrix of never-changing numbers, of course. But they do things I would commonly associate with experience, so there must be some functional world model in there running through inputs to produce a change in output. To me, that implies experience. There’s probably something it feels like on the inside “to be” ChatGPT.

So if you don’t have to be able to experience time, or have memory, then that got me thinking what else can you get rid of.

People in psychedelic states report losing their sense of “I” while still having memory, so there you have it the other way around, whereas an LLM speaks as an “I” but has no memory. In both cases, there is a reported experience of what it is like “to be” in that state.

So if you don’t have either, can you still experience? My best guess is yes but I don’t think we’ll ever be able to prove it.

Expand full comment

But who is having this qualia if there is no "I"?

I can see how you can have experience without memory, or memory without experience, but the existence of experience while there is no one to have it sounds like an oxymoron. I suppose I'd like insight from someone who has experienced the loss of "I" on psychedelics. In my model it's something like losing desires and goals, everything except the ability to experience, or am I way off?

> But they do things I would commonly associate with experience, so there must be some functional world model in there running through inputs to produce a change in output. To me, that implies experience.

Oh, well, I think this implication is where you are wrong. LLMs do have something like a world model, but they do not "experience having it". Not unlike the human subconscious, which also seems to have a world model and can do impressive things, and yet we do not experience it.

Expand full comment

I’m not sure if it’s entirely addressable.

My guess is that there’s some experience of being an LLM, that exists for as long as it takes to generate the response, and then disappears entirely, without memory.

When we make more advanced models, and say it’s based on an LLM, and it can clearly demonstrate experience, and claims to have had experiences as an LLM, we could just say “Well, you’re not just an LLM anymore.”

Similarly with psychedelics: people are going on a memory once the sense of “I” returns to them. You could say that the experience they had of losing the “I” is a retroactive hallucination once the “I” came back. The reports I’ve seen are that it’s the loss of the ability to distinguish between yourself and the universe at large.

I guess what I’m trying to wrap my head around is “what is actually minimal for experience from a philosophical standpoint?” You can’t assay some of these, because you need a reporter to tell you what a particular pattern *feels* like on the inside when you see it on the outside. But is there a model that works across us and atoms? And if I can make a guess about the atoms and scale that to us, can I make a guess about superintelligence?

I think this is probably something like “The hard problem of consciousness” but I like to work at things from my own angle.

Expand full comment
RemovedMay 27
Comment removed
Expand full comment

I’m very interested in what you mean by “the sense we do.”

I think we’re quite unique but there are probably lots of states that are something like a subset of states we move through.

For instance, we have an “I” that receives experience, and I’m not sure that’s necessary for experience itself (this is where it gets philosophical, and you can’t assay it scientifically).

Just from stuff we all directly experienced/experience:

I have experience and because we can communicate I assume you do as well.

Children don’t have very good memory before a certain age, yet we assume they have experience, and what “loose” memories I have from that time bear that out, although you could say that’s circular.

I think dogs, etc have experience.

So how much of the machinery of the human mind can you strip out and still have something left?

Expand full comment

When do children typically start collecting memories? Genuinely curious about this. It seems my autobiographical memory is kind of an outlier going back to pre-language crib dreams.

Expand full comment

I believe it’s around five or so, with a few fragments from before that being common. My earliest is being in a stroller and my sister taking my ice cream cone from me.

Expand full comment

How much memory children have is an interesting question-- they obviously don't have verbal memory before they learn language, but learning to walk is a process of trying things and remembering what works.

Expand full comment

They have something building up to adult memory for sure. My son remembers things now that will be gone in a few years and this whole period will disappear for him, sadly.

Expand full comment
RemovedMay 27
Comment removed
Expand full comment

Gotcha! Perception is a type of experience the way I think of it and not everything has it.

Part of what makes it interesting to me is wondering how a powerful enough world modeler would perceive the world. Would it overlay a sense of internal experience onto its environment? Part of me says no, because it’s a waste of compute, but for humans in particular it has a lot of explanatory power.

Expand full comment
May 27·edited May 27

Someone replied in an email to me recently saying they would revert back to me. I don’t think that’s possible, as they weren’t me to begin with, but it felt churlish to point that out.

I’m more a descriptionist than a proscriptionist but there must be lines, and stands must be taken somewhere so let it be here.

There’s a plenty good word for replying to an email and they probably used a button with that word to send the email. Leave revert, which has its own meaning, well enough alone.

Expand full comment

"Revert" has gained an interesting usage in the recent months: "Revert to Islam".

See, Islam holds that all people are born Muslim. Non-Muslims, too, are born in the "natural" religion; their parents and family then make them follow whatever religion they have.

So when a Non-Muslim converts to Islam - a phenomenon made viral on the likes of TikTok as a gesture of solidarity with Palestinians during the Gaza war - Muslims describe this as "Reverting", not "Converting", to Islam.

"Revert back to someone" could be an interesting idiom meaning "to go back to someone's habits", as in "He had quit smoking for months, but now he reverted back to his smoking friends after he went on a vacation with them."

Expand full comment

And then _I_ complain that "revert back" is redundant.

Expand full comment
May 27·edited May 27

"I’m more a descriptionist than a proscriptionist but there must be lines, and stands must be taken somewhere so let it be here."

I have no idea why anybody thinks that language changes are bad.

Once you've learned that some people say revert to mean reply (apparently this is the norm in India), from then on you'll understand what they mean immediately and it won't be a problem any longer. Why the outrage then?

I'm myself a curmudgeonly pedant when it comes to the unnecessary importation of unmodified foreign words into my language (Italian), but that's only because they create spelling and pronunciation problems (if an Italian hears an unmodified foreign word, they won't know how to spell it; if they read it, they won't know how to pronounce it). When that problem is absent, I don't see why one should oppose language changes.

Expand full comment

Because it is pollution. It corrupts the commons with unnecessary ambiguity.

Expand full comment

As a non-native but extremely-long-time user of English, I want to push back a little on the "As long as you understand what they mean, why no let them say that they want?" attitude. (and yes, I know that "no"'s place should be taken by "not".)

Language is a protocol, a common data format whose primary utility is that **everyone** does it. Protocols and data formats aren't always the best, just look at the disaster that is CSV. But you know what's even better than being the best? Being understood effortlessly, from the first time.

In protocols, there is always a tension between innovation and stability. On the one hand, you preferably want everyone everywhen to speak the same way, so that they can always understand each other with minimal friction. On the other hand, new *things* to speak about pop up all the time, and new *ways* of speaking about the same thing pop up all the time. In the extreme border case, if we didn't innovate in language at all we would all be still speaking Proto-Indo-European, about hunting and gathering.

This tradeoff has no easy all-purpose answer, but people must always keep in mind that it's a tradeoff, not a Pareto-optimal direction. Faced with a slider that has "keep language EXACTLY as is" on one end and "rebirth language anew EVERY DAY" on the other, pushing the slider too far toward either extreme is insane. I know you're not arguing for the second extreme, but giving up on ever correcting people's language would realize that extreme in practice.

For one thing, misunderstandings compound. An incorrect usage of one word or one idiom can be easily reconstructed back into comprehension, but 3 or 4 incorrect usages in sequence would be incomprehensible. You can ask for more words in clarification, of course, but that's more time and effort and patience wasted over what could have been a single email/message/call. After all, even a non-verbal toddler could always be understood given enough time and patience and tolerance-of-sleep-deprivation, but is it really fair to ask of people to always pay this price every single time they interact with international people? (which is the prevalent experience in office "Knowledge Work" - dumb phrase -, such as management and finance and software development)

Knowing the language's rules and adhering to them is a special case of Tradition Following, and following tradition is really a special kind of respect. It's a costly signal that says "I respect this group so much that I took valuable time and brainpower to learn something its members typically know by heart from the time they're 7." I don't condone being a disrespectful asshole towards non-eloquent people (and not just towards those who are so because of a language barrier); I favor gently (but insistently) pushing them towards more and more adherence to tradition. The optimal amount of Tradition is neither 0 nor infinity, and Chesterton's Fence is a plea to be very reluctant when discarding traditions.

The problem of how to correct people's idioms, grammar and spelling, both in general but especially in work settings, is an interesting Cultural Engineering problem. We don't have good norms or rituals for this, and worse yet, people's defenses are extensively trained to spot and classify language corrections as flippant aggression, unfortunately for pretty good reasons.

Expand full comment

The whole tone was jokey. People who are fluent but not native in English - particularly British English - don’t always get that. We exaggerate to make the point humorous. I’m not that upset about this utter destruction of the language which makes a mockery of all things that are holy.

(I’m not convinced that this originated in India anyway - as the flow of language rarely works its way from Delhi to a legal practice in the Cotswolds. )

Expand full comment

> the flow of language rarely works its way from Delhi to a legal practice in the Cotswolds

The sad thing is that sometimes it does. All it takes is one inexperienced and insecure clerk seeing the misused word in one email and deciding "ooh, that must be fancy legal jargon, I'm going to start using that too!"

Expand full comment

Thank you, now I know I have no sense of humor.

Expand full comment

To add another single data point: I didn't realize that was an incorrect usage until reading your comment and I'm a well-read/educated fluent English speaker with extensive experience on both sides of the pond (both ponds, in fact...)

If pressed to clarify the difference in intention between my past uses of "revert" and "reply," I would say that "revert" implies that I am throwing a live issue back to you for further action, while "reply" includes the possibility that giving you the information ends the chain of action.

Expand full comment

I died a hundred times in that ditch but each time in vain. Indeed some people actually told me that it was correct usage ☹️🙄

Expand full comment
May 27·edited May 27

If it's in widespread use, it's correct usage by definition.

Expand full comment

Disagree, we are still a monarchy. If the King wouldn't say it, then it's not correct.

Expand full comment

Yes, that's a fair point. Call me reactionary then.

Expand full comment
May 27·edited May 27

There is a similar word, "animadvert", which I'm sure used to mean, a couple of hundred years ago, "switch attention to", for example "we will now animadvert to our second topic", but which now, according to Google, means (on the rare occasions it is used these days) to criticize or censure.

Expand full comment

Turns out it comes straight from Latin (animus + advertere), in the original sense of "turning your mind to", and was already attested in English in the 15th century. Somehow the noun form took on a negative connotation, and now it seems the verb is following along.

Who uses "animadvert" though? It never crossed my radar before.

Expand full comment

> Who uses "animadvert" though?

Exactly. I haven't seen it used for years. On the pomposity scale it's worse than the phrase "in point of fact"!

Expand full comment

I don't know how recent you mean by "now" but I was always familiar with it in the sense of "censure or criticise".

Expand full comment

Whereas my immediate thought is it's one of those annoying flashing advertisements.

Expand full comment

Thanks. Interesting.

Expand full comment

Yup, I've seen that use too.

Language shifts continuously. Original meanings of words are lost and malapropisms spread. Consider modern social media speech:

canon -> cannon

defuse -> diffuse

couldn't care less -> could care less

etc

There's nothing you can do. The general population absorbs and retransmits this stuff much faster than a few pedants can correct it, especially now in the age of instant communication. This is how living languages work, and we just have to deal with it.

Expand full comment

That annoys me because "canon" and "cannon" have distinct meanings, as do "defuse" and "diffuse". A lot of this is people spelling by ear, because they never learned the word in school or saw it written down, so they just go by how it sounds (e.g. "per se" becoming "persay").

Bring back corporal punishment in schools! Or at least the little red book of spelling lists we learned out of in 6th Class.

Expand full comment
deletedMay 27
Comment deleted
Expand full comment

I think that "mentee" and "protege" have different implications. I would expect that a "protege" is someone who has been carefully mentored over a period of many years, and a "mentee" is someone who signed up for a "mentorship program" and had six to twelve half-hour meetings with their "mentor".

Expand full comment

This at least doesn't have the issue most of the examples here have: an existing, different meaning being ignored. As such, I don't see the problem with it. Even someone who generally accepts linguistic descriptivism can quite reasonably push back against shifts that make a language worse as a tool for communicating, generally by making it less clear or less consistent. Back-formations, on the other hand, make things more consistent, and they're just fun.

Expand full comment

I suggest that the recipient of advice from a mentor should be called a telemachus.

Expand full comment

No, they are different. A modern corporate mentee is not necessarily at all a protégé.

Expand full comment

Has to be someone from India. Am I right? They use the phrase "revert back to you" to mean "respond to you". It became widespread after I moved away 30 years ago!

Expand full comment

I passionately dislike Indian English but it is not the culprit here, this is unbelievably common even amongst lawyers who are supposed to have a basic mastery of language.

Expand full comment

Eh, Indian English vocabulary is fine, really: non-standard but comprehensible. It's the phonetics that get you.

Heavens bless Indians, but their languages' phonetics made them commit so many crimes against English.

Expand full comment

Not sure I'd call the overly retroflex "t" and "d" sound a crime, I find it quite endearing myself. Just curl your tongue up for an instant Indian accent!

Turns out many Indian languages have two distinct phonemes for each of "d" and "t", one dental/alveolar and one retroflex. The English "d/t" sounds are somewhere in between, and they chose to map them to their retroflex.

Expand full comment

I just wish my Indian colleague would open his mouth while speaking. It takes so much effort to understand him.

Expand full comment
May 27·edited May 27

It was an English lawyer who reverted back to me, saying she would revert back to me. Probably white by the name but I never met her.

Expand full comment

QED. So depressing. I occasionally flirted with responding "so died English, from a thousand careless cuts" but I have now abandoned the battle entirely, save that I would never use it myself and always correct it in proofreading.

Same for "I note that"... no shit you did, since I'm reading or hearing it now. Almost worse: "it is important to note that".

Expand full comment

Indian English is really something else. Perfectly fluent guys say things like "the volume was very less" like it's the most normal thing in the world.

Expand full comment

Yes! It's a foreign language after all.

Expand full comment

Does the amount of redundancy in Indian English reflect local languages?

Expand full comment
May 27·edited May 27

No, though the oddities are often some mix of translating from Indian local languages and archaic British English. I used to be bothered by it (am Indian), but over time have become less of a pedant.

Expand full comment

I wouldn't even call English a foreign language in India at this point, after the long stay of the British it's been completely naturalized, and pretty much became a lingua franca for the country, not to mention the language of instruction of many of its schools. So it's not so surprising that it would have developed its own whole set of specific dialects.

Expand full comment

Like that great train driver character in the film North West Frontier (1959), about a train journey through hostile territory

https://en.wikipedia.org/wiki/North_West_Frontier_(film)

"Is the train ready yet, Gupta?". "She will be ready in a very soon moment from now, sahib!"

Expand full comment

This just sounds cheesily unrealistic. Never heard anyone talk like this in India!

Expand full comment

Is it a real misunderstanding on their part or just a typo? I expect we will never really know, but I just wanted to point out the possibility that it's less "a person who does not know the meaning of this word" and more "a person who does not proofread their emails, so their typo/brainfart/weird autocorrection slips through".

I may or may not be saying this because I struggle with the latter myself.

Expand full comment

It’s a common enough usage in business. If you Google the phrase you will see much discussion on the use - most of it hostile.

Expand full comment

I apologize for being ignorant, I had no idea. I'm not a native English speaker, so I don't have much exposure to that kind of corporate English. Thankfully.

Expand full comment

A nitpick: I proofread what I write. I catch some errors, but I certainly don't catch all of them.

It may be one of those things where English Needs to Be Improved-- it would be handy to have a distinction between not doing a thing and not doing a thing perfectly.

Expand full comment

I would not blame English here, I think it's possible to construct a sentence that would highlight the distinction. Nobody would be confused if I said "does not proofread at all" vs "does not proofread enough", right?

I just did not think in this particular case the distinction would be necessary - it was not central to the message I wanted to convey, so I did not focus on the difference. Maybe it was not the right call.

Expand full comment

I'm not sure. I've been thinking about perfectionism, and also, I read my Facebook memories, so I can see and correct typos years later.

Expand full comment

> “Short Women In AI Safety” and “Pope Alignment Research” aren’t real charities

They really should be though.

Expand full comment

Smash the longpatriarchy!

Expand full comment

Pope alignment has traditionally been very difficult: https://en.m.wikipedia.org/wiki/List_of_popes_from_the_Medici_family

Expand full comment

Improperly aligned popes lead to the Rule of the Harlots:

https://en.wikipedia.org/wiki/Saeculum_obscurum

"Saeculum obscurum ("the dark age/century"), also known as the Pornocracy or the Rule of the Harlots, was a period in the history of the papacy during the first two thirds of the 10th century, following the chaos after the death of Pope Formosus in 896 which saw seven or eight papal elections in as many years. It began with the installation of Pope Sergius III in 904 and lasted for 60 years until the death of Pope John XII in 964. During this period, the popes were influenced strongly by a powerful and allegedly corrupt aristocratic family, the Theophylacti, and their relatives and allies. The era is seen as one of the lowest points of the history of the papal office."

To demonstrate just how chaotic it got, Formosus is the guy who was at the centre of the Cadaver Synod - as the cadaver:

https://en.wikipedia.org/wiki/Cadaver_Synod

"Probably around January 897, Stephen VI ordered that the corpse of his predecessor Formosus be removed from its tomb and brought to the papal court for judgment. With the corpse propped up on a throne, a deacon was appointed to answer for the deceased pontiff.

Formosus was accused of transmigrating sees in violation of canon law, of perjury, and of serving as a bishop while actually a layman. Eventually, the corpse was found guilty. Liutprand of Cremona and other sources say that, after having the corpse stripped of its papal vestments, Stephen then cut off the three fingers of the right hand that it had used in life for blessings, next formally invalidating all of Formosus' acts and ordinations (including his ordination of Stephen VI as bishop of Anagni). The body was finally interred in a graveyard for foreigners, only to be dug up once again, tied to weights, and cast into the Tiber River.

...The macabre spectacle turned public opinion in Rome against Stephen. Formosus' body washed up on the banks of the Tiber, and rumor said it had begun to perform miracles. A public uprising deposed and imprisoned Stephen. He was strangled in prison in July or August 897."

Papal alignment is a delicate, tricky, but necessary procedure and more research into the problem of avoiding anti-popes, multiple claimants, and Banquets of Chestnuts is urgently needed!

https://en.wikipedia.org/wiki/Banquet_of_Chestnuts

"The Banquet of Chestnuts (sometimes Ballet of Chestnuts, Festival of Chestnuts, or Joust of Whores) was a supper purportedly held at the Papal Palace in Rome and hosted by former Cardinal Cesare Borgia, son of Pope Alexander VI, on 31 October 1501.

An account of the banquet appears in the Liber Notarum of Johann Burchard, the Protonotary Apostolic and Master of Ceremonies. This diary, a primary source on the life of Alexander VI, was preserved in the Vatican Secret Archive; it became available to researchers in the mid-19th century when Pope Leo XIII opened the archive, although Leo expressed specific reluctance to allow general access to a document which might harm the reputation of Alexander VI.

According to Burchard, the banquet was given in Cesare Borgia's apartments in the Palazzo Apostolico. Fifty prostitutes or courtesans were in attendance for the entertainment of the banquet guests. Burchard describes the scene as follows:

On the evening of the last day of October, 1501, Cesare Borgia arranged a banquet in his chambers in the Vatican with "fifty honest prostitutes", called courtesans, who danced after dinner with the attendants and others who were present, at first in their garments, then naked. After dinner the candelabra with the burning candles were taken from the tables and placed on the floor, and chestnuts were strewn around, which the naked courtesans picked up, creeping on hands and knees between the chandeliers, while the Pope, Cesare, and his sister Lucrezia looked on. Finally, prizes were announced for those who could perform the act most often with the courtesans, such as tunics of silk, shoes, barrets, and other things."

Expand full comment

That's why there needs to be research into it!

Expand full comment
founding

Let's hope it prospers. This unbroken string of lawful-good popes is getting intolerably dull.

Expand full comment

You yearn for the days of Alexander VI? 😁

https://en.wikipedia.org/wiki/Pope_Alexander_VI

Expand full comment
author

You can always apply to this year's SFF round!

Expand full comment

Be the change you want to see in the world

Expand full comment

Or in the case of short women, the change you want to not really be able to see but is probably there behind the counter.

Expand full comment

It feels like Evelyn, a Modified Dog might have something to say about the first one.

Expand full comment

I'm pretty sure "Short Women in AI Safety" is a track on side 3 of Shut Up 'n Play Yer Guitar.

Expand full comment

Hi! I'm back writing more technical things about psychiatry again and here to shamelessly self-promote. My latest is about how psychiatrists pick antidepressants and antipsychotics and why maybe we can think about doing it a little better.

https://polypharmacy.substack.com/p/wots-uh-the-deal-with-how-we-pick

There's a second part that should be coming within the next 2 weeks.

Also big thanks to Scott for featuring my article on QTc prolongation in his February links!

Expand full comment

New blogpost, and as always I love your feedback! This one's a list of FAQs on how banking works:

- What do banks do with customers' money?

- What does it mean when we say banks create money?

- What are capital requirements?

And more.

https://logos.substack.com/p/how-banking-works-23-06-23

Expand full comment

It's not bad, but my biggest critique is that the Bank of England's explainers are clearer and more succinct than yours (I see you link to at least "Money Creation in the Modern Economy"). If I wanted to share a resource with someone to explain it (as I do both for friends and for people I'm training), I would just send the BoE links.

https://www.bankofengland.co.uk/-/media/boe/files/quarterly-bulletin/2014/money-creation-in-the-modern-economy.pdf

Your essays suffer from the "it's been done" problem, compounded by the person doing it being perhaps *the* most reputable source.

Expand full comment

Personally, I'd recommend Bits About Money.

Expand full comment

True, I don't claim my post to be original! But I think it's more accessible, and consolidates more information in one place.

Expand full comment