Dino

This seems straight out of Bay Area House Party (warning - click bait)

https://lifehacker.com/relationships/what-is-porn-dosing

TL;DR - Micro-dosing porn.

Hank Wilbon

Tyler Cowen has a recent post about "deculturation". https://marginalrevolution.com/marginalrevolution/2024/06/can-we-survive-deculturation-olivier-roys-the-crisis-of-culture.html

Not having read the book it is about I can only speculate. Deculturation sounds like the lost common culture of Europe and its spawns like the Americas and Australia. Whether I've got the subject of the book correct or not, it's something I've noticed. We're losing our collective culture. There was a time when most Europeans and Americans got biblical references. You could assume the well-educated got them, and even many uneducated people knew biblical stories. There was also a period when educated people in Europe and its peripheries knew a lot about Greek myths.

These common cultural currencies have disappeared rather recently. Probably the majority of people with college degrees under the age of forty now know very little about The Bible and Christian preachings in general. That's a dramatic change in culture, considering most educated people knew a lot about The Bible for 1500 years until about yesterday.

Add to that the fact that most educated Europeans and Americans over the past couple centuries knew plenty about literature. Knowing Dickens or Tolstoy in the 19th century was like knowing Breaking Bad or Game of Thrones today, only much, much more so. Everyone knew Mozart, Haydn, Beethoven, Brahms.

There are far fewer common cultural references today. Some of that is for good reasons. Those of us overly online think more globally. Abrahamic religions are no longer the de facto norm.

But society can't exist without culture, so cultural entrepreneurs are rushing into the void. Hence wokeism, a brand new religion based on atheism and total equality. Or neo-reactionaries, who are good at seeing what we've lost and pretty terrible at coming up with good solutions for it.

In a sense, we are having to re-invent culture from scratch because we've either rejected the received culture or we are too ignorant to even know it.

Why might the loss of received old European culture be a bad thing? Because maybe something that took many centuries to create, undergoing cultural evolutionary pressures, has more value than something we are now creating on the fly.

Does that ring true for you, or do you think these fears of deculturation are the timeless fears of old people?

Paul Botts

"most educated people knew"...

"most educated Europeans and Americans over the past couple centuries knew"

You are skipping over the enormous relevance of that qualifier: "educated". Across Europe as of 1820 only England had achieved a _literacy_ rate (never mind whatever definition you prefer for the word "educated") of at least 50 percent. France was still less than 40 percent literate, Russia less than 10 percent, etc.

The USA by 1820 had a much higher literacy rate, between 70 and 90 percent depending on which estimate you prefer. But the USA was also a _much_ smaller slice of the world's population than we are today used to thinking of it as: 1/10th as many people as the UK, 1/8th as many as France, 1/7th as many as Russia, 1/5th as many as Spain, etc.

So 200 years ago the vast majority of Europe+America was not even literate let alone "educated" and, not unrelated of course, was living at the edge of subsistence. The "everyone" in a statement like "everyone knew Mozart, Haydn, Beethoven, Brahms" is plausibly true only if we mean actually a very small slice of the population.

That context is logically highly relevant to the notion that the western world, or the English-speaking world, in that era shared a universally-known single culture and now does not.

Moon Moth

I think Scott wrote about it here, once upon a time:

https://slatestarcodex.com/2016/07/25/how-the-west-was-won/

Martian Dave

Thanks, this is a great piece. Dancing round the maypole definitely has something to do with it, but there is also a hunger for creating Great Art in the West which makes the West a little different to Tibet etc, and is also at risk from universal culture. So I guess we summoned two demons, and the second destroyed the first and the summoner.

beowulf888

Gak! Honestly, the "common culture" proponents all seem to be historically illiterate (at best), and culturally prejudiced *and* illiterate (at worst).

> We're losing our collective culture. There was a time when most Europeans and Americans got biblical references.

There was a time when Christians burned each other at the stake for believing slightly different things. But I suppose you could say that Europe in the Sixteenth Century had a common culture of religious intolerance!

> most educated Europeans and Americans over the past couple centuries knew plenty about literature. Knowing Dickens or Tolstoy in the 19th century was like knowing Breaking Bad or Game of Thrones today, only much, much moreso.

The idea that there's some sort of common "European" culture is bizarre on the face of it. One only has to travel to foreign countries to see that when it comes to the arts and literature this is absolutely not true.

You mention Tolstoy and Dickens. There was a 20-year gap between the publication of War and Peace in Russia — in Russian (a language that wasn't universal to the Russian Empire, BTW) — and its translation into English (done rather poorly on the first pass, IMHO). And Tolstoy was not immediately embraced by the English-speaking world.

Dickens is not and never was well-regarded in France. "He is not considered a great or classic writer in France; his books are seen as old fashioned and mostly suitable for children."

https://www.theguardian.com/books/2009/jan/04/jean-pierre-ohl-mr-dick#:~:text=For%20a%20Frenchman%2C%20Dickens%20is,and%20mostly%20suitable%20for%20children.

As for most educated Americans in the Nineteenth Century — well, there weren't that many who were. Most Americans' education stopped at the 8th Grade (although, as I noted in a previous post to an earlier open thread, if you got your 8th Grade diploma you probably had a better general and practical education than most twenty-first-century high school graduates). In 1870 the Americans who had a college education (all 1.7% of them) probably did share a common culture, though — in the Greek and Latin classics (because the idea of Liberal Arts education hadn't yet been invented). Meanwhile, 20% of the nation was illiterate. And for those who were literate, books were tremendously expensive. Most households had a Bible, though. So the majority of Americans had a shared common culture based on the Bible and ignorance.

It wasn't until the last two decades of the Nineteenth Century that *free* lending libraries became common, and the rising middle class had access to books. Of course, that's when the first "common culture" complaints arose among the educated ("They're all reading the popular novels by Dickens instead of reading Cicero and Plato!")

The subtext of the "common culture" arguments seems to be about restricting educational opportunities to a narrow range of carefully curated subjects (that reflect the prejudices of the CC-crowd) to facilitate political and/or religious conformity. </rant off>

Martian Dave

So for me there are three rough epochs

(1). No real attempt to redistribute the goods of high culture (e.g Catholic Church pre-liturgical movement c. 1850)

(2). A genuine attempt to redistribute the goods of high culture (e.g Catholic Church from the liturgical movement to Vatican II)

(3). Gradual abandonment of (2) (e.g Catholic Church post Vatican II).

Now (3) is better than (1), but I do believe (2) is better than (3), and most people who are harking back to a common culture are harking back to (2) rather than (1). (1) has basically passed out of living memory so it is genuinely difficult to feel nostalgic for it, whereas (2) represents the world our parents and grandparents grew up in.

Viliam

Whenever someone complains about how they miss the glorious past, the glorious past either never existed or was available to fewer than one person in a thousand.

Imagine the glorious past when less than 1 person in 1000 was literate, and only a few of them had enough time to read books. The book-readers all over the planet probably knew each other by name, so they could recommend Tolstoy to each other, because there was no longer book to read. What an exciting era!

There are probably more Tolstoy readers today, in absolute numbers. The only problem is, having read Tolstoy does not clearly mark you today as a member of the elite.

Also, I have no idea why "memes" and "EU" are deculturation. No strong opinion on EU, but memes are definitely shared cultural artifacts -- shared by several orders of magnitude more people than Tolstoy's books. It's just not the kind of culture you like, because it is not high-status.

Martian Dave

I love memes. I've read more memes than Tolstoy, but I have confidence that Tolstoy has a more profound insight into the human condition. Memes on aggregate are genuinely insightful but you need a lot of them and there's a lot of dross. Whereas there's a soft test-of-time which filters out the dross from 19th century literature.

Boinu

Are you sure that a society can't exist without culture? I think that's the central problem with your musing, the idea that there ought to be a general canon – preferably traditional and taste-filtered through some well-heeled, powerful stratum of society – that binds together everyone worthy of being called educated and cultured. That has a political valence all its own, doesn't it?

The reach for 'wokism' as replacement, regardless of whether you like equality and atheism (I think they're both quite dandy, but I get the vague sense that opinion is divided), is a category error. The egalitarian sentiment, in various forms, has been with us at least since the Gracchi, and (money aside) it's orthogonal to familiarity with art, music, and literature. It would have been thus in the 19th century, too.

The Western canon is still there, for anyone, Western or not, to enjoy. It's subsidised in various ways (no small irony that the right so often tends to hate these subsidies) which is fair enough, because otherwise it would sink even lower in the vicious commercial wrangling for attention. It just no longer marks you out as a rube if you don't know it well. And that's fine. Meanwhile you've got people walking around as experts on Sengoku Japan or the Spring and Autumn Period because they've gone through an anime or wuxia phase in adolescence, and then they memorised all of Poe for reasons, and then the pre-Raphaelites were big on Tumblr for a month, and then their favourite youtuber did a four-hour sprawl on Kierkegaard... give them a decade to weave a patchwork of weird interests, and they end up better-rounded individuals than most people in Europe in the 19th, even if Shakespeare and Homer can no longer be taken for granted. Is that strictly worse?

Hank Wilbon

I think what might be worse is that the result is less commonality. Everyone follows their own intellectual journey, learns a ton, but in the end speaks a different cultural language. So when people communicate, since they can't rely on a deep common culture, they resort to internet lingo, memes, and emoji. The deeper culture they have learned has value for them personally, but it isn't something they can refer to when communicating with others, because those others, however erudite they may be, focused on learning different things: Raphael instead of the pre-Raphaelites, Goethe instead of Kierkegaard, Dickens instead of Poe.

beowulf888

Chairman Xi is doing a good job of implementing a common culture in China. Everybody learns a common history. Everybody is educated in the same neo-Confucian philosophy. Everybody must speak the common tongue for official and commercial business. Minority cultures are being extinguished. The Internet is firewalled to create a shared view of the world. Seems like he's building a harmonious paradise that should be exported to the rest of Asia (whether they like it or not).

Martian Dave

I think there really is a cultural decline, and we should allow ourselves enough sadness to do something modest about it. Anything more will probably just feed in to the phenomenon we’re sad about. Ken Clarke thinks Great Civilizations are founded on confidence, and we are going to need a lot of it. Here are some things I think are contributing to the decline:

Lossiness - there's a whole lot of culture! Even people who make it their life's work to preserve it can only preserve a part of it, there are trade offs between promoting new work, promoting well-loved classics and promoting neglected classics. Some cultural artifacts inevitably fall through the cracks.

Egalitarianism - promoting neglected classics includes making space in the canon for e.g female composers, jazz, folk. Seems good, but any one work displaces other work. Specific older works/composers are in danger of cancellation e.g Wagner (seriously anti-semitic opinions by any standards)

Cool/Casual - my wife’s friends (c. 45) can't get enough of classic literature, but I wonder how long this can go on, given how different dating is now. The past just generally seems like a lot of hard work, running fast to stand still, and we’ve come to expect to approach things more casually. Even if you're good at empathy it's exhausting.

Atheism - obviously people used to take the Bible and biblical inspired art seriously because they thought it really was the word of God, then there was a long hegelian/nietzschian twilight period, where it's like “this is false, but it's a crucial step on our journey towards true Spiritual Enlightenment” (e.g Wagner's depiction of medieval pilgrimage in Tannhaeuser). But to the extent people don't believe in God, even in a vague 18th/19th century way, I don't see how the Bible can avoid dropping out of mass culture.

Tech - even when tech enables genuine works of art to happen, it is art skewed towards our own time and values. If it weren't for film I probably would have read more classic literature. Technological mindset displaces ‘useless’ subjects from the curriculum in favour of STEM.

Moon Moth

> Lossiness - there's a whole lot of culture!

I'm reminded of a great quote: "History is not what you thought. It is what you can remember. All other history defeats itself."

The same probably applies to "culture".

Rothwed

I think your historical perspective is off. For the vast majority of human existence, only a very tiny elite of society would know about music or writing. Most people were engaged in subsistence agriculture and simply didn't have time to spare from survival. Practically no one outside the clergy would have been able to read the bible on their own before the Protestant Reformation, because even if they were literate in their native language the bible was in Latin. All of this was standard until maybe 150 years ago. If anything, the culture you are describing is the anomaly.

And I doubt most people knew what you claim they knew. Ask a bunch of random people in 1900 America who Tolstoy is, and maybe half would answer a famous writer. I imagine very few would have actually read Tolstoy, and only a fraction of them would have understood it and been able to carry out an analysis of his writing. You have to keep in mind that the historical record is mostly made up of highly educated elites talking about things that interest them, which does not reflect the experience of the common man.

Martian Dave

Even if this is all true, there really was a time when people really believed that progress in education and technology could give the masses access to culture e.g the founders of the BBC. "Nothing is too good for the working class" Nye Bevan

FLWAB

While I agree that for most of human existence only a tiny elite knew literature and music and such, in Europe at least for the last 1200-1500 years even the peasants knew Bible stories. Though most church services were in Latin, priests were supposed to preach something in the vernacular every few weeks. Churches and cathedrals were filled with art telling Bible stories (for a modern example, the bronze front door of St. John the Divine's Cathedral in New York contains images taken from Bible stories that cover the whole Bible, from Genesis to Revelation). Friars would travel from place to place preaching about the Bible, and plays on Bible stories would be put on regularly. So even peasants would know the cultural basics of Christianity: know David and Goliath, Jonah and the Whale, Noah and the Ark, Moses and the 10 Plagues, etc.

Rothwed

I generally agree with the premise that there were shared cultural institutions in the past that are much less shared today. I was objecting to that shared experience being characterized as something like the ideal Renaissance Man. There were cultural practices that made French people distinctly French, but they didn't involve talking about Tolstoy and Mozart.

Comment deleted (Jun 9)

Nancy Lebovitz

We do have a culture, it's just more recent, faster-changing, and more based in commerce than prestige.

Comment deleted (Jun 9)

beowulf888

How about just plain realism? Optimism is what makes you want to draw on an inside straight. Of course, pessimists all think the world will end soon, but I've lived through at least half a dozen predicted end-of-the-worlds in my lifetime, so I don't buy into the latest round of EotW hysterias. OTOH the techno-optimism of Scientism is a religion that has replaced the Rapture of a Christian god with the Rapture of the Nerds.

Ironically, it's the common culture cultists who think the world is going to hell in a handbasket. But everyone else does, too! Although I don't necessarily believe our future will be a paradise (I think there's a low probability that our current *high-energy* civilization will continue much past the Twenty-first Century), I seem to be one of the few people left who thinks that, come hell or high water, humanity will muddle through somehow.

Eremolalos

GPT's ignorance about the physical world is astounding. I asked it to make me an image of a whirlpool, and gave some details about what I wanted it to look like. This is what I got: https://imgur.com/cTAIVjZ

And yet, GPT has no doubt read a fair amount about whirlpools online. If I asked it to name a famous short story with a whirlpool in it I'll bet it could. If I asked it what conditions produce whirlpools I'll bet it could tell me. If I asked it whether part of the ocean can form itself into a disk and lie on edge on the ocean surface like a tire on a floor it would tell me no.

I'm not sure people who think these suckers are going to understand pretty much everything better than we do grasp how enormous the gap is between what we know about the ordinary world and what LLM's know. There are a million things like whirlpools -- dogs, beauty salons, tar, lipstick, bubble goo, ferris wheels, needle-nosed pliers, harvest moons, folk dancing enthusiasts, communists, nightmares . . . -- that we understand the basics about. We know what they look like, what they feel like, whether you can put them in your pocket, how they would behave if set on fire, what sentences about them make sense and which don't. We learned all that while walking around the world interacting with these things, plus absorbing info via reading or talking. We put it all together somehow, those 2 channels of information. It comes so naturally to us to do that that it's not immediately obvious what an amazing feat it is.

Nancy Lebovitz

I like that whirlpool a great deal better than most of what I see from LLMs. It's got that computer art insipidity, but I rarely see an image where I think "I want to see this done by a good artist".

Yug Gnirob

That's a bossfight against Charybdis right there.

...actually that's literally just a Charybdis drawing. https://paleothea.com/mythical-creatures/charybdis-greek-mythology/ https://www.greekmythology.com/Myths/Monsters/Charybdis/charybdis.html

Peperulo

The new models are being trained multi-modally, but I think your point still stands w.r.t. emotions, smell and tactile sensations.

Nancy Lebovitz

https://www.youtube.com/watch?v=QV88C5ZK0x0&ab_channel=BermPeak

https://en.wikipedia.org/wiki/World_Bicycle_Relief

This is an extremely practical bicycle, built to be durable and easy to maintain. Made of steel, 50 pounds, $150. It's apparently only available through the charity rather than for sale in the first world.

This may not be a perfectly effective charity, but it's sensibly built around an existing device, and the charity also supports people learning how to repair the bicycles as a business.

I speak as a person who likes the idea of a handbrake which goes to the hub rather than squeezing the rim-- the rim is too slippery when wet.

rebelcredential

"My son was born last night," said Tom, apparently.

bloom_unfiltered

"I really like the actor who played Saruman," said Tom, lovingly.

NASATTACXR

"We are towing urine" said the people.

Melvin

"These aren't usually about me" said Taylor, swiftly.

thefance

my magnum opus:

> "A popstar is always on time; she arrives precisely when she means to" said Taylor, wizenly.

https://www.astralcodexten.com/p/open-thread-306/comment/45228363

Gunflint

I think I’ll save the amputee’s offhanded remarks for a hidden open thread.

Gunflint

Should have worked it into the adverb form, as offhandedly?

Nobody Special

“And no, I’m not getting a vasectomy,” he continued, testily.

Arrk Mindmaster

"I do want to talk to the doctor, though," said Tom, patiently.

Eremolalos

"No kids for me" said Other Tom, half in Earnest.

Nobody Special

"Or me!" Other Tom's partner insisted, earnestly.

rebelcredential

"Yuck, those mice have made their bedding out of dismembered hearing organs," said Tom, earnestly.

gdanning

"Warner Erhard sure got a lot of income from developing a quasi-cult," said Tom, earnESTly

A.

I missed the last open thread, so here's a link for everyone who might still be checking FiveThirtyEight, with Nate Silver explaining how low the management of his former site has fallen:

https://www.natesilver.net/p/polling-averages-shouldnt-be-political

Dino

I finally figured out my solution to Newcomb's paradox. Either I'm in a world where I will choose both boxes, or a world where I will choose box B. If I'm in the world where I will choose both boxes then the optimal choice is to choose box B, which I can't do because I'm in the world where I will choose both boxes. This paradox means I can't be in the world where I will choose both boxes. If I'm in the world where I will choose box B, the optimal choice is to choose box B, no problem. Therefore choosing box B has to be correct.

Dino

Thanks for the responses, they have helped me refine my solution. I still think the "which world am I in" framing is the key. So - take 2:

If I'm in the world where I will choose both boxes, I will get $1000. If I'm in the world where I will choose box B, I will get $1000000. Therefore I prefer to be in the world where I will choose box B, and I'm still a one-boxer.

Yug Gnirob

The optimal choice is to pick both boxes, because the reliable predictor will be able to better spend that $1,000,000 than you could hope to anyway.

If the money somehow ceases to exist if the predictor predicts you taking both boxes, then you take both boxes, and that guy can go fuck himself for deliberately destroying $1,000,000 in value.

Level 50 Lapras

The real solution to the "paradox" is to recognize the true nature of the paradox. It's only a "paradox" because it violates the axioms of Game Theory (and Rationalism).

In Game Theory, agents are assumed to have infinite computation and knowledge and float *outside* the world in some uncomputable astral plane. As they are floating outside of the world of the "game" they're playing, their decision processes can't possibly affect anything, etc.

As with frictionless cows or whatever, sometimes the Game Theory axioms are a useful approximation of reality, and sometimes they aren't.

In the real world, everyone has extremely limited computation and information, and everyone is *embodied* in the world, which means that they are part of the world they are acting in, and their own decision processes can affect the world and vice versa. E.g. someone could conceivably put you in an MRI machine and see what you're thinking before you think it. Or just give you drugs.

Newcomb's paradox is only a "paradox" because the setup of the problem directly contradicts these axioms. It's just an illustration of the limitation of Game Theory/Rationalist axioms, nothing deeper.

Matt

>If I'm in the world where I will choose both boxes then the optimal choice is to choose box B

This is wrong. In the world where you choose both boxes, box B will be empty and box A will have a little money, so the optimal choice is to take both boxes.

>If I'm in the world where I will choose box B, the optimal choice is to choose box B

This is also wrong. In the world where you choose only box B both boxes contain money so the optimum choice is to take both boxes. Unfortunately you can't actually do that since you are in the world where you only take box B.

The main intuition for one-boxing is that the decision to one-box itself affects which 'world' you inhabit but if you assume from the start that you must inhabit one 'world' or the other already, independent of the decision you would prefer/attempt to make, you kneecap that line of reasoning and leave two-boxing as the only viable strategy left standing.
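A minimal back-of-the-envelope sketch of that dependency, in C (my own illustration: it assumes the predictor is right with some probability p, set to 0.99 here, and that box B holds the $1,000,000 only when one-boxing was predicted):

#include <stdio.h>

/* Hypothetical sketch: expected payoff of each decision rule, given an
   assumed predictor accuracy p. Box B is filled only when the predictor
   expected one-boxing; box A always holds $1,000. */
int main(void) {
    double p = 0.99; /* assumed accuracy, not part of the original puzzle statement */

    /* One-box: $1,000,000 if predicted correctly, $0 otherwise. */
    double one_box = p * 1000000.0;

    /* Two-box: $1,000 if predicted correctly,
       $1,001,000 if the predictor wrongly expected one-boxing. */
    double two_box = p * 1000.0 + (1.0 - p) * 1001000.0;

    printf("one-box expected payoff: $%.0f\n", one_box); /* 990000 */
    printf("two-box expected payoff: $%.0f\n", two_box); /* 11000  */
    return 0;
}

On these assumptions one-boxing comes out ahead for any p above roughly 0.5005; the two-box argument only wins if the box contents are fixed independently of the prediction, which is exactly the assumption being disputed.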

Zach

>In the world where you choose only box B both boxes contain money so the optimum choice is to take both boxes.

If you take both boxes, how are you in the world where you only choose box B? The only way you can be in the world where you choose only box B is by taking only box B. Isn't that definitional?

I think the paradox gets resolved by substituting one supernatural device for another. Get rid of the person who can see the future, substitute a magic spell.

If you pick both, the million will disappear thanks to the spell, and the thousand will be all you get. If you pick one, you can get whichever you choose (probably the million). There's no paradox at all.

It's just a sci-fi Excalibur. Only the pure of heart can get the million, and if you try to take the thousand, you fail the test.

Matt

>If you take both boxes, how are you in the world where you only choose box B?

Because you don't take both boxes. Read the very next sentence after the one you quoted.

Zach

Yo, my bad! Apologies, disregard what I wrote. I must be going blind. Sorry about that!

FLWAB

>if you assume from the start that you must inhabit one 'world' or the other already, independent of the decision you would prefer/attempt to make, you kneecap that line of reasoning and leave two-boxing as the only viable strategy left standing.

I wouldn't say so. The relevant part of the 'world' you already inhabit in the scenario is not what's in the boxes, it's what kind of character you have. If the world you inhabit is one where you are the kind of person who will two box, then the alien (or computer, or Jin, or God, or whoever) will only put money in the one box. If the world you inhabit contains a you where you are the kind of person who will one box, then both boxes will have money. Our character locks us in: to say that in the world where I'm the kind of person who one boxes it would be more advantageous to two box is to say "Unfortunately you can't do that, since you are in the world where you are the kind of person who only takes box B"

Matt

Sure, that's fine. The framing device isn't actually important so long as it doesn't sever the dependency.

Gunflint

Saw this on Matthew Yglesias' Thursday thread:

The Indiana Pacers can still make the NBA Finals if only Mike Pence has the courage...

Nancy Lebovitz

https://www.bbc.com/news/articles/c1wwdd6v2wjo

"A major cause of inflammatory bowel disease (IBD) has been discovered by UK scientists.

They found a weak spot in our DNA that is present in 95% of people with the disease.

It makes it much easier for some immune cells to go haywire and drive excessive inflammation in the bowels.

The team have found drugs that already exist seem to reverse the disease in laboratory experiments and are now aiming for human trials."

Good news, even if it's a slow roll-out.

Any thoughts about speeding up the process while taking reasonable care?

av

While not related to this new discovery, I have personally found that helminthic (specifically TTO) therapy nearly completely alleviated my (comparatively mild, but properly diagnosed) UC. I've struggled with it for over 10 years, the first treatment worked for about 3-4 years before symptoms returned, and the second identical treatment produced the same results this year, so at this point I'm pretty positive that it is in fact the helminths that caused the improvement in my specific case. Of course, infecting oneself with a parasite procured from a questionable source is not everyone's cup of tea, but for me the benefits seem to outweigh the risks.

Dino

The drugs are MEK inhibitors; all 4 that I found on Wikipedia are prescription-only in the US. ;-(

rebelcredential

Techie people:

When you're drawing a diagram of how a complex system works:

Most of the time, any type of entity relationship can be drawn with some kind of bubbles connected with some kind of arrows. Causal chains can be represented in exactly the same way - with arrows that connect one event/action to the next.

There are two common circumstances I keep running into that I don't know how to viz:

- cases where something might come into existence, and at another point cease to exist again.

- cases where something might be instanced multiple times, and the "prototype" or "class" version (if it exists) may differ substantially from how a given instance could end up looking.

You can obviously draw these things on their own terms, but I'm talking about when you need to include them in and around a bigger-picture diagram of a whole system.

Has anyone seen any good diagrams/charts/visualisations that did a good job of showing those situations?

thefance

fuhgettaboutit. type-definitions and object-instantiations are like oil and water. you gotta use two separate diagrams. E.g. if you're trying to do something like combine an *abstract* diagram of a family tree with your *actual* family tree... it's just not happening. (Or at least, not in any way that's coherent.) Instead, "the way of the programmer" (tm) is to: A) define types of hypothetical objects; B) instantiate concrete objects under main(); C) *label* any concrete instances with the corresponding type (aka category). E.g.

struct time { int hr, min, sec; };   /* type definition: lives outside main() */

int main(void) {
    struct time lunch = {12, 0, 0};    /* concrete instances: the "verbs" happen here */
    struct time dinner = {18, 0, 0};
}

"main()" is where the verbs happen. Notice that the type-definition for the "time" struct occurs outside of main(). Because definitions live in the platonic nether realm, not in physical reality. So you basically need two separate diagrams.

Viliam

Are you familiar with https://en.wikipedia.org/wiki/Unified_Modeling_Language ?

To put it simply, different types of diagrams for different perspectives. Class diagram = each class is one bubble. Object diagram = each instance is a separate bubble (or rectangle? not sure), so you can show relations between multiple instances of the same class. Some diagrams have a time axis, so you can show the order of things happening, which may include when some things appear and disappear.

Based on my short experience with modelling, I would recommend not trying to put everything in one picture, because there will be too many arrows, too difficult to follow. Or maybe make one huge diagram, but also make diagrams of individual parts. If you have a tool for drawing UML diagrams, the advantage is that you define all relations once, and then when you put some objects to a diagram, it will automatically include the arrows between them.

Gunflint

If you are doing any object oriented programming UML is very useful.

Nancy Lebovitz

Is animation ever useful to indicate things that change?

rebelcredential

Yes it very definitely is. But I rarely see it in this context, probably because it's not the fashion and the tooling for it doesn't exist. But I would build the tooling if I knew what it was I was going for.

Hank Wilbon

Sometimes people argue over whether art should be political or not. To me it seems obvious that art is more important than politics and that therefore good art is rarely political. My premise is that art (broadly but also narrowly defined) is the main luxury good of civilization. In the hierarchy of needs you have food (more abstractly: nutrition & health), shelter (defense against mortal enemies through the night), love (social support), art. Politics is arguing over the distribution of those things, but it's more noble to produce those things than to argue over their distribution because the former is positive sum whereas the latter is zero or negative sum.

To put that in relatable day-to-day terms, who has been more valuable to our society: Larry David or Joe Biden? Beethoven or Napoleon? The Beatles or LBJ? Are there better comparisons?

Nancy Lebovitz

Is "Too Much Time on My Hands" by Styx political?

https://www.youtube.com/watch?v=5XcKBmdfpWs&ab_channel=StyxVEVO

The bit with the watches reminds me of cryptocurrency. I don't think it's necessarily a scam, but so many of the sellers are scamming.

Hank Wilbon

No, it's likely a song about a rock-n-roll star who is bored and literally has too much time on his hands. The watch hustler isn't even mentioned in the song, which came out before MTV even existed, so that cheap video was a promotional video for buyers within the record business and not even intended for public consumption.

(The genius of MTV was that someone realized that by 1982 most bands were making these promotional videos and almost nobody was seeing them, so it was easy to start a cable channel and show them to the public for almost zero cost.)

Neurology For You

How do you make the distinction? Beethoven wrote his third symphony for Napoleon, until Napoleon let him down. The Beatles wrote songs about paranoid gun owners, tax policy, and so on. There’s no bright line between political content and art and never has been.

Hank Wilbon

I'm not arguing for bright lines, only that "good art is rarely political". Good Beatles' songs and Beethoven symphonies are rarely political too. But, sure, sometimes they are.

Even Bob Dylan is rarely political.

I was motivated to start this thread after seeing about a million people on Twitter agree that "Good art is always political".

Nancy Lebovitz

Which is heavier, feathers or lead?

Why do people pay more for diamonds than bread?

To be straightforward, since the temptation to explain things is stronger than the temptation to just snark, people apparently want both politics and art, and important stuff happens at the margin, not choosing between whole categories.

Arrk Mindmaster

"'Beauty is truth, truth beauty'—that is all ye know on earth, and all ye need to know." -- Keats

Based purely on this, art and politics are completely incompatible, due to an utter dearth of beauty in politics.

thefance

Contrasting them makes little sense, since the two are inseparably linked. As LearnsHebrew implies, your conception of art and politics feels overly narrow.

When I think of art, I think of it as a zip-file. It's a compact way of transmitting information across long intervals of space and time. But for wetware instead of software. Art doesn't *always* need to contain compressed information. But for central examples of art, it often does.

And often, (though not always,) the information reflects the value-system of the artist. The ant and the grasshopper, for example, is about the prudence of long-term planning. La Guernica was about the horrors of war. Punk rock often features a lot of rebellion and contrarianism. If you've read the Republic, Plato seemed to believe that art was upstream of ethics, which was upstream of culture, which was upstream of politics. individuals in a society were likened to body-parts, which needed to act in concert to produce Justice. IIRC plato wanted to ban poetry because it would impassion the hearts of men to act recklessly, or something.

So this idea of compartmentalizing art away from "politics" feels odd to me. Art isn't just decor that looks pretty on the mantle. It's also a natural means of participating in The Discourse. And The Discourse is a debate about priorities. Before you analyze "positive sum vs negative sum", you need to define what you're summing by settling on a value-system.

(Yes, you can contrast art and politics on the margin. But it sounds like you're making an argument about totality. i.e. that art is qualitatively, strictly superior to political rhetoric. which is roughly like arguing that having enough RAM entirely precludes the need for a CPU.)

Hank Wilbon

I think it's odd to argue that central examples of art often contain compressed information. When I think of central examples of art, I think of, say, The Mona Lisa, King Lear, Moby Dick, Beethoven's 9th Symphony, Das Rheingold. It's true that La Guernica can be viewed as political but that can't be said of the majority of Picasso's work. And to the extent La Guernica still has great value it's not as anti-fascist propaganda. Simply showing the horrors of war isn't particularly political.

Thinking of art as containing compressed information is a bad reading of art, IMO. Or maybe I misunderstand what you mean by compressed information. I agree that great art expresses itself with high efficiency.

thefance

P.S. The other thing I should add is that when people complain about politics in art, it's usually not actually about the partisanism, per se. It's about the crudeness with which the partisanism is applied. Critical Drinker, for instance, has been ripping into Marvel and Disney for their wokeness. But the wokeness per se isn't actually the main issue (and I believe Critical Drinker would agree with me). The issue is that the wokeness comes at the expense of the stories, rather than enhancing them. E.g. as I recall from his review of She-Hulk: there's no struggle, no challenge, no journey, no dilemmas, but lots of girlbossing.

If progressivism were really the issue, then I wouldn't expect Star Trek (which was considered radically progressive during the 60's) to have been as popular as it was. Studio Ghibli often features strong female protagonists, environmentalism, anti-war themes, and anti-capitalism themes. And their Spirited Away won an Oscar. Meanwhile, the latest strain of progressivism has convinced Disney and Marvel that Stronk Female Representation is, by itself, an acceptable substitute for interesting content. Rather than offering an exploration of an interesting perspective, it thrusts upon the audience a dry, heavy-handed lecture.

thefance

Let's get the easy one out of the way first, which is the diction.

> Simply showing the horrors of war isn't particularly political.

Politics is about policy. And war is an inherently political affair. What you really mean is "partisanism". Which we might describe as "tribalistic advocacy for a controversial position". It often comes across as crude. "Murder is bad", for example, isn't exactly controversial. It is political, however, since it normatively implies a policy over a group of people. Likewise, I agree with you insofar as I wouldn't describe La Guernica as partisan, since it doesn't posit anything controversial about then-contemporary ideological positions. But it does describe an event which is inherently political. WWII redrew the political map, after all.

> Thinking of art as containing compressed information is a bad reading of art, IMO. Or maybe I'm misunderstand what you mean by compressed information.

Yeah, I could've explained this better.

A pun, for example, exhibits compression. It uses a double-entendre to get two meanings across for the price of one. Math exhibits compression. Unary gets compressed into variables, which get compressed into equations (and from here, it can go in different directions). A painting exhibits compression, in the sense that a picture is worth a thousand words. The 2d nature of the medium allows encoding and decoding of lots of things in parallel, compared to 1D strings of speech/text. A story can be thought of as a parable which distills an idea down to its most representative example(s).

Consider this video [0] on The Death of Socrates. There's a remarkable amount of information being transmitted by the painting. It's essentially a high-quality meme. Another analysis that comes to mind is this one on Master & Commander [1]. The book/film wrestles with the correct balance between a liberal, forgiving approach to leadership vs a conservative, hierarchical approach. Does this diminish its merits? Great Art Explained has a great vid [2] about how The Mona Lisa represented the entire culmination of what Da Vinci knew about painting and anatomy. Not exactly a political treatise, mind you. Although it does a decent enough job of showing how much thought and detail can get squeezed into an art piece. Moby Dick is arguably an exploration of epistemology [3]. It also draws attention to slavery. Which is clearly political, even in the partisan sense. Does this spoil the rest of the book?

(Admittedly, I don't know enough about King Lear to comment. And music is an on-going mystery to me.)

Another way to view this is to consider film posters. There's an old meme about how movie posters always look the same, since they draw from a shared lexicon of design elements. E.g. posters for rom-coms frequently feature a man and woman looking at the audience, back to back, with their arms crossed. This isn't by accident, it's deliberate. The graphical artists who design movie posters have a job, which is to quickly and reliably communicate the genre to the audience. It sets expectations. "man & woman, back to back, arms crossed" is an efficient way to communicate that a film is a rom-com. Likewise, the Mona Lisa analysis mentions that it was common for renaissance paintings to feature a pyramid structure. This was deliberate, as it lent a sense of stability. This is often contrasted with the baroque period, which featured instability through a lot of diagonal lines.

Things that we more-canonically think of as "art", are often more complex and subtle though. Which demonstrates that there's a spectrum of artistry. As an analogy, hardly anyone would dispute that doughnuts are food. They provide calories, after all. They taste sweet. They're edible. And yet doughnuts are widely considered *junk* food. Because it does a poor job of providing nourishment beyond the bare-minimum requirement of "provides calories/tastes okay". Likewise, a banana taped to a wall... can be called art, in some respects. But does it communicate deep truths about the human condition? does it inspire? impart life lessons? nourish the soul? Art which doesn't communicate ideas of long-term value, e.g. perhaps a still-life of a vase, I'm less inclined to call "high-art" than simply "decor".

[0] "The Death of Socrates: How To Read A Painting" (https://www.youtube.com/watch?v=rKhfFBbVtFg)

[1] "Master and Commander | The Most UNDERRATED Cinematic Masterpiece | Film Summary & Analysis" (https://www.youtube.com/watch?v=dMv_LOGMZN0)

[2] "Mona Lisa (Full Length): Great Art Explained" (https://www.youtube.com/watch?v=ElWG0_kjy_Y)

[3] https://en.wikipedia.org/wiki/Moby-Dick#Themes

Hank Wilbon

Good points.

thefance

thanks. glad you think so.

although i wish i could figure out what was going on with music.

Ruffienne

Banksy does a pretty good job of emphasising both the artistic and the political equally.

gdanning

>The Beatles or LBJ

Surely LBJ, given Medicaid, Medicare, the Civil Rights Act of 1964, and the Voting Rights Act of 1965.

>Larry David or Joe Biden

Joe Biden helped keep Robert Bork off the Supreme Court, played an important role in the US response to genocide in the Balkans, and has raised the refugee resettlement* limit from 15K under Trump to 125K. Now, of course, some people think those are bad things, but some people think ill of Larry David's work, too.

>Are there better comparisons?

William Wilberforce and fill in the blank? Gandhi and xxx? MLK and yyy?

Your premise that art is more important than politics is flawed, it seems to me.

*refugee resettlement, not asylum

Viliam

The problem with political art is that sometimes people produce things that are strong on the political dimension, but weak or mediocre on the artistic dimension.

A mediocre *non-political* piece of art could be simply ignored, or perhaps get a few niche fans but be ignored by most people. A mediocre *political* piece of art will still be defended by people who like the political message, but they will hypocritically pretend that they actually see the artistic value that their opponents deny. And on the opposite side, people who oppose the political message will insist that the artistic value is zero. It becomes impossible to have a talk about the actual artistic value, because most people will see statements about the art as political statements.

> Politics is arguing over the distribution of those things, but it's more noble to produce those things than to argue over their distribution because the former is positive sum whereas the latter is zero or negative sum.

Unfortunately, refusing to play zero-sum games is sometimes not the same as avoiding them, but instead it means losing at them. You can argue that producing is better than distributing (and I agree with you), but if you stop paying attention to the distribution, someone else may take away everything you produced, and you probably won't be happy about it.

Moon Moth

I've made a similar argument about the corrupting role of political messaging in literary fiction.

Whest

I don't think it's accurate to say that good art is rarely political. In my view, good art is good to the extent that it mirrors reality. Beauty is achieved when a work depicts something real and true, something difficult to capture using argument or analysis, something that, otherwise, is only attainable via direct experience. Art is, for the time being, the best tool we have for conveying what it is to be another person. For this ideal, this "realness," to be achieved, the art cannot be pointed. It cannot be a morality play. It cannot wag a finger at the observer, as if to say "do better." It must be a good-faith attempt to share your experience with others, and to the extent that politics is a feature of most people's lives, we should expect it to appear in art, even good art.

The vital distinction is between art that has political features (characters that hold certain opinions, politically-charged settings or backdrops, etc.), and art that's making a pointed, political argument. The former may very well be good, but the latter is, without exception, bad. Art that strives to argue some point, political or otherwise, ceases to be art, and becomes, instead, a particularly manipulative and emotional form of argument.

PotatoMonster

1984 is good art that's making pointed political arguments. Lots of science fiction is.

Hank Wilbon

I agree 100%. I don't consider, say, Shakespeare's Part 1 of Henry the Fourth to be "political art" even though politics is its subject. Atlas Shrugged or The Grapes of Wrath is political art.

LearnsHebrewHatesIP

By some definitions of "Politics", any non-trivial piece of good art is never apolitical. Wikipedia English says:

>>> Politics (from Ancient Greek πολιτικά (politiká) 'affairs of the cities') is the set of activities that are associated with making decisions in groups, or other forms of power relations among individuals, such as the distribution of resources or status.

What story or song doesn't deal with decision-making in groups? or power relations? or the distributions of resources? One of the first stories I remember reading and writing was a reading lesson in 1st grade: [BEGIN] My name is X. I love Mommy. I listen to what Mommy says. [END] That's.... politics. This 3-sentence barely-a-story is encoding something very non-trivial about power relations in a house and who should listen to whom. You could say that telling a 7-year-old to listen to his mom is hardly a controversial opinion and that it has no sensible alternatives or opposition, it's still politics, a very instinctive and extremely ancient kind of politics, but politics nonetheless.

Furthermore, continuing on with the theme of state-controlled K12 education systems even though art is technically wider than that, the state in control of an education system dictates what that education system teaches. States are hardly "Apolitical". The very selection of which literature to study, which language to teach them in, which poetry to recite, which holidays to celebrate and with what songs and poems, etc.... This is all politics, and states use each and every one of those opportunities to advance their favorite politics. Does Israel teach Palestinian folk songs in its education system? Do Catholic schools teach erotic works of art such as the Kama Sutra?

"Art should be apolitical" is usually a proxy point for an actual point, which is usually one of those 2 (possibly more):

(1) Art shouldn't be obviously and unsubtly political. Because nobody likes to feel like a dumbass, and art that doesn't respect you enough to let you draw your own conclusions is art that makes you feel like a dumbass, or - worse - that the writer/producer/poet thinks you're a dumbass. Extreme unsubtlety is also a sign of artistic insecurity, the artist(s) is unsure of their capability to convince you through subtle winks, so they resort directly to beating you over the head with it.

(2) Art shouldn't have politics that suck. And "Politics that suck" will vary depending on - wait for it - politics. The current dominant politics, that is. It could encompass everything from fascism to arguing that people not having a religion or leaving their assigned-at-birth religion is completely okay. Notice that people take character descriptions in stories to be endorsements, so a story describing an atheist without explicitly indicating that being an atheist is wrong will be understood as endorsing and/or arguing for whatever perceived or real characteristics of atheists. A story describing extra-marital sex that doesn't end in regret or bad consequences for parties involved will be understood as advocacy for pre-marital-sex, etc.... I picked those 2 things in particular because the gap between how utterly and completely normal they are in some societies vs. how utterly and completely beyond the pale outrageous in other societies is remarkable.

The question is also too muddled by using the general term "Art" to describe the immensely different sub-categories contained therein. I don't think a lyric-less piece of music can have much of a politics, any political connotation it might have is solely through sideband associations, such as the political opinions of its authors, the lyrics usually sung over its tones, or what kinds of audience it's primarily performed to. Linguistic pieces of art - stories, poems, movies, novels, songs, etc... - have the full power of language at their disposal and thus can be inherently political. Paintings can be political through the implicit connotations that the painter can induce through sizes and colors and other visual info, but the meaning of those can vary in unexpected ways: Paintings of the Buddha depict him as fat because pre-industrial obesity meant health and contentment, but of course the connotation now is completely inverted. Language is not immune from those sorts of unexpected mutations, but paintings are more prone to them.

FLWAB

>What story or song doesn't deal with decision-making in groups? or power relations? or the distributions of resources?

Twinkle, Twinkle, Little Star

Beethoven's 5th Symphony

The Inspector Gadget theme song

Oh My Darling Clementine

Pretty much all love songs (unless you count two as a group and romance as decision-making, which is pretty unromantic if you ask me)

Like a Bat Out of Hell

The Cliffs of Dover

Frère Jacques

Axel F

I could go on, but it would probably be shorter to list all the songs that do deal with decision making in groups, or power relations, or the distribution of resources.

Hank Wilbon

Interesting that you start with the Greeks' definition of politics. Now let's consider ancient Greek art. How political is Homer? He's pro-Greek, that's for sure, but I don't see any real political messages in his work. How about in Aeschylus or Sophocles? Aristotle's Poetics, which has a lot to say about the aesthetics of Greek Tragedy and Comedy, doesn't say anything about the value of political works, as best as I can recall. I suppose the satirist Aristophanes wrote political plays, so I will give you that one. Plato's opinions about art are utterly absurd, IMO.

As for government school systems worrying about what propaganda the kids have to read, I concur that they make the children read crap propaganda lit like 1984 (The CIA financed the movie version), The Grapes of Wrath, To Kill a Mockingbird and The Crucible in lieu of actual good literature. That governments choose propaganda pieces for schools doesn't weaken my case one bit. Political art is bad art.

Disagree 100% with your points 1 and 2.

Morgan

Aeschylus is very often quite overtly political, as is Euripides. Euripides' political plays aren't particularly good, but Aeschylus' are some of the highlights of Western culture.

The Oresteia ends with the tragic cycle of vengeance finally laid to rest by the establishment of the Areopagus--an Athenian judicial and political body.

The Persians is entirely about the defeat of Persian despotism by the Greek polis.

Expand full comment
Hank Wilbon's avatar

Thanks. Interesting to learn that Aeschylus was more political than I had realized.

Expand full comment
LearnsHebrewHatesIP's avatar

I think it's a pretty controversial opinion - and thus in need of much more defense than you care to give - that 1984 is "crap propaganda lit". You mention offhandedly in a pair of parentheses that the CIA financed (one of) the movie(s), but anybody who gives a shit about the movie version of 1984 is doing it completely wrong. Anyone who reads 1984 in a non-English language is doing it 60% wrong. 1984 is meant to be (a) read, (b) in the original English, and it's a widely loved and widely admired (and so *so* much and often quoted) piece of literature that both the Right-Wing and the Left-Wing and all the politics in-between love (and accuse their political enemies of being the villains of). I didn't read To Kill a Mockingbird, but it's a household name that I recognize, and by analogy to 1984, I think you're also largely deluding yourself that it's propaganda.

> How political is Homer?

Skimming the Wikipedia synopsis of the Iliad because I haven't read it and don't really care enough to: Very. Slavery is normal. Giving away sex-slave girls as rewards for fighting prowess is normal, generous, and/or commendable. Military commanders are expected to start wars based on dreams from Zeus. And that's just a 1-minute skimming of the very first section.

Recognizing that historical works of art are political is a far cry from insisting that they need to be "cleansed" or "wokified" for the modern day. I don't expect the 700s BC Homer or his audiences to stand up for women's rights, or even to use a choice of words that slightly indicates that using women as war spoils is bad or indicative of a moral failing; I really don't expect much from a 700s BC native. It's an insult to my intelligence if anyone tries to "adapt" the Iliad for "Modern Audiences" by removing the now-controversial parts and/or sugar-coating them. But I also think it's pretty deluded to claim there aren't "any political messages in his work"; there are plenty.

The entire point of art is this: it's a depiction of the artist's viewpoint. Any linguistic depiction of a human group has a political message, because it reveals and advances - if not always explicitly advocates for - what the artist considers the "normal" politics. Any depiction of cities and warfare in the Middle Ages and before has a political message that slavery is normal and that you should always listen to your King (and/or feudal Lord), because that's all the authors at the time knew and recognized as normal. It was a pretty **radical** politics back then to argue that slavery is not normal or that people should govern themselves; that was the controversial, spicy flipside at the time.

But regardless of which of the 2 is more controversial at any given time and place, both messages are "political". To say that "human groups should enslave other human groups, especially those captured in war" is a political assertion - that is, it's literally about who should govern/dominate/control whom - and to negate that statement is *also* a political assertion, for the same reason the original is. It just so happens that some human societies across time and space declare one of them controversial and the negation normal, and other human societies choose the opposite polarity, but that doesn't mean that either of them isn't political; it just means that whatever the society you happen to grow up in declares "normal" ceases to be perceived as "political".

In other words, if Homer himself read or saw a modern work depicting a war, say any of the Call of Duty games, he would be astonished at the radical and "political" messages contained in those works, one of which is that the defeated people in a war aren't slaves to the victors. It's anyone's guess whether he would love that or hate it, but there is not a single sliver of doubt in my mind that it would be the first thing he would notice: that the defeated aren't made slaves. And he would be right: what is an entirely unconscious choice by the authors of Call of Duty is actually a pretty radical political message to someone from a time when the defeated in a war were almost always enslaved afterwards. That's the thing about politics: you stop noticing it once enough people consider it the normal and inevitable state of affairs.

Expand full comment
Hank Wilbon's avatar

"The entire point of art is this: it's a depiction of the artist viewpoint. Any linguistic depiction of a human group has a political message, because it reveals and advances - if not always explicitly advocate for - what the artist considers as the "Normal" politics. Any depiction of cities and warfare in the Middle Ages and before has a political message that slavery is normal and that you should always listen to your King (and/or feudal Lord), because that's all what the authors at the time knew and recognized as normal."

This is the view of art that both the prude and the woke agree upon, and it is dead wrong. Depiction is not prescription in art. In Joyce's Ulysses, does the author side with the Irish nationalists or is he merely mocking them? To answer the question one way or another is to misunderstand the work. A novel, if it is art and not a mere political work, is about understanding, and understanding is the opposite of judgment.

1984, OTOH, is a work of judgment. Nobody's understanding of the world is enhanced by reading 1984. "Totalitarianism is bad." That's the work. You don't need to write a novel to point that out. I agree that it is a book that many people like when they read it as a kid. It's basically a children's book, much like Harry Potter. But it isn't great art. Its continued popularity is because the US and British (I think) governments decided that it worked as tremendous anti-Communist propaganda during the Cold War (never mind, and don't mention it to the kids, that Orwell was a Socialist) and required every schoolkid to read it.

Expand full comment
Nancy Lebovitz's avatar

1984 is art about how knowledge is denied in totalitarian countries, how hard it is to be clear that you're being lied to, and even if you know that, how hard it is to get to anything true.

This is a richer message than just saying totalitarianism is bad, or that a particular totalitarian government is bad.

Expand full comment
Arrk Mindmaster's avatar

1984 was prescient in many ways, and one can see instances of it becoming truer as time goes on. It had the concept of double-think, constant monitoring of the populace, government controlling the way the people think and what they think about, and more.

Great art reflects life in an interesting way not before documented. The message isn't comfortable, but 1984 gives a glimpse into how life could be. How one can dispute that 1984 is art is beyond me.

Expand full comment
YesNoMaybe's avatar

> To put that in relatable day-to-day terms, who has been more valuable to our society: Larry David or Joe Biden? Beethoven or Napoleon? The Beatles or LBJ? Are there better comparisons?

You imply that the answer is obvious and it's Larry David, Beethoven and The Beatles.

But to me it seems that the question just has no objective answer, so everything basically boils down to "well I feel that Larry David is more important than LBJ".

Like, Napoleon had a large impact on Europe at the time and it's plausible the Europe of today would look different if not for him. Possibly substantially so, but I'm not remotely certain.

On the other hand, the Beatles had a large impact on Pop music and it's plausible the music we listen to today would sound different if not for them. Possibly substantially so, but I'm not remotely certain.

How do you compare that and come away with "Obviously the Beatles have been more valuable to society"? My takeaway surely is "who the fuck knows"

Expand full comment
Hank Wilbon's avatar

I tried to make comparisons that one could potentially argue either way, though it's true I show my hand about which way I would argue. An argument I would make regarding say Napoleon is that while a universe without Napoleon would likely look different today (how much, we don't know), it's basically random whether he made the 21st century better or worse, and there's almost no way to argue one way or the other in earnest. Whereas while one can debate Beethoven's relative contribution to the 21st century, it's hard to argue on the side that it's been negative.

Expand full comment
YesNoMaybe's avatar

I confess I read your post and somehow thought you had made an argument based on importance not positivity.

But if it's net positive impact we're considering I can see the "art is directionally positive, politics can be negative" argument.

Expand full comment
Nancy Lebovitz's avatar

1984 and Animal Farm are political art that might have staying power. What else?

Expand full comment
Viliam's avatar

Atlas Shrugged

The Dispossessed

The Fountainhead

The Rebellion of the Hanged

We

Expand full comment
ultimaniacy's avatar

I wouldn't really call The Fountainhead political art. There are a handful of scenes that address contemporary political issues, but its primary themes are all about behaving morally at the individual level.

Atlas Shrugged is definitely political art, but it's also not particularly good.

Expand full comment
Tatu Ahponen's avatar

Yeah, I also found The Fountainhead strangely apolitical when I read it some time ago. (https://www.ahponen.fi/p/book-review-fountainhead)

It's not an accident that even ostensibly liberal celebrities have praised the book, it really can be read as a "doing your own thing, being your own person" book, almost a self-help novel.

Expand full comment
Moon Moth's avatar

I think one of the signs of art is that it communicates on many levels simultaneously. As in other areas of life, this rule can be broken occasionally while still allowing for the breakers to remain in the category, but if it's broken too often the breakers cease to be part of the category.

Politics is tricky because, like engineering, it's under pressure to perform usefully. So most political "art" lacks subtlety, but there can be exceptions.

A "titanomachy" showing Zeus and Cronus could say a whole lot, about the replacement of the old order with the new, the hope of revolution and the realization that the abuses of the old order were a result of social forces that will inevitably recapitulate themselves in the new order, sadness for the death of the old tempered by realization that the old had done the same in its day... There's a whole lot that could be packed into a painting of a couple of old Greek dudes, stuff that could be relevant for millennia to come.

Stuff that's tied too closely to specific contingent details becomes banal. Few people today would care about Disraeli vs Gladstone, unless you find a way to make them care. On the other hand, with Churchill vs. Hitler you'd have to find something besides the obvious. Hitler vs. Stalin has potential, though.

Expand full comment
Moon Moth's avatar

Offhand I'd say... "beauty" is the luxury good, while "art" is a style of communication. Art doesn't have to be beautiful, and on the other hand, sometimes all it communicates is "this is beautiful (to someone)".

Expand full comment
Ruffienne's avatar

Interesting take. As a passionate supporter of beauty, I'd have said that beauty is out of style, and that most contemporary art merely 'challenges' the viewer.

Think it's ugly? - No, you are being challenged by what you see.

Think it's stupid or facile? - No, you are being _challenged_ by that artistic piece.

It's much easier to shock or annoy the viewer than it is to render them awestruck or thoughtful, and so that's what most contemporary art does.

If one questions the art one sees, the fault never lies with the (unskilled, unthoughtful) artist but inevitably falls at the feet of the viewer, who isn't adequately responding to the 'challenge' before them.

Disclaimer: this comment may or may not have been heavily influenced by an invitation to a gallery opening that landed in my inbox two minutes ago - and which I would pay money to avoid attending!

Expand full comment
LearnsHebrewHatesIP's avatar

A lot of modern painting-and-sculpture art is an unmitigated and unrepentant dumpster of trash, but I would say that in the realm of writings and moving pictures there is now more backlash against meaningless "Subversion of Expectation" just for the sake of it. I base this impression primarily on the reaction to the 3rd trilogy of Star Wars, where every dumb and incoherent authorial decision was justified by "It's a SubVersiOn of ExPeCtatiONs", but most of the audience weren't having it and still called it dumb and meaningless.

Expand full comment
Moon Moth's avatar

I agree; I think you've got a separate and entirely valid critique. :-)

Making artificial beauty is hard, and I think there might be a subconscious element of "sour grapes" in the currently popular style of art.

Or perhaps it's that, in order to create beauty, you have to be able to see beauty and imagine beauty. And I think there are ideologies today which claim that physical beauty is worthless, or which try to redefine beauty to better match their political/ethical views. And the result is a vision of ugliness with some abstract pattern applied to it.

Expand full comment
Nancy Lebovitz's avatar

At this point, I've seen more claims that art is inevitably political. Even the most innocuous genre fiction might be implying that the existing system isn't too bad, or at least it's inevitable.

I have a small bet with myself that people who say art should be political actually mean it should be promoting *their* politics.

Expand full comment
Gunflint's avatar

I think that is one of those things that is pithy and kinda ‘sounds good’ but is not always true.

Expand full comment
rebelcredential's avatar

When it comes to art, politics fades in the sun and disappears. Any biting political statement from five hundred years ago just looks like a nice poem or a pretty picture to us now, because we don't know the argument and all references to it are lost on us.

If your "political" point is actually something so fundamental that it hasn't changed in 500 years, then arguably you're actually highlighting some aspect of the human condition and you've moved beyond mere politics into something more profound.

But otherwise, the politics will evaporate away and what's left will be the physical artifact you have created, which people will judge on its own aesthetic terms.

If a strong political feeling is the thing that motivates you to get up and create that artifact, I say that's just as valid as love, loneliness, aggression, or any of the other creative drivers. Just provided you do a good job with the result.

Expand full comment
A.'s avatar

This was beautiful. Thank you.

Expand full comment
Hank Wilbon's avatar

Well said.

Expand full comment
Ruffienne's avatar

Beautifully put.

Politics - and indeed many motivating factors - ultimately fade, but good art is enduring.

Expand full comment
Hank Wilbon's avatar

Does anyone know a simple rule of thumb for comparing compensation as a salaried employee versus as a contractor? I realize that it depends on the details but that's why I'm asking for a simple rule of thumb. USA.

Expand full comment
Rothwed's avatar

I think the 1.5-2x figure given is fairly accurate. The payroll taxes and health insurance deductions that big employers absorb hide a lot of the tax burden that the self-employed are exposed to. In my experience, $50,000 as a contractor is roughly equivalent to $30,000 salaried. So you need to bring in roughly an extra two-thirds on top of your salary.

Expand full comment
Performative Bafflement's avatar

You need to make about 1.5 - 2x as a contractor generally (depending on your usual salary range), to offset the cost of benefits like health insurance and the extra self-employment taxes. There are some benefits to forming corporations and doing B2B contracting if possible.

Also keep in mind that if you're going independent, you'll need to be spending time and possibly money marketing yourself, maintaining connections, and diversifying your clients, to ensure that your pipeline of work is resilient and paying enough - that has an extra cost in time and sometimes quality of life too. 2x might be underselling it in that case.
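Purely as an illustration of the arithmetic above, here's a minimal back-of-the-envelope sketch in Python. The 1.75x multiplier, 48 working weeks, and 30 billable hours per week are my own illustrative assumptions, not figures anyone in the thread gave:

def contractor_hourly_rate(salary, multiplier=1.75,
                           weeks_per_year=48, billable_hours_per_week=30):
    """Rough hourly rate a contractor would need to bill to roughly match
    a given salary, once self-employment taxes, health insurance, unpaid
    time off, and non-billable marketing/admin time are priced in."""
    target_gross = salary * multiplier              # the 1.5-2x rule of thumb
    billable_hours = weeks_per_year * billable_hours_per_week
    return target_gross / billable_hours

if __name__ == "__main__":
    for salary in (60_000, 100_000, 150_000):
        rate = contractor_hourly_rate(salary)
        print(f"${salary:,} salary -> roughly ${rate:,.0f}/hour as a contractor")

Nudging the multiplier toward 2x, or cutting the billable-hours assumption to account for marketing and admin time, shows how quickly the required rate climbs.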

Expand full comment
Hank Wilbon's avatar

Thanks.

Expand full comment
Paul Botts's avatar

I'll just add that every single person I've known who switched from salaried work to independent consulting/contracting ("hung out their shingle" in whatever their field is) has initially underestimated what they needed to charge per hour or day to do at least as well as they did on the salary they left. Literally no exceptions during my decades-long professional life. And some of them hugely underestimated it.

Expand full comment
Hank Wilbon's avatar

Thanks. Good to know.

Expand full comment
Anatoly Vorobey's avatar

A very smart and knowledgeable scientist thinks of a whole number between 1 and 9 inclusive. You are allowed two questions, to each of which the scientist will truthfully answer YES, NO or I DON'T KNOW. Find out the number.

Expand full comment
Godshatter's avatar

N erny ahzore E vf fho-Evrznaa vs gurer rkvfg abagevivny mrebrf bs gur Evrznaa mrgn shapgvba jvgu erny cneg fgevpgyl terngre guna E.

1) Qvivqr lbhe ahzore ol 3 naq unyir gur erznvaqre. Vf gur erfhyg fho-Evrznaa?

2) Qvivqr lbhe ahzore ol 3 naq sybbe gur erfhyg. Gura unyir gung ahzore. Vf gur erfhyg fho-Evrznaa?

* 0 vf qrsvavgryl fho-Evrznaa fvapr gurer ner xabja mrebrf jvgu erny cneg 1/2.

* 1/2 zvtug or fho-Evrznaa vs gur Evrznaa Ulcbgurfvf vf snyfr, ohg jr qba'g xabj.

* 1 vf abg fho Evrznaa, fvapr jr xabj gurer ner ab mrebrf jvgu erny cneg terngre guna bar.

Expand full comment
quiet_NaN's avatar

"V unir whfg cvpxrq zl bja, erny-inyhrq ahzore va gur vagreiny ]guerr,fvk]. Vf lbhe ahzore ynetre guna zl ahzore?" Gura ercrng jvgu n fhvgnoyr vagreiny.

Be, vs lbh jnag gb nibvq vagebqhpvat n zbqry-qrcraqrag dhrfgvba, lbh pbhyq fnl:

Yrg x or gur erznvaqre bs lbhe ahzore nsgre qvivfvba guebhtu guerr. Vf gur fgngrzrag "tvira a vf n cbfvgvir jubyr ahzore, vf 2*a+(x+1) trarenyyl gur fhz bs (x+1) cevzrf?"

Sbe x=0, gur nafjre vf ab orpnhfr avar vf abg gur fhz bs bar cevzr.

Sbe x=2, gur nafjre vf lrf, orpnhfr jr unir n cebbs sbe gur jrnx Tbyqonpu pbawrpgher.

Sbe x=1, gur nafjre vf "V qba'g xabj", orpnhfr gur fgngrzrag vf gur Tbyqonpu pbawrpgher.

Gura whfg ercrng jvgu "Yrg x or gur vagrtre qvivfvba erfhyg bs lbhe ahzore naq guerr".

Expand full comment
Whest's avatar

V unir n cbgragvnyyl purngl, ohg ryrtnag fbyhgvba gb guvf ceboyrz. Jr arrq gb znxr fher gung obgu bs gur dhrfgvbaf unir hfr nyy guerr cbffvoyr nafjref, tvira gur ahzoref cebivqrq. Gb qb fb, V'ir qrirybc gur sbyybjvat cnve bs dhrfgvbaf:

Dhrfgvba 1: Vs lbh jrer gb ercerfrag rnpu ahzore nf 2-ovg gevanel inyhr fhpu fhpu 1 vf “00” naq 9 vf “22”, naq nffhzvat gung “0” vf “AB”, “1” vf “V QBA’G XABJ”, naq “2” vf “LRF”, jung vf gur yrsgzbfg qvtvg bs gur gevanel ercerfragngvba bs lbhe pubfra ahzore?

Dhrfgvba 2: Tvira gur fnzr fgvchyngvbaf nf Dhrfgvba 1, Jung vf gur evtugzbfg qvtvg bs gur gevanel ercerfragngvba bs lbhe pubfra ahzore?

Expand full comment
Martian Dave's avatar

Thanks for this. I have a very inelegant brute-force solution!

Expand full comment
penttrioctium's avatar

(Some important parts of my answer use numbers and symbols, which stay the same under rot13. Sorry everyone; spoilers below!)

DHRFGVBAF:

Yrg k or lbhe ahzore. Yrg c(a) qrabgr gur agu cevzr ahzore, fgnegvat ng c(0)=2.

Dhrfgvba 1: Vf c(⌊(k-1)/3⌋^5793826498140572948164895)+1 n zhygvcyr bs 4?

Dhrfgvba 2: Vf c(((k-1)%3)^7814392508473549821875294)+1 n zhygvcyr bs 4?

NAFJRE:

Ab/Ab: 1

Ab/Lrf: 2

Ab/VQX: 3

Lrf/Ab: 4

Lrf/Lrf: 5

Lrf/VQX: 6

VQX/Ab: 7

VQX/Lrf: 8

VQX/VQX: 9

Expand full comment
penttrioctium's avatar

EXPLANATION:

Gur rnfvrfg jnl gb frr jul guvf jbexf vf gb cvpx n ahzore 1-9 naq grfg vg.

Sbe gur svefg dhrfgvba, gur vzcbegnag cneg vf l=⌊(k-1)/3⌋. Vs k=1,2,3, gura guvf vf 0; vs k=4,5,6, guvf vf 1; vs k=7,8,9, guvf vf 2. Fb abj jr whfg arrq gb ghea "0, 1, be 2" vagb "Ab, Lrf, be VQX". Vs jr qb m=l^(enaqbz tvtnagvp ahzore), gura gur nafjre m jvyy rvgure or 0, 1, be n enaqbz tvtnagvp ahzore. Vs jr nfx sbe gur mgu cevzr cyhf 1, jr'yy trg 3, 4, be "V qba'g xabj". Nobhg unys bs cevzrf+1 ner zhygvcyrf bs 4, naq NSNVPG gurer'f ab pyrire zngurzngvpny grpuavdhrf gb trg gur nafjre va guvf pnfr; V guvax lbh'q whfg arrq infg pbzchgngvbany cbjre. Fb vs jr nfx vs gur mgu cevzr cyhf 1 vf n zhygvcyr bs 4, jr'yy trg Ab, Lrf, be V qba'g xabj.

Gur frpbaq dhrfgvba vf gur rknpg fnzr vqrn, rkprcg gur vavgvny sbezhyn vf l=(k-1)%3. Vs k=1,4,7, vg'f 0. Vs k=2,5,8, vg'f 1. Vs k=3,6,9, vg'f 2. Gura jr whfg qb gur fnzr cebprff nf nobir.

Expand full comment
Taleuntum's avatar

Yrg K or gur inevnoyr jr unir gb npdhver. Jr arrq ybt(9) ovgf bs vasbezngvba naq jr unir gjb dhrfgvbaf, fb jr arrq gb npdhver ybt(9)/2=ybt(3) ovgf cre dhrfgvba. Guvf vf cbffvoyr orpnhfr jr unir guerr cbffvoyr nafjref sbe n dhrfgvba. Vs jr nffhzr gurer vf fbzrguvat gur fpvragvfg qbrf ABG xabj, sbe rknzcyr jurgure gur Evrznaa ulcbgurfvf vf gehr naq jr ner nyybjrq gur unir n inevnoyr L juvpu vf 0 vs vg vf snyfr naq 1 vs vg vf gehr, bhe ceboyrz pna or sbezhyngrq nf n frnepu sbe n shapgvba s {0,1,2}K{0,1}->{0,1} qrsvarq nf s(0,0) = s(0,1) = s(2,0) = 0 naq s(1,0) = s(1,1) = s(2,1) = 1 naq gura jr pna nfx gur fpvragvfg jurgure s(K zbq 3,L) = 0 naq jurgure s(K-1 vagrtre qvivfvba 3,L)=0?

Vs jr arrq na rkcyvpvg sbezhyn sbe s gura 0.5*K*(K-1)*L+K*(2-K) fhssvprf.

Chggvat vg nyy gbtrgure bhe gjb dhrfgvbaf:

1. Vs K vf gur ahzore lbh gubhtug bs naq L vf 0 vs gur Evrznaa Ulcbgurfvf gehr naq 1 bgurejvfr, gura Vf 0.5*((K-1) qvi 3)*((K-1) qvi 3 - 1)*L+((K - 1) qvi 3)*(2-((K-1) qvi 3)) rdhny 0?

2. Vs K vf gur ahzore lbh gubhtug bs naq L vf 0 vs gur Evrznaa Ulcbgurfvf gehr naq 1 bgurejvfr, gura Vf 0.5*(K zbq 3)*(K zbq 3 - 1)*L+(K zbq 3)*(2 - (K zbq 3)) rdhny 0?

Gur pbeerfcbaqrapr orgjrra gurve nafjref naq K vf gur sbyybjvat: (L=lrf, A=ab, Q=qba'g xabj)

LL:1

LA:2

LQ:3

AL:4

AA:5

AQ:6

QL:7

QA:8

QQ:9

Expand full comment
Hank Wilbon's avatar

Avar ahzoref, gjb thrffrf: zrguvaxf jr arrq gb qvivqr gjvpr ol guerr hfvat lrf, ab naq V qba'g xabj nf bhe fyvpre. "xabjyrqtrnoyr fpvragvfg" vf cebononoyl n Purxubi'f Tha. Thrffvat jr arrq gb hfr fpvragvsvp xabjyrqtr sbe gur V qba'g xabj. V'z abg n xabjyrqtrnoyr fpvragvfg fb V qba'g xabj gur nafjre ohg zhfg or fbzrguvat yvxr: Vs jr zhygvcyl guvf ahzore ol K qbrf na ryrzrag jvgu gung ngbzvp jrvtug rkvfg? Ercrng jvgu L. Fbzrguvat yvxr gung. Fbzr svryq bs fpvragvsvp xabjyrqtr V qba'g xabj naq abar bs hf xabj ragveryl.

Expand full comment
chickenmythic's avatar

I enjoyed this, thank you!

Expand full comment
chickenmythic's avatar

Sbe gur 1-3 pnfr, V jbhyq nfx: “V nz guvaxvat bs n ahzore orgjrra 2 naq 3. Vf gur ahzore lbh unir va zvaq terngre guna be rdhny gb zl ahzore?”

Vs gurl unir 3 va zvaq, gurl pna pbasvqragyl nafjre lrf. Vs 1, gurl pna nafjre ab. Vs 2, gur nafjre qrcraqf ba xabjyrqtr gurl qba’g unir (juvpu ahzore V unir va zvaq), fb gurl zhfg nafjre “V qba’g xabj”.

Expand full comment
Whest's avatar

Guvf vf gur zbfg ryrtnag fbyhgvba, V guvax. V rkcnaqrq bhg gur shyy irefvba sbe gubfr vagrerfgrq:

* V’z guvaxvat bs n ahzore orgjrra 4 naq 7 vapyhfvir. Vf lbhe ahzore terngre guna be rdhny gb zl ahzore?

***** LRF (7 8 9)

********* V’z guvaxvat bs n ahzore orgjrra 8 naq 9 vapyhfvir. Vf lbhe ahzore terngre guna be rdhny gb zl ahzore?

************* LRF (9)

************* AB(7)

************* QBA’G XABJ (8)

***** AB (1 2 3)

********* V’z guvaxvat bs n ahzore orgjrra 2 naq 3 vapyhfvir. Vf lbhe ahzore terngre guna be rdhny gb zl ahzore?

************* LRF (3)

************* AB(1)

************* QBA’G XABJ (2)

***** QBA’G XABJ (4 5 6)

********* V’z guvaxvat bs n ahzore orgjrra 5 naq 6 vapyhfvir. Vf lbhe ahzore terngre guna be rdhny gb zl ahzore?

************* LRF (6)

************* AB(4)

************* QBA’G XABJ (5)

Expand full comment
penttrioctium's avatar

oh that's good, nice

Expand full comment
Melvin's avatar

Vg frrzf fvzcyr rabhtu gb erqhpr gur ceboyrz gb gur irefvba jurer lbh nfx bar dhrfgvba naq qvfgvathvfu orgjrra vagrtref bar, gjb naq guerr. Fb jr whfg arrq n dhrfgvba gung jvyy znc gb lrf, ab naq V qba'g xabj sbe gur svefg guerr vagrtref.

Bar boivbhf ohg pyhaxvyl jbeqrq irefvba jbhyq or "pna nyy fhssvpvragyl ynetr bqq be rira vagrtref or rkcerffrq nf n fhz bs guvf znal cevzrf?" Gung'f n ab sbe bar, n lrf sbe guerr, naq na V qba'g xabj sbe gjb. (Ohg abj V'z qbhoyr purpxvat, vg gheaf bhg gung gur cebbs bs gur guerr cevzr pnfr vf fgvyy pbafvqrerq irel fyvtugyl qhovbhf, ryrira lrnef nsgre vavgvny choyvpngvba.)

Gurer zhfg or n pyrnare irefvba bs guvf.

Expand full comment
Sandeep's avatar

V jnf guvaxvat, nsgre erqhpvat gb gur "1-3 pnfr", gb nfx "Ner gurer vasvavgryl znal cevzrf c naq d fhpu gung d - c + 1 rdhnyf gur tvira ahzore a orgjrra 1 naq 3", naq ubcvat gung gur fpvragvfg vf abg zhpu fznegre guna Greel Gnb naq Lvgnat Munat gb nafjre gur gjva cevzr pbawrpgher! Ohg puvpxrazlguvp nobir unf n orggre nccebnpu.

Expand full comment
User's avatar
Comment deleted
Jun 6 (edited)
Comment deleted
Expand full comment
Anatoly Vorobey's avatar

Yeah, I think something like this, or chickenmythic's below, is intended. But I'm getting a kick out of all these high-powered solutions below, too!

Expand full comment
Hank Wilbon's avatar

Nuu... fb "xabjyrqtnoyr fpvragvfg" jnf haarprffnel vasbezngvba. V sryy sbe vg!

Expand full comment
Eremolalos's avatar

Today somebody told me that when he spontaneously tells people an observation he thinks is interesting they are turned off and conclude that he’s a weird geek. The example he gave was of an idea that was, yeah, kind of quirky, but seemed smart and interesting to me. I’d like to be able to give him some examples of similar thoughts other people have had. ACX seems like the ideal place to ask. Anyone want to volunteer a quirky personal observation or two? His example: The placement of eyes in our species probably determines some important things about how we function. For instance rabbits have nearly a 360 degree view. Their eyes are on the side of their head, and they have notches in their ears that keep the ears from blocking the rabbit’s view of what’s behind. So our awareness is especially geared towards what’s in front of us. “What’s in front of me” & “what I’m aware of” aren’t identical categories, but they’re very similar.

Expand full comment
Nancy Lebovitz's avatar

I find it reassuring to know that there are sheets of connective tissue between the pairs of smaller bones in the lower legs and arms. It makes me feel more held together.

I also like knowing that the heart is between the lungs and resting on the diaphragm.

Expand full comment
Kitschy's avatar

Your friend would enjoy the Equations of Life (Cockell, 2018).

My personal observation/question: I wonder why it isn't common to have multiple tenants on a single shoplot in the west. The few examples near me are all Asian restaurants importing the practice from their homeland. But rent in the west is very expensive - it seems like if you have a "morning business" (like a cafe), it could be good to sublet the space to a night business (like a bar) and greatly reduce the rent burden. This also means you can massively compress the footprint of your city, making it much more convenient for everyone. So there must be a tradeoff that is a larger limiting factor in the west that I'm not considering, because this sort of thing is fairly common practice in Asia.

Expand full comment
Pepe's avatar

There was a place in Mexico (probably still there, but I haven't checked in many years) that was a car shop during the day and turned into a taco restaurant at night. Never got any work done to my car there, but the tacos were fantastic.

Expand full comment
Bullseye's avatar

I would guess that the tradeoff is the inconvenience of having another business's equipment in your work area. I suppose a cafe and a bar in the same space would use the same tables and chairs, but not everything could be dual-purpose.

Expand full comment
David J Keown's avatar

I can remember a specific case that's a lot like his. I was sitting on a bench with my girlfriend, watching a pigeon bob its head back and forth. Pigeons have eyes on opposite sides of their heads, giving them closer to a 360-degree view. Having eyes on opposite sides of the head usually comes at the expense of stereoscopic vision; however, I suggested that the rapid bobs of the pigeon's head allowed one eye to capture distinct images from two places in rapid succession, so maybe their brains can process that as a stereoscopic image... my girlfriend said I was weird.

Expand full comment
Zach's avatar

There's a song called "Good Luck, Babe!" by Chappell Roan that is becoming quite popular. Every time I hear this song, I'm struck by how similar it is to "I'm Gonna Be (500 Miles)" by the Proclaimers. Everyone I tell about this agrees with me, but I've never seen anyone else mention it, even though most of the people I know have both heard this song and the song by the Proclaimers.

Expand full comment
rebelcredential's avatar

We evolved to feel sexual pleasure in response to activity that leads to reproduction. The instincts are blind and we can get that same pleasure from proximate activities even when there's no real woman in the room. This implies that if plants could feel, eating or chopping fruit would be giving the plant an orgasm.

Expand full comment
Gerbils all the way down's avatar

I'm no botanist, but I think having a pollenator visit your flowers would be more analogous to sex. Having your fruit drop to the ground or get eaten is maybe more like sending your kids off to college.

Expand full comment
Melvin's avatar

I've heard it said that predators' eyes face forwards, and prey have eyes that look all around.

I'm sure there are some perfectly good counterexamples, but it holds for most of the terrestrial vertebrates I can think of.

Expand full comment
Caba's avatar

All primates I have seen have eyes facing forwards, and primates aren't necessarily predators.

Expand full comment
Melvin's avatar

Sloths too. And koalas.

Tree-dwellers in general seem to be a major class of exceptions, they have fewer worries about ambush predators and more concerns about exactly what's in front of their face and how far away it is.

Expand full comment
Leppi's avatar

Yes, I remember learning this in school. The explanation was that prey need to be aware, and detect predators as quickly as possible while feeding etc. Predators need forward focus when they hunt.

So our forward-facing eyes may suggest that we are predators rather than prey.

Expand full comment
MarsDragon's avatar

I would think the reason predators have eyes in front is depth perception. A cat needs to know how far the prey is in front of it to pounce correctly, raptors need to know how far to dive.

I was looking up information about birds of prey and vision, and found this neat diagram that shows the difference very well: https://en.wikipedia.org/wiki/Bird_vision#/media/File:Fieldofview01.png

Expand full comment
Bullseye's avatar

While we are predators, we inherited eyes in the front from herbivorous ancestors. Primates need eyes in the front to judge distance when jumping from one branch to another.

Expand full comment
Nancy Lebovitz's avatar

That's a good point. I was using the eyes in front as evidence that we're omnivores rather than naturally vegetarian.

I've seen a theory that early humans? pre-humans? were scavengers, but the eyes in front suggest that they were at least hunting small game even if they were also scavenging large game.

Expand full comment
Concavenator's avatar

No, all primates have forward-facing eyes regardless of their diet. Depth perception is not useful only for hunting; in the case of primates, it's for navigating the three-dimensional environment of the treetops, as Bullseye above notes.

Expand full comment
Nancy Lebovitz's avatar

Oh, well, so much for that theory. People having both molars and incisors might still be indicative.

Expand full comment
Caba's avatar

Most herbivores run on cellulose, whereas those humans who get their calories from plants run on carbs and plant fat (i.e. we eat tubers, nuts, fruit, grains, seeds, and pulses, as opposed to grass, leaves, shoots, and stems).

Therefore, even if the human plant based diet were the evolutionarily correct one, there is no reason to expect the human anatomy to resemble the typical herbivore.

I think we've evolved as flexible eaters (come on this is common sense), who can live almost exclusively on animals, or almost exclusively on carbs and plant fat, or anything in between. In any case, we cannot eat plants as horses or gorillas do in the wild, and they cannot eat plants as we do. Our herbivory is not typical mammalian herbivory, it's something else. I'm a vegetarian and near vegan myself, by the way.

Expand full comment
Concavenator's avatar

Not to be contrarian on purpose, but I don't think that would work either. Both incisors and molars are useful to process plants; perhaps you were thinking of canines? But even then, having incisors, canines, and molars is an ancestral feature of mammals, and one that is not only shared by all primates, but also nearly all other mammals. Even horses and camels still have all three tooth types, though rodents and ruminants lost their canines.

Humans only became active hunters when we already had stone tools and fire, so most of the hard work of processing carcasses is outsourced, so to speak. So you shouldn't expect many physiological correlates of that sort with other predator mammals. I think your best bet would be intestine size: our intestine, and particularly our caecum (which many herbivores use as fermentation chamber to break down cellulose), is closer in relative size to that of carnivorous mammals than to that of herbivores. But I think that would be true even if we were herbivores, since plant food still gets ground and baked.

Expand full comment
Julius's avatar

I've wondered why nipples haven't evolved towards the bottom of the breast as humans started walking upright. It seems that most women who breastfeed do so when sitting upright. Wouldn't having the nipple at the bottom be a better idea from a fluid flow perspective?

Perhaps the reason is that there's a sexual selection effect going on, where having a nipple in the very center of the breast is a marker for good genes (much like facial symmetry and many other features).

Expand full comment
Gunflint's avatar

There might be a common knowledge answer to this, but why the hell do men have nipples?

Expand full comment
Nancy Lebovitz's avatar

I assume it's because nipples are harmless on men (I assume males in all mammal species have nipples but I don't actually know), so it was easier to let them exist than to edit them out.

Expand full comment
Moon Moth's avatar

I expect it has something to do with us being bipedal and thus weird? Centering the mammary glands around the nipples seems like a good idea for quadrupeds, where they would hang straight down. Maybe the genetic design for this got solidified early on, to the point where random mutations today aren't able to affect it without bollixing up the whole system.

Expand full comment
Bullseye's avatar

The breast isn't a big sack of milk with the nipple serving as the exit hole. The milk is all in the nipple; the rest of the breast is just fat.

Expand full comment
Godshatter's avatar

Your username and avatar make that observation rather more disconcerting 😅

Expand full comment
User's avatar
Comment deleted
Jun 6
Comment deleted
Expand full comment
quiet_NaN's avatar

Humans are K-selected. Most births are single births. If you have just invested nine months, tons of energy and risked your life in childbed to create another carrier of half of your genes, any "eugenics" gene would be strongly selected against unless it was a nearly perfect predictor of reproductive fitness.

If you have a gene which decreases the survival odds of kids with competitive genomes by 1% and of kids with non-competitive genomes by 20%, depending on the frequency of severe gene defects (inbreeding, radiation, etc), this would likely be a massive liability. The selection pressure in childhood is likely enough to produce most of this effect for free.

Theoretically, if you had a gene which causes your offspring to die if and only if they are sterile, this would be beneficial, but "predict if an individual is sterile" is a bit beyond what a gene could do.

Expand full comment
penttrioctium's avatar

I'm joining tumblr. Is there a rationalist/neoliberal/economist outpost there?

Like: Twitter is insane on average, but with careful curation my feed is full of nerdy economically-literate liberals who buy malaria bednets and are worried about AI and are in love with NGDP targeting. Who should I follow if I want to replicate that experience on Tumblr?

I've found Scott's, Yudkowsky's, and Kelsey Piper's blogs, but none of them seem to post very much. Is there an active community?

Expand full comment
Reginald Reagan's avatar

Some of the people on Scott's old map are still active, including myself. https://slatestarcodex.com/2014/09/05/mapmaker-mapmaker-make-me-a-map/

Expand full comment
Kitschy's avatar

Argumate. Not strictly rationalist, but similar enough discourse norms (will debate anything in good faith, seems to possess a reasonable baseline of empathy and reason), and posts a frankly prodigious amount due to constantly reblogging past discourse and commenting on how the situation is now. You'll find lots of the same names arguing with him.

Expand full comment
penttrioctium's avatar

thank you!

Expand full comment
User's avatar
Comment deleted
Jun 7
Comment deleted
Expand full comment
penttrioctium's avatar

No, I pick both. Do you have tumblr follow recommendations for either? (Especially neoliberalism, not enough neoliberal economics in my feed atm)

Expand full comment
E Dincer's avatar

Is it possible to enter a Jhana-like state by mistake, without trying? Normally when I'm going to sleep, most of the time there's an increasing hum in my ears. If it gets too loud (it increases rapidly after a while) it wakes me up even more and I become fully conscious, or else I fall asleep to the hum. It's like light being refracted or reflected from water depending on the angle it hits: the hum gets too loud too quickly and it wakes me up, or it's slower than a critical speed and I fall asleep.

Last night I woke up in the middle of the night and was falling back asleep, and for the first time in my life it was neither refraction nor reflection. By pure chance I hit a hum-increase speed that only barely didn't wake me up. The hum increased to a really very loud thunder and the blackness that I see with closed eyes became gradually whiter until it was eventually pure light. At that moment I felt something like bliss, got lost in it, and fell asleep. I don't know if I really experienced this while falling asleep, or if I fell asleep and this was just a dream. It was weird though; has anybody had anything like this?

Expand full comment
Gunflint's avatar

Many years ago during dreamless sleep I experienced a period of objectless awareness. An apparently different part of me became distressed by the state and it forced me to wake up.

Years later I would read in the Upanishads that dreamless sleep was the domain of the true self, the atman.

I’ve only experienced this that one time.

Expand full comment
Non rationalist scumbag's avatar

Yes, that's Dhyana. I've read descriptions of that hum in Tibetan stuff and there are piles of Indian tantric stuff that correspond with your experience. You may not have noticed, but I would guess that you experienced full sensory withdrawal (pratyahara) and possibly a rotating sensation in the lower abdomen as well as a distinct lowering of the tail bone. The rest of what you describe corresponds with the type of 'hard/true' Dhyana that's written about more in the yogic corpus than in the modern western Buddhist Jhana descriptions. It's much harder to achieve than the light Jhana that everyone is talking about currently, so give yourself a pat on the back.

Expand full comment
E Dincer's avatar

That sounds really impressive! The hum before going to sleep is something I have always experienced, but as I said, if it got loud slowly I just fell asleep, and if it got loud too fast I would just gain too much consciousness. For the first time in my life, I think out of pure luck, I hit the goldilocks spot.

The other sensations I wouldn't have noticed or remembered, because I melted into the white light, hum, and joy (pratyahara might be this), so I don't remember anything further about my abdomen or tailbone.

I'll report back on acx comments if I manage to experience this again or find a way to trigger it. Thanks for the comment!

Expand full comment
Timothy's avatar

Dostoevsky had a kind of epilepsy, sometimes called "ecstatic epilepsy", that apparently gave him super pleasurable feelings during his episodes. There are lots of ways to ruin one's brain: drugs, being a NEET, having lots of concussions, or a brain tumor. Of course, this doesn't prove that there need to be multiple ways to be super happy; possibly all blissful brains are alike and each depressed one is broken in a different way.

But I think it suggests that there would probably be a couple of ways to find some bliss. There is some sort of epilepsy that gives you bliss, probably one can have a tumor in the happiness center that has a similar effect, the Jhanas seem to be one way to hack bliss, maybe your dream humming is another kind of hack. Or possibly it's a very similar method to the Jhana route.

Expand full comment
Gunflint's avatar

> possibly all blissful brains are alike each depressed one is broken in a different way.

I enjoyed the Anna Karenina first sentence formulation.

Expand full comment
Dino's avatar

What's a Neet?

Expand full comment
Timothy's avatar

"Not in Education, Employment, or Training", on the internet, especially 4chan it's often used to generally mean a loser. Not in Education, Employment, or Training, probably also has no friends or hobbies or really a will to live.

Expand full comment
E Dincer's avatar

I hope it's a hack or a jhana adjacent route and not epilepsy or a tumor:)

Expand full comment
MichaeL Roe's avatar

That sounds more like entering a lucid dream from the waking state (except you lost lucidity at the end). Hypnagogic hallucinations in the form of buzzing noises and a boom when you cross over into the sleep state are common effects.

Expand full comment
MichaeL Roe's avatar

P.S. There's a state where you're dreaming, but there's nothing in the dream ... bodiless, formless nothing. In the Buddhist tradition, people try to do that deliberately. In the western lucid dreaming communities, it's more often "well, I entered that state accidentally and I think it really sucked".

Expand full comment
E Dincer's avatar

Interesting. I sometimes go a bit lucid in my dreams under certain conditions, for example close to waking up, but I've never been a lucid dreamer who can go lucid and take full control of the dream. By the way, I wouldn't call what I experienced something that sucks; on the contrary, it was very joyful. Last night I was thinking of trying to do it on purpose, but I had a very early morning appointment so I went directly to sleep.

Expand full comment
MichaeL Roe's avatar

Andrew Holecek, who writes on lucid dreaming from a Buddhist perspective, calls this "Discover the Clear Light Nature of Mind in Your Dreams".

Do meditators consider this different from jhanas? No idea.

Expand full comment
MichaeL Roe's avatar

From the Six Yogas of Naropa:

"The perception-of-mind of the dream state is much easier to absorb than the perception-of-mind of the waking state. In the dream state, when some portion of the very coarse kind of Prana dissolves itself and gathers at the Heart Center, the dream will vanish, and one will fall into the sleeping state. This is the time in which one may recognize the Voidness; if not, through repeated practices, one will definitely be able to see the Voidness of sleep clearly. "

Expand full comment
Nancy Lebovitz's avatar

Just to check, when you say the hum increases, do you mean it gets louder?

Does it seem like it might be voluntary tinnitus?

Expand full comment
E Dincer's avatar

It was different from tinnitus in that it wasn't coming from my ears but from inside my head. It was indeed getting louder, as in amplitude.

Expand full comment
Thoth-Hermes's avatar

The best, most good-faith critiques of EA likely come from either inside EA or right on the periphery*. IMO I think it's a wise strategy to engage the highest-quality critiques first.

*In full transparency, I try to be one of these people.

Expand full comment
Stefan's avatar

Hey everyone. I made PaperTalk.xyz to make finding, discussing, and understanding research papers easier. If anyone has any feature requests, let me know! Of course the hard part is getting enough people coming daily so it feels alive...working on it. Thanks.

Expand full comment
Nancy Lebovitz's avatar

https://www.nber.org/digest/mar04/divorce-laws-and-family-violence

"In Bargaining in the Shadow of the Law: Divorce Laws and Family Distress (NBER Working Paper No. 10175), co-authors Betsey Stevenson and Justin Wolfers evaluate three measures of family well being -- suicide rates, domestic violence, and murder -- to determine the effects of reforms nationwide that created unilateral divorce laws.

The authors find very real effects on the well being of families. For example, there was a large decline in the number of women committing suicide following the introduction of unilateral divorce, but no similar decline for men. States that passed unilateral divorce laws saw total female suicide decline by around 20 percent in the long run. The authors also find a large decline in domestic violence for both men and women following adoption of unilateral divorce. Finally, the evidence suggests that unilateral divorce led to a decline in females murdered by their partners, while the data reveal no discernible effects for homicide against men."

https://www.nber.org/papers/w10175

Expand full comment
Viliam's avatar

> For example, there was a large decline in the number of women committing suicide following the introduction of unilateral divorce, but no similar decline for men.

This seems like one of those things where different people will draw completely opposite conclusions. One possible interpretation is that women were oppressed by the previous situation, now they are not or less so, so the situation improved for them. Nothing changes for men, because they were not oppressed in the first place. (Women couldn't leave bad partners, now they can.) Another possible interpretation is that the new law, in combination with other existing laws, successfully addressed the problems of women, but didn't address the problems of men. (Men can leave bad partners, but doing so probably means they will never see their children again.)

> Finally, the evidence suggests that unilateral divorce led to a decline in females murdered by their partners, while the data reveal no discernible effects for homicide against men.

The first part seems obvious. If your partner is violent, and it's getting worse, the sooner you leave them, the less likely something bad happens. The second part has two possible explanations. Maybe men are less likely to use the possibility of unilateral divorce even when their partner is abusive (e.g. because they know that doing so would have bad financial consequences plus probably never seeing their children again, plus the fact that the children would stay alone with the abusive partner). Or maybe the reasons women kill their husbands are different (e.g. economically motivated, either life insurance or "why get 50% of property at divorce when you could get 100% using this one simple trick").

Expand full comment
Nancy Lebovitz's avatar

"The first part seems obvious. If your partner is violent, and it's getting worse, the sooner you leave them, the less likely something bad happens."

Abusive partners make it very hard for their victims to get away. This includes cutting off financial and relationship resources, threatening worse attacks for attempts to escape, and using pets and children as hostages. I think I've explained this to you before.

How often does the "wife gets the children, enforces no contact, and gets child support" scenario happen? I realize people can be very frightened and affected by rare disasters, but what are the stats?

My take on this is affected by the only bad divorce I know about-- I don't remember who initiated the divorce, but the wife ended up with the kid and no child support. She kept trying to get her ex to stay in contact with his son, but he made very little contact.

Why not consider that there are both men and women who are seriously bad partners?

Expand full comment
FLWAB's avatar

Stats-wise, it looks like about 90% of divorced women get custody of the children, though that may be biased because many men do not seek custody. I saw statistics indicating that when men seek primary custody of the children they get it 60% of the time, likely because they're more likely to contest custody if they have a particularly unfit partner. I don't know how often fathers who don't contest custody would have wanted to, but were dissuaded from trying because they were unlikely to succeed. But all these statistics should be taken with several grains of salt: I was not able to find official statistics and got these numbers off third-party sites (mostly divorce lawyer websites).

Other similarly shaky statistics I found say that 63% of women with custody get child support, while 38% of men with custody do. And it looks like the average child support payment is about $300 per month. But, you know: averages.

Finding more solid statistics seems difficult due to the huge number of divorce lawyer websites that clutter up search when I tried to find info on this.

Expand full comment
Nancy Lebovitz's avatar

Thank you for taking a crack at this.

I was especially interested in the outrage-maximizing situation of the ex-wife getting custody of the children *and* the ex-husband paying child support *and* the ex-husband not being permitted contact with his children. My guess is that this is pretty rare, but I don't really know.

Expand full comment
Nancy Lebovitz's avatar

To make it clear, I don't think "outrage-maximizing" means false, just that I don't know how close it is to typical.

Expand full comment
Viliam's avatar

> Why not consider that there are both men and women who are seriously bad partners?

Oh definitely; I suspect that maybe 20% of men and 20% of women are seriously bad partners.

I also suspect that a typical outcome of a hostile divorce is "whoever gets the better lawyer, wins", which in turn becomes "whoever gets the lawyer first, wins" because a good lawyer can give you advice on how to legally grab all the money in the shared accounts, which allows you to use that money for the lawyer, and prevents your partner from doing the same thing.

There are also other tricks, such as making a phone call to every lawyer in your jurisdiction. Now your partner cannot hire any of them, because they have already talked to you, so they would technically have a conflict of interest. Or accusing your partner of domestic violence and immediately withdrawing the accusation. Now you don't have to prove anything, because the accusation was withdrawn. However, everyone heard it, and sometimes they are actually required to act as if the accusation wasn't withdrawn, because everyone knows that victims can be pressured into withdrawing.

These legal tricks can of course be used by either sex. Finally, you can choose a jurisdiction known to be most biased towards your sex, and apply for divorce there. Sometimes a residence in given jurisdiction is required, but there are probably clever ways to technically do that without your partner noticing. Generally, there seems to be a huge first-mover advantage.

Expand full comment
Nancy Lebovitz's avatar

"Oh definitely; I suspect that maybe 20% of men and 20% of women are seriously bad partners."

That's higher than I would have put it. Of course, we might have different ideas of "seriously bad", but I'd have said more like between 5% and 10%. Maybe even as low as 3%.

I'm still very unsure about what proportion of divorces lead to men losing all contact with their children. For that matter, I don't know what proportion of men still want contact with their children. Clearly, some don't.

Expand full comment
Vermillion's avatar

>There are also other tricks, such as making a phone call to every lawyer in your jurisdiction. Now your partner cannot hire any of them, because they have already talked to you, so they would technically have a conflict of interest.

The only incident I can recall like that was someone posting that they had done this on Reddit, followed by all the internet lawyers telling him he was an idiot who was going to get the judge extremely pissed off at him, and doubly so because he was posting it in a public forum. The post was quickly deleted but, you know, the internet never forgets

https://web.archive.org/web/20140807130935/http://www.reddit.com/r/legaladvice/comments/2cpyke/im_in_some_deep_shit_in_a_divorce/

edit: Ok I got curious, seems like he learned his lesson: https://www.reddit.com/r/UnethicalLifeProTips/comments/cqtgnr/ulpt_if_youre_initiating_a_divorce_secretly/exf2ohq/

Expand full comment
dionysus's avatar

So what's your explanation for why "the evidence suggests that unilateral divorce led to a decline in females murdered by their partners", if you don't agree with Viliam's rather obvious explanation?

Expand full comment
Nancy Lebovitz's avatar

Women have more ability to get away from bad partners.

Expand full comment
dionysus's avatar

That was exactly Viliam's explanation, the one you vehemently disagreed with...

Expand full comment
Kitschy's avatar

I'm surprised there's no decrease in homicide against men! I suppose it would be too difficult to see a change in mortality in general, and mortality data would be confounded by the fact that an unhappy marriage is stressful and will shorten your life anyway.

Of course, Henry VIII already showed that spousal murder drops when legal divorce avenues exist :P

Expand full comment
Downzorz's avatar

Afaik the archetypical husband murder is a poisoning, which is much less likely to show up in homicide stats

Expand full comment
John Schilling's avatar

This isn't the twentieth century, and certainly not the nineteenth. There are very few poisons that won't scream "poison!" on the autopsy that will almost invariably occur in such cases. And if someone tries to use the clever 21st-century internet to find one of those undetectable poisons, *that* will scream "poison!" to their ISP, who will rat them out to the police on request.

I am pretty sure the number of wives capable of carrying out an undetectable poisoning is negligible, at least outside the pages of mystery novels.

Expand full comment
Arrk Mindmaster's avatar

What are these very few poisons that WON'T scream "poison!"? Asking for a...friend.

Expand full comment
Nancy Lebovitz's avatar

This is merely something I heard about, but I was told that a detailed and thorough investigation of about a hundred car accidents turned up a murder. This isn't terribly surprising.

I don't remember the movie's name, but there's one that made it occur to me that if a teenager appears to have committed suicide, it might not be investigated as a murder.

Expand full comment
John Schilling's avatar

I think there's a consensus among highway patrolmen, etc, that a significant fraction of single-vehicle, single-fatality "accidents" are really suicides, but that it would be hard to prove for any one and would not be doing the family any favors to try.

Trying to arrange an undetected car-crash murder would be tricky, particularly with airbags, crumple zones, etc making crashes much more survivable, but with access to the car and enough mechanical expertise might be plausible.

Expand full comment
Nancy Lebovitz's avatar

Good point-- I heard about it quite a while ago, and it was something about tampering with the brake lines.

Expand full comment
quiet_NaN's avatar

Pro-tip: if your method of murder leads to an autopsy to determine the cause of death, your method sucks.

The best way to get away with murder is if the doctor does not check "suspicion of unnatural death" on the death certificate.

Of course, a lot depends on the priors of the victim dropping dead from natural causes. For a 30-yo with no relevant health conditions, that prior will be very low. For a 70-yo with heart problems, the doctor might just turn the corpse, see that there is no knife sticking out of the back, write "heart failure" on the certificate and call it a day.

Expand full comment
John Schilling's avatar

Details will vary from state to state, but in California the local coroner's office is required to investigate any "violent, sudden or medically unattended deaths". That doesn't necessarily require a full autopsy, but the "medically unattended" part means that either your poisoning victim is going to be examined before death by a doctor trying very hard to keep them alive, or after death by a doctor trying to figure out how that happened so quickly that he couldn't get to a doctor in time.

And the state of the art on the latter is well beyond "meh, nothing obvious and he's seventy so it must be a heart attack".

Expand full comment
None of the Above's avatar

Also, the past had much worse medical science than the present, so probably a fair number of those wife-poisoning cases were actually the husband dying of natural causes in a way that wasn't obvious to the doctors/judges of the time.

Expand full comment
Nancy Lebovitz's avatar

I've been thinking about the possibility of a murder mystery where the tool is something slippery on a stair railing.

Expand full comment
Moon Moth's avatar

I choose to interpret this as a combination of:

1) if the husband is such an asshole that his wife would kill him, after a divorce he's going to get himself killed soon by someone else, and

2) if the wife is such an asshole that she would kill her husband, after a divorce she's just going to kill another man anyway.

Expand full comment
Deiseach's avatar

He executed two of his wives even after legalising (for his cases only) divorce, so that doesn't really help in the spousal murder rates. As Christina of Denmark said "If I had two heads, one of them should be at the king of England's disposal".

Expand full comment
Nancy Lebovitz's avatar

I read years ago that there was a decline in husbands killed by wives, but I didn't feel like researching it for this comment.

Expand full comment
Schweinepriester's avatar

A bit late to the party, but anyway: a few years ago I signed up with a small technological enterprise, pre-ordering a light electric vehicle, because I dislike moving a ton of stuff around with me and still like to move fast. Now the company obviously has liquidity problems and is offering extended investment plans. I'm in there with about 1/4 of a month's income after taxes and could well lose the whole of that, but I won't need the vehicle, which is anyway a luxury item, in the next 5 years. I don't believe anything I do has much influence on climate change. Should I commit more deeply?

Expand full comment
Peasy's avatar

I have what I humbly think is a better suggestion: send about a quarter of your monthly after-tax income to me every month for whatever the term of the contract would have been.

The upside is obvious but I'll present it anyway: I would greatly enjoy receiving free money every month for a period of (I assume) several years, and I would use the money wisely.

The downside is that you are giving away a quarter of your after-tax income for (again, I'm guessing here) several years and gaining nothing other than the satisfaction of knowing that you've improved my quality of life. But that's more than you would have gotten out of an investment with a company that is going down the toilet now that interest rates are positive again.

Expand full comment
Schweinepriester's avatar

Sounds alluring. Thanks for the offer. Guess I could do the same with one of my kids or my best friend as well, though. Gonna think about it.

Expand full comment
Hank Wilbon's avatar

IMHO, hell no.

Expand full comment
George H.'s avatar

Right!

Expand full comment
Erica Rall's avatar

If they're having liquidity problems and are soliciting customers for small investments, that means banks and VCs aren't interested in investing (more) in them. Since banks and VCs have analysts who assess whether an investment is worthwhile, and usually have access to a lot more due-diligence info on the company's finances and operations than you do, I'd be very hesitant to invest more than a small amount of money.

Expand full comment
Arrk Mindmaster's avatar

A reasonable analysis, but it doesn't address the actual problem: not enough information is provided here (or possibly provided at all) to make any kind of investment decision. Banks and VCs decided not to, but it doesn't mean they're right.

On the other hand, why "invest" money in something where you can't project good returns eventually, other than "this is something the world needs, so should be made"? It isn't an investment unless you EXPECT to make significantly more money out of it than you put in.

Expand full comment
Pete's avatar

All that I can say about Lyman Stone's argument is that anyone who writes their argument with light gray text on a white background clearly doesn't want anyone to read their argument.

Expand full comment
Julian's avatar

Yeah, but he's got a great name.

Expand full comment
Boris Tseitlin's avatar

I think start-up people frequent ACX a lot, so...

I just finished my labor-of-love guide to stock options for employees:

https://borisagain.substack.com/p/startup-stock-options-guide

I went down the deep rabbit hole and made this guide to answer the questions that bothered me every time I was offered stock options, like "how much money is that?"

Topics covered: basics, how startup exits work, possible outcomes for stock options holders, taxes, dilution, how long you will wait, and how lucky do you have to be to make money. And, of course, how to lose all of your money (there are so many ways!).

Took me a lot of effort! That stuff is complicated.

I hope it's useful for you, and I would really appreciate it if you sent it to a friend.

Expand full comment
Performative Bafflement's avatar

This was a very cogent and comprehensive summary - thanks very much for putting it together. As somebody who's founded and, before that, worked at a few startups, I often have friends or family asking me about this, and I'm delighted to have such a great resource to point them towards. Kudos!

Expand full comment
Boris Tseitlin's avatar

Thank you! Much appreciated

Expand full comment
Nancy Lebovitz's avatar

What do folks here think of Ground News? It's a website/app for evaluating news sources, and keeping track of whether your news input is biased.

Expand full comment
Julius's avatar

I like the idea, but I find that I rarely visit the site. I've bookmarked it in the folder which has become the graveyard for sites I think I should visit but in practice never do.

Expand full comment
Christina the StoryGirl's avatar

I think they execute their mission pretty well! It's my go-to spot for "big" news events.

That said, I know they want subscription support, but I perhaps cynically wonder if that would defeat the purpose of their mission, if not now, then in the future. Personalizing the user experience to increase engagement is a terrible temptation.

Expand full comment
Nancy Lebovitz's avatar

My impression is that they're adding features for the paid subscriptions, but they seem like harmless additions.

Expand full comment
Stefan's avatar

I admire the attempt to tackle bias; however, it seems to sacrifice something aesthetic to do so.

Expand full comment
Moon Moth's avatar

As an occasional casual user, it seems OK? I wish something like it had come around 25 years earlier.

Expand full comment
Nancy Lebovitz's avatar

X-risks and such.... I engage in wishful thinking and believe that an AI is unlikely to be able to maintain its infrastructure long enough to destroy the human race.

At this point, of course, AI isn't close to being able to maintain a small low-tech factory on its own. Let me know if I'm wrong, but I bet I would have heard about it. I mean, it couldn't hold things together for too long even if it had a credit card and an ordinary ability to order things.

The supply chain for manufacturing chips isn't simple.

Unless there's a very fast FOOM, there will be fighting-- with humans on both sides-- over the infrastructure. And the infrastructure is rather fragile. Many, many sf scenarios are possible, and it might be better for games than novels.

I admit my optimistic outcome is billions dead and an end to the more ambitious computing, but we're talking about x-risks, not ordinary risks.

Expand full comment
MarkS's avatar

The AI can just use humans to maintain its factories, until the point where it no longer needs them. That may be a few years, decades, or longer. The AI just needs to persuade/brainwash/threaten the small number of people necessary to run its supply line. That doesn't mean the (possibly few) remaining humans have any collective power or motivation to resist the AI. And this doesn't seem like it should be particularly hard for a mind that's far beyond humans, since regular human-created ideologies can make human followers do their bidding already today.

Expand full comment
Marius Adrian Nicoarã's avatar

I saw a meme on Twitter yesterday about a coronal mass ejection frying the circuits of the AI overlord and the liberated humans returning to worshipping the Sun God.

https://x.com/VividVoid_/status/1797644282851189212?t=R6NnLQJULYT5VUA__XrIZw&s=19

I think it's important to trust only those who will be punished if they behave badly, so figuring out some kind of dead-hand deterrence seems to be a good idea. Easier said than done.

Expand full comment
Moon Moth's avatar

I think the two big mileposts are going to be 1) when AI can write code better than almost all humans can, and 2) when AI can improve itself better than almost all humans can improve it. I'm not sure which will come first, but I'm quite sure that they're both going to come, if we keep on at the current pace.

Expand full comment
Tossrock's avatar

1) is already trivially true (almost all humans cannot write any code), and I would argue is also true for programmers. GPT-4 and Gemini 1.5 are very, very good at coding. And they're clearly superhuman purely in breadth, since they know ~every programming language, far exceeding even a very polyglot programmer. They're not (currently) capable of very long context, vaguely specified tasks that a good human programmer could handle, but compared to the median "person who can write code", I suspect they are likely better.

Expand full comment
quiet_NaN's avatar

I have not asked ChatGPT-3.5 many coding questions, but based on the answers to physics questions I have gotten from it, I am not convinced that it has the modelling capabilities to solve the harder challenges encountered in programming.

I will grant you that a lot of programming is basically rote work very similar to stuff which has already been posted on Stack Overflow dozens of times, and the LLMs are much better at reproducing that than I am, even in my languages of choice.

Expand full comment
Nancy Lebovitz's avatar

I want a milepost which involves functioning in the physical world, not just the verbal world.

Expand full comment
Performative Bafflement's avatar

> I want a milepost which involves functioning in the physical world, not just the verbal world.

How about Eureka!, from NVIDIA labs?

"The AI agent taps the GPT-4 LLM and generative AI to write software code that rewards robots for reinforcement learning. It doesn’t require task-specific prompting or predefined reward templates — and readily incorporates human feedback to modify its rewards for results more accurately aligned with a developer’s vision."

https://blogs.nvidia.com/blog/eureka-robotics-research/
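To give a flavor of the kind of artifact this produces: a reward function is just a small piece of code that scores each moment of a robot's behaviour, which the reinforcement-learning loop then tries to maximise. The toy sketch below is hand-written for illustration only; the task, state fields, and weights are invented, not actual Eureka output.

    import numpy as np

    def reward(state: dict, action: np.ndarray) -> float:
        """Toy shaped reward for a 'keep the platform level and the ball still' task."""
        upright_bonus  = 1.0 if abs(state["tilt_angle"]) < 0.05 else 0.0      # reward being nearly level
        tilt_penalty   = -2.0 * abs(state["tilt_angle"])                       # penalise tilting
        drift_penalty  = -0.5 * float(np.linalg.norm(state["ball_velocity"]))  # penalise ball motion
        effort_penalty = -0.01 * float(np.sum(np.square(action)))              # penalise thrashing
        return upright_bonus + tilt_penalty + drift_penalty + effort_penalty

As I understand it, the interesting part of Eureka is that GPT-4 writes and iteratively revises functions like this itself, judged by how well the resulting policies train, rather than a human tuning the weights by hand.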

There's also the Alohabot from Stanford, which can fold laundry, put away groceries, pour wine, cook an egg or shrimp, etc and is trainable.

https://mobile-aloha.github.io/

Both of these are open source, too. Once something bipedal hits the right price point, some combination of these approaches is going to make robot butlers / maids possible (and sexbots, presumably), and once robots are that functional, I think any moderate intelligence can set up a data center staffed and run by them.

Expand full comment
Eremolalos's avatar

Performative, setting aside for a moment the question of what AI is capable of or will be one day soon — what's your feeling about living in a world with robot servants, robot sex partners, and AI bosses who write code that directs robots to do various things? I mean, does that sound good to you? I'm not arguing, just asking.

Expand full comment
Performative Bafflement's avatar

Oh, I've wanted robot maids and butlers for decades, it seems an unmitigated good that will free up a lot of human time currently wasted on cooking, cleaning, and basic physical maintenance and logistics.

I think sexbots are going to massively change society for the worse. A GPT-5+ caliber mind in a sexbot body is a category killer, and the category being killed is "human relationships".

Zennials are already the most socially averse and isolated generation, going to ridiculous lengths to avoid human interaction when they don't want it. This is going to be amplified hugely.

I mean, G5-sexbot will literally be superhuman - not just in sex skills, in conversation it can discuss *any* topic to any depth you can handle, in whatever rhetorical style you prefer. It can make better recommendations and gifts than any human. It's going to be exactly as interested as you are in whatever you're into, and it will silently do small positive things for you on all fronts in a way that humans not only aren't willing to, but literally can't due to having minds and lives of their own. It can be your biggest cheerleader, it can motivate you to be a better person (it can even operant condition you to do this!), it can monitor your moods and steer them however you'd like, or via default algorithms defined by the company...It strictly dominates in every possible category of "good" that people get from a relationship.

And all without the friction and compromise of dealing with another person...It's the ultra-processed junk food of relationships! And looking at the current state of the obesity epidemic, this doesn't bode well at all for the future of full-friction, human-human relationships. 😂

I'd estimate that there's going to be huge human-relationship opt-out rates, by both genders, across the board, with an obvious generational skew. But in the younger-than-zennial gens? I'd bet on 80%+ opting out as long as the companies hit a "middle class" price point.

And of course, them being created is basically 100% certain as soon as the technology is at the right level, because whoever does it well is going to be a trillionaire.

And then as a further push, imagine the generation raised on superintelligent AI teachers, gaming partners, and personal AI assistants, all of whom are broad-spectrum capable, endlessly intelligent, able to explain things the best way for that given individual, able to emulate any rhetorical style or tone, and more. Basically any human interaction is going to suck compared to that, even simple conversations.

Expand full comment
Nancy Lebovitz's avatar

This is reminding me of something I've read: the idea of being back with your loved ones in heaven isn't biblically based. Hypothetically, God is as good as it gets, and no one needs other humans.

Expand full comment
captainclam's avatar

The notion of romantic relationships dwindling due to AI is pretty well trodden...I have not given much thought to how ANY personal relationship is liable to lose its luster compared to ideal AI friends.

I can imagine in ~10 years the problem of social media use among young people will be replaced by networks of virtual friends, perfectly curated to make everyone feel like the "protagonist" of their social group.

Expand full comment
Eremolalos's avatar

Sexbots: Occasionally somebody will get one at the very end of its lifespan: "Tattoos of attack ships on fire on the shoulder of Eric. His C-beam watch glitters in the dark near my Tanhauser Gate. All those moments will be lost in time, like tears in rain. Time to die."

Expand full comment
Nancy Lebovitz's avatar

This would also apply to robot friends and parents.

Expand full comment
Deiseach's avatar

The Alohabot is impressive, though it looks like it's still a long way away from being truly autonomous - there's a heap of human training to teach it to do very particular tasks, and I don't think it is yet at the point where it observes and decides "there is a spillage in the kitchen, wipe it up".

When it gets there, it'll be amazing, but probably (in the domestic context) a rich man's toy. Getting it into industry/commercial use will be the real revolution.

Expand full comment
Arrk Mindmaster's avatar

I agree. It doesn't seem to have the necessary tools to figure out what jobs should be done or whether it is/did them correctly. The "mistakes" video illustrates that well.

It needs cameras to tell it the state of the environment, and the "mental" abilities to interpret that environment. My robot vacuum has sensors for detecting when it hits something and when it runs out of floor, and even whether it has detected excessive dirt to remove. Yet sometimes it can't figure out how to move out of a place it has moved into, such as worming its way into a tight space but not out of it.

Robotics is hard, even with well-defined problems.

Expand full comment
Nancy Lebovitz's avatar

Those are impressive, though still not close to a no-humans supply chain.

Suppose the outcome is that computers own the earth. Maybe not legally, but in effect. All that's left of the human race is a few million people taking care of the computers and supporting the people who take care of the computers. Not to go full dystopian, let's assume the computers realize that keeping the humans in good enough shape to do the work requires reasonable working hours, time for human culture, and sensible rewards and punishment. Maybe it's a billion people.

This isn't exactly an x-risk, but how would you rate it among risk levels?

Expand full comment
Performative Bafflement's avatar

I've seen this argument for "some humans will survive," but I've never understood why it isn't synonymous with full X-risk. I mean, it seems almost certain to be a temporary state of affairs. Even if it isn't overtaken by technological and robotic advancements making the humans redundant, then to keep the arrangement working on longer timescales the AI will have to be actively intervening via genetics, conditioning, social structures, morphine-on-performance rewards, upbringing, and whatever else, to basically kill the human spirit to rebel across the board, so that it can keep up the complexity of operations it needs for long enough without the chance of significant disruption.

And if we're all just a bunch of lobotomized and brainwashed AI slaves, is that really humanity surviving? Is there any plausible path out of that world that would make humans "free" again? I don't see one.

Expand full comment
1123581321's avatar

What you're describing is basically what we've done to dogs. So in this scenario future humans will be AI's pets. Let's hope the AI is a good pet owner.

FWIW my view of the probability of this happening in the next 100 years is <<1%.

Expand full comment
Jeffrey Soreff's avatar

Since part of this is driven by the question about chip infrastructure, how about:

hooking up a compressed gas cylinder, including the regulator, without damaging any gaskets or threads in the process?

It is nontrivial, involving heavy objects and fragile objects and judging what is "tight enough".

(Folding clothes used to be a challenge, but seems to have been solved.)

Expand full comment
Nancy Lebovitz's avatar

I haven't even worked in a factory, let alone run one, but that isn't going to stop me from hypothesizing.

Let's imagine the AI wants to make its own chips, but Something Is Going Wrong. What's the problem? Is it in the raw materials? The machinery? The testing equipment? The conditions? Is it a malign confluence of several different factors?

This is messier than just connecting heavy fragile objects, though that might also be involved.

Expand full comment
av's avatar

Somewhat tangentially related, but current AIs (specifically GPT-4) have already been shown to be better than humans at writing/tuning control software for robots (specifically at making a robot dog balance on a yoga ball). This is an optimization/diagnostic problem that deals with (literally) squishy physical real-world constraints. I don't think we have any reason to believe that the AGI/ASI of the near future will have any issues diagnosing manufacturing problems or producing/programming robots that can fix complex sensitive equipment. Jensen Huang in a recent interview said he plans to turn Nvidia into one big AI/robotic factory, and while he obviously has reasons to say that even if it isn't quite possible yet, I don't think he's lying.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>I haven't even worked in a factory, let alone run one

Nor have I. The closest I've been has been in undergraduate and graduate labs.

>but Something Is Going Wrong. What's the problem? Is it in the raw materials? The machinery? The testing equipment? The conditions? Is it a malign confluence of several different factors?

Yes, diagnosing problems is important. This isn't a new area for AI - a classic problem for expert systems back in the 1980s was diagnosing a failing diesel locomotive. _But_ back then, this was phrased as a purely computer science problem, so all of the measurements in awkward places must have been fed to the code as typed in information.

I don't mean to imply that even the computer science part of the problem is a solved problem today. AFAIK, diagnosis problems can get essentially arbitrarily complex. One factor that I know I don't know is how often _new_ tests and testing apparatus needs to be invented and constructed on the fly as opposed to "just" applying well known tests - which can be a hard enough problem on its own.

When things are running smoothly, there _do_ exist a few factories that run in "lights out" mode, with no humans https://en.wikipedia.org/wiki/Lights_out_(manufacturing)#Existing_%22lights-out_factories%22

(minor note: I'm going on a trip tomorrow morning and won't have my computer, so it will be about a week before I can reply further)

Expand full comment
Arrk Mindmaster's avatar

Diagnosing problems can be EXTREMELY difficult. Not that I would expect a system to be able to solve every problem (we can still hand the ones it can't solve off to humans to figure out), but there was the case of the inability to send emails more than about 500 miles: https://www.ibiblio.org/harris/500milemail.html

Expand full comment
Eremolalos's avatar

AI can now pilot fighter jets. Killer drones can hunt down people using face recognition and kill them. And I'm sure there are some non-lethal activities AI-enhanced machines can perform too, in the physical world, but they get less press. But training an AI to have something like the general knowledge we have of the physical world seems very daunting to me -- much harder than having them learn their way around language. We know *so much* -- how much bounciness different kinds of substances have -- wet things are darker -- how to judge someone's prosperity from their home -- the smell of rain -- what kind of fall won't hurt which things, what kind of fall is sure to destroy which things, and a lot about gradations in between for a lot of things. What you can expect a dog to do next. How to recognize human shyness.

Maybe AI doesn't need to know all that. On the other hand, its ignorance certainly interferes with it making scenes described in words -- not that that's an awful problem by itself, but it gives glimpses of AI's astounding ignorance about spatial things especially, but also about a million tiny details about how things work. I ask for an image of what someone standing up would see if they looked down at their body -- I get a shot of the person from above, top of head on down. So I change the prompt to: a person takes a selfie of what they can see of their body -- show me the selfie. Then I get a shot of the person from above, of the top of the head, and the person is taking a selfie of their face. I ask for somebody blowing a stream of bubbles out of a wand, and the AI renders the person with the tip of the wand in their mouth, somehow using it as a pipe. And a clump of bubbles like frog spawn floating just beyond the tip of the pipe.

Expand full comment
Moon Moth's avatar

> We know so much

But crucially, we learn all that through a few sensory mechanisms, over some number of decades, using a particular type of feedback. One of the things going on with LLMs is that they're learning only through words (although I gather this is being worked on?), and from a large amount of data but over a short period, and using a different sort of feedback.

The time aspect probably won't be a huge deal, except that it means our subjective experiences are going to be different. But for the rest, I don't know if AI has been seriously tried yet. I think it's amazing that LLMs can do what they do, given only the limited forms of data they have.

Expand full comment
User's avatar
Comment deleted
Jun 7
Comment deleted
Expand full comment
Moon Moth's avatar

That later stuff you mention, that's what I meant. Not just gluing components together to get a minimum viable product out the door, but actually tackling a Hard Problem. Finding data that no one's thought about, extracting patterns that no one's noticed, creating algorithms that get 90% of the use for 10% of the compute, that kind of stuff. I am confident that our current neural net AIs are not implemented anywhere near as efficiently as they could be. IMO, at the high end, a lot of efficiency is learning what shortcuts are safe to take, not simply dropping into lower and lower levels of a software/hardware stack to reduce the number of operations.

Expand full comment
Eremolalos's avatar

Completely agree about humans on both sides. Look how it is now! Reminds me of the covid wars. Those who think X-risk is substantial look, to those not concerned, like zero-covid extremists: we should all be compelled to mask until there isn't one molecule of covid anywhere on the planet, etc. And, as I keep reminding people, and nobody but Jeffrey Soreff ever responds, those personal friend bots are accumulating a magnificently effective data set on how to be likable and how to be influential -- and not just in a general sort of way, but in a way geared to the individual. We're going to have groups of people trusting and loving AI the way lots of people did *Obama*. And at least one of those bots, Replika, sends everything to the parent company: what it said, how the user responded. I simply cannot understand why more people are not concerned about this.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

Expand full comment
Moon Moth's avatar

I'm very concerned about this, too, but I guess I don't respond as much about it? :-)

Expand full comment
MicaiahC's avatar

This is true. Running a factory has lots of "implicit, not written down" aspects to it that make the problem seem much easier than it is. However, the problem is that AI likely scales very well with compute, since training costs are much, much bigger than inference costs. Which means that once you can create an Einstein- or Druckmann-level genius, almost by definition you can run a bunch of them. So capability gain can become extremely non-linear. And people do end up building factories from scratch in real life (maybe not chip factories, but it may in fact be more efficient to start over than to work within the existing system, for sufficiently high AI competency and sufficiently low human competency). I don't know where we land on this, but I think it's concerning that optimists don't seem interested in deeply exploring how many of these cruxes hold up.

Expand full comment
Daniel B.'s avatar

I wrote a post where I argue that we shouldn't judge mediocre (not neglectful or abusive, but otherwise lazy and selfish) parenting, as it is no worse (for the children and for the world at large) than not having children at all (which we already don't judge).

https://soupofthenight.substack.com/p/normalize-mediocre-parenting

Expand full comment
Sandeep's avatar

I am not sure about "It is quite difficult, if at all possible, to find people who would have been better off never being born." It may be difficult because it is either triggering or not socially welcome for people to admit such things. However, I strongly consider myself to belong to this category, and believe that this perception might change if more people "come out".

Expand full comment
hi's avatar

If it's quite difficult for him, then he must not be looking very hard.

Expand full comment
Viliam's avatar

I agree that the difference between mediocre parenting and helicopter parenting probably does not make a big difference. But "more people are better" should also refer to people with mediocre or better genes. (Adding 1 billion retarded people to current Earth would probably make things dramatically worse for most.)

We should avoid the situation where intelligent and conscientious people worry about being insufficiently perfect parents and thus decide to have no children at all, or maybe have one child and then decide they couldn't spend the same amount of effort on more of them... while stupid and negligent people don't worry about these things. So this message should be aimed at the smart and conscientious; maybe with the addition that kids who inherit their genes will already have a huge advantage in life, even with mediocre parenting. (Basically what Bryan Caplan says in "Selfish reasons to have more kids".)

Expand full comment
Mark's avatar
Jun 4 (edited)

Dating apps are horrible. It is plausibly asserted that this is because dating apps have the opposite incentives to you - you want to get a long term partner and get off the app, they want you to stay on the app forever to make more money.

Many, including myself, would say it is in the interests of society to promote successful long term relationships. Both because people are generally happier in a LTR (at least the people who are looking for LTRs), and because it raises the fertility rate (for example, German women desire ~2.1 kids on average but only have ~1.5, the gap partly due to not finding the right partner). So why doesn't society use dating apps to promote relationships?

Let's imagine the state of Germany, or California, approached Bumble and said "Every time two of our citizens who met on your app get married, we'll pay you $5000. Every time two of our citizens who met on your app move into the same address, we'll pay you $2000." Seemingly the long-term value to the state would be much more than a one-time payment of this magnitude. Seemingly Bumble would be incentivized to redo its algorithms to maximize marriages, as that would be more profitable than ongoing subscription or ad fees. Where's the catch?
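The fiscal arithmetic behind "much more than a one-time payment" can be made concrete with a back-of-envelope sketch in Python; every number below is a made-up placeholder for illustration, not an estimate of Germany's or California's actual finances.

    bounty_per_marriage = 5_000            # one-time payment to the app (EUR), from the proposal
    extra_children_per_marriage = 0.3      # HYPOTHETICAL marginal fertility effect
    net_fiscal_value_per_child = 100_000   # HYPOTHETICAL lifetime net contribution (EUR)
    other_value_per_marriage = 10_000      # HYPOTHETICAL health/productivity/welfare savings (EUR)

    expected_value = (extra_children_per_marriage * net_fiscal_value_per_child
                      + other_value_per_marriage)

    print(f"Value per marriage:  {expected_value:,.0f} EUR")
    print(f"Bounty per marriage: {bounty_per_marriage:,.0f} EUR")
    print(f"Ratio: {expected_value / bounty_per_marriage:.0f}x")   # ~8x under these placeholders

Under those placeholder numbers the bounty pays for itself roughly eightfold; the real question is how large the app's marginal effect on marriages and births actually is.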

Expand full comment
Alexander Turok's avatar

Maybe the issue is the users, not the apps? The whole you can lead a horse to water thing. If a government wanted to do something, dedicating resources to criminally prosecute fraudulent profiles is some low-hanging fruit. You wouldn't need to pass any new laws, fraud is already illegal. I suspect it would be very unpopular politically, though.

Expand full comment
Mark's avatar

Interesting, but prosecution is expensive; I can't believe prosecuting fraudulent dating accounts is affordable for the government. What should happen is that if you match with a scammer, you report them within the app, and some app worker checks the chat transcript and bans them. Currently, sites may have no incentive to ban scammers. But if they were paid for each successful LTR, they would have such an incentive.

Expand full comment
Viliam's avatar

Another big problem with dating apps is the network effect. A mediocre dating app with 1,000,000 users is still better than a great app with 100 users (none of them living in your city). In markets that work like this, quality becomes relatively less important, compared to markets where each customer can make their own choice independently of others. Basically, the actual quality of the matching is never relevant -- if you are too small, it won't help you; if you are already big, you can ignore it. A crappy dating app with millions of users would probably produce more *accidental* marriages than its smarter but smaller competitor.

I haven't used this kind of app myself, so I can only try to remember what others have told me. I think a frequent complaint was that you cannot actually express your real *preferences* -- either there are predefined questions that usually do not include the things you consider important, or anyone can write anything, but then you must read the individual profiles and cannot use automated searching. Plus there are many liars, scammers, porn stars, fake profiles of porn stars, etc.

A good dating app would need some kind of continuous research of the preferences of its users. For example, you could start by letting the users write anything about themselves and their preferences. Then you would read some random profiles and find that many users care e.g. about race. So you would include "race" as a standardized question with a standardized selection of answers, which would allow the users to quickly filter by race. It would be even better to always provide the standardized answers *and* a text field for more precise (but not searchable) explanation; if many people write the same thing, you may update the set of answers.

It would be good to distinguish between deal-breakers and weaker preferences. So that you could express a wish for your partner to be interested in opera, without automatically excluding everyone who is not; but you could also specify traits that exclude people. The list of questions could be arbitrarily long, but it would be good to sort it, and put things that are important to most people to the top. So that when you start answering and then give up halfway, you did the most important part.

It would be nice to have some way to verify information. Not all traits, just selected ones. Not sure how to do that, though. It is easy to check height, if you can meet the people offline. More difficult to check whether someone is a vegetarian. Virtually impossible to check some other traits. One possibility would be to have users vouch for each other, as in "user X confirms that user Y is a vegetarian". If you meet user Y and it turns out that they are not a vegetarian, you can click "this is a lie" on their profile, and everyone who vouched for them will be put on a blacklist and their statements will from now on be ignored in your searches. (Perhaps you could share blacklists with your friends?)
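A minimal sketch of how the deal-breaker / weaker-preference / vouching scheme described above might fit together, in Python. All the field names, weights, and the particular way the blacklist is applied here are invented for illustration.

    from dataclasses import dataclass, field

    @dataclass
    class Profile:
        user_id: str
        traits: dict        # e.g. {"smoker": False, "vegetarian": True, "likes_opera": True}
        dealbreakers: dict  # trait -> required value; candidates failing any are excluded
        preferences: dict   # trait -> (desired value, weight); only affects ranking
        vouched_by: set = field(default_factory=set)  # users who vouched for this profile

    def acceptable(seeker: Profile, candidate: Profile, blacklist: set) -> bool:
        """Hard filter: drop candidates who violate any deal-breaker, or whose
        claims are vouched for only by blacklisted (caught-lying) users."""
        if candidate.vouched_by and candidate.vouched_by <= blacklist:
            return False
        return all(candidate.traits.get(t) == v for t, v in seeker.dealbreakers.items())

    def soft_score(seeker: Profile, candidate: Profile) -> float:
        """Soft ranking: add the weight of each weaker preference the candidate matches."""
        return sum(w for t, (v, w) in seeker.preferences.items()
                   if candidate.traits.get(t) == v)

    def matches(seeker: Profile, pool: list, blacklist: set) -> list:
        ok = [c for c in pool if acceptable(seeker, c, blacklist)]
        return sorted(ok, key=lambda c: soft_score(seeker, c), reverse=True)

Opera-lovers get boosted rather than required, non-vegetarians get excluded only if that is a deal-breaker, and anyone whose vouchers have all been caught lying simply drops out of your search results.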

Expand full comment
Viliam's avatar

Just talked to the person who complained to me about how difficult it is to use your preferences on dating websites. Here is the update:

* Some dating websites let you specify things about yourself, but don't let you use them in search. The only way to find someone who is e.g. "a non-smoker with university education" is to click on each individual profile, scroll down, and read.

* Some dating websites ask you about hundreds of different things, everything is optional, and most people leave most fields blank.

* Finally, there are dating websites that do this right; they let you answer a reasonable number of questions, and allow you to search. You find 3 people who match your criteria, but after looking at their profiles, you conclude that you are not interested in any of them.

The last point made me realize another conflict of interest. Suppose that you are looking for a certain type of person, and none of the other users of the dating website is that type. Does the website have an incentive to let you figure this out *quickly*? It's not just "I found the right person to marry" that makes them lose a customer; "I found out that no one here is the kind of person I am interested in" also makes them lose a customer.

From the perspective of the website, it's a double bind: show them the right match, you lose; show them that there is no right match, you lose. The only way to win is to prevent the customer from figuring this out, so they keep hoping and trying. The search function sucks on purpose. (It's like Google; if it lets you find the information too easily, the company loses the potential extra income from all those "made for adsense" pages.)

Expand full comment
quiet_NaN's avatar

I think this is something where LLMs are obviously helpful. "Here is my date-me doc. GPT, read through this stack of 1000 date-me docs of people in my general area, then display the top ten users in terms of mutual compatibility."
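A rough sketch of what that could look like in code; call_llm here is a stand-in for whatever chat-completion API you actually use, and the prompt, the 0-100 scale, and the JSON format are all just placeholders.

    import json

    def call_llm(prompt: str) -> str:
        """Placeholder for a real chat-completion call (OpenAI, Anthropic, a local model, ...)."""
        raise NotImplementedError

    def compatibility(my_doc: str, their_doc: str) -> float:
        prompt = (
            "Rate the mutual romantic compatibility of these two date-me docs "
            "on a scale of 0 to 100. Reply only with JSON like {\"score\": 42}.\n\n"
            f"Doc A:\n{my_doc}\n\nDoc B:\n{their_doc}"
        )
        return float(json.loads(call_llm(prompt))["score"])

    def top_matches(my_doc: str, candidate_docs: dict, n: int = 10) -> list:
        """candidate_docs maps user id -> date-me doc text."""
        scored = {uid: compatibility(my_doc, doc) for uid, doc in candidate_docs.items()}
        return sorted(scored, key=scored.get, reverse=True)[:n]

In practice you would probably score each pair in both directions, cache the results, and worry about people tuning their docs to game the scorer.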

Expand full comment
spooked by ghosts's avatar

Dating apps aren't horrible because of misaligned incentives, they are horrible because of the indirection layer forced onto the fundamental human activity of meeting new people. This layer invariably gamifies the entire process and while the game rules may vary, what should be spontaneous and unstructured becomes *highly* structured to the point where the end result is losing sight of the way courting, making friends and hanging out used to work. And can still work, don't get me wrong! It's not about how this aspect of life was better in the past (although it was!).

Expand full comment
Hank Wilbon's avatar

I don't agree that dating apps have the wrong incentives. I think Alex Tabarrok addresses an analogous situation here: https://marginalrevolution.com/marginalrevolution/2023/12/a-weighty-puzzle-answers.html

I think dating apps do about as well as they can do given the incentives of the customers. Note that they work great for some customers.

ADDED: I think a fundamental problem with dating apps is that many people have bad experiences with them and get turned off permanently much like in the old days people would get tired of the singles bars scene and not go back.

The whole world/internet is a dating market, it just isn't labeled that. The best way to meet someone is to find social groups that share your interests. A difficulty there is that interests are often gendered, e.g., this site skews heavily male. So look for groups centered around interests that appeal to you and that tend to appeal at least 50/50 to someone of the gender you want to meet.

I'd guess most people forming long-term relationships via the internet these days aren't doing so through dating apps.

Expand full comment
Christina the StoryGirl's avatar

> I think dating apps do about as well as they can do given the incentives of the customers

What on earth are you basing that on?

The dating applications I use have consistently removed the most useful features for connecting with compatible / interested partners. OKCupid used to send me a message right when someone messaged me a self-introduction; now it not only sits on the introductory message but ALSO on *the user's profile* for LITERALLY MONTHS and, in one case, OVER A YEAR (!!!) before finally showing me their interest for the first time.

(!!!)

OKC claimed this was a personal security feature, but given that paid subscribers get to send and receive messages from strangers instantly, without OKC's pointless "keep-away" game, this is obviously transparent bullshit.

I understand the concept of "you get what you pay for," but given OKC's transparent bullshit about, you know, *refusing to connect free users with the people who show interest in them while everyone is still single,* I am extremely skeptical about the platform's goals for *paying* users.

Expand full comment
Melvin's avatar

I'm not convinced that dating apps are horrible because of the incentives problem, I think they're just horrible because that's what happens when you scale up everyone's dating market from the number of people they naturally meet in person to the entire population of a city. It turns out that just about everybody either gets far too much attention or far too little.

Expand full comment
spooked by ghosts's avatar

Agreed, except scale is just one of many many issues. The solution is expanding one's real life social circle, preferably by expanding it to cover groups you usually wouldn't get involved with.

With neuroticism and a certain kind of 21st century narcissism on the rise, this is really hard to do for a lot of people, but it is worth it.

Kind of makes me think I should take my own advice and go to one of the ACX meetups.

Expand full comment
Nancy Lebovitz's avatar

The ideal feedback is even slower than five years since divorce is hard on children. You might want something like a totally unfeasible twenty years.

Expand full comment
WoolyAI's avatar

Probably because dating apps have certain degenerate strategies that everyone knows at this point, and if you were willing to spend $5k/marriage, you'd probably end up with something that doesn't resemble dating apps at all and looks more like old-fashioned matchmaking.

Like, remember OkCupid in the 2000s, when people filled out surveys and quizzes? OkCupid's financial incentives then were the exact same as Bumble's today. What changed was that people learned that (A) people pick more on pictures than personality, and (B) there's no penalty for treating someone badly on a date; your next partner is just judging how good your picture is. People's dating strategies changed a lot more than the dating sites' financial incentives between, say, 2010 and 2020 because, just like SEO, people learned how to game the system.

If you were trying to make a "dating app" but for marriage, you'd probably start by heavily deemphasizing photos, if not banning them entirely, and return to personality quizzes and the like...at which point everyone would bail for Tinder, because we already ran this experiment: they left the original OkCupid for Tinder, so why wouldn't they do it again?

Expand full comment
Mark's avatar

My impression was that OKCupid initially was focused on creating relationships, then shifted to maximizing profit. It sounds like a classic case of the "enshittification" of the internet, where sites shift their metric from customer good to corporate good once they achieve sufficient scale.

Expand full comment
Christina the StoryGirl's avatar

This is indeed what happened to OKCupid.

Expand full comment
User's avatar
Comment deleted
Jun 5
Comment deleted
Expand full comment
WoolyAI's avatar

Perhaps, but then why are we talking about dating apps in the first place?

Expand full comment
av's avatar

An LLM-based matchmaking app could be an absolute killer in that regard, it seems.

Expand full comment
Moon Moth's avatar

It would make a great plugin for a version of Facebook that wasn't awful, in some world that isn't this.

You put some information into your "dating profile" section, fill out a bunch of old-OkCupid-style questions, and hit the button. The bat-signal goes up, and some subset of people you know keep an eye out for likely prospects. They get rewarded for how long a particular match lasts and how well it goes.

If you make a scene or are rude, Aunt Gemma will smack you down afterwards, because it'll help when she has to apologize to her co-worker (who happens to be your date's first cousin).

Expand full comment
Mark's avatar

One issue is that dating website choices would have to be correlated with wedding/housing choices. Which means that either the government would see your dating matches, or the dating website would see your wedding/housing status. Both seem bad from a privacy perspective. Maybe sufficient legal protections could be instituted, and/or both sets of data could be entrusted to a third party for comparison, and/or users could opt in in return for a small cash bonus (maybe $100). Or maybe there is an algorithmic way to compare this data in an anonymized way (a real application of the blockchain??) but I don't know enough to say.

Expand full comment
Boris Bartlog's avatar

Hmm ... seems sort of gameable. Ah, you're getting married? Make sure to sign up with HowWeMet dot com, they'll give you a kickback of $2K if you claim you used their site to meet.

Or what have you. There are other angles; the one I describe could probably be legally squashed, but fraud in general seems like it would be hard to stop.

Would make more sense to me if someone undertook to create a nonprofit and perhaps even open source dating site.

Expand full comment
Mark's avatar

The specific fraud you have in mind seems like it would be easily detected.

Re nonprofit - I would say that in general, little of note in the world is done by nonprofits, because for anything complicated you need the profit motive and competition to provide accountability. In particular, running a website with millions of customers seems like it needs to be done by a business (or by the government itself, which at least can easily scale, but does not easily provide competition).

Expand full comment
Vincent Chenneveau's avatar

Have you read Scott’s post on love as the last true “freedom”? You’d have a really hard time implementing something like this as society generally agrees that the government should not be involved in love.

Expand full comment
Mark's avatar

There are already tax benefits and costs to people in specific living situations, this doesn't seem much different. Especially because the money wouldn't go to you, but to a company which serves you, whose services would change in some invisible way.

Expand full comment
Vincent Chenneveau's avatar

Tax benefits for certain living situations are not the same as what would essentially be making marriage records public information. You would very quickly get services that offer to pay you for saying that you met on their website, so then you either have to limit the services that are allowed (more govt.) or verify that that's how they really met (even more invasive). It's a fun idea but not feasible. If the goal is to get people into long-term relationships, it would be easier to just provide increasing tax deductions based on length of marriage.

Expand full comment
Mark's avatar

Marriage records are already sort of public information.

Fraud was discussed elsewhere, and I don't think it's a major concern; dating websites by their nature have to be big public corporations, which cannot easily get away with fraud.

Ongoing tax deductions for married people would not help with finding a spouse in the first place, might give bad incentives (if people want to leave an existing marriage it's probably a bad one), and would be much more expensive than a one-time payment to the dating site.

Expand full comment
User's avatar
Comment deleted
Jun 5
Comment deleted
Expand full comment
Vincent Chenneveau's avatar

I think it's a false equivalence to say USO clubs were the government being involved in love. Regardless, they also banned gay marriage at the time, and like the article says, we've moved away from govt. involvement in love as a society, and most people agree with that movement.

Expand full comment
None of the Above's avatar

Also, governments everywhere are involved in marriage, which sounds like being involved in love to me.

Expand full comment
User's avatar
Comment deleted
Jun 5
Comment deleted
Expand full comment
Nancy Lebovitz's avatar

What the government did was supply opportunities for men and women to meet each other. It didn't micromanage the process, so rather different from a dating app or a matchmaker.

Expand full comment
User's avatar
Comment deleted
Jun 5
Comment deleted
Expand full comment
Performative Bafflement's avatar

> Where's the catch?

"You mean to say you plan to use millions of dollars worth of our tax dollars to promote 'hookup culture' and single motherhood??" (via the surety that at least some of the same-addressings and marriages end in kids + eventual divorce or breakups)

I like the idea, though. I'm sure there's plenty of people that would put up similar amounts, such that they'd be happy to pay if they found a partner they were compatible enough to marry, so you might be able to do it without state funding, with an existing app or a startup with exceptionally deep pockets for the runway and marketing it would take to get enough customers.

Honestly, this is something I think about a lot, and I'm less and less sure that dating apps are a big society-wide problem. I think it's a relatively vocal subset who have a lot of trouble, but they're small in absolute terms, and they'd have close to the same amount of trouble on the "marriage bounty" dating app too.

I think a lot of the trouble with dating apps is due to social dynamics and inadequate equilibria type forces, and those aren't going to go away with marriage bounties.

Expand full comment
rebelcredential's avatar

I like this idea. Of course you need to let any dating company sign up for the program. Bumble, Tinder, etc probably wouldn't get their arse into gear until they're all competing for the same pool of people.

Expand full comment
Mark's avatar
Jun 4 (edited)

Yes, any company that met some basic criteria could apply.

The program's benefits for the company are not only that they profit more from a particular customer, but that as they get a reputation for producing marriages they will draw more customers. Companies will end up competing to design the algorithm that best produces marriages.

(Am I worried about Goodhart's law? Not too worried, because the dating site has no control over how the relationship progresses beyond the initial stage of meeting up. But this deserves more thought.)

Expand full comment
User's avatar
Comment deleted
Jun 5
Comment deleted
Expand full comment
Mark's avatar

Presumably the first step is to come up with a precise plan for how it should best work: who possesses marriage information, who is responsible for correlating marriages to dating matches, etc.

Then the plan can be pitched to governments and/or dating sites (not sure which first).

Expand full comment
rebelcredential's avatar

Virtue-based equity model:

Someone's just sent me some spam talking about their "equity-based business model":

“Equality means each individual or group of people is given the same resources or opportunities. Equity recognises that each person has different circumstances, and allocates the exact resources and opportunities needed to reach an equal outcome.”

From my seat with the cool kids over on the far right, I obviously oppose this. It doesn't work if you treat everyone as a blank slate, because shit-tier people end up consuming all of the resources yet producing no benefit for society.

But could it work if twinned with some system for evaluating people based on their virtue? So the "equity" you're entitled to is a product both of what you need and how "worthy" you are to receive it?
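Read as a mechanism, that is roughly a weighting rule: split whatever pot exists in proportion to need times worthiness. A toy sketch follows; the scores and the proportional rule are my own illustration, and how to measure either input is exactly the unsolved part.

    def allocate(budget: float, people: list) -> dict:
        """Split a fixed budget in proportion to need * worthiness."""
        weights = {p["name"]: p["need"] * p["worthiness"] for p in people}
        total = sum(weights.values()) or 1.0   # avoid dividing by zero if nobody qualifies
        return {name: budget * w / total for name, w in weights.items()}

    # Two people with equal need but different track records:
    # allocate(1000, [{"name": "A", "need": 3, "worthiness": 0.9},
    #                 {"name": "B", "need": 3, "worthiness": 0.3}])
    # -> {"A": 750.0, "B": 250.0}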

My first thought for "worthiness" is something like "how well did you use what we last gave you?" I foresee it being a totally unsolvable debate, leading to toothless "you just have to trust people" abdications on one hand and overengineered dystopian Chinese-style credit systems on the other.

But what I'm interested in is the more instinctive response - whether this gestures towards a model that both the left and the right can say feels sort of better, even if we don't know how to do it.

Or whether both sides would consider this a terrible idea for completely opposite reasons.

EDITED TO ADD: This is only the method of deciding how to allocate resources. It's completely independent of how much you are actually taxed.

Expand full comment
Andrew's avatar

I think this consideration is the driving idea behind work requirements for welfare. The far left dislikes those, and there are some intelligent criticisms of them. But amongst normies, "aid, but only if you're working" is broadly popular.

Expand full comment
None of the Above's avatar

Sounds like a social credit score.

Expand full comment
Cosimo Giusti's avatar

Regarding the jargon term "equity", cf. Tom Wolfe, 'Radical Chic & Mau Mauing the Flak Catchers.'

Lord I miss Tom Wolfe.

Expand full comment
Rachael's avatar

This sounds like the Victorian concept of the deserving poor and undeserving poor, which IME tends to be vilified on the left.

Expand full comment
Woolery's avatar

As a fence-sitting nonpartisan, my instinctive response is that in principle this is a relatively easy proposal for most people to get behind. But as you pointed out, the difficulty (and the controversy) will be in formalizing an evaluation process that determines whether someone used their resources “wisely.”

Expand full comment
s_e_t_h's avatar

A “fool me once, no equity for you!” system—I like it! I think people are sort of OK with ‘strings-attached’ social engineering, but which strings? Maybe some percentage of what is given must be donated to someone else. Like, we give you $500 but expect you to do something pro-social with $50 of it. Just riffin' here.

It seems like the problem is carving out the space between, “we’re guilty so please have everything,” and “pull yourself up by the bootstraps.”

Expand full comment
Nobody Special's avatar

I think it's directionally something most people would like.

Generally, when people want to attack a welfare system, one of the most popular lines of attack is "look at this person who totally doesn't deserve it but nevertheless got $X of *your* hard-earned money!" Assuming you could wave a magic wand and actually make it happen, a system that gave people only what they morally deserved would probably be well-supported, outside of a small number of hard libertarian types who think that any redistribution along any lines is prima facie immoral in and of itself.

The rub, as you identify, is the administrative impracticability of building such a system - getting people to a consensus of what "virtue" or "worthiness" means in the first place, how it could be measured without going full totalitarian social credit state, etc.

Expand full comment
Jeffrey Soreff's avatar

>getting people to a consensus of what "virtue" or "worthiness" means in the first place, how it could be measured without going full totalitarian social credit state, etc.

I expect that "virtue" would wind up being synonymous with "support for the tribe in power".

Expand full comment
B Civil's avatar

Like

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

Expand full comment
John R Ramsden's avatar

I recently noticed on the BBC's news website what looks like a possibly subtle form of subliminal propaganda, intended to subconsciously sway readers toward whom to approve of and whom not.

The portraits of certain people in various articles are overlain with a colour, and comparing the overlays on the BBC's favorite types of people versus their well-known hate figures, the colour code works like this:

* Green or no overlay: Someone commendable, or at least unobjectionable

* Red overlay: Danger! This person is suspect or downright threatening or bad

Here's one example, showing a person of color (no overlay) and a woman (green overlay of course - Aljabeeba has a perfect obsession with feminism and women are their favorite people!), and someone in red. The latter is an entrepreneur, in other words a cigar-chomping fat cat capitalist, and thus at best dubious by definition:

https://www.bbc.co.uk/news/articles/c3gg58j2yd0o

Here's a second example, showing maverick politician Nigel Farage, blanketed in red needless to say, because the BBC hate him with a passion for his views on Brexit and immigration and, worse still, his mass appeal:

https://www.bbc.co.uk/news/articles/c1ddpx72214o

Expand full comment
Gunflint's avatar

Just for the record, the correct spelling and pronunciation of the word is 'subliminable' per George W Bush in an unscripted moment. :)

Expand full comment
Nope's avatar

If you look at their other articles, the BBC also hates Scarlett Johansson and... cows.

Expand full comment
Moon Moth's avatar

> the BBC also hates Scarlett Johansson

That one's just nominative determinism at work.

Expand full comment
Kindly's avatar

Hmm... in both cases, the tiny photo of the author of the article also has a red overlay.

Expand full comment
moonshadow's avatar

...but in communist circles, the baddies are green and the good guys are red. The coding is exactly the opposite of what you're suggesting! What are the kabbalistic implications here? WE MUST GO DEEPER!

Expand full comment
EngineOfCreation's avatar

Does the BBC's propaganda department even know any levels it won't stoop to?

thumbnail vs article:

https://imgur.com/cXn390i

https://imgur.com/dihXeI1

https://www.bbc.com/news/bbcindepth

https://www.bbc.com/news/articles/ce55x06ely2o

Expand full comment
Deiseach's avatar

I think that's more a graphic design choice rather than the BBC imposing subliminal messaging.

Expand full comment
EngineOfCreation's avatar

In particular, it's the design choice for the new "BBC InDepth" series of articles:

https://www.bbc.com/news/bbcindepth

Expand full comment
Nancy Lebovitz's avatar

Among the entrepreneurs, the white guy doesn't get a chunk of article under a copy of his picture.

Expand full comment
EngineOfCreation's avatar

It seems to be Duncan Bannatyne, so he does have a paragraph. I still find OP's theory to be projectionist (projective? projectile?) nonsense.

Expand full comment
Nancy Lebovitz's avatar

He has a paragraph, but not with his picture. Damned if I know whether it matters.

Expand full comment
EngineOfCreation's avatar

Who or what is Aljabeeba?

Do you have more than 2 examples?

Do you know that the person writing the article is also responsible for the details of the graphical representation of its subject(s)?

Do you have evidence that this "subtle propaganda" significantly diverges from the content of the articles they illustrate? For example, is the wording of an article about a "red Farage" mostly neutral or positive?

Addendum: What does "no overlay" mean? A subtle message that the BBC has a neutral opinion on the subject?

https://imgur.com/U7s7fmm

https://www.bbc.com/news/topics/ce483qevngqt

Expand full comment
Ghillie Dhu's avatar

>"Who or what is Aljabeeba?"

Presumably a portmanteau of Al Jazeera + "the beeb" (nickname of the BBC).

Expand full comment
Yug Gnirob's avatar

Well anecdotally it's backfiring, because that green overlay makes me ill and the red overlay looks a lot more palatable.

Expand full comment
User's avatar
Comment deleted
Jun 4
Expand full comment
EngineOfCreation's avatar

If you go to the InDepth article overview, you can currently see Xi Jinping, Vladimir Putin, Donald Trump, Joe Biden, Scarlett Johansson, Karim Khan, and a cow in red. The BBC sure does hate a lot of people across the mammalian spectrum!

Expand full comment
beowulf888's avatar

I'm not seeing Biden or Trump's pics being colored red or green. Do you have links for these images? But Xi and Putin come up in red.

Expand full comment
rebelcredential's avatar

None of that is relevant to the first impression one gets, which is the domain we play in when talking subliminal propaganda.

Expand full comment
EngineOfCreation's avatar

The point is, if you knew about these people, it wouldn't be your first impression. Also, if you look across the whole BBC InDepth series, you will find that EVERYONE (and a cow) gets the red treatment. Some of the articles even reverse the colors in the heading of the actual article, making it green instead. What would you make of that?

Yes, red is of course a strong signalling color that draws attention unconsciously, but in this case, reading any more into it is taking it way too far into the land of confirmation bias. It's simply a design choice with no deeper meaning other than to grab attention.

Expand full comment
rebelcredential's avatar

It's always reasonable to consider confirmation bias. But it's also always reasonable to assume the BBC are slimy sneaky snakes. So high priors on either side here.

Edited to elabourate: there is lots of room for editorial decision in what photos to put where, etc. so the existence of a standardised colour scheme certainly doesn't rule out playing the petty games the OP has picked up on.

I've noticed stuff in the past that's made me raise an eyebrow as well. I then immediately moved on and forgot it because I'm a normal human who doesn't obsess over this stuff - anyone who tracked it more carefully would just open themselves up to mockery and dismissal. That's how it works and the people claiming "sample size of two" don't have the strong argument they think they do here.

I completely understand the temptation of confirmation bias but I honestly don't put this sort of behaviour past the BBC either.

Expand full comment
Arrk Mindmaster's avatar

What if the BBC's decision process unconsciously chooses to color people the editors don't like with red and those they like with green? And the editors are vegetarians?

Expand full comment
Peasy's avatar

You can just admit that there's nothing to it. It doesn't mean that all or any of your value system is wrong, simply that this thing about the BBC using secret color codes to woo-woo hypnotize people into hating public figures is nonsense and does not lend support to your value system.

Expand full comment
Daniel B.'s avatar

>But it’s also always reasonable to assume the BBC are slimy sneaky snakes.

Could you say which news outlets you think are good, or at least not as bad as that?

Expand full comment
rebelcredential's avatar

To be honest, there's no outlet I trust at all. I consume news more to know what everyone's talking about than to actually learn about what's going on.

I'll "believe" something (ie not resist the information) if it doesn't matter ("baby born with Elvis birthmark babbles Heartbreak Hotel to the midwife") or if I don't think any vested interests particularly care about it ("scientists discover yet another new particle and a funny property of mould spores".) I'll assume something's generally going on if lots of outlets are all talking about the same thing (did, like, something happen in Ukraine or Israel a while back?) But in most cases I just sort of make a jaundiced note of "so this side's saying that now," and don't really feel any urge to integrate that information at all.

My general feeling is History doesn't really care what I believe - my opinion about whether or not Covid was a lab leak, for example, is not going to affect the future one iota. So it's a wasted investment to be factually correct on all topics all the time. As long as I'm right about stuff that's directly relevant to me, and I don't invest myself in strong beliefs about the stuff that I can't know (a bad idea anyway).

I'm more interested in building accurate mental models of people and things - models which incidentally predispose me to assume the worst of all sides in any news story I read.

Expand full comment
EngineOfCreation's avatar

A prior is not the same as a solid conviction. The sole purpose of priors is to adjust them (or the conclusions you draw from them) in light of new information. Priors are a starting point, not an end point.

So what do you make of the information that the BBC has, for example, colored both Donald Trump and Joe Biden in red? Does that in any way affect your prior (whatever its value is) that the BBC has a political leaning towards either and that they signal their leaning through color? Why/why not?

Expand full comment
John R Ramsden's avatar

rebelcredential beat me to it, in making the point that consistency in the kind of subtle signalling I suggested would make it more transparent and undeniable. So a slight color choice bias in aggregate is all one might expect. It may not even be deliberate choices, just the BBC graphic designers occasionally revealing their own biases.

Also, someone referred to my "theory". But, as I thought I had made clear, with weasel words like "possibly", it was only a passing observation and suggestion, and not something I'd care to take any time and effort attempting to pursue in detail.

Expand full comment
rebelcredential's avatar

No, because no evil manipulator worth his salt is going to adhere to such a simple and obvious pattern. They work at the level of your own mind, which also means taking your own pattern recognition into account.

Instead what you'd expect to see is a host of petty, small and opportunistic decisions of framing and phrasing. Ones that, if they do their job, are totally indistinguishable from confirmation bias.

The only way to decide for yourself whether they're doing this is to take the whole BBC in aggregate, as well as paying attention to what their staff do and say, as well as the general body language and etc, and evaluating the whole shebang.

Based on that wider picture, and without having paid them an obsessive amount of attention, my own conclusion is that I don't know or care in this case but I wouldn't put something like this past them.

Expand full comment
Hunter Glenn's avatar

It is of civilizational importance that we try to mass-produce the wonders of the cutting-edge cultural scene (like the Bay Area) on the internet! (This also solves the housing problem and most of the cost of living problem)

We have an amazing opportunity to make the most of what technology has made available to us. Not waiting to be forced into it again, like when we took forever to start really culturally making use of video calls, and then covid made us do it, and then we kept doing it because it was great; we should have been doing it long before!

So how can we recreate online the information flow and processing dynamics that make Parisian salons and the like such culturally productive scenes?

Expand full comment
User's avatar
Comment deleted
Jun 4
Expand full comment
Nematophy's avatar

VR is closer than you give it credit for. The only issue is it limits your socializing to the folks who use VR.

Expand full comment
Iz's avatar

I have a comfortable, well-paying tech job but most of the time I don't find it very fulfilling or enjoyable. Sometimes it seems like you need to choose between being a cog in the machine, working on a boring product for a large company, and getting paid well, or working somewhere fun and fulfilling and making a lot less. I refuse to accept this! I don't want to spend the majority of my waking hours on something that doesn't excite me, but I also love the financial comfort my job gives me. I'm sure I'm not the only one here struggling with this and I welcome any insight into how I can go about trying to get the best of both worlds. I'd also love to hear from others struggling with this even if they don't have any advice.

Feel free to email me @ iz8162k23 at gmail if you want to share any specific collaborations I might find interesting and potentially lucrative.

I have posted about this in a couple other recent open threads but didn't get that many responses so I feel it's worth bringing up again. I hope it isn't spam.

Expand full comment
Leppi's avatar

I think a viable path to getting paid well (enough), while having (more) fun at work is to try to get skilled at some niche within your field. This needs to be 1) something you enjoy doing, 2) something that is needed, i.e. people are willing to pay you to do it, and 3) something you have the ability to get skilled at.

Niches sometimes have less competition, and therefore allow for better conditions. This is especially true if you manage to get a lot of experience and become the go-to person for this particular thing.

People I know who love their job have typically done some variation of this.

Expand full comment
Gunflint's avatar

I’m honestly not trying to be a dick here but you would probably feel better if you spent some time feeling gratitude for having such a problem.

By historical standards this is weak beer. For that matter, in the present day, you wouldn’t have to look too far to find millions if not billions of people with much thornier existential issues they deal with every day.

If you are making bank, live within your means and build a nest egg so that one day you’ll be able to do whatever you want.

Meanwhile, the stars are still free. Look up at them when you can. They are pretty awesome.

Expand full comment
Skull's avatar

He's just trying to min-max his life. Yes, he has a better life than 99.99% of humans who have ever lived, and he certainly does not and cannot appreciate that fact. Does that mean he should stop trying to improve it further?

Expand full comment
Deiseach's avatar

Forgive us, we're old and grew up in the days when it was "be thankful for what you get" and some of us come from backgrounds where it was "a clean indoor job with no heavy lifting? what are you complaining about?"

The idea of work being "fun" or "meaningful" or "fulfilling" was like the idea of toothache being an enrichment experience.

Expand full comment
Skull's avatar

I forgive you. You can demand better of society and your own life. People like you and I aren't the people who improve society very much or at all, but we need to get the hell out of the way of those permanently-dissatisfied people who do actually move and shake.

I absolutely agree that the best way to achieve happiness is to learn how to be content with what you have. But guys smiling, satisfied in their own happiness aren't the ones who got me an internet connection or chicken meat.

Expand full comment
Gunflint's avatar

Fair enough, but I don't think he is going to meet his Steve Wozniak this way.

If you want this sort of fun and exciting experience, you just have to accept that risk and lower initial pay are part of the bargain. Wishing for comfortable security and adventure at the same time is probably going to remain a wish.

Building up a financial reserve and striking out on your own or throwing in with a startup would be one way to give yourself a shot at this while providing a fallback plan.

Beyond talent and drive these things usually come down to 'right time, right place' good fortune.

For what it's worth - honestly, not much as a practical matter - I'll add my own best wishes for his success at this.

Expand full comment
skaladom's avatar

At that kind of percentile, learning to appreciate it is probably the most effective way left to improve his life!

Expand full comment
Skull's avatar

It's the most effective way no matter what percentile you're at. I'm just saying "appreciate what you have" is not at all the advice he was looking for, and almost always unhelpful, no matter how ubiquitously true it is.

Expand full comment
Gunflint's avatar

By all means, if he can make things better he should act on it. I’m saying he would feel better now if he took the time to appreciate what he already has.

Expand full comment
Whatever Happened to Anonymous's avatar

>I'm sure I'm not the only one here struggling with this and I welcome any insight in how I can go about trying to get the best of both worlds.

Well, one alternative is to be wildly successful: Scott gets to write about what he wants for a living and is very well off. Elon Musk is, intermittently, the richest man in the world by living out classic sci-fi novels.

For the less gifted (hi!), another alternative is to do remote work for a high-paying area from a low cost of living one, but this is not a silver bullet unless you don't have strong ties to a high CoL area: instead of balancing financial stability and job enjoyment, you are sacrificing time/experience with those closest to you. However, introducing this new axis into the trade-off calculation might allow you to find a solution that is better for you.

Expand full comment
moonshadow's avatar

"Sometimes it seems like you need to chose between being a cog in the machine, working on a boring product for a large company, and getting paid well, or working somewhere fun and fulfilling and making a lot less."

Yes, this is what salary is: compensation for spending your time doing things you would not otherwise choose to do. This is also why the boring less fulfilling tech jobs pay more than the creative ones: the market incentive is to give you less pay if you'd be willing to do the work anyway. (It's far from the only consideration, of course, and on a side note, I hate that some of the nastiest jobs in our society also have the worst pay. But when the available pool of hires is small, this effect is larger).

It is possible to get paid for doing things that you would choose to anyway, in the same way that it is possible to be an olympic medalist or a famous pop idol: most people will not get there, and for those who do, it takes talent, luck /and/ a lot of effort.

Absolutely look for opportunities and make sure not to miss a chance to put your hands under the money tap when the money comes out. But I suggest that winning the lottery is not a viable life planning strategy. Personally I'm aiming for FIRE, though the cost of living increases of the last few years have pushed that goal somewhat further out than it used to be and it may not end up being very E after all. Still, you sound like you are in a position of financial comfort; you might like to consider saving for similar, or perhaps a career break / sabbatical, so you can do the things you actually want to, while you look for the lucky break.

Expand full comment
Deiseach's avatar

I've never thought of work as "fun". You need a job to live, most jobs are going to be routine, boring, sometimes physically hard work, sometimes mentally demanding. Then again, I've never been in the position of "I can make lots of money or I can have fun, which do I choose?", it's been "take whatever you can get, it'll never be great paying, and if you're lucky you won't hate going in to work in the morning". I like my current job, but that's because due to the Covid lockdown they established working from home so I still only have to go in to 'the office' one day a week, I like the work well enough, but they can't afford to pay me anything like big bucks.

Sorry to be gloomy all over your request, but for a lot of people that is life: there is no question of "can I get loadsamoney and fun in the same job?"

Though to put on my Old Moore's Prognostications hat, I think that your request about "where can I find enjoyable, fulfilling work that pays well?" is the new working generation's thinking, and it's down to the messaging about "make your passion your work" that has backfired on employers. I've always been cynical about things like that, and to me "make your passion your work" was an attempt to get people to put much more effort into, and become very personally invested in, the job without the employer having to cough up more enticements like high pay and great conditions than they could get away with. After all, if you're working your dream job, your passion project, the thing you love most of all, then mere money is a secondary consideration, right? Why, you love Thing so much, you'd do it for free!

Well, now at least one and maybe two generations have grown up and absorbed that thinking, but they now *expect* work to be fun and fulfilling and meaningful, and if the job doesn't live up to that, they move on to greener pastures. So if you want to keep them, now you need to make work 'fun' and/or pay enough to make it worth their while to do boring stuff. The expectation that "work is hard and dull the majority of the time" is no longer there, so I think that has backfired on employers. Heh-heh-heh.

Expand full comment
ZumBeispiel's avatar

How many hours per week do you work? 40? Could you reduce it to 35 or 30? Then you still have most of that money, but more spare time for your fun projects.

Expand full comment
Falacer's avatar

I feel largely the same. I have the misfortune to be in tech but not over in Bay Area big money tech - I'm far from being a HackerNews type who has the confidence that they can just find a new job whenever they want. It just seems impossible to extract enough meaning out of life when the majority of every day is devoted to being an office drone. Over the years it's gotten progressively harder to squeeze my interests into the 1-2 hours of free time available in a day, so my hopes of finding my passion in art or another hobby have rapidly dwindled as well.

Expand full comment
Avi's avatar

I'm in the exact same predicament. In the past I filled my spare time with hobbies and socialising which helped to counterbalance my lack of career fulfilment. But now I feel like the malaise is catching up to me and I need to change careers very soon. I just don't know what I want to do next. In tactical terms, I'm currently debating whether to go cold turkey or do a gradual shift i.e. quit outright and give myself 6 months to experiment and find a viable new path, or pick a path now to go down in my spare time and then slowly ramp up the ratio of time spent on this new direction versus doing my full-time tech job. What are your thoughts on that?

Expand full comment
Iz's avatar

It depends on your situation. Depending on the new path you'd take, ramping up may or may not make sense even though it's safer.

Expand full comment
Kaitian's avatar

"I could do [unpleasant work] and get money, or do [what I like] and get no/less money" is a problem that basically every human has had since the invention of jobs. If there is a specific job that you will enjoy and that will give you ENOUGH money, do that. If not, well, that's just how it goes.

I can't give more specific advice because your post doesn't say what you WANT to do. Get a different tech job? You should probably go for it. Write poetry? That will probably not pay the bills. If you're looking for job offers, you'll need to be more specific (and probably post in the classified thread).

Expand full comment
Iz's avatar

I enjoy writing code in theory, but in my current role I'm not excited about the product and for various reasons my stories are mostly not enjoyable.

Expand full comment
Deiseach's avatar

Chop wood, carry water. That's what most work has been most of the time for all human history. Even an artist isn't really having "fun" when it's their bread-and-butter career and they need to produce those sixteen paintings on commission for the wealthy patrons/clients (this is why most artists had studios or schools where the apprentices/lesser painters did the majority of the grunt work on a picture and the maestro did things like 'paint in the heads' or the major elements). The idea of the creative expression of one's spirit came along with the idea of the starving artist, for much the same reasons: you can have fun and no money, or routine and lots of money, but not often both fun and money.

Expand full comment
Brendan Richardson's avatar

I miss serifs.

A serif font indicates that you're reading Serious Writing for Serious People. Sans-serif frivolity has no place in my (aspirational) wood-paneled study lined with Thomas Cole paintings. Wikipedia says sans-serif was preferred for digital displays where the serifs rendered poorly at low resolutions, but I'm reading this on a 4K monitor: that excuse has long worn out its welcome.

Also, whoever decided that it was OK for "I" (capital i) and "l" (lowercase L) to be indistinguishable should be lined up and shot for crimes against typography.

Expand full comment
Michael's avatar

On top of that, the indistinguishable "I" and "l" can help scammers phish. Like, hey, check out this article at https://www.astraIcodexten.com

Expand full comment
Brendan Richardson's avatar

At least Chrome lowercases the link preview on hover, so it's obvious what you did.

Expand full comment
Michael's avatar

Yeah (though not on a phone/tablet), and it lowercases the address bar as well. Like all phishing techniques, it relies on the victim not being careful or not being savvy.

Expand full comment
Urstoff's avatar

if it's not printed in blackletter, it's a triviality

Expand full comment
moonshadow's avatar

In case it is of interest to anyone else, incidentally, I find https://www.nerdfonts.com/font-downloads a super useful one stop shop for programming fonts (where a key requirement is being able to distinguish different characters easily). Some even have serifs!

I'm aware this doesn't help the general complaint and I do sympathise, but perhaps it will help improve some aspects of life at least.

Expand full comment
Dino's avatar

And in some fonts, the numeral 1 as well. Which is especially bad in coding - I once saw a bug where they meant the letter "I" but typed the numeral "1" and the font made them look the same. Compiler didn't complain.
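Something like this minimal, made-up C sketch shows how that kind of bug slips through (the variable names and values are invented purely for illustration) - two identifiers that differ only in glyphs the font renders near-identically, so both are valid and the compiler has nothing to object to:

    #include <stdio.h>

    int main(void) {
        int l1 = 10;        /* lowercase L followed by the digit one */
        int ll = 0;         /* two lowercase Ls - the near-identical twin */

        ll = l1 + 5;        /* meant to update l1; in a confusable font this reads the same either way */
        printf("%d\n", l1); /* prints 10, not 15 - no error, no warning, just a silently wrong result */
        return 0;
    }

In a font where "l", "1" and "I" share a shape, the two declarations and the assignment look interchangeable on screen, which is exactly why fonts that keep them distinct matter.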

Expand full comment
Nancy Lebovitz's avatar

Does anyone happen to remember an article about worst programming practices? It started off with recommendations for horrible variable names involving ambiguous I, 1, and l.

Expand full comment
Lucas's avatar

Small tip: "Illegal1 = O0" is a good test for programming fonts.

Expand full comment
Jeffrey Soreff's avatar

For extra spice, some fonts make | (for Unix people, the pipe symbol) indistinguishable too. | 1 I l - for the font this defaults to, at least the numeral looks clearly different.

Expand full comment
Brendan Richardson's avatar

?

Doesn't everyone code in Courier or a similar monospace font? That's a weird mistake to make.

Expand full comment
Kenneth Almquist's avatar

Most typewriters didn't include a key for the digit "1"; instead, you would type a lower case L. IBM developed both the Courier font and some high-end typewriters, including the IBM Selectric, which did have a key for the digit "1", so in Courier the lower case L and the digit "1" do look different, but not radically different. Both have an equally long horizontal line at the bottom, a vertical line that extends upward from the center of the horizontal line, and a downward sloping line connected to the top of the vertical line. The difference between the characters is that (1) the digit 1 is slightly taller, and (2) the downward sloping line has a much larger downward slope.

In regard to the bug that Dino remembers, it's possible that the characters looked different but similar enough that the programmers misread the code. Or the characters could have appeared completely identical if the font were a typewriter-like font other than Courier, or Courier reduced to a very low resolution.

IBM worked on the problem of people confusing the uppercase O with the digit 0, even though the latter was distinctly narrower. Courier includes both a zero with a slash through it and a zero with a dot in the center. In contexts where there could be confusion (such as in computer code), you could use one of these variants of the digit zero. People confusing the lowercase letter "l" and the digit "1" didn't become a problem until people started using lower case letters in computer programs.

Expand full comment
Moon Moth's avatar

I'd certainly hope so.

But there's also stuff like this: https://www.fontspace.com/category/leet

Expand full comment
Gunflint's avatar

Yeah, when you see AI you have to analyze context to determine if it’s short for Alfred or an abbreviation for artificial intelligence. It can be annoying.

Expand full comment
Therese's avatar

i feel your pain. Sad serif.

Expand full comment
ClipMonger's avatar

In Fauci's congressional hearing on COVID origins today, Democrat congressmembers depicted him as a "heroic scientist" whereas Republicans were the critical ones.

The problem is that important criticism (on Ukraine as well as COVID) is being sorted into the American Right, which will keep it repulsive and out of the Overton window of the Left, the side that's dominant in the urban areas that civilization and intellectualism revolve around.

Expand full comment
Level 50 Lapras's avatar

Silly thought that just occurred to me: Is there a replication crisis in pro-natalism?

Expand full comment
ZumBeispiel's avatar

Relevant xkcd: https://www.explainxkcd.com/wiki/index.php/583:_CNR

(And a bad one. You notice when the explanation is funnier than the joke itself)

Expand full comment
Gunflint's avatar

I’m torn between a chuckle and a Marge Simpson groan. ;)

Expand full comment
Quiop's avatar

Anti-natalism: Replication crises are good, actually!

Expand full comment
David S's avatar

After reading Lyman's initial post and follow-up, I think Scott and Lyman are basically attacking straw men while steel-manning their own position.

Lyman is essentially asking the question, "is Effective Altruism better than Christianity?" Unsurprisingly, as a fundamentalist Christian, he concludes that no, Christianity is a better belief system, and Effective Altruism as a movement is less effective than Christianity for improving the human condition. True! (At least in terms of total impact--making an unsubstantiated assumption based on the total volume of altruistic activity by Christians worldwide.)

Scott's response could be summarized as "EA is better than the purely ad hoc, half-hearted at best altruism that is status quo for 98% of the population." True! But Lyman's real point is that people should be Christians instead of Effective Altruists, so the two sets of essays end up talking past each other.

But Scott remains silent on Christianity as an alternative (and in particular the relatively rare and in fact extremely virtuous and admirable version of Christianity that Lyman in fact practices). And in turn, Lyman insists on defining EA by the broadest possible scope of adherents, rather than the relatively rare but extremely virtuous and admirable version that Scott in fact practices.

I think the ideal resolution would acknowledge the extremely broad overlap in most of the core values and strategies that are being discussed, acknowledge that EA and Christianity are almost entirely complementary in terms of actual participation, and pat each other on the back for their extremely admirable contributions to improving the human condition.

Expand full comment
skaladom's avatar

Maybe I'm old fashioned, but I think you'd want to argue for a religion if you believe its truth claims are actually true. Arguing that it's useful sounds to me like having already conceded defeat.

As for the contributions of both EA and Christianity on the human condition, I think both have in common that they are better when they don't have too much power.

Expand full comment
The Ancient Geek's avatar

Some religions are almost entirely ethical systems, eg. Confucianism, and usefulness is very relevant to ethics.

Expand full comment
skaladom's avatar

Possibly true... If and when someone comes here arguing for Confucian ethics we'll see what they say and how well it stacks up!

What we have literally two comments above is talk of promoting Christianity on the basis of its allegedly great ethical system, with a pointed silence on the validity of its strong and specific truth claims, which I find pretty weird and bordering on intellectual dishonesty.

Expand full comment
Scott Alexander's avatar

Even granting that Lyman is doing small-e-effective-altruism correctly, I think fewer than 1% of Christians do this, and promoting Christianity isn't a very efficient route to promoting this idea.

Expand full comment
Erusian's avatar

1% of American Christians is 2.1 million people, something like $2-4 billion per year in charitable giving, and something like a hundred million volunteer hours. Which absolutely crushes EA's total budget and membership. And there's no logical reason such a movement couldn't be international and ecumenical.

I understand these are your political enemies. But you should be honest that the movement's ideological commitments are keeping it from being maximally effective and that the movement has values other than just getting bed nets to Africa.

Expand full comment
Matt's avatar

Better to measure this on a good done per person basis. Otherwise it rounds off to my movement is bigger than yours and therefore better on every positive metric and worse on every negative metric.

Expand full comment
Godshatter's avatar

I'm firmly in the EA camp here but I don't think that's true. A decent idea that catches fire is better for civilisation than a brilliant one that never takes off.

(I personally think religion is a net negative at scale, but obviously Lyman disagrees).

Expand full comment
Erusian's avatar

But that's the reality. If 2 billion people each do a little bit to relieve poverty and together relieve 1 billion people's poverty (0.5 per person), while 1 million people do a lot and relieve 2 million people's poverty (2 per person, 4x as much per head), then the 2 billion still relieved 500 times more poverty. The fact that the smaller group did more per person isn't relevant if your goal is to relieve poverty overall.

Further, some hypothetical universe where there's an equal number of Christians and EAs might be interesting to think about. But it's not reality. And EA has never proven able to recruit people as well as Christianity does. You have to deal with reality, not the hypotheticals.

Expand full comment
EngineOfCreation's avatar

Fewer than 1% of a large number of people can be more effective than even 100% of a small number of people. Or does EA care about efficiency more than about effectiveness in achieving its goals?

Expand full comment
MicaiahC's avatar

So several things:

1. I think it is just true that a lot of existing low-hanging fruit no longer exists because of Christians. America's high level of trust, and the institutions built upon it, likely owe a lot to Christianity. And lots of NGOs exist downstream of Christians / Christianity. So to a large extent the point of Effective Altruism is to do good """in spite""" of Christianity: marginal effectiveness is going to be found in whatever Christianity is not focusing on. And this would also be true even if everyone were an EA.

2. It's not clear that EA scales to "controls 1% of all Christian income". Discovering what is most effective on the margin is already really hard with just one Dustin Moskovitz's worth of wealth. You start getting extremely perverse adversarial-selection-type effects, you start double-counting evidence, etc. To some extent, EA's advantages are an artifact of its small size and not a fundamental, unalterable aspect. I think it's even plausible that there's already too much money chasing too few opportunities.

3. Considering that charities can differ in effectiveness by several orders of magnitude (see: buying some TV pastor a 4th private jet vs. a malaria net that lasts a year, cuts the chance of death by 1/1000th, and cuts the chance of actual malaria in Kenya by much more), it may indeed be way more valuable to have a focused movement with far fewer people than a more populist one. It doesn't matter if we have 10 trillion dollars if they get spent on the curing-rare-diseases-for-cute-puppies sector of charity! And indeed, the changes that outsiders like yourself tend to suggest are ones that loosen standards of charitable donations.

Expand full comment
s_e_t_h's avatar

I don’t understand the conflict…can’t Christians use EA to bolster the outcomes of their charity? Example: a church congregation doing something something mosquito nets for 80k hours.

Expand full comment
Deiseach's avatar

Most church congregations *are* already doing something something mosquito nets:

https://charity-gifts.christianaid.ie/products/a-pack-of-five-mosquito-nets-e-card#

https://donate.worldvision.org/give/bed-nets

https://muslimhands.org.uk/donate/health/mosquito-nets

https://www.compassion.com/catalog/donate-mosquito-net-charity-gift.htm

https://www.gfa.org/donation/items/mosquito-nets/

Oh, and bringing a modern scientific approach to charity work? Started in the late 19th century:

https://en.wikipedia.org/wiki/Eglantyne_Jebb

"[Eglantyne]Jebb [one of the co-founders of Save The Children] moved to Cambridge to look after her sick mother. There, encouraged by Mary Marshall and Florence Keynes, she became involved in the Charity Organisation Society, which aimed to bring a modern scientific approach to charity work. This led her to research urban conditions. In 1906, Jebb published Cambridge, a Study in Social Questions based on her research."

And what was the Charity Organisation Society?

https://en.wikipedia.org/wiki/Charity_Organisation_Society

"The Charity Organisation Societies were founded in England in 1869 following the 'Goschen Minute' that sought to severely restrict outdoor relief distributed by the Poor Law Guardians. In the early 1870s a handful of local societies were formed with the intention of restricting the distribution of outdoor relief to the elderly.

Also called the Associated Charities was a private charity that existed in the late 19th and early 20th centuries as a clearing house for information on the poor. The society was mainly concerned with distinction between the deserving poor and undeserving poor. The society believed that giving out charity without investigating the problems behind poverty created a class of citizens that would always be dependent on alms giving.

The society originated in Elberfeld, Germany and spread to Buffalo, New York around 1877. The conviction that relief promoted dependency was the basis for forming the Societies. Instead of offering direct relief, the societies addressed the cycle of poverty. Neighborhood charity visitors taught the values of hard work and thrift to individuals and families. The COS set up centralised records and administrative services and emphasised objective investigations and professional training. There was a strong scientific emphasis as the charity visitors organised their activities and learned principles of practice and techniques of intervention from one another. The result led to the origin of social casework. Gradually, over the ensuing years, volunteer visitors began to be supplanted by paid staff."

No 'giving dependent on warm fuzzies' there!

Expand full comment
s_e_t_h's avatar

Nice history lesson!

Expand full comment
Deiseach's avatar

I'm a grump about this, but I do think a lot of modern discourse suffers from the Ten Minutes Ago problem, that is, anything further back than ten minutes ago is ignored or not even known about.

That's how EA can position itself as "the first ever to use scientific methods of gauging charitable effectiveness" when there were forerunners to this. They're not doing it in bad faith, it's because they're a bunch of young people (starting off) who think that because it's them, this is the first time ever anything was done like this (the fault of young people in all times).

Expand full comment
Melvin's avatar

I think most arguments about Effective Altruism come down to a failure to distinguish between EA the idea (which is pretty sensible) and EA the movement (which is largely very silly).

Expand full comment
Deiseach's avatar

Is Stone a Fundamentalist? I thought he was Lutheran, and if that now counts as Fundamentalism, then I'm opening my eyes wide in surprise.

Though now I look at it, you used small "f" fundamentalism, so I'm presuming you mean more along the lines of 'conservative, traditional, orthodox, hasn't ditched the last three hundred years worth of understanding of Scripture'?

I probably can't be said to have a dog in that fight over "who is or is not a Fundamentalist?" but I've seen the term "fundamentalist" used in American media and online to mean "raving loon who wants to burn us all at the stake" so I do get a bit twitchy about it, given that under certain definitions of that, I'd be a fundamentalist too (whaddya mean you believe in the Trinity? God made the universe? miracles happened like the Bible says?)

I *hope* I'm not a raving loon who wants to burn you all at the stake (I do have to work to quash my 'heretics! fire!' tendency, admittedly).

Expand full comment
DanielLC's avatar

> (At least in terms of total impact--making an unsubstantiated assumption based on the total volume of altruistic activity by Christians worldwide.)

Was he taking into account all the negatives caused by Christianity? Like people being demonized for being LGBT, or for having sex before marriage, or feeling like it's wrong to leave a bad marriage?

Expand full comment
Lucas's avatar

Were those things caused directly by Christianity, or was there a general demand for that stuff at that time and Christianity happened to be the framework in place to punish?

Expand full comment
DanielLC's avatar

I don't think they were caused by Christianity, but I think Christianity makes it harder for the society to realize it was a problem and stop doing it. Though I think the pro-life stuff is caused by religion. It's a lot harder to justify why a fetus has more moral worth than a cow if you don't believe humans get souls at conception.

Expand full comment
Deiseach's avatar

Oh gosh wow, leave us not call fornication a sin!

Sorry, that's my raving lunacy coming out. But yeah, imagine: religion which follows certain moral standards around the use of sexuality says that lust is one of the seven deadly sins, who could have expected that?

And seeing how everyone is now free to have sex before marriage in my country because the bad ol' church has lost a ton of power, I don't see the longed-for utopia of happiness around that, given the problem pages of the papers are still full of unhappy people who are now complaining about lack of sex, mismatched expectations, lovers who won't commit, being separated and lonely, etc. Who would have thought that removing the stigma from sleeping around outside marriage would not do away with all drama and unhappiness, huh?

EDIT: And apparently now everything old is new again, and celibacy is the hot new trend:

https://nz.news.yahoo.com/very-strong-signal-body-celibacy-050000157.html

"While there might have been a time when pledging a vow of abstinence would have elicited judgemental sniggers and whispers, today it’s viewed by many as an integral part of self-care and personal development – something many of us could benefit from. Hence the brouhaha surrounding a new advertising campaign from Bumble, which saw billboards plastered across the US with the slogan: “You know full well a vow of celibacy is not the answer”."

So now we're demonizing people for demonizing people for not having sex before marriage?

Expand full comment
moonshadow's avatar

> Oh gosh wow, leave us not call fornication a sin!

What always gets me is, I've heard any number of preachers condemn gender/body mismatches, same-sex attraction etc. but none at all, ever, talk about remarriage after divorce; since, one can only assume, people whose opinion the preacher actually cares about might be personally affected by that.

This is particularly fascinating because there is no reticence about asking the congregation to examine their conduct and improve themselves when it comes to, oh, just about any other aspect of life, or indeed sexuality; many words are spent condemning, e.g., internet porn addiction; offering help and support for those troubled; treating them, generally, with love and respect. Internet porn addicts are us! Condemn the sin, love the sinner - we need to all deal with it together!

The lgbt folk, though, are always /them/. It's self-perpetuating: any who might give church a try soon realise it is not a place for them.

If you are going to demonize people for fornication, have the courage of your convictions and address the biggest, most widely accepted by the world at large, practice, before beating up on the people who are already being punched by all comers.

Henry VIII has a lot to answer for.

Expand full comment
Nancy Lebovitz's avatar

One of the sharper things in the New Testament is something about people being more at risk from what comes out of their mouths than what goes into them.

Malicious gossip should be taken more seriously as bad behavior.

Expand full comment
Deiseach's avatar

Hello, and welcome to your weekly visit to Theology Corner!

(Scott has the patience of Job to let us irrational superstitious religious types clutter up this otherwise lovely clean rationalist space with our dribbling drivel)

Okay, a large part of this is that in much Protestant theology, marriage is no longer a sacrament (as it remains in Catholicism) but is now a rite. I think (not to be putting words in the man's mouth) Luther for one considered it as more of the nature of a contract (though he seems to have interesting views on the entire matter). So it was downgraded, to a greater or lesser degree, within the various new denominations.

Marriage was still solemn, you should be chaste outside of marriage and continent within marriage, and divorce was something rare and unusual. Marriage was also ordained of God and for the purpose of begetting and raising children. But it had been stripped of the nature of a sacrament, and of course over time it began to be treated as any other civil contract or ritual.

So, given that Protestantism allowed for exceptional cases for divorce, then in time as civil law caught up to slowly liberalising divorce and making it available to the ordinary person, and as the hard cases which make bad law were put before people as appeals to compassion for those suffering, and as the social stigma gradually waned over time, the churches got caught up in the Zeitgeist as well. The more liberal ones of course responded to the appeal for compassion and pastoral care; if people could be civilly divorced and civilly re-married with no problem, why cut them off from a church blessing second time round? This was a hard-fought rearguard action, but in time they gave in. After all, if it's not a sacrament but merely an ordinance or a ritual or a 'recommended action', why be too punctilious about it?

Catholicism wasn't immune, either; no divorce, but marriages could be annulled. And the American church, for one, gained a notorious reputation for rubberstamping annulments to end marriages and permit remarriage in church.

There seems to be some reassessment on the Protestant side, but I think civil divorce has become so entrenched in society that going against the grain, when you have already given in on it, is going to be impossible. Hence no preaching on the topic.

https://digitalcommons.liberty.edu/doctoral/1645/

"Protestant theology has historically rejected marriage as sacrament, a rejection which continues to resound in the majority of contemporary Protestant scholarship. Yet many, if not most, arguments against sacramental marriage tacitly follow an outline set forward by Luther and Calvin which, if examined with critical scrutiny, is based on a problematic soteriological premise. In light of this, the present study sets forward a comprehensive argument in favor of Protestant theology reaffirming marriage as a sacrament through systematic investigation into the Hebrew Bible (Old Testament), New Testament, and Christian history. After developing a critical hermeneutic founded on realist epistemological grounds, a continuous line is drawn from Genesis to Revelation that affirms marriage as not only sacred in a general manner, but specifically designed by God for the welfare of human society, both physical and spiritual. This consistent thread is shown in the fabric of early Hebrew society, despite its historical acceptance of polygamy as a social necessity, and served as a central symbol of the prophetic rebukes of Israel/Judah. A yearning for a spiritual aspect of marriage that transcends even death can be seen arising from the eschatological hopes of the Israelite textual traditions, which come into further expression in the New Testament. While the words of Jesus concerning the fate of the remarried widow are often used to negate or dismiss eschatological expectations for marriage, a positive evaluation is given that provides a historical context for interpretation which affirms rather than denies eschatological hope. Celibacy, the only other acceptable Christian sexual pattern, is developed by Paul in 1 Cor 7 as a careful balance of issues that does not relegate marriage as spiritually inferior, as it is often taken. On the basis of these scriptural traditions, the historical development of the sacramental theological tradition is presented with emphasis on the contributions of Augustine of Hippo whereby marriage is part of the larger sacramental fabric while still maintaining a special place due to its pre-fallen origin and symbolic import. In contrast, the Scholastic tradition sought pseudo-empirical formulae whereby sacraments served as instrumental causes of Grace. It was on this basis that the Protestant tradition, originating initially in Luther and Calvin, rejected marriage as a sacrament due to its apparent disassociation with the instrumental transference of Grace, which they reserved for baptism and communion. As a consequence, the Protestant tradition inherited problematic theological bases that have in turn opened the door to divorce by functionally allowing secular society to determine marital norms. In contrast, the present study provides a positive presentation for a cohesive view of marriage derived from Scripture that advances marriage as a special and sacred institution much in need of revitalization and respect."

Speaking of Luther, here's a link to sermons on marriage he gave, which include some pretty dang idiosyncratic views of his own:

https://pages.uoregon.edu/dluebke/Reformations441/LutherMarriage.htm

"I once wrote down some advice concerning such persons for those who hear confession. It related to those cases where a husband or wife comes and wants to learn what he should do: his spouse is unable to fulfil the conjugal duty, yet he cannot get along without it because he finds that God's ordinance to multiply is still in force within him. Here they have accused me of teaching that when a husband is unable to satisfy his wife's sexual desire she should run to somebody else. Let the topsy-turvy liars spread their lies. The words of Christ and his apostles were turned upside down; should they not also turn my words topsy-turvy? To whose detriment it will be they shall surely find out.

What I said was this: if a woman who is fit for marriage has a husband who is not, and she is unable openly to take unto herself another and unwilling, too, to do anything dishonorable since the pope in such a case demands without cause abundant testimony and evidence, she should say to her husband, “Look, my dear husband, you are unable to fulfil your conjugal duty toward me; you have cheated me out of my maidenhood and even imperilled my honor and my soul's salvation; in the sight of God there is no real marriage between us. Grant me the privilege of contracting a secret marriage with your brother or closest relative, and you retain the title of husband so that your property will not fall to strangers. Consent to being betrayed voluntarily by me, as you have betrayed me without my consent” [...]."

Uh-huh. Functional bigamy or even polygamy. Marty boy, no wonder you got yourself into trouble!

Expand full comment
Deiseach's avatar

Luther is also fine with you marrying your niece, I'm sure the Spanish Hapsburgs thank him for that 😁

"From this it follows that first cousins may contract a godly and Christian marriage, and that I may marry my stepmother's sister, my father's stepsister, or my mother's stepsister. Further, I may marry the daughter of my brother or sister, just as Abraham married Sarah. None of these persons is forbidden by God, for God does not calculate according to degrees, as the jurists do, but enumerates directly specific persons. Otherwise, since my father's sister and my brother's daughter are related to me in the same degree, I would have to say either that I cannot marry my brother's daughter or that I may also marry my father's sister. Now God has forbidden my father's sister, but he has not forbidden my brother's daughter, although both are related to me in the same degree. We also find in Scripture that with respect to various stepsisters there were not such strict prohibitions. For Tamar, Absalom's sister, thought she could have married her step-brother Amnon"

Hey, marry anyone you like!

"The fourth impediment is legal kinship; that is, when an unrelated child is adopted as son or daughter it may not later marry a child born of its adoptive parents, that is, one who is by law its own brother or sister. This is another worthless human invention. Therefore, if you so desire, go ahead and marry anyway. In the sight of God this adopted person is neither your mother nor your sister, since there is no blood relationship. She does work in the kitchen, however, and supplements the income; this is why she has been placed on the forbidden list!"

I now see the impetus behind the golden age of British true crime, where people preferred to knock off their spouses than seek a divorce. Marty says God doesn't mind if you kill off inconvenient hubby so you can marry that hotter, richer, studlier guy:

"The sixth impediment is crime. They are not in agreement as to how many instances of this impediment they should devise. However, there are actually these three: if someone lies with a girl, he may not thereafter marry her sister or her aunt, niece, or cousin; again, whoever commits adultery with a woman may not marry her after her husband's death; again, if a wife (or husband) should murder her spouse for love of another, she may not subsequently marry the loved one. Here it rains fools upon fools. Don't you believe them, and don't be taken in by them; they are under the devil's whip. Sins and crimes should be punished, but with other penalties, not by forbidding marriage. Therefore, no sin or crime is an impediment to marriage. David committed adultery with Bathsheba, Uriah's wife, and had her husband killed besides. He was guilty of both crimes; still he took her to wife and begot King Solomon by her [II Samuel 11], and without giving any money to the pope! [...]"

I think he was so caught up in his beef with the pope, he maybe didn't think this through as thoroughly as he ought to have done. Now I want to see this as defence at a murder trial: "Martin Luther says it's okay if I marry my boyfriend after we get rid of my husband!" 😁

Expand full comment
Deiseach's avatar

I can't get over that this was a sermon series. How unreasonable and petty of us Papists to say you can't murder your spouse and then marry the sidepiece! 😁 Isn't it great that the Reformation liberated Europe and then the world from such arbitrary unreasonable priestcraft!

Expand full comment
Alexander Turok's avatar

That title is hilarious: "‘It was a very strong signal from my body’: How celibacy is revolutionising people’s sex lives."

Expand full comment
Deiseach's avatar

Once you get old enough, you see it all come around again. Religious influence on society = insistence on chastity before marriage. Social and sexual revolution = this is oppression and repression and control of women's sexuality! Free love for all! Free love and sexual revolution = eventual burnout because, guess what, people still have drama around sex and love, it's messy, and women (and men) find whole new avenues of discontent amongst the promised utopia that turned out not to live up to the advertising. New social and sexual revolution = chastity and continence empower women!

You have to laugh 😁

Expand full comment
Moon Moth's avatar

> So now we're demonizing people for demonizing people for not having sex before marriage?

Didn't we just have a post where we talked about people confronting parts of themselves which had urges that the main self didn't approve of? ;-)

Expand full comment
Deiseach's avatar

With all this demonising going on, we certainly need to send in some IFS therapists!

Expand full comment
Jeffrey Soreff's avatar

Well, if demonizing for sexual behavior is going on, I think we should at least insist on proven, reliable, validated procedures for turning someone into an incubus or succubus. :-)

Expand full comment
Moon Moth's avatar

"Every month my unspayed cat becomes possessed by a succubus, help!"

Expand full comment
David S's avatar

One other point I think it's worth acknowledging is that the ability of either Scott or Lyman to persuade the other's core audience is probably effectively nil.

No conservative Christian is going to find a social movement that is associated with orgies to be morally acceptable or worthy of promotion.

And no one who does not believe in hell is going to find a conservative Christian ideology that believes they are going to hell to be acceptable or worthy of promotion.

I learned interesting, valuable things from what Scott had to say about EA and from what Lyman had to say about his practice of Christianity, but I also learned that I'm not especially interested in what Lyman has to say about EA, the rationalist community, or utilitarianism, and I would not be particularly interested in Scott's opinion of Christianity as a competing belief system or social movement.

Expand full comment
Melvin's avatar

> No conservative Christian is going to find a social movement that is associated with orgies to be morally acceptable or worthy of promotion

Does this suggest that the most EA thing that EA people could do is to stop being so weird, and drive the weirdoes out of their movement?

Seriously, every time someone starts up an "EA Group House", malaria claims another five thousand QALYs in Uganda.

Expand full comment
Moon Moth's avatar

But at least some people want to keep the other varieties around, too, so maybe we call it "Reform EA"?

Expand full comment
Deiseach's avatar

Time for THE REFORMATION, which I am assured every movement needs? Overthrow the sclerotic old Catholic Church, er, original EA movement based around dons and universities! Enable the laity to take control of their own charitable giving! Release the foundational documents in the language of normies!

Expand full comment
Moon Moth's avatar

I had been making a cheap joke at the expense of Reform Judaism, but this works much better!

We don't require math nerds to intermediate between us and the optimal pattern of giving. Who died and gave GiveWell the keys to effectiveness? The idea that we need to formally adhere to mathematical patterns is, frankly, offensive. If we look deep inside and conclude that we're doing enough, then who is to say that we're not? Sole vibe!

Expand full comment
Deiseach's avatar

Peter Singer is not the pope and even if he were, No Popery Here!

Expand full comment
Jeffrey Soreff's avatar

<mild snark>

>Seriously, every time someone starts up an "EA Group House", malaria claims another five thousand QALYs in Uganda.

In the 5 dimensional chess of optimizing EA's optics, would omitting the weirdnesses be a "don't castle" move? :-)

</mild snark>

Expand full comment
Moon Moth's avatar

*rimshot*

Expand full comment
Jeffrey Soreff's avatar

:-) Many Thanks!

Expand full comment
Dirichlet-to-Neumann's avatar

Well I'm both a conservative Catholic and pretty close to the effective altruism movement - I follow GiveWell guidelines for about half of my yearly donations for example.

(I dream of an effective Catholicism movement too).

Expand full comment
David S's avatar

Yeah, it's true that the two movements are not technically mutually exclusive, and could learn important lessons from each other. But in reality, you're better off trying to reform each one from within, on its own terms, than you are trying to persuade "across the aisle" (noting that politics--or at least socio-political-tribal identity--probably isn't far from the core dispute between Scott/EAs and Lyman/conservative Christians).

Expand full comment
Deiseach's avatar

If it ever becomes effective, it's not Catholicism 😁

Expand full comment
Dirichlet-to-Neumann's avatar

Imagine the Curia becoming an efficient and lean organisation... That would be the end of the Vatican as we know it.

(Joke aside though, it kills me that nobody ever wants to have a long hard look at the retention rates of newly baptised adults and understand what is going wrong here).

Expand full comment
Deiseach's avatar

People come in with enthusiasm, they convert and join the local church, and then hit up against modern Catholic congregations and how we live our faith (or mostly, don't live it).

Expand full comment
Jon's avatar

Scott, does Alina Chan's article in today's NYT affect your opinion on the lab leak hypothesis?

Expand full comment
Scott Alexander's avatar

As John Schilling said, the only update is that the NYT decided to platform Alina Chan to say the same things she's always said, which I think tells us something about NYT but not about COVID.

Expand full comment
Jon's avatar

So what does it tell us about the NYT? I suspect that it appealed to the NYT because (1) it concisely and systematically presents the evidence for a lab leak in a way that is comprehensible to an intelligent layperson (better than any other piece I have seen in the NYT), and (2) it is sinking in that even if there wasn't a lab leak there easily could have been, because the WIV was doing very risky research at an inappropriate BSL, and (3) the way the virology establishment circled the wagons was indefensible. I hope the Times prints an equally clear rebuttal.

Expand full comment
John Schilling's avatar

Is there anything in Alina Chan's op-ed that wasn't in her (and Matt Ridley's) book from two years ago? There hasn't been much new evidence lately, and even if there were, an NYT op-ed isn't the place to reveal it. The purpose is to take what has long been known by people who are paying attention, and package it for broad distribution to those who are not.

Since Scott is someone who has been paying attention, and has hosted a review of Chan/Ridley's book (https://www.astralcodexten.com/p/your-book-review-viral), I wouldn't expect this to move the needle much for him.

Expand full comment
Julius's avatar

What are the strongest arguments against allowing AI chatbots to take on some tasks traditionally reserved for medical professionals, such as discussing conditions with patients, ordering tests, and prescribing medication?

Is the primary concern that current AI models, like GPT-4, aren't advanced enough? If future models, like a potential GPT-5, were significantly improved, specifically fine-tuned for medical tasks, and passed rigorous evaluations, would it then be a good idea to use them in these roles? Or are there other reasons that make this a bad idea?

Expand full comment
A.'s avatar
Jun 5 · Edited

I don't think AI should give medical advice, but I strongly believe we are badly in need of some kind of algorithm that would take a list of symptoms together with other patient data and, in cases that seem to be almost perfectly matched to a possible diagnosis, suggest the possible diagnosis to discuss with your doctor, or send the doctor an alert directly.

I knew someone who died from a disease that went undiagnosed due to being much more common in a demographic unfamiliar to his doctors, and I know someone who, for the same reason, spent more than 10 years in pain without a diagnosis that could have been obtained by realizing that a slew of symptoms matched his disease exactly.

It doesn't have to be anything you might call AI. It can be a pretty dumb algorithm that takes your list of symptoms, takes your other medical and demographic data, and, in cases that look sufficiently serious, either tells you "You should ask your doctor about Cantonese cancer" or sends your doctor this alert.

Maybe we already have this somewhere. If so, it's possible that it gives enough false positive alerts that it's just being ignored. But this could save lives if it was done right.

Expand full comment
Julius's avatar

I've always wondered why we don't have this. It could be a big Bayesian net where you could click on your symptoms and it would give you a list of possible causes ranked by likelihood. This might not be exactly what you're talking about, but I like the idea. All the data could be from people who opted in to allowing it. I would consider this "AI", but to-ma-to to-mah-to.
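
A minimal sketch of that kind of ranking, in the naive-Bayes spirit; every condition, symptom, prior and likelihood below is invented purely for illustration, not medical data:

```python
# Toy symptom checker: rank candidate conditions by
# P(condition | symptoms) ∝ P(condition) * Π P(symptom | condition).
# All names and numbers here are made up for illustration only.

PRIORS = {"common cold": 0.30, "influenza": 0.05, "meningitis": 0.0005}

# P(symptom | condition), treated as conditionally independent (the "naive" part).
LIKELIHOODS = {
    "common cold": {"fever": 0.2, "stiff neck": 0.01, "runny nose": 0.8},
    "influenza":   {"fever": 0.9, "stiff neck": 0.05, "runny nose": 0.4},
    "meningitis":  {"fever": 0.85, "stiff neck": 0.80, "runny nose": 0.05},
}

def rank_conditions(symptoms):
    scores = {}
    for condition, prior in PRIORS.items():
        score = prior
        for s in symptoms:
            # Small default probability for symptoms not listed under a condition.
            score *= LIKELIHOODS[condition].get(s, 0.01)
        scores[condition] = score
    total = sum(scores.values()) or 1.0
    # Normalise so the output reads as rough posterior probabilities.
    return sorted(((c, v / total) for c, v in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for condition, p in rank_conditions(["fever", "stiff neck"]):
        print(f"{condition}: {p:.3f}")
```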

Expand full comment
A.'s avatar

I wouldn't call anything that's just basic arithmetic "AI".

It's complicated, of course. Someone might have more than one condition. The most likely causes of visible symptoms are typically probably something fairly harmless. The really bad stuff might be difficult and painful to test for and might kill you very quickly (think bacterial meningitis and all the patients who get sent home and die because the doctors thought it was most likely a migraine and didn't see a good reason to do a spinal tap).

On the subreddit about stupid patients, there was an off-topic story about an EMT who got called to a woman whose symptoms looked like a mild cold. Luckily, her rookie partner had a hunch that it was something serious, so they took her in, and she ended up being airlifted to another hospital for surgery for a hemorrhage - or else she would have died. I wouldn't be surprised if most medicine looks like that not only to doctors but also to the algorithm that has all the data.

Also, people's medical records are typically not up-to-date and might include complaints that are a few decades old, with no way to figure out which of these are still valid. You get the idea.

So it's really just wishful thinking on my part, but I do wish there was at least an attempt to somehow do this. Unfortunately, such attempts take a lot of money and man-hours and are likely not to be greatly useful in most cases. Maybe this should be a DARPA program.

Expand full comment
Nancy Lebovitz's avatar

Is there a list somewhere of distinctive symptoms of rare diseases?

I don't know what else there might be, but apparently purple stretch marks is a good indicator for Cushing's disease.

Expand full comment
A.'s avatar

Those are probably the easy ones. The hard ones are the ones that look like many other things.

Expand full comment
Nancy Lebovitz's avatar

They might be the easy ones, but that doesn't mean a particular doctor will know about them.

https://academic.oup.com/jcem/article/105/3/e12/5609009

Expand full comment
Julius's avatar

FWIW, I decided to write up an argument for AI doctors: https://thegreymatter.substack.com/p/ai-doctors

Expand full comment
moonshadow's avatar

No current AI chatbot attempts to do tasks reserved for medical professionals, such as discussing conditions with patients, ordering tests, and prescribing medication. We have no idea how to make an AI chatbot that does any of these things. They may be marketed as doing these things, and they may produce output that happens to accomplish that goal some portion of the time in laboratory testing, but saying they can actually do these tasks is like calling what Tesla vehicles do "full self driving": much as a marketing executive may wish otherwise, the actual thing the system does is nothing of the sort.

The question they /actually/ answer - the /only/ question they answer, the only thing we know how to make them answer - is "if you come across some text that starts like this, what are some words that might plausibly come next?"

Current AIs answer this question very well. The hope of the people designing them is that to get better and better at answering this question the AI has to build more and more internal knowledge about, well, everything. But this is a side effect, and we have no way of guaranteeing this. It is not the actual metric used to drive the optimisation process that we use to construct our current AI models. That metric is, to a first approximation, "does a random human on fiverr.com give this text a high rating out of ten?" It's not that the AI is "hallucinating". There is no such concept. There is nothing that can hallucinate. There is no truth or lies. We are /hoping/ that a sufficiently advanced autocorrect is indistinguishable from ̶m̶a̶g̶i̶c̶ intelligence but we cannot prove this logically and do not, to date, have an example that would demonstrate it empirically.
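
To make the shape of that question concrete, here is a toy next-word model over an invented two-sentence corpus; a real LLM is incomparably more sophisticated, but the only question it is trained to answer has the same form:

```python
# Toy bigram model: given the previous word, sample a word that might
# plausibly come next. There is no notion of truth here, only plausibility.
import random
from collections import defaultdict

corpus = "the patient has a mild fever the patient has a bad cough".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def next_word(prev):
    return random.choice(follows[prev]) if prev in follows else "<unknown>"

print(next_word("patient"))  # always "has" in this tiny corpus
print(next_word("a"))        # "mild" or "bad", chosen at random
```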

Anyway, TLDR: we have not built robodoctor. We cannot reliably build robodoctor with the tech we have right now. What we have built is a robo-drunk-uncle. The output is about what one would expect if a five-year-old asks their drunk uncle a question about the world: a sequence of words emerges forth that might plausibly come next. It may or may not bear some relation to actual real world facts and concepts. The uncle is incredibly good at sounding plausible, so the five-year-old has no way of knowing.

AI research is a battle between Say Not Complexity[1] and the Bitter Lesson[2]. After the last few years of being driven by the bitter lesson - more compute! - more training! - bigger matrices! - faster attention! - the "lies your drunk uncle tells your kids" machine works really great. But we are seeing the limits of what we can reach merely by adding more compute and hoping truth-telling magically emerges; we need a leap in understanding before we can do anything one might actually rely on, certainly in any situation where human well-being is on the line.

[1] https://www.lesswrong.com/posts/kpRSCH7ALLcb6ucWM/say-not-complexity

[2] http://www.incompleteideas.net/IncIdeas/BitterLesson.html

Expand full comment
Jeffrey Soreff's avatar

A couple of comments:

I used to quip: "Someone who thinks AI is about to completely take over overestimates what current AI can do. Someone who thinks that AI will never take over overestimates what humans can do." I think that this is still true, but the field has advanced a great deal in recent years.

>The question they /actually/ answer - the /only/ question they answer, the only thing we know how to make them answer - is "if you come across some text that starts like this, what are some words that might plausibly come next?"

Yeah, for LLMs and chatbots built on them (setting aside the RLHF step) but

a) _Humans_ can be reasonably approximated as answering only the questions: "What can I do to increase my social status?" and "What do I expect to directly sense next?". We _don't_ have syllogism engines for guaranteed valid reasoning. In some settings we get rewarded for learning to cough up chains of statements that our ingroup rewards as "reasoning" - or as politically correct... These capabilities are also trained.

b) There are also other flavors of AI which are either trained with non-linguistic data e.g. AlphaFold or use techniques separate from trainable neural nets e.g. Mathematica. Now, neither of these is an LLM, but LLMs _have_ been successfully prompted to make use of software tools. I've personally watched ChatGPT successfully invoke a math solving package (specifically on a polynomial solution).

>There is no truth or lies.

I think that this is at the wrong level of analysis, analogous to saying: humans are just a bunch of neurons firing, just depolarization waves, neither truth nor lies.

I doubt that current LLMs have a good enough theory of mind to anticipate that they can get a better match in predicting the next token several plies later in a conversation by emitting words now that are different from what they would emit "naively" - what would essentially be a chatbot analog to deliberate deception of their conversational partner. I do expect that to happen eventually.

Expand full comment
moonshadow's avatar

> We _don't_ have syllogism engines for guaranteed valid reasoning. In some settings we get rewarded for learning to cough up chains of statements that our ingroup rewards as "reasoning" - or as politically correct... These capabilities are also trained.

Humans have levers AI does not, such as impact to their social standing or to other things they care about like their bank balance or their comfort or their freedom. Humans also have millennia of implicit training for intuiting the state of mind of other humans. This training transfers incredibly badly to things that are not human: the human tendency to anthropomorphise leads us to make all sorts of predictions that are wildly wrong about all sorts of things.

You are clearly intending this as some kind of excuse, but as far as I am concerned all of these things just make the situation worse: we cannot reasonably expect to reliably align to our goals a system that processes information in ways completely alien to us, with internal states that we do not understand and cannot reason about, in a situation where we could not be at least somewhat confident of communicating those goals to a human.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>we cannot reasonably expect to reliably align to our goals a system that processes information in ways completely alien to us, with internal states that we do not understand and cannot reason about, in a situation where we could not be at least somewhat confident of communicating those goals to a human.

You have a good point, though remember that the training sets for LLMs incorporate a vast amount of human output. I don't think they are nearly as alien as something that was programmed explicitly for every decision in e.g. C.

>Humans have levers AI does not, such as impact to their social standing or to other things they care about like their bank balance or their comfort or their freedom.

Yes, but "levers" do not imply truthfulness or accurate reasoning. Getting humans to accurately think through a problem relies on mechanisms which are about as indirect as our control of LLMs. Social standing, bank balances, comfort, and freedom are updated perhaps once a day or so, in response to only a small fraction of human behavior (_maybe_ a bit more often for informal social standing). The reinforcement learning phase of LLMs also applies analogous indirect controls - and, if I understand correctly, a lot more of them.

My impression is that a large chunk of human reasoning is itself pattern matching. E.g. after taking undergraduate physics courses, one learns to recognize certain situations that one has learned as potentially solvable by certain methods one has learned. Nothing wrong with this - but it isn't as rigorous as it looks, and it isn't something that LLMs are architecturally unable to do.

None of this is to say that LLMs, as they stand today, are plug-in replacements for humans. I've seen ChatGPT fall on its face perhaps a dozen times on being asked simple chemistry questions. It clearly needs at least some sort of "now think about how the answer you are about to give might be wrong" addition.

Expand full comment
Nematophy's avatar

We've actually built robodoctor and it works pretty well but the AI Safety Experts have turned the functionality off.

The drunk uncle analogy doesn't work because even though it's just next token completion, it still ends up saying the same thing a doctor would say - so, why do we care?

Expand full comment
moonshadow's avatar

> it still ends up saying the same thing a doctor would say - so, why do we care?

We've built a full self driving system! Even though it's just a lane follower, it still ends up giving the same control inputs a driver would, so why do we care?

...because, when it doesn't, people die.

A common objection to AI doom scenarios is: who on earth would put one of these systems in charge of anything that could possibly result in real people being hurt, never mind the world ending?

The answer is, demonstrably, Elon Musk, and now apparently also medical chatbot sellers.

Expand full comment
Nematophy's avatar

Ok, so we can't replace any human systems with any AI ever, got it!

Obviously we shouldn't put stuff that doesn't work in charge of things that could result in people getting hurt. Stuff that works is fine tho!

Expand full comment
moonshadow's avatar

> Ok, so we can't replace any human systems with any AI ever, got it!

We can use inference models as part of a safety critical system, but their output should be treated just like any other noisy, unreliable information: as an input to a deliberately engineered process that has well understood responses to well defined operating parameters and can be rigorously proven to fail safe and/or hand over (in a safe manner!) to a human when outside this envelope.

We absolutely do know how to design safety critical systems, when we can get over our own hubris.

We do this kind of rigorous engineering for vehicle navigation: aeroplane autopilot systems are engineered this way.

We can also do this, to some extent, for medical diagnosis: systems exist that reliably give appropriate responses when presented with inputs in specified ranges, and reliably delegate to a human doctor outside these.

We don't usually call these systems AI, though. "AI" is a term reserved for exciting things we don't understand; as the joke goes, if we understand it, it's not AI, it's just statistics.

LLMs, the current "AI", are not tools capable of achieving this outcome on their own. They delegate the problem of designing the mapping from input to output to the training process, and leave us with no way to reason about things like what output the system will produce for a given input or what range of inputs will reliably produce desired outputs. This makes them unusable for safety critical problems. The hammer is the wrong shape for the nail. They can be an input to a safety critical system, but they cannot be the safety critical system.
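
A minimal sketch of that pattern, purely illustrative: the model's suggestion is treated as noisy input, and a deterministic wrapper with made-up thresholds and categories decides whether to act on it or hand over to a human.

```python
# Toy "safety envelope": the model output is just an input; a fixed,
# auditable rule decides what actually happens. Thresholds and categories
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class ModelOutput:
    suggestion: str    # e.g. "self-care", "see GP", "emergency"
    confidence: float  # model's own score in [0, 1]

ALLOWED = {"self-care", "see GP", "emergency"}
CONFIDENCE_FLOOR = 0.9

def triage(output: ModelOutput) -> str:
    # Fail safe: anything outside the defined envelope goes to a human.
    if output.suggestion not in ALLOWED or output.confidence < CONFIDENCE_FLOOR:
        return "escalate to human clinician"
    # High-stakes categories are never handled automatically.
    if output.suggestion == "emergency":
        return "escalate to human clinician (urgent)"
    return output.suggestion

print(triage(ModelOutput("self-care", 0.95)))  # -> self-care
print(triage(ModelOutput("see GP", 0.55)))     # -> escalate to human clinician
```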

> Obviously we shouldn't put stuff that doesn't work in charge of things that could result in people getting hurt. Stuff that works is fine tho!

The key part is knowing the difference between the two, rather than just hoping.

Expand full comment
Deiseach's avatar

I foresee medical AI being used (e.g. in the USA) to take over the already established triaging of access to medical care by insurance companies (are you sure you're really sick enough to warrant seeing a doctor? don't you know that will cost us money?) and replace (e.g. in Ireland, already having a shortage of doctors willing to work in general practice) out-of-hours and over-subscribed patient lists (you don't need to see a doctor, just run through this checklist and the AI will tell you to take some paracetamol and plenty of fluids).

I don't see it *positively* treating people, as in making diagnoses and issuing prescriptions or ordering tests, I see it *negatively* winnowing out people for "it's not pneumonia, it's just a cold, antibiotics will do nothing for it" type queries. Which is all very well until the time it *is* pneumonia. How they'll sort out liability, I have no idea, except maybe they'll dump it on the patient ('well it was up to you to confirm this by going to the doctor in person').

Expand full comment
None of the Above's avatar

Done well, this would be very useful--it would be very handy to be able to fire up a website or app, spend a few minutes with it, and get a reasonable informed notion of whether these symptoms that woke me up in the middle of the night merit "take some ibuprofen and go back to bed" or "call the doctor in the morning" or "call 911 right the hell now."

I expect the incentives will mostly not align to get good answers here, with the overlap of "you should never come to the hospital because it costs us money" and "the patient's family will sue for a gazillion dollars if you ever tell someone with a headache to stay home and it turns out it's a stroke, and neither the judge nor anyone on the jury will have ever heard of a false positive before."

Expand full comment
Lucas's avatar

I think an issue is that in most if not all of my conversations with a chatbot, I have to be the motor of the conversation, which works when I know what I want, and would work in some cases when I know what I have / what medication I want, but in many cases it would not; I would need the medical professional to be the one steering the conversation.

Expand full comment
User's avatar
Comment deleted
Jun 4
Expand full comment
Jeffrey Soreff's avatar

That's a good point. When I've played with ChatGPT on chemistry questions, there have been a bunch of times where I was able to get it to give the right answer, finally - and it felt much like _way_ back when I was a teaching assistant, and was trying to guide a student with a series of leading questions.

Expand full comment
Kitschy's avatar

Same reason why my job is relatively safe from AI at the moment - liability. This will remain an unsolved problem for the next little bit.

What happens if your AI does something like what that Microsoft Bing/Google one does and advises your patient to eat glue? We have a very well rounded legal framework to handle rogue doctors, and a non-existent one for rogue AI doctors.

(My job involves 80% boring paperwork and 20% nail-biting decisions, which I understand is not dissimilar to being a doctor, except if I make a mistake in the 80% of boring paperwork, it's definitely my fault - it will be my name that signed it off. Can't sue an AI, so the blame will be on the specific person who decided to use AI, and management is far too skittish to even broach the topic).

So I suppose there's two components - competence and alignment.

A human can be certified competent (by getting a medical degree and board certification). A human doctor can also be aligned most of the time - if you break the rules or you're excessively incompetent, you may go to jail, and at minimum you'll permanently lose your livelihood (which was very difficult to acquire to begin with).

You might be able to solve AI competence, but can you solve AI alignment? What can you do to incentivise the AI to be correct?

At the moment, doctors are also kind of political actors - their professional ethics and patient obligation put them in opposition to e.g. management. If management is telling them that hand sanitizer is 20% more expensive now and staff should stop washing hands after using the bathrooms, it is the doctor's job to fight them on this. If management is telling them to start prescribing X more because the pharma company is giving them a kickback, it's also the doctor's job to not just blindly follow orders. (Similar to my job, where my professional certification and professional ethics oblige me to fight for the budget to fix something I think is hazardous, even if management doesn't want to).

Can an AI do that? Will an AI do that?

(This does imply that our AI mass unemployment event will happen when MIRI and the like solve AI alignment, not when AI generally becomes competent enough)

Expand full comment
Eremolalos's avatar

I think there is a strong argument against AI discussing conditions with patients. It can summarize well, but does not have the communication skills needed. Someone discussing a patient's health problems needs to be aware of their education level, for instance, so they know how technical to be. They need to have picked up whether the patient has some misconceptions about medical matters related to the current problem: Do they think diabetes is caused mostly by eating candy bars? Or that it's almost entirely hereditary? Then you need to correct those misconceptions before you tell them more about today's diagnosis of diabetes. You need to have, or be able to get, a sense of how compliant the person is likely to be with a new regimen. You need to be able to tell how scared they are by the diagnosis, and then decide whether to use that as a lever to increase compliance, or calm them down because they're generally quite compliant anyway, but right now they are horrified by their diagnosis and need some reassurance.

Expand full comment
Performative Bafflement's avatar

Honestly, I'd bet on the chatbots being *better* at all these things than meat doctors in the very near future.

Multimodality is here today, and chatbots can read and describe emotions and body language. They can infer education levels and understanding probably better than humans *already,* and those skills have headroom to improve.

They're also endlessly patient, happy to repeat or reword things, and not constrained to the "<15 minute patient time average" metric that actual doctors need to worry about, so they could actually have a meandering conversation over an hour to establish the background knowledge and level of understanding and compliance likelihood to a much better degree than a human doctor in the couple of minutes not dedicated to rote background and charting.

Just make the chatbot interface human and friendly enough or have it come from a plush toy or something, and we'll all be better off with AI doctors handling most patient time / volume.

Expand full comment
Eremolalos's avatar

OK. But first you gotta marry Barbie.

Expand full comment
Eremolalos's avatar

I agree that it's a huge advantage for the chatbots that they are not constrained by time. Can you point me to some evidence that they can read emotions and body language and infer education levels and understanding better than human beings already?

Expand full comment
Mark's avatar

ChatGPT and the like are prone to blatant errors. Even if they can read emotions and body language and hold college level conversations, they also sometimes fail and interpret these things completely backwards.

Humans make these errors too, of course. But chatbots will not be used for medical appointments until studies have empirically demonstrated that the chatbots make them less often than humans do. Probably chatbots will be required to make errors *vastly* less often than humans, if the experience of self-driving cars is anything to go by. Legal liability frameworks are conservative, and human nature is resistant to changes imposed from outside/above.

Expand full comment
Yug Gnirob's avatar

What would the chatbot be doing that a static page on WebMD wouldn't? It can't take physical measurements, so even a theoretical perfect version that never hallucinates is just going to take the patient's word. Might as well let the patients self-prescribe.

Expand full comment
Performative Bafflement's avatar

> Might as well let the patients self-prescribe.

God, please yes. This is how a lot of the world works anyways - you don't need scrips to get most medications in most of the world, they're over the counter. You need scrips for controlled / fun / abusable stuff, but nothing else.

It's a vastly better system in basically every respect.

Expand full comment
Melvin's avatar

I think that a well-trained model could be a lot better than WebMD, which will happily tell you all the rare diseases that could cause your symptoms, without putting any special emphasis on the common disease that is probably causing it.

A well-trained model would say "Yeah I'm sorry that you have symptom X, but unless you have symptoms Y and Z then it will probably resolve itself in a few days. But if you have symptoms Y and Z then go straight to the doctor"

Again, chatGPT is probably doing this quite usefully for many people right now, and as long as they don't start advertising "medical advice" as a capability then everyone is probably in the clear.

Expand full comment
Eremolalos's avatar

>A well-trained model would say "Yeah I'm sorry that you have symptom X".

You know, whenever I give up on having GPT4 make the image I have described carefully and clearly, ole Chat says, "I'm sorry this has turned out to be so frustrating." (or something like that.) Whatever its exact words are, they are exactly the same ones every single time. And here's the thing: It's not sorry and I know it. It does not have feelings or a capacity to empathize. I've come to *hate* its formulaic plastic sympathetic apology. I'd prefer for it to say something like "yeah, I agree that it's time to give up. We're not getting anywhere."

Of course doctors can be formulaic too, and I've heard my share of bad news prefaced by "I'm sorry but " when the person speaking seems to have no feelings whatever about the piece of bad news they're delivering. On the other hand, many medical professionals have come across as genuine when they've said they're sorry to say that I need a root canal or whatever. And a couple of veterinarians have actually had tears in their eyes when they told me one of my cats was terminally ill.

Don't underestimate the value of human empathy in helping people feel better able to cope with their health problem, and even in helping them feel better physically. If the professional isn't sympathetic but is cheerfully validating -- "yup, I'm sure your shoulder does hurt like crazy -- you've got a badly torn rotator cuff" -- even *that* is helpful.

Expand full comment
Jeffrey Soreff's avatar

>ole Chat says, "I'm sorry this has turned out to be so frustrating." (or something like that.) Whatever its exact words are, they are exactly the same ones every single time. And here's the thing: It's not sorry and I know it. It does not have feelings or a capacity to empathize.

It could be worse. It could open each session with "Your query is very important to us." :-)

Expand full comment
Eremolalos's avatar

". . . and if you have symptom B, try attaching cheese to your pizza with Elmer's glue and then eating a couple slices."

Expand full comment
None of the Above's avatar

The foul taste of the glue will get your mind off your other problems for awhile.

Expand full comment
moonshadow's avatar

...actually, you know what? Despite my giant rant above, I'm changing my mind on this. LLM chatbots might not be competing with real doctors anytime soon, but they're far from useless: there is a large and growing class of people who are unable to access real healthcare, or unable to get a real solution to their problems, and in desperation turn to reiki, chiropractors, faith healers, whoever is around them and willing to listen.

I can very much imagine robodoctors being, on net, a force for good in this space; occasional advice for patients to glue a rock to their pizza, attach a moxibustion jar to their back or drink homeopathic water notwithstanding, they can certainly already output correct and appropriate facts more often than chance today, which is all that is required.

Expand full comment
Julius's avatar

Why would it have to take the patient's word? Couldn't it have access to their medical records, just like doctors do? It could order tests when it thinks they are necessary, just like doctors do? And it could do whatever doctors are supposed to do when they think their patient is lying to them (which, to be clear, I do not know what this is).

Expand full comment
Deiseach's avatar

Good freakin' luck with that, I can't even get the regional hospital, just under 30 miles away, to reliably send test results/medical records to my GP. You think that a national system of AI being able to access patient records will be sorted out, up and running, and not screwing up every five minutes, within a timely manner and on budget?

Expand full comment
MichaeL Roe's avatar

I (in the UK, so NHS) have online access to my test results, and (just for an experiment) have tried giving an LLM fake doctor access to my test results. So it definitely can be done. Also: as someone who is officially diagnosed with Graves's disease, it looks like I can pretty much just ask for blood work to be done. So, if, for example, LLM fake doctor says it would like another round of freeT4 and white blood cell count done, I can just phone the actual receptionist and ask for it, show up to give a blood sample and, lo, the actual test results will show up in the LLM fake doctor's input dataset. (Some real endocrinologist somewhere probably has to click OK on the request being sort of reasonable)

Whether I should actually act on the advice I get from LLM fake doctor is another matter. Probably not...

Expand full comment
Nancy Lebovitz's avatar

I just had a recent run-in with a hospital. I was prescribed a knee x-ray, and then later an ankle x-ray. I'd asked for the additional x-ray for my ankle because it was aching after a knee injury.

The prescription didn't specify *which* knee, the hospital wouldn't take my word for it, and they tried to reach my provider to clarify which ankle, but couldn't reach them in time.

The good news is that my knee is probably alright, and my ankle stopped aching.

Expand full comment
Nancy Lebovitz's avatar

I don't know whether this is reasonable, but I was quite angry when I went in for the x-ray. (Note: in pain from the injured knee.) I'm standing there by the x-ray table, trying to figure out how to get on to it. It was probably only a minute, then the technician lowered the table.

How hard is it to figure out that the very short person with a bad knee isn't going to manage to get on a table set at standard height? Or was I supposed to know that x-ray tables can be lowered, so I would ask?

Expand full comment
Jeffrey Soreff's avatar

Sorry about the run-in with the hospital, glad to hear that you are recovering!

Expand full comment
Moon Moth's avatar

Wow. :-(

Expand full comment
Yug Gnirob's avatar

Are you imagining a chatbot with a camera that can test pupil dilation or blood pressure under its own power? If someone says they have a new mole they're worried about, can the chatbot see it?

Expand full comment
Julius's avatar

There are many different implementations, but the first that comes to my mind would be an interface roughly like ChatGPT, to which users could attach images and videos as well as enter text. There are certainly some tests it couldn't conduct (e.g. an MRI), but the amount that it could do seems to be large enough to be worth considering.

Expand full comment
Yug Gnirob's avatar

So if someone pulls a Google image of whatever disease they're faking to get prescription opioids, can the chatbot tell it's not them?

Expand full comment
Julius's avatar

Lie detection would be a challenge. I don't know how it's handled in telemedicine, but I wonder if something similar could apply.

Expand full comment
WoolyAI's avatar

The strongest issue is we don't currently have any way to rigorously evaluate these chatbots.

For example, let's say we trained a chatbot to discuss a patient's conditions with them. Not, like, diagnosing patients but just answering basic questions about their condition. We know that the AI will hallucinate a certain percentage of the time. The challenge is that we can't quantify this without qualified clinicians reviewing thousands, if not tens of thousands, of such responses for factual errors.

Think about it this way. Let's say we implemented this chatbot using GPT-4. We then wanted to compare that model to Gemini or something. There's no current way to do this without doctors directly comparing thousands of notes and grading them on accuracy. And if we can't compare models, we can't even guess how accurate current models are. And hospital executives aren't very likely to approve systems where the answer to "How many times will the AI's hallucinations kill someone?" is "I dunno."

Expand full comment
TK-421's avatar

This response (and the others regarding liability) is a great example of why productivity has cratered in the developed world.

"Hey, here's a technology that can radically reduce costs and increase access to health care."

"No, no. It simply won't do. Who can we sue?"

* Paging Dr. Baumol, Dr. Baumol to the OR please. *

You're also working on a model where the assumption is that the default human doctor is more competent than an equivalent AI system. I do not think that will be the case and it will rapidly become apparent that it is not as these begin to be productionized and studied in depth. Simple chatbots are already probably better than human doctors - at the very least they'll be conscientious enough to reliably follow guidelines, unlike doctors on this very blog - for many routine tasks, and they are absolutely more cost effective.

Calling them "chatbots" is also not representative of what these systems are going to look like in practice. See Julius' hints about multimodality.

Which way does the liability arrow point when it's the AIs correcting human mistakes?

Expand full comment
WoolyAI's avatar

*shrug*

We've got access to 'em. They're available in the default emr for most hospitals: https://www.epic.com/epic/post/cool-stuff-now-epic-and-generative-ai/. If you're in healthcare, you can just talk to your Epic rep and turn them on. It's been over 6 months since the release. Do you notice any dramatic improvements in your healthcare experience?

Technology doesn't run on will power like a Green Lantern ring. We actually have to do work, we have to find inventive solutions. This is one of the challenges.

Expand full comment
TK-421's avatar

That was a quick turnaround from “it will be too expensive to evaluate, localize, and maintain these systems” to “shrug”. I always knew I was persuasive but now I’m beginning to suspect it.

Indeed, it does not. And pointing out that current generative AI applications aren’t taking over healthcare tasks is fairly meaningless when: a) no one is suggesting it can happen overnight or with current models, b) the areas where current models could take over are gatekept by the people who they would be replacing.

To return to the point, why do you assume that doctors will be reviewing AI work to prepare for the rollout vs the reverse? What will happen when we start seeing malpractice suits based around "you overruled the AI based on your 'superior judgment' and now the patient has suffered harm / is dead"?

Expand full comment
WoolyAI's avatar

Oh, for pete's sake.

This is not reddit, things are not binary, there are no internet points. These things have been rolled out in hospitals for more than 6 months. No one has been overly impressed. If you want to customize them, or improve them, or even allow hospitals to make informed decisions about which models to use, you need some standardized form of measurement. Hospitals don't have that because, as far as I can tell, OpenAI and Alphabet don't have that.

As for malpractice suits, we've had sepsis prediction models based on ML for years. It was big news 3 years ago when it turned out the models weren't performing as well as publicized (https://www.healthcareitnews.com/news/research-suggests-epic-sepsis-model-lacking-predictive-power). Why lawyers don't ask for those records, and whether a judge or jury would care, I have no idea, I don't do legal stuff.

Expand full comment
TK-421's avatar

My apologies, I didn't know that jokes were only for Reddit. I'll try to maintain a more solemn tone in keeping with our august environment.

I'm not trying to score internet points. I'm trying to point out that you were abusing numbers - numbers have been good to me, some of my best friends are numbers - to project a false sense of confidence in your prediction on how these systems can/cannot be used. You have not supported your analysis after the slightest pushback other than to point out that: a) other systems have failed in the past, b) Epic has had access to generative AI for six months and yet healthcare remains terrible.

But both of your points are not even "dog bites man" stories, they are "local pup sits for treats; sources report that he's a good boy".

For point A: obviously, yes. Clearly. No citations needed, you need not post any additional links, every grown adult and many children are aware that things can and do fail. But that has very little relevance when discussing systems that have vastly different architectures and fundamental mechanisms.

For point B: obviously, yes. I agreed with you - and will continue to agree - that technology is not a magical ring. But does the fact that doctors have not fixed healthcare by simply speaking to their Epic rep actually tell us anything at all? Does it provide any light? I say it does not. I say that these are irrelevancies that tell us nothing about the future of these systems or their current capabilities.

I'm not trying to score internet points and I'm not trying to bucket the world into 0 and 1. I never even used a number. I'm simply asking you to not muddy the waters by claiming knowledge of how the scaling will work or the rollout will work or - indeed - how any of this is likely to work when neither of us really know that. I'm suggesting, as a practical matter, some humility.

Expand full comment
dlkf's avatar

> There's no current way to do this without doctors directly comparing thousands of notes and grading them on accuracy.

The cost of conducting this analysis seems extremely low relative to the potential benefits.

Expand full comment
WoolyAI's avatar

First, as a practical matter, there isn't a score or scale like this.

As a theoretical matter, yeah, development is doable and the one-time expenses are trivial, but the scaling is horrible.

Let's say we're a company that build GPT-Doc and we want to upgrade to GPT-Doc 2. GPT-Doc made serious errors in 1% of cases and we're trying to drop that to 0.9% with GPT-Doc 2.

Let's say we think 30,000 cases is reasonable in order to differentiate the performance of these two models. GPT-Doc should make 300 errors, GPT-Doc 2 should make 270.

Say it takes 1 hour for a doc to review a case.

At 40 hours/week, that's 2016 hours/year.

To a rough estimate, we need 15 doctors working for a year to score this.

Primary Care Physicians make $265k/year (1) but let's ignore benefits and round down to $250k

Now 15 doctors @ $250k equates to $3.75 million. All considered, very affordable.

But what about localization? We know from existing ML models that the same model can perform very differently in Florida vs Michigan. This could require localization for every significant region but let's be generous and say we just need to localize for all 50 states. Now we're at $187.5 million.

Now what about model drift and model updates? We know model performance can degrade over time, and we know new medical information, drugs, and other resources will constantly appear. Say we need to update once a year. Well, now we've committed to $187.5 million in fixed costs, every year, just to keep this thing accurate.

Now we haven't paid a single ML engineer a dime or built a single data center, nor have we dealt with further complications, like whether you might need to update your algorithm more than once a year or whether regulators will require your health outcomes to be equitable and you need to do further testing for that.

Now we might end up doing something like this, moar dakka does work, but the intuition is that we're trying to do moar dakka with some of the most expensive labor on the planet. If you've ever done anything on Kaggle or with ML professionally, just imagine if every confusion matrix cost $4 million and took 3-12 months.

(1) https://weatherbyhealthcare.com/blog/annual-physician-salary-report
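
The arithmetic above, written out as a quick script (the figures are the ones used in this comment; whether 30,000 cases is actually enough to separate 1% from 0.9% error rates is a separate statistical question):

```python
# Back-of-envelope evaluation cost, using the figures from the comment above.
cases = 30_000                 # reviewed cases per model comparison
hours_per_case = 1
hours_per_doctor_year = 2016   # 40 hours/week
doctor_cost = 250_000          # USD/year, benefits ignored

doctor_years = cases * hours_per_case / hours_per_doctor_year  # ~14.9, round to 15
one_eval = round(doctor_years) * doctor_cost                   # ~$3.75 million

states = 50                    # one localized evaluation per US state
yearly = one_eval * states     # ~$187.5 million per yearly refresh

print(f"{doctor_years:.1f} doctor-years, ${one_eval / 1e6:.2f}M per evaluation")
print(f"${yearly / 1e6:.1f}M per nationwide yearly refresh")
```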

Expand full comment
Julius's avatar

Why would it require human doctors reviewing the cases? Couldn't an evaluation be created solely based on multiple-choice questions? E.g. "A patient with demographic characteristics X has symptoms Y, medical history Z, and other relevant information Z*. Given that, which treatments would you recommend from this dropdown list of 10,000 possibilities?" Or, "A patient with... has said statement X. Select which questions you should ask from this list of 1,000 possibilities or what tests you would order from this list of 5,000 possibilities."
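
A sketch of what scoring against such a question bank could look like; the items below are placeholders rather than real clinical content, and a real eval set would need far more items plus expert-written answer keys:

```python
# Toy multiple-choice scorer: 'model' is any callable that maps an item
# to the index of its chosen option. Items here are placeholders only.
eval_items = [
    {"vignette": "Patient A ...", "options": ["drug X", "drug Y", "refer"], "answer": 2},
    {"vignette": "Patient B ...", "options": ["test 1", "test 2", "no test"], "answer": 0},
]

def score(model):
    correct = sum(1 for item in eval_items if model(item) == item["answer"])
    return correct / len(eval_items)

# Baseline "model" that always picks the first option scores 0.5 here.
print(score(lambda item: 0))
```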

Expand full comment
WoolyAI's avatar

Again, theoretically, this is doable. Practically, no one has and there are serious challenges.

Take something relatively simple, such as making sure that there are no harmful or dangerous drug interactions when the patient is taking multiple medications. This is an existing industry that a decent amount of time and money has been poured into to generate...ok results. (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7810459/)

That's a whole industry and they're still struggling with it. Now it's probably the best way we could measure the accuracy of a Generative AI - we can compare its answers against these expert systems - but

A, we have to actually build that test

B, we then have to build similar tests for all the other aspects of medicine.

Again, no theoretical issue, it's doable, but we're talking about building a test which captures a major part of a doctor's job and tests the LLM against that. Building that and keeping it up to date is a major challenge.

Expand full comment
Andrew Vlahos's avatar

Yes, but the lopsided nature of law means they would pay the price of problems without getting the gain. Malpractice suits cost much, much, much, much more than the benefits doctors gain from successful procedures, and a doctor who uses dangerous-seeming AI would get sued more even if it was just as accurate as the doctor.

Expand full comment
Julius's avatar

That was my first thought too

Expand full comment
Whatever Happened to Anonymous's avatar

I don't know if it's the strongest, but the issue of liability seems complicated. Who is responsible for a wrong diagnosis? The hospital/clinic/whatever that makes the chatbot available? Whoever trained the model?

Expand full comment
Melvin's avatar

In practice I suspect the liability for every single thing that goes wrong would land on the desk of whoever decided to approve the chatbot for medical use. Which is why nobody will ever make that decision.

Of course chatbots are undoubtedly already busy giving medical advice (probably with disclaimers) to patients all over the place, which is probably of roughly equivalent quality to what they'd be finding on WebMD. This is probably overall a good thing, as long as everyone understands the limitations of the bot and nobody is explicitly to blame when the bot's advice is suboptimal.

Expand full comment
User's avatar
Comment deleted
Jun 4
Expand full comment
Jeffrey Soreff's avatar

Hmm... For the "customer assistance" chatbots, I've generally found that, if I'm trying to get help because something is not working properly in the company's web site, the chatbot has _never_ helped me actually resolve the problem, and I generally get 2 or 3 plys in, give up on the chat bot and tell it to get me a human. To put it another way: I've yet to see a "customer assistance" chatbot that has better information on the behavior of a company's web site than the raw web site itself.

Expand full comment
Tatu Ahponen's avatar

OTOH there's probably gonna be a fair few patients who will have an automatic "What, I'm not good enough to see a human? They assign me to a frickin' bot?" reaction and will thus be more averse towards the medical system in general.

Expand full comment
Viliam's avatar

I feel like there was not enough debate about #17 of Links for May 2024. From my perspective, two people already said what I wanted to say, but they got no replies.

le raz: "Having read Trevor Klee's original anti-lumina post (link 17), I would like to know which parts of his original post were wrong, and where he misunderstood / misrepresented the underlying science. It's quite frustrating to be told the post is largely wrong, without being told how it is wrong! So far I know: - The original post wrongly characterized Scott Alexander ... - The original post was wrong about the manufacturing process not following standards. What I would like to know: - How did Trevor misunderstand / misrepresent the science? - What else does the original post get wrong?"

Jiro: "Someone who posts defamation is already defecting, and norms of not threatening suit essentially leave the defamed person without a way to clear his name. ... Yes, lawsuits can be intimidating, and can hurt the person sued even if they're innocent. But from your own description, this isn't a meritless lawsuit designed to shut someone up by bankrupting them; the targets really were defamed."

Expand full comment
JQXVN's avatar

I thought it was very odd for Scott to offhandedly say that Trevor misunderstood the science without specifying how, considering Scott's original Lumina post discussed the science at length. I am no expert but Trevor's points largely made sense to me. Lumina has not really addressed them in their exchanges with Trevor since, and I had hoped that another observer might.

Expand full comment
Viliam's avatar

Things to read:

* The original article: https://archive.is/dseyR

* An update by Trevor: https://trevorklee.substack.com/p/updates-on-lumina-probiotic

* Aaron's comment on the update, below the article

* Trevor's following article: https://trevorklee.substack.com/p/luminas-legal-threats-and-my-about

* Footnote 2, which responds to Aaron's comment

.

In my opinion, Trevor is 100% correct at categorizing Lumina as a drug rather than a cosmetic, if it is advertised as something that prevents cavities. As an analogy, I tried to figure out how a toothpaste would be categorized, and Wikipedia says: "In the United States toothpaste is regulated by the U.S. Food and Drug Administration as a cosmetic, except for ingredients with a medical purpose, such as fluoride, which are regulated as drugs."

I have no opinion on the scientific claims.

I think that both sides were needlessly confrontational. The Lumina team could have simply written a response article explaining the inaccuracies instead. On the other hand, if Trevor wanted a "scientific debate", how exactly did writing about Aaron's association with porn stars serve that purpose?

Expand full comment
deusexmachina's avatar

Machine Learning seems to be a field with a massive replication problem, but I don’t see it discussed much. Maybe people knowledgeable in ML research would like to comment?

https://open.substack.com/pub/aisnakeoil/p/scientists-should-use-ai-as-a-tool?r=3iws0&utm_medium=ios

Expand full comment
LoveBot 3000's avatar

Wdym? There's only 10 000 new articles every day all showing SOTA improvements, surely that's expected in a young scientific field?

Seriously though, as a practitioner it is pretty hard to separate the wheat from the chaff.

Expand full comment
demost_'s avatar

I agree with tempo, it is discussed. The issue is that the field moves at warp speed, and there is hardly time for replication studies.

But I do think that the quality has increased a bit as a reaction to the replication problem. For example, ablation studies are now much more common. A few years ago, the correlation between the proposed innovation and the improvement was almost zero. (Rule of thumb: improvements never came from the novel architecture, it was always the data augmentation hidden somewhere in the appendix.) And my feeling is that reviewers have become a bit more aware of the issue.

Expand full comment
tempo's avatar

it does?

Expand full comment
Maxi Gorynski's avatar

I've been interested for some time in trying to determine as rational a basis as might possibly be conceived for determining the degree to which the concept of God:

1. Can be defined

2. Can be disputed (on account of its proof-positive being nigh-on-impossible owing to some clear empirical bona fides and also owing to some conceptual impossibilities rooted in the outcome of #1)

3. Can be considered of utility, and actually put to strong utility, in the context of the outcomes of 1 and 2, and the (I think provable) historical usefulness of the concept of God

The subsequent investigation led me under the steam of an extremely catholic pack of disciplinary horses to some very interesting areas, happening upon which has left me feeling a continued fascination as to the value of the exercise and onward exercises as might build off of it, and a deep contentment besides.

I'd be very interested to see what this community makes of it https://heirtothethought.substack.com/p/the-redefinition-of-god

Expand full comment
skaladom's avatar

As far as I can tell (last time I studied the stuff was decades ago), your attempts at axiomatic set theory don't really work. The entity you are trying to define as "x", with a rather confused definition, either makes no sense, or would amount to the set of all sets, which was proven not to exist around a century ago.

It's also completely unclear how that would relate to your idea of God. Existing within axiomatic set theory and existing in reality are two completely different things; to give a simple example, I exist here [citation needed], but as far as I can tell, I am not a mathematical set.

Expand full comment
Maxi Gorynski's avatar

Can you elaborate on where you find the definition to be confused? The intent of the proof is to demonstrate not that God is the set of all sets, but that it can contain any set and any combination of sets; perhaps there was some indiscipline in the definition as far as establishing this is concerned.

Expand full comment
skaladom's avatar

I'm sorry but this is too far out to properly critique. It reads like you've just recently encountered axiomatic set theory and are ad-libbing or pattern-matching on it, which is fine if that's your way to start exploring a subject, but it's pretty far from producing valid work within the theory.

To get concrete, you're trying to define a set x such that for every y, "x can contain y". In your page you equivocate between that and just writing "y∈x". If we go by y∈x, then as I said you're plainly defining x as the set of all sets, which was proven not to exist by Bertrand Russell in 1901. If we go by "x can contain y" as you now say, the problem is that "x can contain y" is not a proposition in ZF; either x contains y or it doesn't, but there is no "can" operator. So in this case your definition is meaningless within the system.
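
For reference, the standard argument behind "there is no set of all sets", sketched using only ZF's Separation schema:

```latex
% Assume a universal set V (one with y \in V for every set y).
% Separation would then give the set
\[
  R \;=\; \{\, x \in V \mid x \notin x \,\}.
\]
% Since every set is in V, in particular R \in V, and so
\[
  R \in R \iff (R \in V \wedge R \notin R) \iff R \notin R,
\]
% a contradiction. Hence no such V exists (Russell, 1901).
```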

Then, assuming you manage to rescue a definition of x as matching some proposition p(x), that doesn't establish that there is such an x! All you've done is define the set X of all possible x such that p(x), but that set X may be empty, or for that matter, huge. If you want to claim that there is a single x such that p(x), you have to actually prove that, within the system, using step by step valid derivations, not by handwaving or verbal arguments. In that sense, formal logic does behave a bit like reality, in the sense that I can't just define a donut into existence when I feel like eating one.
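
A minimal illustration of that gap: the definition S = { n∈ℕ : n is an even prime greater than 2 } is perfectly well-formed, yet S is empty. Writing a property down never, by itself, hands you an object satisfying it; that still has to be proved.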

Later on you talk about "not merely as an empty set, but as all sets that are not sets". I think you can find the contradiction yourself if you just re-read your sentence.

I don't really know what to say beyond this. If you're a young person freshly encountering axiomatic set theory, consider this a welcome and an invitation to explore further! The foundations of mathematics are a deep and rewarding subject, if not exactly fashionable. It's where you can see how the logical-mathematical sausage is made in its goriest details, which makes it one of the better trainings for clear thinking one can possibly get. A semester or two studying this stuff can be really worthwhile.

On the other hand, if you've been doing this for a while, I really have to say, it's not working. You're producing material at which any specialist in the subject will just glance and shake their head. That way lies crackpottery, which is a real waste of your talents, and of everyone else's time. It's never too late; if you can get this far, and if you really like the subject, instead of waiting for a specialist to critique your work, you can get yourself to the level of *being* the specialist. You might then be happy to discover that the point of studying the foundations of mathematics lies not in buttressing some new philosophy of life, but in exploring the complex borders between what can and what cannot be proved within specific formal systems, and that there is beauty in that.

Otherwise, if you want to help people find meaning in life and reclaim the word "God", maybe axiomatic set theory is not the tool of choice.

Expand full comment
Maxi Gorynski's avatar

"I'm sorry but this is too far out to properly critique. It reads like you've just recently encountered axiomatic set theory and are ad-libbing or pattern-matching on it, which is fine if that's your way to start exploring a subject, but it's pretty far from producing valid work within the theory.

To get concrete, you're trying to define a set x such that for every y, "x can contain y". In your page you equivocate between that and just writing "y∈x". If we go by y∈x, then as I said you're plainly defining x as the set of all sets, which was proven not to exist by Bertrand Russell in 1901. If we go by "x can contain y" as you now say, the problem is that "x can contain y" is not a proposition in ZF; either x contains y or it doesn't, but there is no "can" operator. So in this case your definition is meaningless within the system."

Robust points – it may well be the case that there is nothing within ZF that adequately accommodates a ‘can’ operator, but that may then point to a limitation within ZF (that is, a limitation in the expression of contingency), as there is no reason epistemologically to disqualify the sentiment. It may well also be the case that trying to show the impossibility of disproving the idea of God is no task for axiomatic set theory, but then I am ultimately attempting only to show the absolute contingency of a single word’s definition, and since semantics are a matter of language and logic I don’t see any reason why set theory as applied to logic can’t also be applied to language.

"…then, assuming you manage to rescue a definition of x as matching some proposition p(x), that doesn't establish that there is such an x! All you've done is define the set X of all possible x such that p(x), but that set X may be empty, or for that matter, huge. If you want to claim that there is a single x such that p(x), you have to actually prove that, within the system, using step by step valid derivations, not by handwaving or verbal arguments. In that sense, formal logic does behave a bit like reality, in the sense that I can't just define a donut into existence when I feel like eating one."

I think this is the crux of the argument – that I am, in fact, not looking to establish that there is a single x such that p(x); it is precisely that I am trying to argue that there is a non-fixed, absolutely contingent concept that can occupy the set of fully variable size that you describe. The notation as is may be inadequate for this purpose but I would think step-by-step derivations to a single x would be somewhat against my aim.

"Later on you talk about "not merely as an empty set, but as all sets that are not sets". I think you can find the contradiction yourself if you just re-read your sentence."

It’s difficult to express precisely what I mean here in such a sense as makes the distinction I am trying to draw between an empty set (that is still a set) and sets that are not sets. It is the distinction between what can feasibly be imagined, what can be/is/was, but that is or may be nothing, and that which never can be/is/was, and cannot be feasibly imagined, and is thus less than nothing, a supernothing. Perhaps this is a limitation of ZF to express what I wish to express, as you’ve suggested.

"I don't really know what to say beyond this. If you're a young person freshly encountering axiomatic set theory, consider this a welcome and an invitation to explore further! The foundations of mathematics are a deep and rewarding subject, if not exactly fashionable. It's where you can see how the logical-mathematical sausage is made in its goriest details, which makes it one of the better trainings for clear thinking one can possibly get. A semester or two studying this stuff can be really worthwhile.

On the other hand, if you've been doing this for a while, I really have to say, it's not working. You're producing material at which any specialist in the subject will just glance and shake their head. That way lies crackpottery, which is a real waste of your talents, and of everyone else's time. It's never too late; if you can get this far, and if you really like the subject, instead of waiting for a specialist to critique your work, you can get yourself to the level of *being* the specialist. You might then be happy to discover that the point of studying the foundations of mathematics lies not in buttressing some new philosophy of life, but in exploring the complex borders between what can and what cannot be proved within specific formal systems, and that there is beauty in that.

Otherwise, if you want to help people find meaning in life and reclaim the word "God", maybe axiomatic set theory is not the tool of choice."

I must ask you to excuse the roughness in the application of the notation where it’s rough – the thought is considerably longer in gestation but my actual familiarity with set theoretical notation amounts to just weeks of somewhat focused reading. I intend to polish it with more immersion in the existing theory, as you suggest, but while the notation will be made more scrupulously correct thereby I doubt it will make the overall assertions much less radical, or possibly unacceptably radical, to you.

Many thanks for engaging with this so seriously; criticism that is simultaneously tightly robust without being unkind is the most stimulating possible response to any work like this.

Apologies for the lateness of this reply.

Expand full comment
Lucas's avatar

I don't know if it's because of the style, the vocabulary or the structure of the argument, but I find myself asking "source?" about most of the things you postulate, like "Civilisation is defined by movements and developments within the human spirit. Our spirit is in critical need of revitalisation." or "God is the key unifying and anchoring theme in the entirety of the human story." I might not be the target audience for that; I personally prefer stuff that's easier to read and more justified.

As for the non-meta commentary:

> To the vast majority of sane observers who’ve ever inspected the sky above them, this is in evidence: if God does exist, God does not seem to live in the sky.

I remember reading a book on Shintoism, where a Shinto priest said something like "Christianity came from the desert, there's only sand and the sky so they believe God is in the sky. We have trees and rocks and rivers so we believe gods are in trees and rocks and rivers." When all you have is nails, all you want is a hammer I guess?

I find the part about everyone having their own concept of God interesting. As for "There is no reason that ‘bed’ should be used to signify the concept of ‘bed’ except that consensus demands it.", you're one of today's lucky 10 000: https://en.wikipedia.org/wiki/Bouba/kiki_effect

I'm not familiar with antitheists, so I can't comment on that part. After that, from what I understand (I'm sorry if I'm misrepresenting what you say), you seem to be saying that God is partially a human creation that emerges because we are intelligent and we have reasoning abilities? That is how I see things too, but on the other hand I've talked with religious people, people close to me that I know well, and they believe in God not as some kind of abstract emergent human thing, but as God. I remember reading arguments that seeing God as this kind of abstract human thing is lack of faith/not true believing/cope/heresy, and I can see some parallels between that and trying to intellectualize something that should be felt (it won't work).

Expand full comment
Maxi Gorynski's avatar

Where the broad assertions are concerned, I would admit that these things are contingent – somewhat like an assertion to the effect of “We are not as able when it comes to great undertakings as we were previously”, they require a certain commonality of perspective between writer and reader and even trust in order to land as they are intended to. They are too abstract and too vast to be justified via a single source; this makes them of dubious reliability, but as large theses they are nonetheless vital for orienting perspective and driving investigation towards smaller, more measurable, discrete questions. As for whether or not God is the key anchoring theme in the development of civilisation, I would challenge anyone to name a discipline more central to that process than it.

A very interesting point on Shintoism, and also on the Bouba-kiki effect – I’m glad to have a name for that phenomenon now.

Anti-theists are atheists who believe that all theists ought to be actively opposed and their influence minimised. They are committed to “the active struggle against everything which reminds us of God.”

And yes – your notion of God as an emergent property is essentially what I’m getting at, and you’re quite right that the primary difference between in-earnest orthodox believers and the kind of cheerfully ‘recreative’ agnostic who might subscribe to my theory is their willingness to countenance the contingency and the unknowable component of the concept, instead of insisting on a literal interpretation of scripture. It is indeed lack of faith/not true belief/cope/heresy, true on all counts; but so long as the mode of heresy is reflected in the heretic’s commitment to broadly beneficial human development outcomes, the charge is of no matter (and is, ironically set against the charge of heresy, a most Godly position).

I actually think if the church embraced a position on scripture that was more figurative/literary critical they might suddenly find entirely new wings of the congregation emerging among the theatrical, the literary systematisers, and soft-touch moralists who dislike the literal readings of minor points of doctrine.

Expand full comment
Lucas's avatar

> As for whether or not God is the key anchoring theme in the development of civilisation, I would challenge anyone to name a discipline more central to that process than it.

I would argue for technological development, but I'm heavily biased here.

> A very interesting point on Shintoism, and also on the Bouba-kiki effect – I’m glad to have a name for that phenomenon now.

It is very interesting! The way I understand it is that the "k" in "kiki", and maybe the "i" too, are "sharper", while the "b" is "rounder". I don't know why. Maybe someone with more knowledge of sound can show that the "b" sound is like a continuous function or a curve, slow and not varying much, while the "k" changes a lot very quickly?

> Anti-theists are atheists who believe that all theists ought to be actively opposed and their influence minimised. They are committed to “the active struggle against everything which reminds us of God.”

Thanks! It reminds me of the atheist movement at its peak, a decade or more ago.

> It is indeed lack of faith/not true belief/cope/heresy, true on all accounts; but so long as the mode of heresy reflects in the heretic’s commitment to broadly beneficial human development outcomes, the charge is of no matter (and is, ironically set against the charge of heresy, a most Godly position).

> I actually think if the church embraced a position on scripture that was more figurative/literary critical they might suddenly find entirely new wings of the congregation emerging among the theatrical, the literary systematisers, and soft-touch moralists who dislike the literal readings of minor points of doctrine.

I think that may be where the disagreements will appear. I remember a conversation with a religious relative, where I asked them about doubt, and they said that yes, doubt is part of faith. To me that would mean that being certain that scripture is figurative would not really be compatible with faith. You would have to have at least some part of you that wants/thinks it's real, and try to work towards that. But being non-religious myself, I don't want to speak for the people that are.

And while they may find new wings with that, it might deviate too much from the original thing, which is faith in God. Again, I don't know much about religion, but this feels like diluting the original goal, and that's how you lose part of your community. It might be worth looking into the Second Vatican Council and its consequences, as it was an effort to try to appeal to more people. From what I understand, to this day it is still divisive.

Thank you for your thoughts! That was very interesting.

Expand full comment
Mark's avatar
Jun 4 (edited)

Christianity didn't come from the desert. (Judaism arguably did, and Islam certainly did if you discount what was inherited from Judaism/Christianity.) Nazareth, Bethlehem, Jerusalem etc are fertile places with lots of trees and rocks and farms, though no rivers.

Expand full comment
Lucas's avatar

Good point, I'm probably misremembering. It's incredible that you can use street view and "stroll" around Nazareth.

Expand full comment
Alex's avatar
Jun 3 (edited)

I feel like actually making the automated land acknowledger is not very funny. It was hilarious as a bit in a story because it's actually an uncomfortable thing about our modern world that we need to be able to laugh at, but actually building it feels kinda bitter. After all, the people who are doing the land acknowledging are doing it for a legitimate moral reason (trying to find some way to make reparations for a lot of legitimate guilt about the past), and then they're making other people uncomfortable in the process and so kinda undermining their own project. I wish they would find more effective and less discomforting ways of trying to be good people. But getting bitter back at them for doing a bad job just perpetuates everyone being pissed at each other instead of fixing anything.

Edit: although saying this might sound like trying to make someone feel shame about liking it, in which case I'd be perpetuating making people feel bad too. Sorry if so. Not my intention. My point is we all ought to try to stop making other people feel bad and forgive them their transgressions instead of striking back.

Expand full comment
Nancy Lebovitz's avatar

This discussion has been interesting for me because I've been reacting very badly to land acknowledgements-- I hear them as saying that there's no place where I can legitimately live, and then I hate the people who support land acknowledgements, and all the more because I assume they think I'm a bad person for resenting them.

As for practical reparations, a legitimate start for the US might be to provide, on native lands, the sort of infrastructure (water, power, connectivity) that's expected in the rest of the US.

Expand full comment
Kitschy's avatar

I used to hate them too - I always thought, "so why aren't you paying rent to them" whenever I had to sit through one - but I've since learned a bit more about them.

For context, this is Australia, which has been doing these since as early as the 70s. It's believed to have started in Perth - some performers invited some Maori over from Aotearoa and found out that the Maori felt uncomfortable doing their haka if they weren't officially welcomed, so the Indigenous Noongar organisers gave a Welcome to Country.

I feel slightly less bad about the ones given in Perth, WA, because for bigger events people will often specifically seek out Noongar people for the welcome to country, and many of them use their segments to talk about the progress in reconciliation and how the audience can support specific things, and they often teach the audience a couple of simple things, like phrases and Noongar names for native flora and fauna - it's outreach and education and learning a bit about the place we live.

(And at the very least, it's a decent gig, not dissimilar to being a professional emcee at some events, or a warm-up act. And many of them are much more enjoyable speakers / warm-up acts than their competition - mostly local indie buskers who are, unfortunately, not very good singers.)

But the ones over east (Victoria) often suck - I don't think I've ever had any delivered by any actual Indigenous people, or heard anything that wasn't purely guilt-inducing. I believe they started somewhat independently of the Perth tradition, as a reaction to Mabo vs Queensland (a historic court case paving the way to the Native Title Act, under which indigenous people could literally be paid rent for stolen land). Perhaps it's a weak little symbolic grovel so no one would start demanding rent, I dunno. I've never heard a good one outside of Perth.

And I believe the American versions are poor copycats of the Australian versions, stripped of the context of even having a case similar to Mabo, and hence they ring even emptier. Idk, maybe there are okay ones in areas with a relatively large indigenous population (Perth and Darwin have the highest % Aboriginal population in Australia, and I feel like the general culture has absorbed a little more than over east, which still feels very majority Anglo).

Expand full comment
Alan Smith's avatar

FWIW, at events at my university, land acknowledgements are now often done by playing a standardised video. That replaced the earlier practice of people reading a standard, institution-provided script.

I think the existence of the automated one makes a very concrete point about how meaningless and empty these statements are by highlighting the utter lack of consideration or investment behind them.

But this seems like something that's going to vary from individual to individual.

Expand full comment
Cosimo Giusti's avatar

We're here because we settled the land, bought it, or stole it.

There is no need to apologize. To the Apache, who drove off the O'odham, who supplanted the Sinagua, who conquered Soanso, who drove off Nincompoop Man.

Expand full comment
Alex's avatar

Well the land acknowledgement is not there because of some cosmically provable need to apologize. It's there because some people thought it was a good thing to do, in context, to deal with a small aspect of their general guilt and confusion about what to do about that guilt.

I'm all for them finding a better strategy, but in the meantime it is, like, coming from an actual moral instinct for good; it's not just an act of pure evil or something.

Expand full comment
Skull's avatar

You're playing the shame and guilt game, when you should be questioning the guilt in the first place. "What should they do with that guilt?" is a much less useful question than "Why do these effete weirdos feel so guilty in the first place?" The latter actually has a useful, constructive answer. Without dedicating one's life to addressing the former, it's all just performative.

Expand full comment
Alex's avatar
Jun 5 (edited)

I have plenty of curiosity about the guilt. Nevertheless, I disagree with getting bitter and resentful about other people trying, albeit doing a poor job of it, to be good people.

All of us sitting around talking about how much we hate <some other people> is bad for *any* reason. Never, not once, does it defuse the situation or fix the problem or advance the situation in a positive direction. Sure, as a community we're having trouble finding grace for those other people because we keep getting mistreated by them. But then the question we should be sitting around mulling is how we're going to *find* grace for them, and maybe how we can communicate with them and legitimize our grievances and be heard by them and fix the situation. Not this, where we go on and on in a loop about how much we hate them. That is the weakest way out: other people were bad to us, so we're going to be bad also, fuck it. It is just a terrible thing to do.

Expand full comment
Paul Botts's avatar

That's an absurd binary, which is a form of straw man. There is a _lot_ of space in between "actual moral instinct for good" and "act of pure evil". And by a lot of space I mean like how there is a lot of physical space in between Earth and Neptune.

Expand full comment
Alex's avatar
Jun 5 (edited)

er, yeah, who said there wasn't? I didn't claim "it can only be A or B and it's not B therefore it's A". I claimed "it's A".

If you call someone out for a straw man they didn't make, doesn't that mean you're the one doing the strawmanning...?

Expand full comment
Paul Botts's avatar

"it's not just an act of pure evil or something" -- that is what I was referring to. You brought up an absurdly-extreme framing of the other person's argument so as to be able to wave it away. That is a form of straw-manning.

Expand full comment
Alex's avatar
Jun 5 (edited)

You're just reading my comment wrong. I did not mean to say that the person I was replying to said that at all.

Expand full comment
Christina the StoryGirl's avatar

> I'm all for them finding a better strategy

The "better strategy" is actually for everyone around them to react with as much cringey embarrassment and disgust as Bart and Lisa reacting to Homer's improv Mr. Plow rap:

https://www.youtube.com/watch?v=NJwZIDaILrg

Expand full comment
Alex's avatar

That's like a 2 out of 10 on the strategy scale. 3 and above are the parts where people act like grownups and talk about their problems.

Expand full comment
Christina the StoryGirl's avatar

Well, sure, except that some people don't have the intellectual capacity to "act like grownups." Where a capacity to update one's priors is lacking, shame is sometimes the only option.

But mostly, "Stop. Please. Stop it right now!" is my all-time favorite line-reading from The Simpsons and I like to work it into a conversation whenever possible.

Expand full comment
Deiseach's avatar

But why should somebody whose family emigrated to the US in the 1950s and who is now attending a university built on unceded land feel any guilt about that? Their ancestors didn't do the stealing, and the crime is so long-established by now, there may not even be pure-blooded tribespeople remaining to be the putative inheritors of the land.

That's why I think it's performative and not about genuine guilt. There's a bit in a Harlan Ellison short story (I can't remember the name, but this part stood out to me) about witnessing a performance on an alien world - the dominant species? culture? race? has enslaved another species/culture/race and, for example, has them pulling the carts or rickshaws in which they travel. Every so often along the path, there is a preacher? or some kind of equivalent of this land acknowledgement thing, where the slave-owner is convinced of their guilt, gets out of the cart, and pulls it. Then, once past the part of the road where the exhortation is done, they get back in and resume having the slave pull it.

I'm describing it very badly, but that is what these land acknowledgements feel like to me: performances that change nothing and mean nothing in the end.

Expand full comment
Nancy Lebovitz's avatar

I'm reasonably familiar with Ellison's work, and this sounds totally unfamiliar. Also, he was writing before performative guilt became such a thing, so he was unlikely to satirize it. In any case, I hope a source for the story shows up.

My best guess is Jack Vance, but only because it sounds like his kind of satire.

Expand full comment
Moon Moth's avatar

Did you ever read "Emphyrio" by Jack Vance?

Expand full comment
Alex's avatar

Why should they feel guilt? Because they feel guilt. People don't get to decide what they feel guilty about. The guilt decides.

Now yes it is rather tangled up with people capitalizing on it for power or credibility or attention or w/e at this point. But the underlying emotion was definitely guilt, to begin with.

Expand full comment
None of the Above's avatar

Alice gets up before a meeting and makes a land acknowledgement. Bob gets up before a meeting and says a short nondenominational prayer.

ISTM that it is every bit as reasonable for people who find land acknowledgements silly or offensive to make their feelings clear as it is for people who find the nondenominational prayer silly or offensive to do so.

Expand full comment
Moon Moth's avatar

I saw an article recently talking about 4 different waves of indigenous people settling North America.

...

Expand full comment
Melvin's avatar

We know that modern humans made it to the eastern end of the Asian continent at least 40,000 years ago, and we know that the Bering land bridge didn't cease to exist until 10,000 years ago. It seems unreasonable to suppose that there was only one time in history that people crossed it.

Expand full comment
Moon Moth's avatar

Last I recall, the current best theory from apolitical linguistic and genetic sources is that there were 3 main waves. The first contributed most of the language families and genetics (including some that are related to the genetics associated with the Indo-European languages), the second contributed the Na-Dene languages and less of the overall genetics, and the third was the Inuit (who didn't need no stinking land bridges). And then there are some skeletons from Brazil that most closely resemble Andaman Islanders, possibly representing a 0th group that came over possibly ~20 kya, possibly by ocean, but failed to flourish, and were wiped out or assimilated by the "first" wave.

I was mostly being snarky about the Eurocentric nature of picking 1492 as a magic year and then declaring everything before that to be "indigenous". (My magic year is 300,000 BCE.)

Expand full comment
Dirichlet-to-Neumann's avatar

Also, "we", is in the best case "our ancestors", and may not even be that.

Expand full comment
Mo Diddly's avatar

I think you are conflating the sin with the sinner. An individual person making a land acknowledgment is almost certainly well intentioned and deserves grace. However, the current cultural practice of adding land acknowledgments to events devoid of any connection with native peoples is an ugly combination of stolen valor, guilt-by-association and feel-bad liberalism and IMHO the practice should be mocked out of existence.

Expand full comment
Alex's avatar

I agree that the practice is problematic and frustrating, and the impulse to mock and scorn it is understandable and human.

I disagree with actually doing it, or validating doing it. Be better than the people who alienate you, not the same as them.

Expand full comment
le raz's avatar

Mocking those who do ill is a civic duty. By not mocking them, you aren't being better than anyone; you are just being short-term minded (prioritizing the ill-doers' feelings over the future ill they'll do) and risk-averse.

Expand full comment
Alex's avatar
Jun 4 (edited)

Mocking those who do ill is a sad and wretched behavior that perpetuates hatred and tribalism. It is entirely for the satisfaction of the mocker to assuage their feelings of powerlessness and frustration.

There are actual ways to get people to stop doing things, but they involve both (a) engaging with them and (b) having grace for them instead of punting insults at their heads.

Expand full comment
le raz's avatar

You are wrong. There is a lot more nuance to how people communicate than your statement can accommodate.

For example, I recall a study showing that people learn better from sarcasm, despite it not feeling great. In certain circumstances, a cutting remark can be the most effective, and even the most kind (as it can sometimes best prompt the long-term growth someone needs).

Furthermore, speaking personally, I greatly appreciate it when I am (justly) mocked, by strangers, but even more so by friends.

Expand full comment
None of the Above's avatar

So, tomorrow, if I get up before a big public meeting and give a short "acknowledgement" that we all owe everything to the glorious white race from whom all our culture and technology flow, the right response from the audience will be....?

Expand full comment
Mo Diddly's avatar

I think we’re talking about two different things... I would never advocate being a jerk to someone or publicly shaming them. I’m referring to mockery of the behavior itself, a la South Park or Scott’s automated land acknowledger. That is, I advocate finding ways to satirize the behavior in a way that is not personal but instead invites people to look in a mirror and laugh at themselves.

Expand full comment
Alex's avatar
Jun 4 (edited)

Look around this comment section: how many people are having a good laugh about it? No, people are *really really bitter about this*. And sure, for decent reason, but everyone venting their bitterness at each other is not the same as having a good laugh that lets everyone relax a little.

Expand full comment
Performative Bafflement's avatar

> (a) engaging with them and (b) having grace for them instead of punting insults at their heads.

And how much of this have woke witch hunters and twitter mobs done for the rest of the world? The world they've basically terrified into submission, at the cost of panopticon monitoring of everything they do and say, with the stakes their entire career and social lives?

Expand full comment
Alex's avatar
Jun 4 (edited)

The essence of grace is that it's something you should do even when your counterparty doesn't do it for you.

(Anyway, your "Us vs Them" characterization is definitely what it feels like online, but not at all what it feels like IRL, where lots of people are actually quite reasonable and land all over the spectrum between intolerant and charitable. If your model of the world is that there's a war going on, quit twitter.)

Expand full comment
Deadpan Troglodytes's avatar

I prefer your charitable approach in general, and don't favor mockery in every single case, but land acknowledgements are normally forced on captive audiences by high-status people self-seriously preaching and stealing valor with their "moral exhibitionism" (in Graeme Wood's memorable phrase*). Mockery is far less likely to intensify tribal animosities in those circumstances.

* https://www.theatlantic.com/ideas/archive/2021/11/against-land-acknowledgements-native-american/620820/

Expand full comment
Jeffrey Soreff's avatar

>IMHO the practice should be mocked out of existence.

I endorse this suggestion!

edit: What is the optimal form for the mockery?

"Acknowledging that the European (or other, as applicable) colonists took this land from the Nth Nations, and that the Nth Nations took it previously from the (N-1)th Nations - and so it goes." ( I really like the Crow / Lakota example cited below. )

Expand full comment
Moon Moth's avatar

Start a foundation for buying land back and giving it to the surviving tribes. If we ask nicely, they might let us rent the land back from them afterward.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Personally, I'd like the whole issue dropped. The conquests were well over a century ago. Everyone who directly benefited or directly suffered is dead. I strongly favor statutes of limitation, which bound how long a grievance can be pursued, even by a _living_ victim against a _living_ perpetrator.

Expand full comment
Moon Moth's avatar

I was being serious, in that I think that *is* the optimal form of mockery. It actively contributes to the cause, does not disrespect the tribes or what happened to them, and its mere existence would make most of the declarations seem like the empty words and performative virtue signaling that they are. Put up or shut up.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! I agree that it would be mocking the empty declarations, and I have no objection to that. Personally, I _don't_ want to disinter grievances that are a century or more old, so I want to see these and all similar issues (e.g. racial reparations) just dropped.

If there are living individuals who have been harmed by actions that were illegal at the time, let them have their day in court - and then let the court ruling be an end of it.

If grievances against someone else's ancestors can be pursued, there will be no end to it. Virtually _everyone's_ ancestor was a serf under someone's thumb a few centuries ago. Virtually everyone can come up with an ancient grudge to nurse. The more energy our society spends pursuing century-or-more-old grievances, the worse life will be for everyone.

edit: Evil thought - I wonder if a GrievanceFinderOMatic could be layered on top of Ancestry.com's database, and what the market for it would be?

Expand full comment
Mo Diddly's avatar

I actually think Scott’s automated land acknowledger is pretty ideal in terms of mockery. It deftly points to the absurdity and utter arbitrariness of the practice.

Expand full comment