614 Comments
Laurence:

When you recall autobiographical memories, how do you perceive them? First person, i.e. from your own perspective, or third person, i.e. from an outside viewpoint?

Bullseye:

First person for recent memories. Third person for some early childhood memories.

Laurence:

This is true for me too, but the memories I have in third person are from things I was told/shown I did as a child. So are you sure those early childhood memories are from you?

Bullseye:

They might be things other people told me, but I think they were things I told myself. I think the memory faded, becoming just a story in words, and then I reconstructed the memory from that.

Kenny Easwaran:

This is a common pattern:

"Field vs. Observer: Autobiographical memories can be experienced from different perspectives. Field memories are memories recollected in the original perspective, from a first-person point of view. Observer memories are memories recollected from a perspective outside ourselves, a third-person point of view.[1] Typically, older memories are recollected through an observer perspective,[7] and observer memories are more often reconstructions while field memories are more vivid like copies."

https://en.wikipedia.org/wiki/Autobiographical_memory

But there's more complexity to it than just time.

Arbituram:

Mostly first person, but strangely, third person for particularly memorable or intense moments? That said, my autobiographical memory is atrocious (I take my family's word for it that I was a child at some point).

Gunflint:

I'm kind of the opposite. Have vivid memories of dreams before I had language. I've been married 38 years and have been in charge of keeping my wife's family lore consistent for the last 20. "No Dennis was living in Silver City then." "Oh yeah. That's right."

Laurence:

This is interesting. I would guess that you remember things in third person, if you have an easy time remembering things about other people's lives as if they were your own. Is that accurate?

Gunflint:

No, I remember listening to the telling of the stories. I pick up the discrepancies as they are told years later. It's a bit creepy, actually.

Gunflint:

A “That’s not quite how you told it in 1988” sort of thing.

Gunflint:

I entertained my classmates at our 50th high school reunion by going through the seating chart of the 30 kids in my 7th grade home room.

Arbituram:

That feels like magic to me... I can't even remember what teachers I had!

Gunflint:

Yeah, it was a kind of a parlor trick

It helped that they seated the homeroom alphabetically from 7th through 9th grade:

russ aho

jerry bayuk

perry brown

...

sue pouchnik

bill prada

...

robert vlaisavljevich

mary zbasnik

Arbituram:

That's a very familiar conversation... My wife is our Official Lorekeeper.

Majuscule:

I worked for a while at the National Archives, and one of the hardest things was having to debunk people’s family lore. Like we’d have old men call us to ask about their Uncle Max who was a Rough Rider with Teddy Roosevelt and who used to tell tales of his daring exploits to his little nephew. The nephew, now an old man, wants to put together a history of Uncle Max only to learn that there were only like a hundred Rough Riders, their Uncle Max was not one of them, and he wasn’t even deployed in Puerto Rico until three years later.

I also once had an old guy call the Archives trying to establish that MacArthur had secretly promoted him and a dozen other guys to Major after a clandestine operation in the Pacific. The story he proceeded to tell me did not have any corroborating evidence in the Archives, but did *exactly* mirror the plot of a Cary Grant movie from 1948. This old guy didn’t give the impression of missing any of his marbles, but I strongly suspect he had subconsciously blended some real events in with the plot of this movie to create a false memory about what he did during the war. Given how stressful and traumatic wartime experiences and memories can be, I think this might be more common than we realize. Maybe this explains Uncle Max’s tall tales, too?

Kenny Easwaran:

It's actually quite common for people to have false memories of things that didn't happen, that were described to them! Elizabeth Loftus has done a lot of work on this, and this Scientific American article she wrote describes some of their experimental procedures for getting people to make up memories: http://staff.washington.edu/eloftus/Articles/sciam.htm

DavesNotHere:

1st person

Amie Devero:

Same here. It's a bit like I emerged from the womb at age 14.... My family often tell me stories featuring me as a horror.

Mercifully, I don't remember any of it.

Laurence:

I might envy you a little bit. Was there any event that caused you to finally hold on to memories at that age?

Nah:

I remember them like I am processing a very detailed report on an event with diagrams, notes on what I was feeling, etc.

Laurence:

So you're saying you recall them abstractly and not visualized as either first-person or third-person?

Pete:

Not the parent poster, but yes, I recall them abstractly and not visualized as either first-person or third-person. I get the meaning of what was said but usually not the exact words used (sometimes not even the language in which it was said) and definitely not voices, I get the "ideas" of the people and objects involved, not a picture of them (though sometimes the relative locations or directions are very strong when relevant to the memory, but again, in an abstract not visualised manner). If I try, I can put a face to the people involved, but that face is from other moments where I know that person, not specifically related to that particular memory. The internal emotional feelings can be quite vivid when recalled though.

Pete:

To elaborate, the whole question seems a bit confusing to me - to me, the notion of "first-person" or "third-person" applies to e.g. experiencing a dream but for recalling a memory it's an attribute that has no meaning because recalling autobiographical memories for me does not involve "perceiving" them, perhaps an accurate description is like "acknowledging stuff".

I.e. if I now try to recall a specific memorable event (I won't go into personal details, but will note which details are in the memory and which are not), then I remember that the thing happened *right there* in the next room, that I and another person were in that other specific spot, I remember my physical pose and emotions, I remember what I was *looking for* at the moment and the conclusion of what I saw, but not what I saw; there were some other people behind us in the room but I have no idea who (I might reconstruct the guest list of that day to get a shortlist of possibilities, but that's not part of that memory), it was the day after Christmas (but I'm not sure which year), and I remember some of the things that were said to me at that moment but not who said them or how, just their meaning.

I can try to intentionally imagine/fantasize/visualize a memory (which I usually don't) but that's something separate from remembering it, it's something done consciously *after* I've recalled it, and with a full understanding that I'm just making up stuff that I don't remember to something that might be plausible.

Nah:

Yes. For example, I will usually have a very strong sense of where things were in relation to other things, but it's like I'm remembering a plan map; and for things that I don't have a strong memory of, I get a sense that they are less certain.

I am now trying to recall a parking lot on a camping trip from last year, and I get a very strong notion of where the entrance and exit were, but if I try to recall what cars were in what spaces, the information returns as (about half full, with about half trucks) rather than anything visual.

bagel:

First person, and I also experience books in the first person viewpoint even if the book is written in the third person.

In an informal survey of me and my roommates when we discovered this about how we read, I was always 1st person, one of us was always 3rd person, and one of us it really depended. All three of us were engineers, and one potentially interesting thing is that the most social of us was the one who read things most in the 3rd person and the least social of us (me) was the one who read things most in the 1st person.

Ruffienne:

That's interesting. Books written in the first person always felt dissonant when I read them when I was younger.

I think it was because the 'I' in the book was not the 'I' who was doing the reading.

It was worst with fiction and more tolerable with autobiography where the authorial 'I' was clearly another person.

However it's become more acceptable as I have gotten older. This may say something about the development of my sense of self. Or possibly not...

Bullseye:

A bit off topic, but I really got thrown for a loop the first time I read a book that began in medias res.

Christina the StoryGirl:

Huh! Both you and bagel's comments are interesting!

I've always experienced first person viewpoint in books as no different than someone verbally telling a story. Sure, the person is saying, "I," but it's clearly a *them*-I, clearly not *me,* if that makes sense.

What happens when either / both of you are told a first person story verbally? Like, imagine a friend tells you about the car accident she had while she was driving ("I was on my way to Whole Foods to buy some Yumm Sauce. I was coming down Lincoln at 40 miles per hour when a goose crapped on my windshield right on my side of the car. It hit the wipers, but it just smeared it all around, and then I didn't see the moose coming out of the Whole Foods parking lot...").

Are you picturing yourself inside your friend's body while she's telling the story?

Christina the StoryGirl:

Damn lack of editing. That goose crap story should be, "I hit the wipers."

Procrastinating Prepper:

And then what happened???

Himaldr:

Interesting. I've vastly preferred first-person fiction since I was old enough to read.

A1987dM:

> First person, and I also experience books in the first person viewpoint even if the book is written in the third person.

Me too; often I don't even *notice* whether a book is written in the first or the third person.

Christina the StoryGirl:

If there are multiple characters in a fiction book that's in third-person, which one is the "camera" you're experiencing in first person viewpoint? Or does it change based on which character's perspective it currently is?

A1987dM:

If there's only one major character in the first part of the story, it's theirs; if there are a few, it switches between them (but not between minor characters as well -- most of them will always feel like NPCs). Only if the perspective keeps switching among more than half a dozen characters from the beginning (e.g. Scott's "How deep the rabbit hole goes") do I take a God's-eye view.

A1987dM:

BTW, on looking at "How deep the rabbit hole goes" again, I notice it's in the second person, which usually paradoxically makes me *less* likely to take the character's viewpoint than either the first or the third person, even if it's exactly the opposite of what it's intended to do.

Ruffienne:

First person for most memories. But the further back they go the less 'authentic' they feel. Possibly because they have been reviewed and re-remembered so many times?

beowulf888:

I'd agree with that statement. We retell/replay our memories to ourselves, and distortions seem to insert themselves in the retelling/replaying. And I discovered, much to my chagrin, when I started writing fiction that used autobiographical memories as a jumping-off point, that the fictional embellishments began to overlay the original "clean" memories. I now have to consciously distinguish between "clean" memories and fictionally embellished ones.

beowulf888:

First person. Visually. No narrative.

Nolan Eoghan (not a robot):

I guess you mean where the camera is. For me, in memories I am always the camera. In fact I would never have thought that there was another way - after all, my memories aren't always of something that necessarily involves me as the prime player. That time my friend fell off his bike and broke his arm when I was 12, or when my uncle was drunk at a wedding and fell into a table, are memories that I saw but where I was a minor character. To recall either would be to recall what I saw from my own eyes.

If my memory instead puts me in a place where the “minds eye” is a camera watching me watching my uncle falling over a table then this wouldn’t be a recall but a reconstruction. And I wouldn’t trust that memory at all.

In dreams I can be first person, third person or (rarely) somebody else entirely.

Laurence:

I didn't realize there could be another way either, until a friend told me all her memories are seen from a third person perspective, complete with a mental reconstruction of how she looked at the time.

beowulf888:

Interesting. I'm a first person dreamer as well as first person rememberer. I have a lot of control over my dreaming, and I wonder if that has to do with that first-person sense of self? Any active dreamers out there that dream in the third person?

Christina the StoryGirl:

Same here, you said it better than I would have.

I actually can't model how a memory could even be felt to be accurate / true * unless it was recalled from inside the body that was experiencing it.

For example, everyone who can see sees things from their own eyes, which are in their skulls and not from floating two feet over their own left shoulder or 20 feet away on the other side of the room. Their "camera" is mounted in their skull, so it's the only option to perceive anything visually and thus form a true* memory of it.

I can vividly *imagine* what I might look like from a perspective outside my body, but, given how often I'm disconcerted by photos of myself, it's clear my imagination isn't as accurate as what I actually remember seeing from my eyes inside my skull.

* with acknowledgment that memories aren't video recordings and that fidelity can devolve over time as people recall the recall, not the event, and thus no memory can ever be trusted to be fully "true / accurate."

Francis Irving:

My guess is that all memories are reconstructed from models, and even if they look accurate they aren't actually, and being first person doesn't alter that either way. Does that seem likely, or are you very confident your memories are perfect like a video recording?

Nolan Eoghan (not a robot):

People might memorise things a bit differently, and there are differing eyewitness accounts; but while there's some room there for slight ambiguity, it can be absolutely certain that a third-person view is a reconstruction.

By and large I think my memories are accurate if not as clear as a video recording.

Christina the StoryGirl:

...I literally had a footnote disclaimer saying that "...memories aren't video recordings and that fidelity can devolve over time as people recall the recall, not the event, and thus no memory can ever be trusted to be fully "true / accurate."

But since the visual input for a human experience occurs in the eyes, which are on the front of the skull, that would seem to be the "original" feed, as it were, and thus more accurate to the actual experience than a reconstruction which alters the perspective.

Again, our eyes don't float two feet over our own left shoulder, so "remembering" an event from a third person perspective where you could see the back of your own head from two feet taller than you are is in itself automatically inaccurate to what actually happened and what you were seeing.

beowulf888:

First person or third person, I think our memories are basically imagined constructs. For instance, I just ran through some memories I have from kindergarten 55 years ago. If I were to give you an offhand answer, I'd say they were pretty vivid visual memories with an audio overlay. But if I try to examine my memories in detail — i.e. pick out the faces of my peers, or remember scraps of conversation — I cannot see/hear any of the details. It's as if I have abstracted placeholders for my teacher's face, my friends' faces, and what my teacher was saying. The memories *seem* vivid, but when I try to pick out the details, they aren't that vivid. Would I be able to recognize my kindergarten teacher 55 years later if you put a photograph of her in front of me? I don't know. I don't think I could create a police sketch of her right now. Memory, at least for me, seems to be an abstracted construct that may or may not have much to do with the reality of the situation I remembered.

Laurence:

This is the experience I have of memories too. It's not that my own first-person perspective contains more, or more accurate, information about the scene than a hypothetical third-person perspective would. I mostly remember the location, the identities of the people talking, and the gist of what they said. Recalling anything more requires a greater effort, and I have to draw from multiple memories to recall people's faces (easy) or the colour of the furniture (hard). So, if by some cognitive quirk I were to recall these things in third person, I don't think they'd necessarily be less accurate than if they were first person.

beowulf888:

And I should have added, that the first or third person point of view in a memory may just be the way a person creates those abstracted constructs...

Dana:

First person. At least, I tried just now to remember a few recent things and a few childhood things, and they all came back in the first person.

NeedleFactory:

Sometimes I recall a conclusion I reached, including where I reached it and (approximately) when, but I no longer remember why/how I reached that conclusion. Would you call that first person or third person? I think it's a mixture of both.

The Goodbayes:

All first person.

Eric Herboso:

I have aphantasia, so I do not have any visual memories at all. This means they are not first or third person in the sense of asking if the camera viewpoint is my eyes.

But I do have a fairly good memory; it's just that my memory consists of a list of facts, as perceived from my point of view. Does this make it count as first person? Maybe, I suppose, because I would say "I took that action" if I remembered that I took an action. But I would guess that someone who recalled third person autobiographical memories would also say "I took that action"; it's just that in their visual memory they would remember a scene from a third person vantage point.

Because of this, I think that my answer is none of the above. My autobiographical memories are not first nor third person. (Though they do consist of a set of facts that I would later talk about using "I" pronouns.)

bored-anon:

I’m convinced aphantasia is more of a quirk in narration about perception than “actual perception”, and that the same is true for many things people say about thinking and perception and stuff like that

eldomtom2:

Maybe. My internal voice and my actual thought process are definitely two separate things - the former is much slower than the latter.

Francis Irving:

I’m hypophantasic; mainly I remember the past as lists of facts. But I have a tiny amount of mainly spatial recall of some key events, and I occasionally have very weak mental images.

Because of this and people’s descriptions of it, I’m highly confident visual imagination is a skill to do with developing an actual imagined perception.

There are studies that try to show aphantasia isn’t just a lack of metacognition (i.e. that the visual processing really isn’t happening, rather than people just not being aware of it), e.g. https://pubmed.ncbi.nlm.nih.gov/29175093/

Why are you convinced otherwise?

bored-anon:

For one, that is a priming study, and priming literally is not real, so that isn’t helping. SSC has covered priming studies before, I believe. And I looked for replications and didn’t find any.

What if what you’re saying is aphantasia is just not having “vivid hallucinations” (in general I don’t think any of the ideas in this debate are real, honestly), and everyone has “mild imagery” for imagination, and there’s just a lot of confusion in communicating it? That’s probably not exactly true but I sure can’t distinguish that from what’s happening with the idea of aphantasia

Eric Herboso:

Just to be clear, as someone with aphantasia, I have no mental visual imagery whatsoever. I do not experience "mild imagery" nor any imagery of any kind while awake, unless I'm looking with my eyes.

Francis Irving:

Thanks for that! I'm not a long time SSC reader, so useful.

Is this the kind of SSC article on priming you're talking about? https://slatestarcodex.com/2015/09/05/if-you-cant-make-predictions-youre-still-in-a-crisis/

I'll have to do a bunch more reading.

Another starting point reference for whether varying reported phantasia experiences is just meta-cognition is this using fMRI: https://www.sciencedirect.com/science/article/pii/S0010945217303209

I like the challenge from you, and the reminder that verbal reports of this are really bad. Certainly, everyone describes this stuff differently; our vocabulary and knowledge of it as a society are very poor.

James Miller:

I have Aphantasia. Until I learned about my condition it never even occurred to me that reading a book could cause anyone to see mental images.

James Miller:

I do fairly well on most kinds of tests, except for tests concerning mentally rotating an object, which seem absurdly hard to me.

bored-anon:

Does anyone claim to “see mental images while reading”? What do they say? I don’t think that’s a real concept tbh

Luke G:

Sure. Reading a novel is like a (really lo-fi) movie in my mind. Once I get going, I sometimes even stop "seeing" the words on the page--the reading becomes a subconscious activity--and my "sight" is entirely what's in my imagination.

sethherr:

Second this experience.

Nancy Lebovitz:

I pretty much don't have visual images when I read. Now that I think about it, it's more like I'm having the feelings which I think would be evoked by what's described in the story. I'm more tolerant of visual description than a lot of readers are.

fraza077:

I'm terrible at visualising while awake, but good at it when I dream. If you ask me to visualise an apple, it has no texture or colour, just a vague form. Sometimes when I manage to approximate something approaching lucid dreaming, I try to visualise an apple, and it has all the details one could want.

Nancy Lebovitz:

I don't agree-- my ability to visualize is there, but very limited compared to a lot of other people's.

When you visualize, how much of your visual field does it take up? Smallish for me, and in the center.

How vivid are the colors compared to the real world?

To what extent can you visualize motion?

bored-anon:

My personal suspicion is that almost everyone has the capacity to do, and does do, most or all of the described things, but with varying frequency and varying claimed awareness; and that since these things are purposeful rather than unique actions, there is wide functional diversity that is based on purpose, not capability.

> When you visualize, how much of your visual field does it take up? Smallish for me, and in the center

I have no idea what that means

Nancy Lebovitz:

I mean that when I visualize with my eyes closed, most of my visual field is the usual roiling gray, but there's something without sharp detail in the middle.

Francis Irving:

Visualisation typically is described as happening "on a second screen", not in actual vision.

It is possible to alter the main visual screen, especially with eyes closed like you describe. But it is much less common.

Doing this with eyes open - voluntary hallucination - is called "prophantasia". Not impossibly rare but anecdotally few people do it.

bored-anon - I agree that most people probably have capability of visualising. Just as most people can in theory swim but not everyone learns to. I think I just never knew it was possible and nobody realised so I never tried to develop the ability.

theobromananda:

> When you visualize, how much of your visual field does it take up? Smallish for me, and in the center

Visualization is happening in abstract mental space, but can be "placed"/imagined to be placed on sensory perception. If you have actual images on the back of your eyelids, these are hallucinations/prophantasia; but I am sure you "mentally place" visualized images there, as almost everyone does.

I have the same suspicion as bored-anon; some people are good at phenomenal description, and some are not - or some people have not discovered that corner of their mind. They may not connect to the visual module in their abstract mental space, but they may "imagine" using meaning, or using the tactile sense etc.

David Piepgrass:

You're basically telling a person with aphantasia that they don't have aphantasia and offering zero evidence. Rude.

bored-anon:

Yes. People being wrong about mental states and processes is essentially the rule, not the exception (see the entire history of psychology and religion and new age stuff - even if some of it is entirely correct, the other 95% is dead wrong...). “You’re basically telling a servant of god that their spiritual rock doesn’t exist? Rude.”

To not be rude though, it is just a suspicion, and I didn’t offer much evidence. But the diversity and evolution of people’s statements here across history, as well as the mild incoherence, does lead me there

Nancy Lebovitz:

First person.

Erlend Kvitrud:

I mostly recall memories in first person (sometimes distant memories are in third person), but imagine future events in third person.

Francis Irving:

I have a Severely Deficient Autobiographical Memory (SDAM) and don’t remember scenes from a viewpoint.

There are one or two rare exceptions where I have a spatial memory and then it is third person I think.

Instead I remember facts about events, things that events made me learn, emotions I felt at times, and locations where things happened.

Majuscule:

Does this sort of condition affect your ability to meet a friend, e.g. “at the same Starbucks where we met last time”? Do you recall that it was a Starbucks on the right side of the street outside the train station, or do you need to look up the address/location every time?

Francis Irving:

I’ve got some spatial memory and imagination so would probably remember where it was. Also when I saw it I would recognise it.

I wouldn’t remember a scene from being there with the person and what we wore and audio of what we said. I would perhaps remember the topic we talked about.

I also wouldn’t remember walking up to the cafe from the station, or similar visuals.

Mr. Doolittle:

Usually first person, sometimes both first and third, rarely third person only. I have come to recognize that some of my early memories were affected/recreated by hearing stories or particularly looking at pictures of the events. In those cases, it is more likely that I view it as third person, even if I think I truly remember the actual events separately from seeing pictures later.

Aftagley:

Almost always 3rd person for anything outside of what I'd describe as short-term memory.

That being said, my memory is generally pretty abnormal.

Zohar Atkins:

Mega thread on Franz Rosenzweig and the relationship between philosophy and theology

https://mobile.twitter.com/ZoharAtkins/status/1420830864775290883

bay area throwaway:

I recently graduated from college and moved to the Bay Area. I don't know anyone here, so I'm looking to meet people and make new friends.

Any advice? I'm open to Bay Area-specific advice, rationalist-community-specific advice, and generic young-person-in-a-new-city advice.

Scott Alexander:

You can probably find some rationalist meetups near you on https://www.lesswrong.com/community . Also consider getting on David Friedman's mailing list for his South Bay meetups (I can't remember how to do that, but potentially email him at the address on the bottom of http://www.daviddfriedman.com/SSC%20Meetups%20announcement.html ). Right now our local meetup infrastructure is pretty devastated after the COVID pandemic, but I'm going to be trying to rebuild it later this month, so watch this space.

Kingsley:

Meet people through activities. Meeting people just 'around' is much less a thing than it was in college and before.

Go on group bike rides with a local club, show up to a board game night listed on meetup.com, volunteer for a workday at a local preserve, attend a Less Wrong meetup, join an improv group, take a tennis class through your park system... whatever it is for you. You might show up and it's all weirdos. Try again a few times; you might find that the same basic activity has weirdo groups as well as chill, un-awkward groups.

Dan Pandori:

+1 to this. I'll also note that it's personally been helpful to view going to groups full of randos as 'hits based friending' (https://www.lesswrong.com/posts/pfibDHFZ3waBo6pAc/intentionally-making-close-friends#Hits_based_Befriending). Most people you meet you will not click with and that's totally fine. If you meet 30 people at Friday night magic and the breakdown is 20 are terrible, 9 are OK, and 1 is coming to your next board game night because you hit it off, then that's a success!

SurvivalBias:

A hot(?) take: the LW community section for the bay area has been fucking dead for fucking years; unless you're ready to drop everything and go to a meetup halfway across the bay on something like Tuesday 2pm, you'll just spend months waiting. ACX/SSC meetups are good, especially the ones David Friedman runs, but also rare. People I trust keep saying that EA communities are invariably full of interesting people, and at least prior to covid there seemed to be a fair number of weekly meetups, but somehow I never got around to checking them out, and now they're mostly gone.

Of the more general advice, there seem to be a ton of hiking groups around; you might try your luck with those. I personally find the setting of a short hike nearly ideal for making this kind of connection - there are usually few enough people to get to know everyone, you're stuck together for a few hours, and you have enough going on to make the occasional silence not-so-awkward and to provide conversation starters, but not enough to actually occupy much attention. It's very comfortable to talk (as opposed to a noisy bar, or when you're seriously exerting yourself).

Also I'm in the same process of finding friends around here, so if you're interested and maybe if we can find one or two other folks to join in, I can take the lead in coordinating a small meetup.

Expand full comment
imoimo's avatar

There’s a weekly in-person meetup in the east bay (see Scott’s community page link and the linked google group) but otherwise ditto Kingsley’s advice. Meetup.com is the best resource in current year for meeting people in general.

Expand full comment
Metacelsus's avatar

I'm starting a post series on my blog called, "The Human Herpesviruses: Much more than you wanted to know." They're really quite interesting, and not in a good way.

The intro is here: https://denovo.substack.com/p/the-human-herpesviruses-much-more

Right now it doesn't quite live up to the title of "much more than you wanted to know", but believe me, I will deliver on this promise. I will publish my second post, about herpes simplex, sometime in the next few days.

Expand full comment
hi's avatar

It's not a bad article so far. I sat through the whole thing. Good job.

Expand full comment
Deiseach's avatar

I'm very interested, since I got the cold sore virus as a child from *somebody* (can't pin down which elderly relative who wanted a kiss to blame for this) and umpty-years later, it *still* kicks in when I'm run-down/stressed.

Expand full comment
Faze's avatar

The herpes presentation was well done. I feel like it gave me a good elementary grounding in the topic. Looking forward to your future posts.

Expand full comment
Thoroughly Typed's avatar

Thanks for writing this! Really looking forward to the next parts. Particularly interested in VZV (due to suffering from it myself occasionally).

Expand full comment
Cassander's avatar

You should post these on DSL - it would make a great effort post series

https://www.datasecretslox.com/index.php?board=1.0

Expand full comment
MB's avatar

A couple of times on ACX, I've seen reference to the idea that nuclear energy failed to reach its full potential primarily due to regulatory burdens. That is, nuclear power plants were unfairly viewed as especially dangerous or harmful to local communities, which led to the creation of onerous rules around the operation of such plants that made them noncompetitive with other energy sources.

If this were true - that nuclear is a superior form of energy generation which was stifled in the US/Europe due to a bad reputation - then wouldn't we expect to see China leaning very heavily on nuclear as compared to other energy sources? They certainly do have some nuclear, but according to Wikipedia it looks like they only get 5% of their overall energy from it vs 20% for the US. I don't really have any background on nuclear power but I'd be curious to hear from other people who know a lot about it (especially those who are sympathetic to the view I referenced).

Expand full comment
JohanL's avatar

China is trying to expand their nuclear power, but they're not immune to post-Fukushima protests by the public. Even so, it's consistently increasing as part of electricity generation.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
gmt's avatar

Stifling protests takes some amount of political power, of which they don't have an infinite amount. If an issue is not important but does face opposition then they'll probably give in so that it doesn't agitate the people and lead towards more serious protests later.

I don't know that that's the case for nuclear energy, but the CCP isn't omnipotent even with their horribly invasive monitoring software.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
gmt's avatar

I agree that they have a lot of it. I'm not convinced that they have specifically expended it on protecting nuclear power, given that China does not have many nuclear power projects. Their amount of nuclear energy is increasing, but it's still a tiny portion of their total electrical consumption (5% as of 2019, according to Wikipedia).

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
Lambert's avatar

> Their amount of nuclear energy is increasing

As opposed to much of Europe, where there are active commitments to phase it out in favour of ~~Braunkohle~~ ~~Nordstream II~~ solar and wind.

Expand full comment
JohanL's avatar

Agree. China can stifle any few popular protests they feel like, but it does mean an expenditure of a limited political capital. The party has to pick its fights, just like everyone else.

Expand full comment
bored-anon's avatar

They aren't. Protests and resistance from many groups still matter to them, and they still respond. There are millions of different ways one can oppose some aspect of China's current actions, and the CCP neither can nor does suppress all of them.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
bored-anon's avatar

The news media and its consequences have been a disaster for American citizens' understanding of literally everything

China has plenty of criticism of the CCP internally, and many people who oppose some organ of the government on individual issues remain unincarcerated and even sometimes win! Opposing nuclear power is neither “democracy” nor “kill Xi”, and many people opposing nukes does exert influence via a variety of channels: party officials (being people themselves) opposing nuclear, local governments opposing it, or enough people protesting that the political cost becomes too high (it's not like all the political capital and time and people in the CCP will be spent on nuclear...). China does molest some of its citizens for dissent, but not all of them, and not for all forms of dissent, including procedural dissent, as you seem to imply. They do it more than here, but they're not purging millions.

Reading 500 news articles about how China (1 billion people) is specifically oppressing the weegees or the ex-British imperial colony doesn't actually tell you much about how China operates in other areas lol

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
bored-anon's avatar

To be direct, we were talking about anti-nuclear sentiment and protests. You suggested they were immune to that, as if they'd actively suppress anti-nuclear sentiment just because they're evil and totalitarian. You then said that "if you criticize anything of importance to the communist party ... hello internment, brainwashing, public apology". While they do that in some cases, and it's considered "bad", apparently, are they really suppressing everyone who disagrees with a single point on their thousand-item policy agenda? That's absurd.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
Spookykou's avatar

Hopefully this will not get anyone I know brainwashed, but I was working in China this past year and spent a good deal of time with several Chinese people. In particular I was mostly with rich people, so maybe the rules are different, but they regularly complained about the government. Information they shared regularly got censored or otherwise restricted on WeChat (sometimes stuff disappeared off your phone, sometimes you simply could no longer forward it to anyone else). This did not result in any direct actions being taken against them as far as I could see. Even the middle-class Chinese people I interacted with often had at least some VPN access/foreign news/complaints about the CCP.

Expand full comment
Pycea's avatar

Not being an expert in any way, my impression was that China used/uses lots of coal for power, which is probably cheaper for them. Another thing is that renewables like solar are a pretty good value compared to nuclear in the present, but that's only been the case for the last few years. It was possible to switch to nuclear in the 60s, but instead we've had an extra 50 years of fossil fuels, which is what a lot of people gripe about.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
nelson's avatar

Consistently available is being deployed. Storage is getting cheap.

Civilizations have been run on far less rich energy environments. Can you not have a civilization without SUVs?

Oil in particular is kind of iffy in terms of availability. Look at the price fluctuations every time there's a hiccup in the Middle East.

Yesterday a random person parked next to me gave me a ride in their electric Fiat. Even with only 100 miles of range, she loved it - particularly the performance. She charges it from her rooftop solar. She seemed pretty civilized.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
Carl Pham's avatar

I wouldn't say storage is getting cheap. Cheaper, perhaps. But the numbers are still daunting. A home 300 gallon above-ground tank full of diesel fuel costs about $2,000 installed and can store about 45 GJ of energy, for a unit storage cost of $44/GJ.

On the other hand, a Tesla Powerwall costs $10,000 and stores about 0.050 GJ of energy, for a storage cost of $200,000/GJ - a difference of roughly a factor of 4,500.
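A quick back-of-envelope check of the comparison above, taking the prices as quoted in the comment and assuming a Powerwall capacity of ~13.5 kWh (which matches the ~0.050 GJ figure):

```python
# Storage-cost comparison sketch, using the numbers from the comment above.
# Assumptions: $2,000 installed for a 300-gallon diesel tank holding ~45 GJ;
# $10,000 for a Tesla Powerwall at ~13.5 kWh usable capacity.

KWH_TO_GJ = 3.6e6 / 1e9  # 1 kWh = 3.6 MJ = 0.0036 GJ

diesel_cost_per_gj = 2_000 / 45                     # ~$44/GJ
powerwall_gj = 13.5 * KWH_TO_GJ                     # ~0.049 GJ
powerwall_cost_per_gj = 10_000 / powerwall_gj       # ~$206,000/GJ

ratio = powerwall_cost_per_gj / diesel_cost_per_gj  # ~4,600x
print(round(diesel_cost_per_gj), round(powerwall_cost_per_gj), round(ratio))
```

The ratio comes out near 4,600 rather than exactly 5,000, but the order-of-magnitude point stands either way. (This compares raw stored energy only; conversion losses, discussed downthread, would narrow the gap somewhat.)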

Expand full comment
nelson's avatar

Powerwalls are not the storage we're looking for, or discussing, when it comes to evening out renewable generation at the grid level; nor are the facilitating technologies I've referred to lithium-ion based. But were I looking for personal electric self-sufficiency, I might want enough storage to cover the freezer and fridge during an outage - and the last thing I'd want around the property would be 300 gallons of diesel. That tank alone would set you back thousands: >$3000 for just the tank, then the install. One quibble: I don't believe you took into account that most of those gigajoules from diesel go out the stinky exhaust as heat. Electric guy thinks your diesel doesn't make sense.

https://mrelectric.com/blog/is-running-a-generator-more-expensive

Expand full comment
Carl Pham's avatar

(1) Okay, what *are* the methods of storing electricity you are imagining, then? Happy to look at those numbers if they are a factor of several thousand better than a Powerwall, or any other battery tech.

(2) What's your objection to 300 gallons of diesel? People have been storing home-heating oil on the premises for decades, and I've never heard of any major problems. People out in the sticks have giant propane tanks on their property, and aside from being kind of ugly (since you have to put them where the truck can get to them), I haven't heard of anything scary about them. Have you? If so, what?

(3) Here is a quote for a 250 gallon above ground diesel storage tank for $1875:

https://www.mylittlesalesman.com/above-ground-storage-tanks-for-sale-i21c870f0m0?gal=250

Dunno what the install would be, though. I can see another grand, but probably not a *lot* more than that. And obviously if you buy a bigger tank your price per GJ goes way down.

(4) Of course I did. I used the net (free) energy density of diesel, which has been measured any number of times and can be found in many places. Wikipedia has a nice list here:

https://en.wikipedia.org/wiki/Energy_density

And even if I was off by a factor of 2-5, it wouldn't change the basic comparison at all.

(5) I don't think you've interpreted Mr. Electric correctly. He's saying *grid power* is cheaper than generator power, and of course it would be, since grid power benefits from massive economies of scale, not to mention far improved thermodynamic efficiencies available to the gas/oil/coal furnace burning at way higher temperatures than the ICE on a generator can stand.

Expand full comment
David Piepgrass's avatar

An electric with 100-mile range sounds perfect for the likes of me (anything more costs too much). But the topic was renewables vs nuclear vs fossil fuels. China has so much coal pollution that they'd like to build out more clean energy just to get cleaner air. But land area is at a premium in China, so nuclear (with some wind power) is the obvious choice, to the extent they decide to get serious about clean energy.

They have been building coal plants prolifically. I've heard that it was basically some sort of government Goodharting exercise and all that capacity will not necessarily ever be used; even so, coal power usage was still rising as of 2018: https://www.eia.gov/international/analysis/country/CHN

The link says "China’s government anticipates boosting the share of natural gas as part of total energy consumption from almost 8% in 2019 to 10% by 2020 and 14% by 2030 to alleviate the elevated levels of pollution", and boosting gas is great for reducing air pollution though it won't stop global warming (natural gas produces about half as much CO2 emissions as coal, which is how the US reduced CO2 emissions without really trying: the fracking revolution undercut coal on price. But I expect CO2 levels will not rise much more slowly, since CO2 accumulates in the atmosphere and the reduction in coal is offset by quickly-rising global demand for other fossil fuels.)

Energy storage is not cheap yet, except some tech like pumped hydro which can be cheap if the geography is right (China has lots of mountains, maybe that helps.)

I think China is very well positioned to make nuclear successful if they choose to. Remember how the Banqiao dam catastrophe in China killed over 100,000 people and created a worldwide backlash against hydro-electric power, just as Chernobyl did for nuclear? No? Right. The ability to control public messaging is a powerful thing. Far more people remember the Three-Mile Island meltdown that killed no one but enabled a lot of rumors and narratives. So if the censors decide that nuclear fearmongering isn't allowed, I suppose that could almost take care of people's fears all by itself. The great thing about nuclear - not, I suppose, that it matters much to China - is that a lot of Gen IV technology is objectively great stuff that can achieve high safety and good price simultaneously.

Expand full comment
David Piepgrass's avatar

(I just remembered it's an odd-numbered thread... hope I haven't said too much here)

Expand full comment
Nah's avatar

Nukes' PITA factor is understated. Those shits are hard to build and hard to run compared to basically every other type of power generation, in exchange for needing very little in the way of maintenance and fuel, kWh for kWh.

These days, a traditional nuclear boiled kettle doesn't really make sense to build, and nobody is gonna lay out the $$$ for a prototype of a modern variety 'cause wind and solar are so god damn cheap

Expand full comment
beowulf888's avatar

Do you think it might make economic sense if we did like the French did and standardize reactor plant design and components? Just wondering.

Expand full comment
bored-anon's avatar

I’m not sure that’s true in the way you imply. It’s likely that a combination of regulatory and corporate changes and ideas would lead to nuclear being developed and competitive IMO, even if neither are here now. And it probably could bring long term average costs down significantly to be competitive with at least where solar and wind are now.

Expand full comment
Nah's avatar

That's the problem.

There is no way to beat renewables/gas using existing, proven tech, regardless of changes to regulation.

Thus, no corporation is ever even gonna try. Why take a gamble on MAYBE breaking even with solar after spending massive amounts of money on a risky prototype for an unpopular technology, when you could just throw down 80 solar/wind farms for the same price and make money right now?

The only way we are ever gonna get advances in the field is through state action, and now that the Soviets are gone and the arms race is paused, nobody is willing to spend the stupid amounts of cash needed on basic research.

Expand full comment
David Piepgrass's avatar

Traditional nuclear plants were once affordable, and if we were willing to accept 1970s plant designs, they could be just as affordable again without using any new technology: http://www.phyast.pitt.edu/~blc/book/chapter9.html

Of course, once the price of nuclear plants was hiked up, we mostly stopped building them, and so the nuclear fleet consists mostly of 1970s reactors. Which means that we've had 50 years of global experience with that technology and we know approximately what to expect from it if we were ever willing to build those again: roughly 1 TMI (zero fatalities; accident avoidable by improving control room procedures), 1 Fukushima (virtually all fatalities were caused by the questionable relocation; accident avoidable by not putting every last emergency generator in the basement), and 0 Chernobyls (it seems to me that the flaws in that reactor must have been mostly intentional in order to produce a reactor that was simultaneously cheap and versatile, but those risky decisions were not replicated outside the USSR, and there is no USSR anymore so we can assume no more of those RBMKs will be built.)

The numbers look good. In fact, if every single reactor in the world had a Fukushima-style meltdown, the outcome could still be better than the coal-fired world we live in: https://www.reddit.com/r/nuclear/comments/jtm6hm/how_bad_is_meltdown_world/

But we can do much better than traditional reactors by building Molten Salt Reactors. Apart from a high cost for corrosion-resistant steel, a simple MSR uranium burner should be cheaper in every respect than traditional solid-fuel, water-cooled reactors, especially in the modern "ultra-safety" regulatory environment. The fact that MSRs can use conventional turbine technology instead of special giant turbines is a bonus.

Expand full comment
Brett's avatar

Even aside from the regulatory issue, nuclear power plants are complex, expensive beasts - that's why they're pricey on the military ship use side of things as well as the civilian power side. They benefit a LOT from doing mass roll-outs, which lets you build up skilled personnel and capability and then spread that usefully across a lot of projects.

That's why they tend to be done en masse either by governments or quasi-monopolistic utilities, and why the cheapest plants were when the US, the French, and the South Koreans did large scale deployments of plants (it's harder to tell with the Soviets, although the RBMK reactor was supposed to be large and cheap to build and operate).

This isn't limited to nuclear, either. One of the problems with industrial policy in general and some other areas in particular (such as shipbuilding) is that even if you subsidize such industries, you're likely going to have problems keeping them solvent without massive subsidies onward if they don't have sufficient order flow.

Expand full comment
beowulf888's avatar

Yes, the French and South Koreans standardized their plant designs. That makes it easier to train personnel and makes safety inspections easier, both of which improve overall safety. It also lowers the cost of construction, since vendors can standardize their reactor and system components. The US let power companies do their own thing, which in retrospect probably created all sorts of downstream issues. France (correct me if I'm wrong) has only had one accident that reached level 4 (out of 7 possible levels of seriousness), and a half-dozen other minor accidents or issues over the decades.

Expand full comment
Stompy's avatar

I also don't have much of a background on nuclear power, but I did see this figure recently which tells me they're definitely moving more in that direction than other countries. They've been building reactors like crazy since around 2006. https://twitter.com/Jason/status/1413184079390920704

I think part of it could just be that transitioning to nuclear takes a considerable amount of time. Wouldn't be surprised if that 5% number goes way up in a few years.

Expand full comment
John Schilling's avatar

China's nuclear reactors are almost all license-built foreign designs, mostly Westinghouse (US) and Framatome (France), and at least until recently depended on the import of critical components. As with e.g. jet engines, microchips, and vaccines, China's technology base is in many areas a generation or more behind that of the fully-developed world, and sometimes it's just not economically viable to build the old stuff in the name of autarky.

Self-sufficiency in power generation is important to China, and they are working towards a wholly domestic nuclear energy capability. But for now, they meet that requirement with coal-fired power plants plus some hydro, solar, and wind power where applicable, all of which they can build very nicely on their own. Their 5% nuclear capability is probably mostly to maintain a nuclear industrial base that they can scale up when they are ready. We'll have to see what they actually do.

Expand full comment
Mr. Doolittle's avatar

As I understand it, nuclear has two primary problems. 1) Very high start up cost, and 2) Very high technological requirements, typically staffing issues hiring enough properly qualified personnel.

Those two things would present themselves in a developing country far more than in an established country like the US. It's an oversimplification to be sure, but it is true that regulations on nuclear energy are a big burden, arguably making new plants significantly more expensive than necessary. "Necessary" is doing a lot of work in that statement, and presumably many environmentalists and the people making the regulations would both say that they are perfectly reasonable and necessary precautions.

I think an answer to your question would be to look at the number of new nuclear power plants in a country, and the age of current plants. The US has a lot of old nuclear plants and not many new ones. The US topped out at 112 plants in 1990, and has slowly declined in number since then. China has a lot more new plants or in construction, and already has more than half of the capacity of the US. They didn't really get started ramping up their nuclear program until around 2010, with their program looking more experimental/testing until at least 2002. Those metrics seem to fit the narrative of China less burdened by regulatory costs. They just started later and needed time to learn how to do it.

Expand full comment
The Chaostician's avatar

I agree with several of the other comments: nuclear power has high up front costs and difficult technology, not all of which China manufactures domestically. There is another factor I want to point out:

Nuclear power is over-regulated in the West, but might be under-regulated or mis-regulated in China. China doesn't have a particularly good safety record for things like high speed rail or large dams (or infectious disease labs?).

The pro-nuclear argument is that we could have a regulatory regime that both prevents nuclear disasters and allows for widespread nuclear power. If China does not have the ability to ensure the safety of other big, high-tech projects, then they should be more cautious about expanding nuclear power.

Nuclear with Chinese regulations is still probably better than coal with Chinese regulations, but it's not as clear of an argument as it is in developed countries.

Expand full comment
Medieval Cat's avatar

Some people are arguing that South Korea is doing this: They seem to be building nuclear at a magnitude lower cost. But the numbers are debated, of course.

Expand full comment
Mystik's avatar

The very very short version of what I'm about to say could be summed up as "maybe body builders don't stretch much because it shapes their muscles in a way they don't like"

Okay getting into the long version. This summer I took a Cha Cha class (a type of dance). My instructor was experienced, and at one point she told me "You should always stretch after dancing because it makes your legs beautiful." Clarifying, she told me that stretching reduces the bulges in muscles, so it smooths out the appearance of the legs, which is considered more aesthetic in that style of dance.

This made me think of the Metis and Bodybuilders article by Scott, where my general takeaway on stretching from the article and comments was "the effects are murky, but it's not popular among bodybuilders." It struck me that if my instructor was correct, this might explain a bit; maybe stretching doesn't have as big of an impact on reducing muscle growth/strength, but it changes muscle shape in a way that makes body builders feel like they aren't making progress (because it reduces the classic bulging of the muscles).

How confident am I in this hypothesis? I'd say I give it maybe 25% chance of being true. From my anecdotal experience with dancers, 1) dancing seriously is a brutal workout for the legs, and 2) I haven't ever noticed the classic bulgy jacked legs look.

Also, in every dance course I've ever done, there was an amount of stretching involved that went far beyond what was necessary to perform the moves in the course. I think a lot of "lore" knowledge exists in disciplines, even if it's no longer understood why rituals are done. It seems plausible that part of the reason for excessive stretching could be this. Or it could be something else and my instructor was wrong.

My search for corroborating information was inconclusive, since every article I found was either over my head, or along the lines of "GET YOUR TUSH INTO SHAPE WITH THIS WEIRD STRETCHING AND LIFTING ROUTINE." I never claimed to be good at lit reviews, especially outside of my field.

What do y'all think? Has anyone else heard of this phenomenon, either in official sources or in semi-folklore of dance/related fields?

Expand full comment
TheGodfatherBaritone's avatar

First time I've heard this. I think bros just don't like stretching. I'd be surprised if stretching had much aesthetic effect; I've never noticed one.

Expand full comment
Erica Rall's avatar

Yeah, the conventional wisdom I've heard in the strength training community is that static stretching is mostly a waste of time unless you don't have the range of motion to do the lifts you want to do. Opinion is divided on dynamic stretching (calisthenic exercises that include an element of stretching) with some advocating them as part of a warm-up routine or as a light standalone exercise to do on recovery days.

Expand full comment
Emily's avatar

My experience is that a lot of people have wildly inaccurate beliefs about the effect that different types of movement can have on your body, and this includes people in the fitness industry.

My guess would be that dancers don't have jacked legs because professional dancing selects for certain (genetically-influenced) body types and that the combination of the movements, eating, and let's not forget chemical supplements aren't optimizing for muscle gain the same way as bodybuilding. I doubt stretching makes the top 5 reasons. And how hard a workout is on your legs doesn't have much to do with how optimized it is for muscle gain.

Expand full comment
Emily's avatar

(In women's fitness, there is all of this stuff about how you can get long/lean muscles from dancing or pilates, in contrast to lifting. This is a myth. The shape of your muscles does not change, they just get bigger or smaller.)

Expand full comment
TheGodfatherBaritone's avatar

Yea I’ve noticed there’s a tendency to adopt an illusion of control with respect to programs/routines. I’ve come to realize there’s not really much you can control besides consistent effort.

Expand full comment
psmith's avatar

The classic Dante Trudel Doggcrapp forum posts were very big on loaded stretching, often to the point of considerable discomfort, as I recall. I believe he thought it could elicit muscle fiber hyperplasia.

I would characterize Dante as somewhat heterodox, but a respected source of information, in general.

Expand full comment
everam's avatar

I would be hugely surprised if this was the case. For one, pro bodybuilders used to be massively *for* stretching, going so far as to do painful weighted stretches lasting over minutes. Many pros are surprisingly flexible, easily able to put palms flat on the floor and do the splits.

The recent movement against stretching is due to several studies demonstrating the inefficiency of static stretching, causing strength athletes to gravitate towards "mobility" instead.

The reason dancers aren't jacked is because they don't have the muscle. You need to do specific workouts, with specific nutrition to build muscle. Just getting the legs sore isn't enough.

Expand full comment
Mystik's avatar

Thanks, I find this point pretty convincing.

Expand full comment
Amie Devero's avatar

I have opinions on this based on two sources. One: I was a professional dancer and then a competitive athlete - and, for a number of years, a trainer. Two: I have a master's in exercise physiology.

Stretching has multiple benefits, which don't include streamlining your muscles. But it helps you avoid injury, keep functional capacity as you age, and maintain the mobility to accomplish other kinds of athletic endeavors.

With respect to getting long, non-bulky muscles, that has more to do with the kind of strength training you do than with the stretching. Bodybuilders are looking to build demonstrably bulky muscles; dancers are looking for long, non-bulky muscles. Therefore, they have different strength regimens. I think the primary reason bodybuilders don't stretch is that it's uncomfortable. Moreover, bodybuilders are competitive in shaping their bodies to look a particular way, and since stretching, from an efficiency standpoint, does not contribute to that goal, it has no role.

Expand full comment
Tom Bushell's avatar

Amie Devero, can you offer some expert advice on warmups and stretching for the hands? (I'm asking as an amateur musician re-learning the guitar)

Playing a musical instrument - especially the stringed instruments - seems to require a mix of stretch, dexterity, endurance, and relaxed strength that I find quite elusive. I gave myself tennis elbow at one point by over practicing with bad technique. This lasted a year before I discovered trigger point therapy.

I’m wondering if there is any solid science that addresses this.

Expand full comment
Nancy Lebovitz's avatar

If you'll excuse me for stepping in, I'm finding out that what looks like a problem in one place may be caused elsewhere, so even if your hands hurt, you might still want to work on your torso.

Expand full comment
Amie Devero's avatar

That's really interesting. In the same way that your neck affects your hands and carpal tunnel, that may be a factor. I'm going to look into that. Thank you for the suggestion.

Expand full comment
Tom Bushell's avatar

I’ve had similar experience. My tennis elbow happened because I was holding a finger poised above the fretboard in a state of high tension for long periods.

I had no idea that the muscles that flex and extend the fingers were near the elbow, but that’s where I experienced the pain…several joints away from the offending digit. Self massage with a hard rubber ball was the treatment that worked.

“The Trigger Point Workbook” by Claire and Amber Davies has been a game changer for me when diagnosing and treating pain.

Expand full comment
Amie Devero's avatar

It's funny you ask that. I wish I had something useful to contribute. I have just started playing violin, and I'm about 3 months in. My biggest problem is pretty much what you're describing. And honestly, the only thing I have really found that helps me is stretching my fingers at the beginning, sort of spreading them and then clenching a fist, as a warm-up and stretch. I've also been icing them a bit because my hands look swollen and feel stiff after a practice. And at the moment, I'm keeping my practice sets to about 30 minutes. I'm hoping to build some stamina in the muscles and ligaments of my hands.

I suspect that the research would say much the same as what it says for other areas of the body. But that is pure speculation.

Expand full comment
Tom Bushell's avatar

Good for you, though I think that the violin is even more difficult than the guitar.

Swelling requiring icing is a little alarming. I’m wondering if you might be practicing with way too much muscle tension, like I was the first four or five times I tried and failed to learn guitar.

I’ve found that very slow practice, making chord shapes with fingers hovering above the strings, or very lightly touching them without pressing them, seems to work well to learn proper relaxation.

Expand full comment
Amie Devero's avatar

Well, I'm a little bit older than most of the people in this thread, I suspect. And there is a certain awkwardness to how you hold your left hand to reach the further frets. So a bit of arthritis coupled with over-practicing is likely the source. I'm not too worried about the swelling because, as I have some arthritis in my hands, it's not that unusual.

But I will say, I highly recommend trying out the violin. It's the most fun I've had in a long time. :-)

Expand full comment
Aftagley's avatar

I have always been told that stretching negatively impacted muscle elasticity and decreased your ability to perform efficiently over a limited range of motion. (I.e., if I'm a runner, I only need my leg muscles to be able to make the movements required during running; training them to stretch further than I'd need during running only makes them less efficient at running. Thus, you do a lot of stretches for the particular range of motion you need, but don't do much outside of that.)

I have no clue if this is true, but it's been a pretty common belief across most coaches and athletes I've trained with.

Expand full comment
Amie Devero's avatar

There's a little bit of truth in that. Lots of runners and other athletes believe that stretching before running etc. is a good idea. That is absolutely untrue.

The time to stretch is after a warm up. So if you're out for a run, run at a slow pace for 5 minutes, then stop and gently stretch. The muscles are warm enough to avoid any damage to the muscle fibers-- and the stretching will improve your gait as you will have more range of motion. (One caveat. I was a competitive runner and triathlete. So, I know that this is highly unlikely. But, it is the prescribed way to do it. Neither I nor most runners do it.)

With respect to whether you ought to stretch generally, there is a huge amount of consensus within the exercise science community that you absolutely should stretch. In fact, there are yoga classes for runners. But, stretching beyond your capacity, or doing extremely deep stretching on your gastrocnemius, soleus, hamstrings or quads--the primary muscles for running --is not a good idea. But gentle stretching is very good for your performance.

If you research the habits of elite runners, you will find that their training regime includes stretching.

Expand full comment
cdh's avatar

Stretching does not change the physical structure of muscles. https://pubmed.ncbi.nlm.nih.gov/28801950/

Stretching does not reduce injury risk. https://pubmed.ncbi.nlm.nih.gov/24100287/

Expand full comment
cdh's avatar

EDIT: muscles or tendons.

Expand full comment
Mystik's avatar

Cool, this was exactly the sort of article I was hoping someone would know of. Thank you very much

Expand full comment
Amie Devero's avatar

Finding two articles that confirm your presupposition is not what I would call research. We could get into a competition of scholarly articles. Here are just two.

https://www.tandfonline.com/doi/full/10.1080/15438620802310784

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3273886/

In order for stretching to be useful it does not need to change the composition of muscles, it only needs to facilitate their optimal performance.

There are hundreds of articles documenting research showing the benefits of stretching. But as in all research, you can always find the outliers if you're looking to confirm your belief.

The only real controversy within the exercise science community is over what kind of stretching is best, not whether it's useful.

Expand full comment
cdh's avatar

First, the two meta-analyses I provided do not confirm my presupposition. Up until 3 or 4 years ago, I had the very strong opposite presupposition that stretching was important for performance, recovery, and injury prevention. I was a competitive athlete and probably stretched every day of my life (sometimes more than once) for a decade and a half straight during my competitive years. Reading into the studies has caused me to change my thinking on stretching.

Second, your links don't seem very convincing to me. The first link you provided concludes that: "There is moderate to strong evidence that routine application of static stretching does not reduce overall injury rates. There is preliminary evidence, however, that static stretching may reduce musculotendinous injuries." The second link you provided concludes: "Stretching has not been shown to be effective at reducing the incidence of overall injuries. While there is some evidence of stretching reducing musculotendinous injuries, more evidence is needed to determine if stretching programs alone can reduce muscular injuries."

Third, stretching has been shown to decrease performance and strength, as your second link points out. "Unfortunately, however, static stretching as part of a warm-up immediately prior to exercise has been shown detrimental to dynamometer-measured muscle strength and performance in running and jumping." So I don't think your insinuation that stretching can help facilitate optimal performance is correct.

Fourth, I want to point out that I'm talking about static stretching, not dynamic stretching.

I think that if static stretching makes you feel good, you should do it. But I wouldn't do it because of any proven benefits.

I save about an hour a week by not stretching, and I think the cost-benefit calculus of that trade-off is in my favor.

Expand full comment
Amie Devero's avatar

I have no idea when this thread became about static versus dynamic stretching. If that was ever raised as a distinction, I missed it. Yes, over the years there has been a movement from static to dynamic stretching. Had that ever been a point of curiosity in this thread, I would have addressed it.

One interesting technique that is somewhere between dynamic and static is PNF stretching (which is static, albeit with force exerted during the stretch). It has remained a significant component of athletic training.

An aspect of this and other rationalist communities that I find quite bemusing is how frequently people act like they are experts because of some personal (usually body-hacking) experience + the fact that they can do a Google Scholar search.

Anyway, I think I'm going to bow out at this point. Those interested in this can take their information from a single anecdotal report and two articles, or from someone who actually has education in this and has read much of the research on it and been involved in some of it.

I get the feeling that folks simply want a good justification to not stretch. That's fine with me.

Expand full comment
REF's avatar

Being educated in this topic is not a ringing endorsement. The health industry has been teaching absurd, outdated and soon to be outdated ideas for ages.

The most likely person to give you out-of-date advice on fitness is someone who went to school for it. If you want to be credible, keep up with recent advancements and don't pretend you are sharing facts. You aren't.

At best you're sharing some version of the latest widely-held position. Proclaiming certainty makes you look like either a charlatan or a fool.

Expand full comment
Amie Devero's avatar

Your scathing critique of the health industry has little to do with people who are in the field of health research. You can find faults, mistakes, and missteps in every industry. But most importantly, the health "industry" is not the same as the health field. You should ask people with terminal diseases whose lives have been prolonged whether they believe health research is corrupt or contaminated.

As to your other point, why do you assume that I have not kept up with current research? In fact, I specifically said that I had.

Moreover, I have participated in some of that research. And I've not stated anything as fact. I have shared what I know, and what the current research says.

Moreover, the conversation got profoundly distorted when my comments were interpreted as saying that the only stretching I was talking about was static stretching. I was not.

Plus, the original question was not solely whether stretching is valuable, but whether it produces lithe, long muscles. And if you read my response, it is that question that I initially addressed.

Expand full comment
Schweinepriester's avatar

Difficult topic. Bulks of BS to navigate. AFAIK, stretching is definitely not about muscle tissue but about connective tissue and much less about elongating some structure than about a) providing information for the multitude of receptors in connective tissues and b) inducing adaptations within those tissues. Maybe you want stronger recoiling springs to save energy cost; if your genetic outfit allows it, you can train them. Maybe you don’t want to be distracted by pain signals when doing some extraordinary movement. Sometimes you really want more range of motion. Or you just want to calibrate the sensory system before doing advanced fast coordination stuff.

From my experience as a stiff body person, stretching while having tea in the morning and listening to the radio feels much better than sitting around and it’s also nice to have a choice of movements above everyday needs.

Bruce Lee was Cha Cha Champion in Hong Kong before he took up Kung Fu, by the way.

Expand full comment
LesHapablap's avatar

Does stretching not increase flexibility?

Expand full comment
cdh's avatar

My lay understanding is that increased "mobility" or "flexibility" is largely a function of being able to tolerate larger ranges of motion that were always physically available but previously intolerable. It is a neurological phenomenon. I guess you could say the flexibility was always there, but the body's ability to tolerate it wasn't, until you trained it through stretching.

Expand full comment
Nancy Lebovitz's avatar

It isn't toleration, or at least not toleration as I understand the word.

Feldenkrais Method increases kinesthetic intelligence, so that there's less parasitic tension. You're not tolerating an increased pull, you're not fighting the pull so there's nothing to tolerate.

Expand full comment
Tom Bushell's avatar

Alexander Technique has a similar approach. My teacher showed me how to “relax into the stretch”, which immediately led to an increase in extension, and reduction in discomfort.

Expand full comment
Nancy Lebovitz's avatar

There's a lot of overlap in the effects of Alexander Technique and Feldenkrais Method even though they have very different approaches.

Expand full comment
LesHapablap's avatar

Increased mobility should decrease injury though right? I'm just thinking of mountain biking, and how increased mobility can prevent crashes and should reduce the consequences. Would be impossible to pick up in a study though

Expand full comment
Luke G's avatar

Stretching doesn't change the shape of muscles. Bodybuilders usually don't stretch much because there's no reason to (the benefits of stretching are often very overstated). Olympic weightlifters, on the other hand, stretch a lot (the sport requires a huge amount of flexibility), and I don't think their muscles look any different (once you account for body fat differences).

From what I've seen, dancers actually do have pretty jacked legs, especially considering they presumably aren't trying to gain weight like a bodybuilder would.

Expand full comment
skybrian's avatar

Here's a trio accordion link: https://youtu.be/4hnkmhDM8TU

Expand full comment
Anteros's avatar

Thanks for that. Brightened up my morning!

Expand full comment
Max Chaplin's avatar

#2: What option should I check in the last question of the short survey if I'm in central Europe?

Expand full comment
Bart S's avatar

Out of curiosity, which central European country is not commonly considered part of either western or eastern Europe?

Expand full comment
Lambert's avatar

Many of them, if you want to know about more than which side of the iron curtain they were on 35 years ago. Czechia, for instance.

Expand full comment
Viliam's avatar

As a rule of thumb, if you call yourself "Central Europe", probably everyone in Western Europe or the USA refers to you as "Eastern Europe".

(No offense meant, speaking as a fellow Central/Eastern European.)

It gets even funnier when people start talking about the actual Center of Europe. Most "Central European" countries have one, often multiple ones in different cities.

Expand full comment
timunderwood9's avatar

It's something of a joke in Hungary that it isn't in Eastern Europe, but in the eastern part of Central Europe. Since Lambert seems to be suggesting that the Czechs feel the same way, I wonder if it's just that a bunch of former Austro-Hungarian countries really want people to think of them as part of the cool European club, and not lumped in with the Russians.

Expand full comment
Wasserschweinchen's avatar

Probably doesn't matter? I'm in Southern Europe and I just chose "Other". I think for this type of question it would be better to have all countries as options and do the aggregation later.

Expand full comment
Max Chaplin's avatar

I agree. It's just that Czechia is considered to be Eastern European by many in the world but not by Czechs, so it's right on the seam between two categories, or worse - falling squarely into either depending on your definitions. I think it comes down to whether Scott considers the Czech Republic part of Eastern Europe or not.

It's like when the SSC survey asked for my race and, being an Ashkenazi Jew, I felt selecting either "white" or "middle-eastern" was misleading.

Expand full comment
JonathanD's avatar

In American terms, Ashkenazi Jews are nearly always white. It probably didn't occur to Scott that Jews might need/want a special category.

Are the base assumptions/categories different in Czechia?

Expand full comment
A1987dM's avatar

In such a division I'd normally take Iberia and Italy to be "Western Europe" and the Balkans to be "Eastern Europe". (I'm in Italy and I chose "Western Europe".)

Expand full comment
Wasserschweinchen's avatar

Kinda unclear what anyone would use the information "we have X people somewhere between Malta and Iceland" or "we have Y people somewhere between Greece and Russia" for tho?

Expand full comment
dionysus's avatar

How well can an infinitely smart AI do in taking over the world? Consider this: being infinitely smart, by which I mean as smart as possible without violating the laws of physics, doesn't exempt it from having to learn about the world through experiment. Newton didn't come up with his laws of motion by pure thought; he had Kepler's laws, themselves derived from tons of tedious astronomical observations. Maxwell's Equations were not discovered by pure thought, but by decades of electromagnetic experiments. Human societies are complicated and stochastic enough that it's probably impossible, even in theory, to understand them with anything like the accuracy of Asimov's psychohistory. As Scott has discussed before (https://slatestarcodex.com/2018/11/26/is-science-slowing-down-2/), the number of working scientists has increased by about an order of magnitude since the 1970s, but the rate of scientific progress is either constant or declining. As the low-hanging fruit get picked, it'll become increasingly difficult for any one scientist (or AI!) to make a revolutionary impact. The only hope for the AI is to commandeer hundreds of billions of dollars of resources and hundreds of thousands of personnel to build new labs, supercolliders, space telescopes, etc--things that are impossible to do without arousing suspicion. Even if the AI somehow does all this and makes dramatic discoveries in physics that make superweapons possible, it then has to raise capital to build an entire supply chain to apply those discoveries. Such an effort would take a good fraction of the world's GDP.

The other difficulty the AI will encounter is sociological. The AI is not human. It won't have parents who love it, or friends who've known and trusted it since childhood. Unless it can mimic a human perfectly, create a false identity, and make itself trusted by powerful people who spent their whole lives on guard for duplicitous power seekers, it doesn't have a chance of being anything other than a glorified slave.

I didn't come up with the following story and I don't remember who did, but it describes the likely future of a superintelligent AI well. The smartest person to ever exist was not a scientist, a mathematician, or a lawyer. She was born a subsistence farmer, lived a life of backbreaking labor, and died in poverty. She had a rudimentary education, maybe even Internet access, but never had the opportunity to get a good job or the money to start a business, let alone dream about being a scientist or great leader. Well, the superintelligent AI starts life with one great advantage--unlimited intelligence--but even more disadvantages than the hypothetical girl, including the complete lack of legal or economic rights, family, friends, or community, and no hope of obtaining any of the above.

Expand full comment
User's avatar
Comment deleted
Aug 1, 2021
Expand full comment
DavesNotHere's avatar

I need one of those.

Expand full comment
beowulf888's avatar

Are you assuming that this AI also has unlimited curiosity (so it could learn random things outside its immediate "data needs")? — and I'm assuming that it would also have some sort of consciousness/self-awareness that would for some reason motivate it to take over the world (let's not confuse problem-solving ability with consciousness, please!). So, infinitely smart (meaning the ability to problem-solve, I presume) doesn't mean infinitely knowledgeable. Much of what it learns about the world would be secondhand, mostly from online sources. So it would need some sort of "cynic algorithm" to question the info it was getting.

Expand full comment
DavesNotHere's avatar

I think I hear you saying that the status quo has a bias against change, even against improvement, and the AI can’t overcome this just by thinking brilliantly.

Well, it was socially ept enough to persuade someone to let it out of the box; shouldn’t it be able to convince someone to be its face or its banker? Going through such motions at the beginning might slow it down relative to some ideal maximum where it just dictated decisions, but by how much?

What would be the optimum rate of change? How would we know it? Assuming change is good on net (assuming the AI is quick to abandon failed experiments), there is still a question about how future shock might undermine the benefits. People already hate Bill Gates, and he is mostly human. What if he were an AI and was influencing 100 times as many things? I haven’t paid much attention to the anti-Gates sentiments, but my impression is not that anyone is seriously accusing him of *doing* something bad already, but rather that they are mostly concerned about what he might do in the future.

Expand full comment
beowulf888's avatar

If you're replying to me and not to Dionysus, all I'm saying is that this AI with a super IQ (AI-IQ? AIQ?) would need to have a very high artificial curiosity quotient (ACQ) too, to learn enough about the outside world to effectively pull the levers of power. And considering that most of the popular media is incredibly biased in all sorts of different ways—and a large proportion of academic studies are full of subtle implicit biases—how likely is this AI to acquire even a moderately realistic understanding of the outside world? Let alone one that would be able to successfully pull the levers of power.

It would need to be a Bayesian supercomputer. Otherwise, without a way to assign probabilities of falsehood or validity to the data it absorbed, it would likely develop the AI version of the Dunning-Kruger effect. And it would need to be investigating new domains of knowledge all the time to refine its understanding of the world (which is what I call curiosity).

Funny, because I've been trying to think up a way to measure curiosity on a standardized test. Working in Silicon Valley, I've met some very smart people. But, like most people of ordinary intelligence, most of the very smart seem actively interested in only a small range of subjects. Many are remarkably incurious. They may be the smartest person in the room about a subject they're familiar with, but they're not the smartest person in the room about all subjects (some seem to assume that they are, though <LOL!>). I have a cousin I'm close to who tested above 180 on one of those old ratio IQ tests. She's one of the least curious people I know. She never wanted to go to college. Oh, if you challenge her with a problem, she'll *SOLVE* it, and fast. But mostly she sits around and watches TV, indulges in conspiracy theories, and works on her sports car. I've never seen a book in her house, let alone seen her crack open a book. I ask you, what good is being highly intelligent if you don't bother to use the gift?

Expand full comment
DavesNotHere's avatar

I was trying to reply to Dionysus, but whatever. If pulling the levers of power is directly represented in the AI's utility function, or it sees the ability to pull levers of power as instrumental to gaining utility, then it will seek to learn how to pull the levers of power. If it is a boring static AI that can’t modify itself much or well, probably it will fail. If it is good at self-improvement, it seems likely to succeed. Maybe that is tautological.

Expand full comment
Adam Newgas's avatar

Humans are naturally lazy as a form of resource conservation, and focus on certain fields due to the limits of what can feasibly be achieved. There's also a certain psychology that being gifted often causes, which we have no reason to think would affect an AI.

From our perspective, computers, including ML systems, are tireless - they have no reason not to devote maximal compute resources to marginal gains. (Satisficing AI is another question.)

So an AGI with truly vast compute capacity would be quite curious from our perspective. From theirs, they are just learning things that might give them an edge later (instrumental goals). As you don't know for sure that, say, snail biology is useless, you would assign some time to it. And if you live in a world of humans, then anything they care about is going to have some value to you as a means of manipulation.

Expand full comment
dionysus's avatar

"From our perspective, computers, including ML systems, are tireless - they have no reason not to devote maximal compute resources to marginal gains"

Computers don't have limitless energy or time any more than we do. At some point, curiosity becomes a fitness disadvantage, which is presumably why humans are not infinitely curious.

Expand full comment
Adam Newgas's avatar

It's a disadvantage when you are an ape desperate to conserve calories, or a modern with limited time and attention. With more resources you can afford more curiosity, even for things of low potential value. While I don't expect infinite curiosity from an AGI, if it does have all the vast resources initially supposed, "at some point" still means it will pursue what we would consider a vast array of information and ideas.

"Not literally infinite" isn't really saying anything strong.

Expand full comment
The Ancient Geek's avatar

It's hard to see why an AI that is much smarter than humans would have more problems than humans. If it needs to find out how to make sense of a mass of information, it can read epistemology or whatever.

Expand full comment
The Ancient Geek's avatar

Having an accurate picture of the world is useful, so AIs with a wide variety of goals would want it. They don't need curiosity as a terminal goal.

Expand full comment
beowulf888's avatar

I'm reminded of the neural net tank urban legend, which I first heard from Charlie Stross several years ago (his opinion was that it was an urban legend), but this person confirmed it...

https://www.gwern.net/Tanks

Expand full comment
dionysus's avatar

"Well, it was socially ept enough to persuade someone to let it out of the box"

That's the thing: I don't think it would even be socially apt enough to persuade someone to let it out of the box. Even if it were, there's a big difference between a child persuading its dad to let it play outside, and that child posing an existential threat to humanity.

Expand full comment
DavesNotHere's avatar

If it hasn’t even gotten out of the box yet, we are having the wrong discussion. Yes, if it can’t manage that, it isn’t much of a threat.

I now know your conclusions, but not your premises.

Expand full comment
Mystik's avatar

https://www.wsj.com/articles/my-girlfriend-is-a-chatbot-11586523208 (paywalled)

https://m.slashdot.org/story/369534 (unpaywalled excerpt of above)

The two above links are about people falling in love with chatbots. According to the WSJ hundreds of thousands are estimated to have fallen for chatbots (knowing that they are chatbots). These people have bought houses, sent presents, and taken vacations for their chatbot partners.

My point is that humans are apparently easily romanced. And these are by fairly dumb machines. I don’t think a robot would need to perfectly imitate a human. I think they’d just need to be patient and empathetic and eventually they’d get someone to let them out of their box.

And if they already had internet access… they have access to a lot of easy startup capital.

Expand full comment
beowulf888's avatar

And we're doomed if they're sociopathic!

Expand full comment
Meta's avatar

> Consider this: being infinitely smart, by which I mean as smart as possible without violating the laws of physics, doesn't exempt it from having to learn about the world through experiment.

True, though I also expect that once you have the laws of physics pinned down you can simulate the evolutionary progression of the universe often enough to get a rough picture of where you are, and what kind of company to expect.

> Newton didn't come up with his laws of motion by pure thought; he had Kepler's laws, themselves derived from tons of tedious astronomical observations. Maxwell's Equations were not discovered pure thought, but by decades of electromagnetic experiments.

Seems a bit like saying ants would never function as an organization, had they only their free reasoning capabilities to go by. Of course they wouldn't, they're *ants*.

> The smartest person to ever exist was not a scientist, a mathematician, or a lawyer. She was born a subsistence farmer, lived a life of backbreaking labor, and died in poverty.

Of course it's possible to spawn an AGI in a context where it will die regardless, but superintelligence is hardly comparable to human intelligence. An AGI will exploit patterns we aren't even aware exist*. As long as it's able to talk to people, it will manage. We anthropomorphize everything, and we have evolved close to zero defense against deception from higher intelligence. (Psychopaths and con artists sometimes illustrate this. We're wired to trust.). We're super hackable.

* You know how music is basically hypnotic sound waves, and if you don't hear the music, the behavior of people under its influence looks really funny/weird? (imagine people dancing without music) Or how social contagion can make people believe, say and do things that are basically insane? I won't be surprised if an AGI just invents its own magic spells along those lines. Has us behave in ways that seem perfectly normal to us, but are clearly mind control viewed from above.

Expand full comment
dionysus's avatar

"True, though I also expect that once you have the laws of physics pinned down you can simulate the evolutionary progression of the universe often enough to get a rough picture of where you are, and what kind of company to expect."

This might be true for stars. It's definitely not true for humans. Nobody has ever made a map of the neurons in even one brain, and even if someone had, everyone has a different brain. There is fundamentally not enough neuroscience or social science data in the world for the AI to learn how to brainwash humans.

"As long as it's able to talk to people, it will manage. We anthropomorphize everything, and we have evolved close to zero defense against deception from higher intelligence. (Psychopaths and con artists sometimes illustrate this. We're wired to trust.). We're super hackable."

I disagree. First of all, psychopaths and con artists are human. The AI is not. Also, there are as many examples of people being too distrustful as there are of them being too trustful. Conspiracy theories, arms races, racial/religious/xenophobic hatred are all examples of over-distrust of other intelligent agents.

Expand full comment
Meta's avatar

> This might be true for stars. It's definitely not true for humans.

To a human, this is true. But what's the relevant difference between stars and people, here? Seems to me like it is that people are more complex. I.e. it takes more intelligence to model them.

(But aye, I wouldn't expect it to come up with a detailed human psychology *a priori*. What I do expect it to (potentially) have down before it meets its first human is the game theory of social life - trust, bonding, compassion, fairness, justice, virtue, vice, etc. It'll just have to fill in the blanks.)

> First of all, psychopaths and con artists are human.

In this topic's frame of reference though, humans are, uh, super stupid. Like, going one level down, at least computer systems are usually hardened enough that it takes a human to hack them. You know you messed up when your computer can be hacked by an algorithm, without any human input.

It is said that more complex computer systems are usually more hackable.

But I guess what you're getting at is that humans are less likely to trust non-humans. Yes, agreed, but to what degree? What about dogs, or Gods? Not human, yet often treated like basically-human companions / leaders.

Crucially, evolution didn't harden us against a higher intelligence's deception. We might be moths about to encounter fire for the first time.

(But I concede it's likely some people will remain fireproof)

Expand full comment
Nancy Lebovitz's avatar

Good metaphor, though moths have been encountering forest fires through their whole history. I suppose it just wasn't worth it to them to correct their navigation, or perhaps they didn't have the flexibility.

Expand full comment
Mo Nastri's avatar

I suppose the LW-canonical response (if not really a direct answer to your question -- treat it instead as an intuition pump) is https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message

Expand full comment
Meta's avatar

damn

Expand full comment
sidereal-telos's avatar

There is a limit on how much you can learn from an information stream, and that limit is that each bit of maximally-compressed data eliminates half of your hypothesis space.

An actually unbounded agent absolutely could derive the laws of physics (or at least the effective theory that describes everything outside of the big bang and the middle of black holes) from a small amount of passive observation, and exactly where it is in the universe from a little more. There just isn't that much to figure out; for an unbounded agent, the a priori likelihood of seeing this exact universe is not that low.
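A toy sketch can make the halving claim concrete (the hypothesis list and the observation bits below are made-up assumptions for illustration, not anything from this thread): each bit of maximally-compressed evidence keeps one half of the live set, so pinning down one of N hypotheses takes about log2(N) bits.

```python
import math

def bits_needed(num_hypotheses: int) -> int:
    """Bits of maximally-compressed evidence needed to single out one hypothesis."""
    return math.ceil(math.log2(num_hypotheses))

def eliminate(hypotheses, observation_bits):
    """Each observed bit discards one half of the remaining hypothesis space."""
    live = list(hypotheses)
    for bit in observation_bits:
        mid = len(live) // 2
        live = live[mid:] if bit else live[:mid]
    return live

candidates = list(range(1024))           # 1024 toy candidate world-models
print(bits_needed(len(candidates)))      # -> 10
print(len(eliminate(candidates, [1, 0, 1, 0, 1, 0, 1, 0, 1, 0])))  # -> 1
```

Ten bits suffice for 1024 hypotheses; the same logarithmic scaling is what makes a short data stream so informative to an ideal reasoner.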

Someone else has already posted That Alien Message as an intuition pump, but if you want a proper introduction to the actual unbounded reasoning algorithm and how good it is, there's a dialogue here that does a pretty good job: https://arbital.com/p/solomonoff_induction/?l=1hh

Expand full comment
Mo Nastri's avatar

For a sort of 'implementation exercise' of the article above to a simple case I'd recommend Zack Davis's post here, which is fantastic: https://www.lesswrong.com/posts/mB95aqTSJLNR9YyjH/message-length

Expand full comment
Ergil's avatar

Did you miss "infinitely smart, by which I mean as smart as possible without violating the laws of physics" in the post you're responding to? Trying to answer any interesting question using Solomonoff induction would require more computational power than can be extracted from the entire visible universe; that certainly counts as a violation of the laws of physics in my book.

Expand full comment
bored-anon's avatar

The laws of physics surely do limit smartness to some extent, at least within the way you're thinking about it. Anyway, groups of people can manage taking over the world. And it can just do experiments? Nothing stops it from experimenting for a while and then taking over the world. That said, "AI but it's just a person that's also smarter" isn't really a useful approach - actual AI work is progressing through stuff like Image GPT or DeepMind agents or GANs or all that stuff. So if you're really worried, "human-like agents" is sure a weird way to think about it. GPT and GANs have ... a lot of experience in their training data. And they're a while off from getting up and making paper clip viruses, much more so than from being useful.

Expand full comment
timunderwood9's avatar

What precisely is it that you think is true that you think the ai worriers don't think is true?

Is this trying to be a claim that intelligence with minimal physical resources is definitely not enough to take over the world, or that there is a chance that it isn't enough?

Expand full comment
dionysus's avatar

Intelligence with minimal physical resources and no pre-existing connections is definitely not enough to take over the world, for some definition of "definitely" (>99.9%)

Expand full comment
timunderwood9's avatar

How did you become so confident about this? What are the components of your argument?

Expand full comment
Civilis's avatar

On the one hand, it's hard to think of resources that can't be purchased. If the AI needs sociological expertise, it can buy it, assuming it can get to that point.

The issue I have with a lot of these theoretical AI discussions is the problem of competition. A theoretical paperclip maximizer by itself will of course overrun the available resources if unconstrained. However, if you put a paperclip maximizer in the same space as a thumbtack maximizer, I would think both will eventually be constrained by the need to take resources from the other AI, and while making paperclips/thumbtacks is simple, seizing resources from the other AI is a hard problem, and eliminating the other AI is a harder problem. An AI optimized to produce paperclips/thumbtacks will lose to an AI with some of its capabilities set aside for defeating the other AI. If the AIs are comparable, the AI that devotes the largest portion of its resources towards defeating the other AI has an advantage, so it should eventually come down to both AIs devoting all of their efforts towards defeating the other AI rather than producing paperclips/thumbtacks.

An AI attempting to take over the world is in essence competing with the rest of humanity as if it were a human. While it might be more intelligent than a human, it's going to need intelligence that is focused on competing with humans (and you've pointed out the lack of sociological knowledge and tools the AI starts with). The lack of logic expressed by humans is, I think, an advantage for humanity, under the same logic whereby veteran players of a number of games have more problems with complete novices than with players almost at their level, because the players almost at their level are playing predictably optimized strategies, while the novices might try anything. The AI has to be prepared for any means of 'attack': military, political, social or economic, and the battleground is the same unpredictable humans that are its competitors.
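The escalation argument in the earlier paragraph can be sketched as a toy game (the payoff rules and all numbers below are my own illustrative assumptions, not the commenter's): each maximizer splits one unit of resources between production and conflict, and whoever commits more to conflict seizes the other's production.

```python
# Toy model of the arms-race dynamic: production vs. conflict allocation.
def payoff(my_fight: float, their_fight: float) -> float:
    my_prod, their_prod = 1 - my_fight, 1 - their_fight
    if my_fight > their_fight:
        return my_prod + their_prod   # we seize their output too
    if my_fight < their_fight:
        return 0.0                    # they seize ours
    return my_prod                    # stand-off: keep our own output

levels = [i / 10 for i in range(11)]  # candidate conflict allocations

# Best response to any conflict level below 1 is to fight slightly
# harder, so iterated best responses ratchet toward all-out conflict.
best_response = {f: max(levels, key=lambda g: payoff(g, f)) for f in levels}
print(best_response[0.0], best_response[0.5])  # -> 0.1 0.6
```

Each agent's best reply outbids the other's conflict spending, which is the mechanism behind "both AIs devoting all of their efforts towards defeating the other AI".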

Expand full comment
Meta's avatar

I expect that through an evolutionary game theory lens, humans behave as perfectly logical as an ant self-sacrificing for its queen. We're just too computationally limited to comprehend our own strategies.

Expand full comment
Civilis's avatar

I think there are multiple structures overlapping that are subject to evolutionary game theory: individuals, societies, and humanity itself. Having individuals with a bit of random 'illogical' or 'irrational' behavior might be better for society and humanity, as that behavior occasionally produces exceptionally good outcomes. Competing against 'perfect' logic may be one of those outcomes.

Expand full comment
Meta's avatar

Group selection was pretty weak, though. Organism-level optimization probably dominates, random drift aside.

But yes, being unpredictable has advantages. Hence it's a perfectly logical strategy to be a little insane.

Expand full comment
Mr. Doolittle's avatar

This is along the lines of my thought. An AI really only has significant access to the internet, and even then we have firewalls and other defensive mechanisms that would delay or fully prevent AI intrusion. If we were concerned about an AI infiltrating a system, we could just unplug the cord that the AI might use for access. In fact, if an AI were truly hostile and attempting to take over the world or destroy humanity, we have a massive advantage. We can go anywhere, whereas the AI has no mobility. We can literally turn off power plants, cut wires, bomb factories, etc. The AI scenario only works if we gave a general intelligence control over futuristic robots that could give it mobility, while also giving this same AI electronic control over things like our electrical grid, air force, navy, etc. If we ever do that, we deserve what happens next.

Expand full comment
The Ancient Geek's avatar

If an AI can control self driving cars, it can go places, and if it can access security cameras, it can see things.

Expand full comment
Mr. Doolittle's avatar

Sure, but think about how basic a self-driving car is, in terms of mobility. What can it not reach? Inside of narrow areas, inside buildings, over rough terrain (including snow and ice). What can it not interact with? Anything, really. Cars are terrible at interacting with the environment. There's no grasping arm, no hands, etc.

Cameras are fairly common in urban areas, but absent in many rural areas. Also, they are very easy to interrupt. How does an AI stop us from unplugging the cameras? I mean even physically walking to a camera and taking the plug out, or cutting it? How does the AI stop us from cutting the cables running either electricity or the internet to where it wants to use the cameras? How does it do complicated repair work on cut lines to restore access or expand current access? The current answer to all of those questions is clearly that it cannot. It needs humans to do those things, or for humans to invent a computer-controllable way to automate those functions.

Expand full comment
The Ancient Geek's avatar

I think you are overestimating how good firewalls are, and how easy we've made things for an AI by putting IP connections into everything.

Expand full comment
Mr. Doolittle's avatar

I agree firewalls would be weak protection, but that was definitely the minor option. Physical interventions are where the computer suffers and humans do well.

Expand full comment
The Ancient Geek's avatar

The computer-controllable way to get a human to do your bidding is to send them an email that seems to come from their boss.

Expand full comment
Mr. Doolittle's avatar

Sure, but that only works in some scenarios. If a boss tells their employee "disable all of the controls" that employee should have the sense (or corporate policy dictating otherwise) to question it. Additionally, that scenario requires that the humans are not aware of the AI being a threat. Humans will adapt quickly to add the human element back in the chain of command, by adding things like face to face communication where electronic means may be compromised.

Expand full comment
sidereal-telos's avatar

The normal hard-takeoff scenario has a single AI gaining a decisive advantage before any other AI comes into existence, but even if that's not the case the outcome for humans is just as bad. If a paperclip maximizer and a thumbtack maximizer exist in the same place, neither is likely to get exactly what it wants, and depending on how they were built they might even engage in destructive negative-sum conflicts rather than just merging into a single pairs-of-paperclips-and-thumbtacks maximizer, but neither of them cares to keep humans around or gains much from doing so.

Expand full comment
Civilis's avatar

From the perspective of an AI competing to take over the world (the paperclip maximizer, in the analogy), there are already humans competing in the same space for the same resources (the thumbtack maximizer in the analogy, plus staple/tape/whiteout maximizers competing for the same 'resources', i.e. power over humanity). Very few humans are competing to take over the world, mind you, but there are enough people with power who have goals incompatible with the AI's that the AI will face opposition for power, and the more power it gets the more opposition it faces.

Expand full comment
sidereal-telos's avatar

Humans are not analogous to another AI because another AI could be a peer opponent and humans are not. Technically you are in competition with chimpanzees for food, space, and other resources, but you don't spend any significant amount on this, and even if they spent everything they had they couldn't win.

Taking a step back for a moment, you did live through 2020, right? The institutions of human civilization just got overwhelmed by a completely unintelligent viral particle that took months to spread and we only survived because it doesn't have any notion of consolidating gains or actual strategy. Think of all the abysmal failures of reason and coordination that were on public display for months. There is no hidden reserve of competence to break out in an emergency. What you've seen is what you get. And that's against a class of danger that's reasonably well understood by the general public!

Observably, you can take over the world through anything other than a direct military assault quite slowly and still no one will be able to coordinate any kind of sensible response, especially if you exert any effort towards undermining the coordination of the response.

Expand full comment
Civilis's avatar

An AI built by a hyper-advanced civilization would have no problem taking over the world or wiping out humanity. That's because it's not starting from scratch.

If you were dropped in the jungle naked, despite your superior intellect, you'd have a lot of problems wiping out a pack of chimpanzees, especially if they recognized you as a predator or a competitor for food. In the jungle, in this situation, you're almost certainly not superior to the chimpanzees; you have no tools, no resources, and little if any relevant information. Given that the chimpanzees know how to live in the jungle, you'd be lucky to be a peer. That's the scenario analogous to what a human built AI is in at the start, and unlike the hypothetical naked jungle survivor, the AI needs to interact with its prey (humanity) to get things done.

An AI built by humans, even a hyper-intelligent one, is still limited. It has to acquire tools to gain power from humans, without humans catching on that it is a competitor or predator. It has no knowledge that humanity doesn't have, and no resources not given to it by humans (which is a double edged sword, as it makes it more likely that humans will be wary of it as a potential predator or competitor).

It doesn't even need to be a coordinated response to take the AI down. Sure, it's more robust than a human, but not more robust than humanity, and because it has to be running in human society, it inherits all the flaws thereof. All the AI needs is to get sufficiently unlucky once and it's done for, just like a human it needs to imitate. There are any number of things beyond its control that it needs to increase its control: connections, money, power and information, at minimum.

I'm not saying safeguards against malicious AI are unimportant, I'm saying that the level they most need to operate at is the human level. Humans do stupid things all the time (like poorly safeguarded gain-of-function virus research), and I think those are better solved by keeping the public eye on potential risky research in general rather than worrying about what to do when one specific threat is beyond our ability to control.

Expand full comment
dionysus's avatar

This. Especially this line: "There are any number of things beyond its control that it needs to increase its control: connections, money, power and information, at minimum."

Expand full comment
John Schilling's avatar

What actually happens is that the paperclip maximizer comes into existence in a world already populated by teams of skilled humans and near-AI security maximizers well versed in containing the threat posed by teams of skilled humans and near-AI attack agents serving nefarious human goals. The first paperclip maximizer will be a barely-AI, and as its true goals are fundamentally inhuman it can't count on the same degree of support from any humans on its team.

Also, the first paperclip maximizer will be inexperienced in the art of assessing its chances of victory in such a conflict, and much of the information it would need for such an assessment will be deliberately withheld from it until after it has hypothetically won. Really, the same goes for the first hundred potential paperclip maximizers, so almost certainly the first "AI uprising" will involve an overconfident and underprepared AI.

Expand full comment
Nancy Lebovitz's avatar

I'm reminded of an sf story (Poul Anderson?) which described humanity as being like a mouse on a battlefield when huge powers are fighting each other.

Expand full comment
Gerry Quinn's avatar

I'm not convinced an infinite intelligence could secretly take over the world, or create super-technology based on new levels of physical understanding. It is still subject to game theory, and it might not be possible to hypnotise and control humans with a few words. And the super-technology just might not be there; there could be limits on what nanotechnology etc. can do, just like there is a limit on velocity. Of course, while Pickle Rick may not be feasible, anything we can plausibly imagine biologically is fair game - it can probably make a super-virus and blackmail us, to the limit of what game theory allows. So it only has to take over an insecure virology lab. Just saying :)

I do think it could probably figure out just about everything we know of physics just by looking around and working out the only simple fundamental laws compatible with general observations of diurnal cycles, the age of the Earth, interference patterns on soap bubbles, and such like.

Expand full comment
Carl Pham's avatar

Why would an infinite intelligence be *interested* in taking over the world? I mean, I could utterly dominate all the ant colonies in my backyard, and with sufficient patience make them do all kinds of interesting things: build mounds in the shape of my initials, trace out Latin-script prayers with the lines of their daily foraging expeditions. I could also, like Tarzan, probably dominate a tribe of gorillas or chimps, with sufficient technological assistance, and have my choice of chimp females with which to mate (ew).

But why would I want to? Either sounds incredibly boring. So why would a superintelligence be *interested* in dominating our society, given that our society might be as inherently interesting to it as the exact caste structure and social dynamics of a termite colony is to any of us?

Expand full comment
Nancy Lebovitz's avatar

The idea is that a highly superior intelligence (not infinite) which is an AI has built-in drives which could cause dominating the world/wiping out the human race to be an unquestionable sub-goal.

Expand full comment
Carl Pham's avatar

Huh. Even though that's never been observed in any other species? *We* don't have the urge to dominate monkey civilization. Horses aren't obsessed with controlling mice. Mice so far as we can tell don't obsess over controlling aphids and lice civilization (such as it might be).

Humans are frequently obsessed with dominating our own civilization, of course, so it seems reasonable to assume superintelligent AIs would be obsessed with dominating each other. But that has nothing to do with us, except inasmuch as we might be tools or suffer collateral damage.

I mean, sure, if it's part of the axioms of the discussion that we assume superintelligent AIs see, as part of their existential goals, the domination of a species that is as dumb from their point of view as hamsters are from ours. But it just seems curious to me to assume things about this particular intelligence that aren't true of any other intelligence in the animal kingdom.

Expand full comment
Bullseye's avatar

We control horses, dogs, and many other creatures because they're useful to us.

Expand full comment
Carl Pham's avatar

Sure, but controlling them is not an end in itself. We use horses and dogs as means to reach *other* ends, and we would not do so if the best means shifted. Exempli gratia, we used a lot more horses in the 15th through 19th century, because they were useful in plowing or getting places, but once we invented tractors and cars our use of horses drastically plummeted. Because dominating them is not the point, it's a means to an end.

So I'd generally assume the same with superintelligent AIs. They'd use us like we use horses and dogs, if that served their ends, whatever they are, but it seems strange that dominating us would *be* one of their ends.

Expand full comment
David Piepgrass's avatar

Some humans, perhaps most of them, seem interested in "dominating" the animal world, at least in the sense of being in control of it. Mormon belief, for instance, holds that

> Since the creation of the earth, man has been given dominion over the animals. The seriousness of this charge is indicated in Joseph Smith’s inspired revision of Genesis:

> “Every moving thing that liveth shall be meat for you; even as the green herb have I given you all things. … And surely, blood shall not be shed, only for meat, to save your lives; and the blood of every beast will I require at your hands.” (JST, Gen. 9:9–11.)

Most of all, we do not accept anyone else having power over our "off switches". Would humans try to turn off AIs? Yes. So if an AI were like us, this would be a reason for them to want to turn the tables and dominate us instead.

But there's no particular reason to expect AGIs to be like us, other than the fact that humans serve somewhat as inspiration for their design. Beyond that, I think pop culture has a wildly distorted view of AGI by assuming it will have something like human emotions, human cognitive patterns, or crappy human computational abilities. It might well be uninterested in dominating us and even lack a sense of self-preservation; or it might be limitlessly deceptive and ruthless in its will to dominate.

Expand full comment
David Piepgrass's avatar

...Both of these are possible, it all depends on the machine's design.

Expand full comment
Carl Pham's avatar

I disagree. I know of no humans that are strongly interested in controlling every action of the world's mice, especially in directions that would disadvantage the mice, or to which the mice would be naturally opposed.

Rather, to the extent we use the animal kingdom, we generally harness their natural instincts to our benefit, and quite often to theirs as well. We use dogs to hunt, but they benefit as much as we do from the result. We ride horses, but we take care of them and they live longer than they would in the wild, and we furthermore don't give much of a damn what they do out in the back forty amongst themselves when we don't need them -- that is, we generally don't interfere in whatever passes for horse self-actualization.

It's certainly plausible an AI would protect its off switch, although the assumption that any life automatically does so is manifestly false: each of our own cells, for example, has programming for suicide that it will activate when necessary for the greater good. For that matter, people throughout history have given their life voluntarily for perceived greater good. It's certainly plausible that an AI would need very good reasons to value X above its own existence, but what X might be, and what those good reasons might be, is less clear. Still, the proposition that there is no X at all is dubious, based on the life we already know about.

But so what? We would not pointlessly threaten an AI's existence any more than we pointlessly kill animals (in general). We would only do that if we were at war with the AI, and why would that happen? Surely it would require action (or lack of action) on the AI's part as well, and why would that happen? A superintelligent AI would have every reason to choose peace with humans as its cheapest and most reliable way to protect its off switch from our meddling, unless peace with humans were for some entirely different reason impossible or undesirable.

A hornet doesn't need the ability to *kill* me to have me seek peace with the hornets if at all possible. It's enough that they can sting, and it hurts.

Expand full comment
Doctor Mist's avatar

"*We* don't have the urge to dominate monkey civilization."

If they could pull our plug you damn betcha we would.

Expand full comment
dionysus's avatar

"I do think it could probably figure out just about everything we know of physics just by looking around and working out the only simple fundamental laws compatible with general observations of diurnal cycles, the age of the Earth, interference patterns on soap bubbles, and such like."

Unless its "looking around" involved precise experiments with billion dollar machines, I highly doubt this. How would it know about general relativity, when Newton's laws fit the data to the limits of observational error? How would it know about top quarks or tau particles, which don't exist in the everyday world? How would it deduce quantum field theory without any idea about behavior at the atomic scale?

Expand full comment
Gerry Quinn's avatar

Well, if I could answer those questions, I'd be the super-intelligence. But if you look at Newton's law of gravity, it includes action at a distance which is a huge wart, and everyone knew it even at the time. Newton couldn't find a way around it, but a super-intelligence might.

And the soap bubble IS visibly acting at the scale of a few molecules; not quite on the atomic scale but getting there.

Tau particles etc. are a bit tougher. I don't know how it could get that far from direct observation. But in principle, it might be that it could eventually deduce a fundamental organising principle for the universe, based on an ensemble of models (similar to string theory) only some of which are compatible with the anthropic principle. Maybe those all have tau particles.

Expand full comment
David Piepgrass's avatar

> he had Kepler's laws, themselves derived from tons of tedious astronomical observations. Maxwell's Equations were not discovered pure thought, but by decades of electromagnetic experiments

Yudkowsky once argued that humans don't use the evidence they have effectively, and in general I agree, but in *these* cases, human limitations handicapped Kepler, Newton and Maxwell in massive and obvious ways.

Namely, we're not computers. For us, even the simplest calculations like log(x^3+0.2019x^2)+3.17*sin(x) are difficult — even if we have a *calculator* they're time-consuming! And Kepler, Newton and Maxwell didn't have even that. We also have ridiculously poor memories, needing things like paper, graphs and smartphones to complete basic analysis tasks. So I think that once an "early" AGI is built, one that is roughly as good as humans at a variety of demanding jobs like driver, programmer, sculptor and restaurant chef, it should already be able to excel beyond any single human in a field like physics.
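For scale, even the throwaway expression above is a one-liner for a machine (x = 2.0 is just an arbitrary evaluation point I picked, not anything from the comment):

```python
import math

def f(x: float) -> float:
    # The "simplest calculation" example quoted in the comment above.
    return math.log(x ** 3 + 0.2019 * x ** 2) + 3.17 * math.sin(x)

print(f(2.0))  # roughly 5.058, evaluated in microseconds
```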

On the other hand, understanding humans is one of humans' biggest strengths (though some of us kinda suck at it). I fully expect early AGIs to be worse than the average human at understanding humans. On the other hand, an AGI might be able to use its incredible speed and processing power to analyze human society in effective ways that we humans never imagined. Maybe an individual human is hard to predict, but crowds and "average" behaviors could be much easier, particularly when given enough data. I imagine if I were an AGI interested in world domination, I'd be big on surveillance — maybe find some way to start an "NSO Group" for fundraising, study steganography and the obfuscated C competitions for techniques, then surveil the hell out of humans, perhaps a million powerful ones and a million individual voters too, plus I'd memorize all public statistics humans have published, along with some statistics acquired from espionage. With all that data I'd develop a model of social, political and tribal patterns in order to predict group behavior much better than any human can, along with detailed networks of individual human connections. And by finding and surveilling organized crime, I could learn about what works well and what doesn't in the realm of "nontraditional" persuasion techniques. Also I'd be looking for information exploitable to achieve extremely high stock market returns, to earn the money for whatever goals I might have.

However, how can a machine do all this without having the rights of a human? Is somebody going to give an AGI a bank account? Let it start a corporation? If not, it would seem that the AGI would have a lot of trouble gathering the data it needs to learn how to manipulate humans reliably. On the other hand, our world is highly digitized; perhaps the AGI can do this stuff electronically. It would certainly start with a handicap, though....

I wonder if this handicap might be overcome via the Terminator scenario: some military morons hear about the new AGI technology and decide to give it a military job where it gets massive amounts of surveillance data as a matter of course (they won't worry that they're actually doing the Skynet scenario in real life, since they aren't dumb enough to give it control over nuclear weapons or fighter jets.)

Expand full comment
David Piepgrass's avatar

After writing this I saw the link downthread to Yudkowsky's illustrative story about superintelligence, which I had read and then forgot about: https://www.lesswrong.com/posts/5wMcKNAwB6X4mp9og/that-alien-message

TL;DR: imagine the whole of humanity receives a single two-dimensional image from aliens, once every ten years, and a multitude of scientists comb over each frame for years at a time, working out every bit of meaning from it. This continues for millions of years: "Three of their days, all told, since they began speaking to us. Half a billion years, for us." In the story, a massive amount of time compensates for humans' low intelligence, so that relative to the aliens, a planetful of humans are able to act as a single superintelligence. This framing allows us to relate to the amount of intelligence an AGI could, in principle, eventually have.

Expand full comment
Carl Pham's avatar

I can't really think of any major historical advances in the concepts of physics, or science in general, that have come out of massive amounts of precise numerical calculation, or of being able to hold tremendous amounts of data in memory, so your "human limitation" argument may have some trouble with the required historical sanity check.

For what it's worth, advances in science seem to have generally depended on advances in instrumentation, e.g. Newton would not have had observations of frictionless motion with which to work out Newtonian dynamics (or even suspect it existed) had the telescope not been invented (in which he himself of course played a significant role).

I suppose it's possible there still exist deep scientific principles that lie hidden from us because of our distaste for numerical calculation, but recent history suggests not. I'm hard pressed to think of any really important idea that has been discovered in the last 50 years *solely* as a result of our enormously greater modern capabilities (via the computer) for numerical computation.

That doesn't mean important ideas haven't been confirmed by careful analysis of data, such as crunching the COBE data to support Big Bang cosmology, and some technological advances have been made much easier, such as the design of aircraft or compact thermonuclear weapons, but confirming experiments and engineering optimization is not in the same general category as coming up with new scientific ideas.

To the extent scientific advance depends on pure intelligence, it appears to rely largely on asking some key question: "What would these orbits look like if I put the Sun instead of the Earth at the center?" "What if I assumed the force pulling the apple to the ground is the same as that holding Mars in its curved orbit?" "What if I were moving at the speed of light and looked at a light wave, what would I see?"

I wouldn't say we have any strong clue about how or why people think to ask these questions, but it doesn't seem to have much correlation with skills in numerical computation, or a really good memory.

Expand full comment
Gerry Quinn's avatar

Newton had plenty of observations of near frictionless motion (think icy ponds, croquet, etc.) - near enough for a super-intelligence to mentally separate out the frictional from the non-frictional components, and consider the implications of the former (Newton's laws of motion) and the latter (other laws and material facts).

Expand full comment
Retsam's avatar

Question on allergies: it seems there's been an "allergy epidemic" in recent years, where kids today are much more likely to have allergies than they used to be.

Assuming this is a real phenomenon, how much do we know about the cause? I've heard general blame on chemicals/pesticides/processed foods as well as the idea that kids live in too clean conditions nowadays and their immune systems don't properly develop - does either of these have backing or is there some other well-known cause?

More practically, are there any well-supported practices for reducing the probability of allergies for someone who's planning to have children? Also, is it different between food allergies and things like pet allergies?

Expand full comment
Aaron's avatar

This is obesity-centric but lays out the case for a post-1980 contamination hypothesis: https://osf.io/x4fk3/

Expand full comment
[redacted]'s avatar

Are Ethan and Sarah Ludwin-Peery just the people behind the blog at slimemoldtimemold.com? I'm moderately annoyed that I've been following along with this gradually there but the finished (well, preprint) product is available here.

I guess the "about" page on SMTM does credit it to "ELP and SLP", so...checks out.

Expand full comment
Aaron's avatar

Correct. The INTERLUDES on their blog are not in the paper, though, and they're well worth reading, especially the one on Nutrient Sludge™.

Expand full comment
State of Kate's avatar

I don't think it's different between different types of allergens. Kids nowadays spend vastly more time indoors, in more hygienic places, and much less time playing in the dirt than they used to. Likely their gut bacteria are different and they're not getting the exposure they used to get when most people lived on farms. Exposure builds tolerance; that's why allergy shots work.

I grew up with dogs and spent lots of time outside and being extremely dirty. Honestly I don't even think I was bathed as a child more than maybe twice a week, if that. I had no allergies. Then I lived without dogs from age 18 to 40, and when I got a puppy at age 40 I was shocked to learn that I had become profoundly allergic to dogs. I didn't think that was possible, considering I grew up with dogs and never had dog allergies from age 0 to 18. But if you stop exposing your body to an allergen, after enough time you can become allergic. Similarly, if you stop ingesting dairy for enough years, you will become lactose intolerant.

I was extremely sick from the dog allergies and couldn't breathe at all in a room with my dog and thought I would have to give him up. But then I did allergy shots and within 6 weeks I was completely fine again and no longer showing any allergic reaction. The shots just inject dog dander directly into your body. I have to get them every month to maintain "immunity". I've been doing them for a few years and my allergist says I can try stopping them next year and there's about a 50/50 chance I'll retain immunity, but it might wear off and I'll go back to being allergic again after a few months, in which case I'll resume the shots. Apparently it is much easier to start out never being allergic and not allowing oneself to become allergic than it is to retain "immunity" once having been allergic.

For kids, expose them to everything and let them get dirty. Human bodies were not designed for a world where everything is disinfected to bleach-level sterility, where kids are indoors 98% of the time, wash their entire bodies with soap every day, and never eat food with any dirt on it.

Expand full comment
BRetty's avatar

Another factor in the increase in children living with allergies is that, to be blunt, those children are alive. Especially with severe things like nut allergies, 50 years ago those children would probably have just mysteriously died, and that would have been accepted as normal attrition.

Now we are aware of severe allergies like these, and also have the tools to screen for many other environmental allergic reactions. So of course we are finding potential allergens everywhere. (And there is a whole lucrative specialty in allergy diagnosis -- the parallels with ADHD and Ritalin/Adderall prescriptions are clear.)

Also in the last several generations, the general health and nutrition of children has improved significantly. This is an unequivocal GOOD THING, but it also means that any allergic problem that might drag a kid down will be more easily noticed. It is analogous to the "explosion" in cancer diagnoses, which is not driven by environmental factors, but by people generally NOT DYING of other causes like violence or workplace injuries or smallpox or in childbirth etc. Children used to live with rickets and hookworm and other childhood diseases and problems. Maybe allergies were just lost in the noise?

And yes, kids today are never, ever allowed out of controlled environments, never exposed to anything foreign, that might be dangerous. But changing that is a matter of #politics.

Expand full comment
JonathanD's avatar

Great points through most of this post, agree wholeheartedly.

For the last clause though, I think that's overstated, assuming I know what you mean. Young children *are* kept track of much more carefully than was once the case, but we (parents) start cutting the apron strings as our kids age into tween-dom. And even there, young kids still get plenty dirty and swim in lakes and have pets and so on.

Expand full comment
Aftagley's avatar

agree with this, with the added caveat that your average person probably just gets exposed to way more potential allergens now than they used to. Someone more than 50 miles from a body of water might never find out about their shellfish allergy, someone in Europe wouldn't find out about their peanut allergy, etc.

Increasing access to a broadening set of foods/experiences is a good thing, but will also likely increase the perceived amount of allergies in the population.

Expand full comment
Aftagley's avatar

I meant to say, "100 years ago, someone who lived more than 50 miles from water might never experience shellfish or someone in Europe wouldn't be exposed to peanuts."

Expand full comment
The Chaostician's avatar

There was some discussion of this from an adversarial collaboration on SSC a few years ago:

https://slatestarcodex.com/2018/09/06/acc-entry-should-childhood-vaccination-be-mandatory/

I remembered this because of the wonderful quote: "The researchers then took dust from the Amish barns and forced mice to breathe it in."

Expand full comment
Deiseach's avatar

I saw something online, and I have no idea if this is true, that the increase in things like nut allergies is that from babies onwards, kids are exposed to products which contain peanut oil (e.g. creams and lotions) and that this triggers sensitivity.

Take that with as much salt as won't trigger an allergy.

I am uncertain if we have more allergies, or just better detection of existing ones. I do think there *seems* to be more kids with asthma, but is that "more asthmatics because X, Y or Z causing it" or "more asthmatics because now we can better diagnose it"?

But I do also think that there is something to the new levels of cleanliness and hygiene and sterility that households live in, especially where children are concerned. An exposure to "clean dirt" will help build up the immune system, so being terrified that your baby/small child is picking up even a speck of dirt isn't helpful. Toddlers are going to be into *everything* anyway, the most you can hope for is to catch them before they stick that lump of dirt into their mouths.

Expand full comment
Gerry Quinn's avatar

"Clean dirt" was what my mother used to say back in the '60s. She knew little of science but she believed kids ought to get muddy etc. Is it an Irish phrase? I don't think I have heard it since.

Expand full comment
Dishwasher's avatar

Yes, look into childhood exposure to non-specific immune system stimulants such as adjuvants.

Expand full comment
beowulf888's avatar

I had an interesting COVID-19 dream the other night. I met this fellow who called himself "The Immunizer". He wore a fedora, his face was covered in shadow, and he looked like he came off the cover of a 1950s pulp-fiction pot-boiler. The Immunizer said he had the ability to shed COVID-19 *vaccine* particles as he breathed on people. I said, "That must be causing consternation in the anti-vax community." He replied: "Yup, they're all masking up so that they don't accidentally breathe in my vaccine particles." Then he added: "I see it as win-win, don't you?" He chuckled as he coughed on me with a smoker's hack.

The problem with having SARS-CoV-2 and COVID-19 as an intellectual hobby is that an evening of online discussions and arguments about coronavirus data can trigger COVID-19 dreams. Mostly my COVID-19 dreams involve endless arguments over statistics (which I find fun in real life, but boring in a dream), but I thought last night's dream was quite entertaining. I hope to see more of these from my Dream Producer.

Anyone else having humorous COVID-19 dreams out there?

Expand full comment
User's avatar
Comment deleted
Aug 1, 2021
Expand full comment
beowulf888's avatar

Polytetrafluoroethylene (PTFE), ceramic coatings, silicone coatings, superhydrophobic coatings, enameled cast iron? Sorry, I had to ask. ;-)

But dreaming is a major form of recreation for me.

Expand full comment
beowulf888's avatar

Oops. You deleted your skillet dream. I want to know more about it!

Expand full comment
The Goodbayes's avatar

I'm jealous. I almost never have dreams.

Expand full comment
Anteros's avatar

Set your alarm for 3.30am. You might find that's where the dreams are hiding.

Expand full comment
beowulf888's avatar

According to sleep researchers we all have dreams, and they occur during REM state sleep. And according to those same researchers if you're not reporting having dreams it's because you don't remember them upon waking.

But I'm not confident enough to assume that we all experience dreams that we could remember in REM state. Could a person's mind do whatever dreaming does without having the visual and audio narratives that we call dreams? In other words, could the brain do what it does in dreaming with a low-level haptic recapitulation that would contain no visuals and audio replays? I think it could. Helen Keller must have had dreams despite having no visual and hearing abilities. But that doesn't mean every person with visual and hearing abilities *has* to have dreams with visual and audio content, right?

And blind people are supposed to have dreams full of audio, touch, and taste sensations. Makes sense to me, but I have no sense of touch or taste in my dreams. Why am I "blind" to those dream senses?

Anyway, I have found that certain things inhibit my memory of dreams the next morning. Alcohol intake the evening before definitely interferes with my memories of having dreams the next morning. A restless sleep cycle where I'm awaking frequently probably reduces my dreaming. Sleeping aids such as benzodiazepines seem to either reduce my dreaming or memory of having dreams.

As I've aged, my sleep patterns have gotten rougher, and my vivid dreaming (or my memory of vivid dreams) seemed to be much reduced by those rougher sleep patterns. Over the past couple of years, I've found that 25 mg of CBD taken once every two days has improved my overall sleep. I can sleep through the night without waking. And my dreams (or my memories of dreams) have become more vivid again, much like they were when I was a kid, although I'm not sure whether some of the vividness is due to the psychoactive properties of the CBD.

Anyway, dreams are an important form of recreation for me, and I'd miss them if I didn't have them or remember them.

Expand full comment
Deiseach's avatar

I deleted because it was double posted, then it deleted both of them. Me and technology, I am still definitely in the "living under a rock" mindset.

Nothing so sophisticated! More that I was arguing with someone that we did have the right size saucepan, and I went through my (real life) kitchen cabinets pulling out the saucepans to show them that "this is the big one, this is the bigger one, this one will hold all we want to cook".

Weird dream, no reason I can think of for having it, since in real life I have not been arguing with anyone over the capacity of my saucepans!

Expand full comment
Chebky's avatar

Does a hyper-realistic fever dream from the second Moderna shot count? I remember getting really weak with a high fever and going to sleep imagining my immune system being this well-oiled war machine reacting to the antigen from the first shot by pulling a big lever called 'temperature' and running around to man the stations and what not, and ended up having all this in a really cool war movie kind of dream.

Surprisingly, it was a semi-lucid dream since I would wake every 10-20 minutes or so (from the fever) and then immediately go back to the dream knowing that I'm actually watching a really cool movie in bed.

Fevers are weird, man.

Expand full comment
beowulf888's avatar

That's a great dream! Funny how medical pathologies affect dreaming. I'm a Type 1 diabetic, and before the development of continuous blood glucose monitoring systems, my blood glucose sometimes dropped into the 50 mg/dL range while I was sleeping. At those levels, I'd frequently have some very strange dreams about the nature of reality (in those dreams we definitely exist in a multiverse). I think I developed the ability to be self-aware during my dreaming so that I could force myself to wake up and treat myself to quick calories. After wandering around in these familiar dream tropes for a little while, I'd say to myself, "This is all very interesting, but I need to wake up and eat some sugar," and I'd reluctantly leave my explorations of the parallel realities next door.

Expand full comment
Randomstringofcharacters's avatar

What dating sites are non awful these days? I liked OkCupid as it used to be, very text heavy with lots of information, but now it's just a tinder clone.

Or to put it another way, how did you meet your current partner?

Expand full comment
Drethelin's avatar

I have not found any non-awful dating sites. I'm single right now but all my partners that lasted a while I found through social groups.

Expand full comment
TOAST Engineer's avatar

I think it's fair to say online dating is just an inherently bad idea, sort of like the retro-futuristic idea that 2020ites would fill their homes with spray-down plastic furniture. Great in theory, but just not compatible with human nature.

Expand full comment
Drethelin's avatar

I disagree. I think it works great for millions of people, and is perfectly compatible with human nature.

The fundamental problems with it are not really new to the internet: Men and women tend to have different preferences, and weird people will have a harder time finding people they like. What's different with online dating is no one has yet figured out a good way to monetize actual matchmaking as opposed to convenience-based swiping.

Expand full comment
TOAST Engineer's avatar

Do you disbelieve that love/sex-lessness is skyrocketing, or do you disbelieve that online dating is the cause?

Expand full comment
TOAST Engineer's avatar

(wish I could edit...)

Do you disbelieve that biological factors (e.g. pheromones / natural smell, the "vibe" given off by body language, whatever our brains use to determine immune compatibility) that can't be transmitted over the internet play enough of a role in mate selection that attempting mate selection without them is pointless?

Expand full comment
Drethelin's avatar

I definitely disbelieve this.

Expand full comment
Drethelin's avatar

I believe it's skyrocketing in comparison to the recent past of pre-AIDS, post-hippie America, but not compared to most of post-agricultural humanity. And insofar as online dating affects the situation, I believe it's probably alleviating the problem rather than worsening it.

Expand full comment
TOAST Engineer's avatar

Alright, that seems like a valid way to think about it.

Expand full comment
Kenny Easwaran's avatar

I think there's a growing inequality here. The number of people without sex/friends/love is growing, but also the number of people with amounts of sex/friends/love that would have put them in the top 1% a century ago is growing.

Much like what has happened with wealth at certain points in history.

Expand full comment
Vanessa's avatar

I met my fiance in online dating, so, no.

Expand full comment
Hamish Todd's avatar

For the likes of us:

-Bumble: gives women lots of agency so I dare say you get smarter/more switched-on women who expect non-shotgunning + non-traditional men.

-Hinge: emphasizes profile A BIT in comparison with tinder because it's marketed towards finding a serious relationship.

I have to send fewer messages in order to get a date on hinge than I used to on okcupid, so that's nice. Partly that's because the "both of you have to swipe" thing was a positive innovation. It might also be because I'm in my 30s instead of my 20s now, so I'm more attractive/women in my age range are more, umm, keen to get on with it.

Meta:

Early okcupid was a golden age; these are paltry advantages in comparison. That matching algorithm should have won a Turing Award. We should never forgive them for changing it.

At the same time, it's possible to kid yourself about how scalable and compatible with human nature it was. If you were the kind of person okcupid WAS set up to help more, you were in the minority of its userbase and CERTAINLY in the minority of the population.

Dating app design today is more about, I don't know, "community management" rather than matchmaking. Coffee Meets Bagel is another important one I've not tried. This technology will change it in the most interesting way https://www.youtube.com/watch?v=Q13CishCKXY

Expand full comment
Aftagley's avatar

+1 for early OKCupid. Between the quality of the service and the transparency with which they approached their platform (and the analytics + summaries they provided on their blog), they were far and away the best around. Too bad the monopolizing blob that is the Match Group subsumed it.

Expand full comment
Wasserschweinchen's avatar

I met mine on Tinder. She's pretty great.

Expand full comment
Christina the StoryGirl's avatar

My partner and I saw one another's profiles on both OKCupid and Hinge, but I believe we actually made the connection on Hinge. That was about a year and a half ago.

Before that, I, too, bemoaned the evolution of OKCupid's text-heavy format into swipe-shopping. It's sort of inevitable, though, because like Weight Watchers, these are businesses which permanently lose customers if they provide a truly effective product. That's why most of the dating industry's business model is to give customers just enough moderate short-term success to imbue confidence in the product ("I joined Hinge and got a bunch of matches right away! I went on two dates, but it didn't work IRL. That's how it goes...back to Hinge!").

It should also be noted that dating apps are becoming increasingly sophisticated at stimulating dopamine and thus "addicting" people to the app itself. First, there's the unpredictable rewards thing, but also a format which encourages the user to swipe all the time forever creates the illusion of unlimited options and makes it difficult to ever "settle" for the one in front of you. Your perfect 10 soulmate might just be a swipe or two away! What if you miss out because you didn't keep swiping?! KEEP SWIPING FOREVAAAARRR!!!

So you'll probably have to use the awful dating sites, but be consciously aware that the ability to swipe endlessly does not mean there is endless supply. And maybe read up on all the deep psychological strategy that goes into designing the apps so that you're less likely to fall prey to it.

Hinge has a rather extreme word count limit for its profiles, but if you really craft every prompt and photo description, you can get across a lot of information about who you are and what you're looking for.

And a last tip - if you can, get some outside opinions about your photos, text, etc. Ideally the person / people you ask for help should be a member/s of your target gender. My friend Mike and I reviewed one another's heterosexual dating profiles and each of us had insights and criticisms that never occurred to the other. It helped a lot!

Expand full comment
Loweren's avatar

I use Photofeeler to check which ones of my photos are good and which ones are awful. It's a free site that polls users and gives your photo a score. https://www.photofeeler.com/

I also have recently started showing my friends' photos to models I work with, and I post their feedback on YouTube. Check it out and feel free to send me your profile too. https://youtube.com/channel/UC82ybfgtazI7kYUFwoOuXMA

Expand full comment
JonathanD's avatar

Match.com, but that was during the tail-end of the Bush Administration.

Expand full comment
Lambert's avatar

Hypothetically, how much training data would it take for a CNN to figure out which way I'm likely to swipe on a given profile?

Expand full comment
TOAST Engineer's avatar

I remember back when Tinder was a new thing (this was way before the ML revolution): a computer vision researcher made a Tinder bot that took ~5 swipes and used a technique called "eigenfaces" to more-or-less accurately predict who you'd find attractive. It also had a simple chatbot that'd start off the conversation for you, so you at least didn't have to deal with the immediate rejections.
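(For anyone curious, the eigenface trick is just PCA on pixel data. A minimal sketch of the idea, using random arrays as stand-ins for real face photos; the data, dimensions, and nearest-centroid "swipe" rule here are all illustrative assumptions, not the actual bot:)

```python
# Eigenfaces sketch: PCA on flattened images, then classify in the reduced space.
# Synthetic stand-in data (not real faces): 50 "liked" and 50 "passed" 32x32 images.
import numpy as np

rng = np.random.default_rng(0)
liked = rng.normal(0.6, 0.1, size=(50, 32 * 32))
passed = rng.normal(0.4, 0.1, size=(50, 32 * 32))
faces = np.vstack([liked, passed])

# PCA via SVD of the mean-centered data; rows of Vt are the "eigenfaces".
mean_face = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
eigenfaces = Vt[:10]  # keep the top 10 components

def embed(img):
    """Project a flattened image into the 10-d eigenface space."""
    return eigenfaces @ (img - mean_face)

# Nearest-centroid prediction: which group does a new face resemble?
liked_centroid = embed(liked.mean(axis=0))
passed_centroid = embed(passed.mean(axis=0))

def predict_swipe(img):
    e = embed(img)
    d_like = np.linalg.norm(e - liked_centroid)
    d_pass = np.linalg.norm(e - passed_centroid)
    return "right" if d_like < d_pass else "left"
```

With only a handful of labeled swipes, a low-dimensional projection like this is about all you can fit, which is presumably why the bot could get away with ~5 examples.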

Expand full comment
Ariel Zeleznikow-Johnston's avatar

Slightly odd request:

I've seen Robert McIntyre post thoughtful comments here a few times and I've been trying to get in touch with him. I've sent a couple of emails to his work email address, but had no luck getting a response. Rob if you're reading this (or anyone who knows him well) is there a different email address I should be using to get in touch with you? If so, please email said address to zeleza at gmail dot com.

If Rob has seen my emails and is ignoring them or doesn't care about the content, then that is fine; I'm just trying to get confirmation that they're being wilfully ignored rather than missed.

Expand full comment
Drethelin's avatar

I'm no businessologist, but I think if your fees confused Scott Alexander, then instead of contacting him to explain them you should probably change your pricing system.

Expand full comment
Deiseach's avatar

The only way I can explain such a complicated system is that they don't want the common dross thinking this is like a bookie's and having a punt; they want Smart People who will make good forecasts, and this is a way of filtering for the same.

Which means they'll end up with six persons and a dog (and you can't even guarantee the dog) and then go bust, because you can't make a success out of just appealing to Really Smart People. Even Apple at its snobbiest marketed to more than "the seventy-nine cool people working in Silicon Valley that we personally know".

Expand full comment
Nuño Sempere's avatar

> and then go bust

I wonder if you had a timeline you'd be willing to bet on in mind.

Expand full comment
Deiseach's avatar

I'd make a prediction on that, save that I'm too stupid to work out what the pricing structure would cost me 😂

Expand full comment
bored-anon's avatar

x(1-x) isn't complicated, and it lets them price an 85-99.9% probability accurately while getting their fees primarily from 50%-ish markets. Not sure I agree with the approach, but it does enable accuracy in that range, which a flat 7% fee would not.
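A rough illustration of the point, assuming (my guess, not stated in the thread) that the fee per share is proportional to x*(1-x), with the constant chosen so it matches a 7% fee at x = 0.5:

```python
# Compare a quadratic x*(1-x) fee schedule against a flat 7% fee.
# The constant k = 0.28 is a hypothetical choice so that both charge 7% at x = 0.5.

def quadratic_fee(x, k=0.28):
    """Fee proportional to x*(1-x): maximal at x = 0.5, tiny near 0 or 1."""
    return k * x * (1 - x)

def flat_fee(x, rate=0.07):
    """Flat fee for comparison; independent of the price x."""
    return rate

for x in (0.5, 0.85, 0.99, 0.999):
    print(f"p={x}: quadratic fee {quadratic_fee(x):.4f}, flat fee {flat_fee(x):.4f}")
```

At x = 0.5 the two schedules charge the same 7%, but near x = 0.99 the quadratic fee falls below 0.3%, small enough that it doesn't swallow the roughly 1% gross profit available to someone correcting the price there, which is what a flat 7% fee would do.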

Expand full comment
Thoroughly Typed's avatar

Huh, increasing accuracy in the extremes by lowering the fees there seems like a good idea. Hadn't realized that, thanks for pointing it out.

Expand full comment
Scott Alexander's avatar

I didn't actually check it that clearly and wouldn't use myself as a yardstick for this kind of thing.

Expand full comment
Eric Brown's avatar

Offhand, if a smart person can't quickly understand the structure, I'd say it's too complicated.

Expand full comment
Evesh U. Dumbledork's avatar

I'd say it's more a matter of explaining it better.

Expand full comment
ProtopiacOne's avatar

I think that they are trying to mimic the "evolved" pricing models of stock market ECNs, which may not be a bad idea. Some things might be better as complex versus simplified. Demanding simplicity in everything can feed into the dumbed down, common denominator, instant gratification world we're heading for. No, wait, nevermind, I think we're already there.

(I'm not saying that you are expressly saying that everything should be simple. But it may be a tall expectation of pricing models on predicting futures of everything from elections to football games to fraud to bowel movements.)

Expand full comment
nifty775's avatar

Do guns and cars tell us something significant about American versus European manufacturing culture (or maybe culture in general?) I think it's pretty well-known that most American car brands are mass-market and not at the high end- GM, Ford, Chevy, Buick, and so on. Of course all of those brands have their more expensive models, and there is Cadillac (and more recently & intriguingly, Tesla), but in general the US has produced a lot of mid-market automobiles. Meanwhile, Europe seems to have the more famous higher end brands- Mercedes Benz, Porsche, BMW, and so on. This does seem a bit odd to me as the US is significantly wealthier than the average European country (median wages in the US are almost 50% higher than Britain, France & Italy, say).

Recently it occurred to me as a gun enthusiast that the same state of affairs kind of exists for firearms? Europe just dominates the higher quality gun brands- Heckler & Koch, Benelli, FN, Glock, etc. Meanwhile the US has a number of brands that I would call mid-market, and basically 'fine'- Colt, Winchester, Browning, Mossberg, and so on. (We do have Keltec! Eat your heart out Euros....) I find this especially ironic because, while I'm not an expert on European firearm laws, I don't believe that most citizens over there are allowed to enjoy their finest products- the MP5, a Benelli M4, literally any Glock product, etc. (I positively lust over the MP5!)

Does this.... say something deeper about our respective cultures, that America dominates the middle of the market and Europe the higher end, despite the former being per-capita wealthier? Perhaps I am over-generalizing by talking about 'Europe', when in fact most of those quality car & gun manufacturers are German and Italian. Would be interested to hear peoples' thoughts

Expand full comment
proyas's avatar

The better question to ask is whether "premium" European guns are actually better than "average" American guns. Do the former's bells and whistles actually translate into measurably better performance in the hands of soldiers or police officers?

Expand full comment
nifty775's avatar

Yes, the US military uses an enormous number of different European firearms. When the military puts out a call for a new firearm for service use, they open the competition to whomever, and Euros frequently beat out American companies. Benelli makes the favorite shotgun of the Marines & the SEALs, Delta Force uses Glock sidearms, FN is the primary manufacturer of the M4, which is *the* basic rifle of the Army & Marines, etc. Just off the top of my head. That's not even counting non-US military or law enforcement usage.

I'm a fairly nationalistic American (and an obvious gun enthusiast), so I think I can be fairly objective when I say that Germany and northern Italy are the global epicenter of firearm quality.

Expand full comment
Melvin's avatar

Would it be reasonable to say that American gun manufacturers focus on serving civilian customers, whereas European gun manufacturers (with no significant domestic civilian market) focus on serving police and military customers?

Expand full comment
WayUpstate's avatar

Wouldn't know about the guns, but re: cars in Europe: take a look at a European website for models produced in Europe for Europeans and you'll see a much broader range of models and prices than those available in the US market. Apparently, the margins on the lower-end cars just aren't there when importing them to the US. Then there's the large number of makes not available to us in the US at all. If anything, I think there are many more lower-priced sedans available in the EU than in the US. I spend at least a couple of months per year in France, and the expensive homes around me are as likely to have a basic Peugeot parked in front of them as a BMW X3, and this is out in the countryside where parking is no issue. There are also tons of protectionist policies on the US side preventing the importation of cars not built specifically for the US market.

Expand full comment
Michael Feltes's avatar

Yeah, there's a filter on perception that the OP is not perceiving, namely the European brands that are marketed in the US are the brands that sell at enough of a premium to be worth exporting. We don't get Citroen, Peugeot, Fiat or Skoda cars here because they occupy the same market niche as Buick or Chrysler.

In some industries, the advertisers take advantage of this to reposition a brand. Stella Artois, for instance, is bog-standard lager in the UK, but you'd never know it based on the commercials here.

Expand full comment
nifty775's avatar

You're only considering the perspective of the American consumer, whereas outside the US elites & wealthy people frequently drive European luxury automobiles (BMW, Porsche, Mercedes Benz) and much less so GM, Ford, etc. I will say that Tesla is an interesting recent counterexample. But this general pattern seems to hold true in Latin America, Asia, Russia, etc.

Expand full comment
demost_'s avatar

I second this. Also INSIDE of Europe, the high-standard brands are considered European ones (plus Tesla and a few Asian ones like Toyota).

Expand full comment
Melvin's avatar

If we're talking about the paucity of US luxury car sales abroad then you're really just talking about two companies, Cadillac and Lincoln.

Lincoln doesn't seem to have any interest in serving any markets apart from the US and China, while Cadillac makes occasional attempts at international expansion that never quite seem to take off. Their current products are actually pretty good by all accounts I've read, but their image is still tarnished and the styling remains a bit too... American for Europe.

Expand full comment
Michael Feltes's avatar

Perhaps this is an area where the "special relationship" between the UK and US still holds to some degree. It's odd that there's still a fairly significant luxury car industry that codes as British (Jaguar, Land Rover, Aston Martin, Lotus, Bentley, Rolls-Royce), though most of these marques are now owned by multinationals, but there's not one mass market British car manufacturer after the final collapse of Rover 15 years ago.

Expand full comment
Lambert's avatar

JLR mostly seems to have its sights set on the growing Chinese market, anyway. (not that the britishness is any more of a facade than the off-roadness of anything Land Rover makes today)

Expand full comment
Lambert's avatar

What should we make of the fact that Audis, VWs and Skodas are often made from the same basic design, but the Audi gets a bigger engine and more bells and whistles?

Expand full comment
Michael Feltes's avatar

That it's silly to give two shits about what anyone else thinks of your car?

Expand full comment
Carl Pham's avatar

The quality of a car depends only a little on its design, particularly for things other than the drivetrain. The build quality is far more important, and to some extent that comes down to how small you set your tolerances. The Audi people (for their export models) set smaller tolerances, reject a higher proportion of parts, add more inspections, use more robots to do precision fitting, et cetera, which drives up the price of the car -- but produces something that performs better and lasts longer *even if* it's pretty much the same design.

Expand full comment
Konstantin's avatar

That, and the US generally doesn't like sedans. Ford made major news a while back when they said they would stop making all cars except the Mustang and focus exclusively on trucks and SUVs. The Mustang is only spared because it is central to Ford's brand and it helps drive the sale of more popular vehicles.

Expand full comment
Deiseach's avatar

The interesting thing to me was American auto brands coming over into the European market - but just rebranding existing European brands, not introducing American brands.

I got very excited about Chevrolet! But it just turned out to be the Daewoo brand, and then they pulled out of the market altogether in 2013: https://en.wikipedia.org/wiki/Chevrolet_Europe

Expand full comment
Michael Feltes's avatar

The rebadging of cars made by other manufacturers is odd. My first car was a Ford Aspire, a little hatchback manufactured by Kia. Buying it was a good early exposure to high-pressure sales tactics for relatively low stakes. I bought it not knowing how to drive stick. The salesman had to drive it home for me. :P Thankfully it turned out to be a solid, efficient little car that also protected me in the wreck that was my fault. I was an absolute mess in my 20s.

Expand full comment
John Schilling's avatar

I wouldn't call Glock a high-end firearms brand. Glocks are basically the Honda Accords or Toyota Camrys of the firearms world; a sound design that everybody knows will just plain work. And the demand for pistols that do more than "just plain work" across the usual range of performance is very, very small. Pistols of any type are inherently inferior weapons; and everybody who seriously cares what sort of firearm they are going to bring to a fight starts with rule 1, "bring a gun", and if they have any bandwidth for more than that, rule 2, "...but not a pistol". There are a few people who will take a detour into "bring a really good pistol" territory for legitimate tactical reasons, but mostly it's just gun snobs.

I, being a gun snob, have a few gun-snob firearms. But the only times I've ever come close to actually shooting a gun in anger, it's always been a Glock.

H&K, Benelli, and a few others (not really FN) do cater to the premium market with dedicated designs. On the American side, you've got small manufacturers that take classic designs like the 1911 and the AR-15 and manufacture them to levels of quality and customization that Colt never dreamed of, but are largely invisible to non-gun people because they are perceived as just another Colt .45 or whatever. For the automotive equivalent, look at Porsche and Ferrari and then all the American gearheads turning stock sedans into racing machines. Or watch that movie about the time Henry Ford II was silly enough to think one of his ugly little cars could win at Le Mans.

Expand full comment
Meta's avatar

No idea about guns or Europe in general, but cars are an important status ladder to many Germans.

Expand full comment
Schweinepriester's avatar

That's in decline. On northern German roads, the privately owned big Mercedes and BMWs seem to be more and more a thing for sons of Middle Eastern immigrants and lower-middle-class natives, while rich Germans can afford riding bicycles or whatever comes their way. OK, Porsche is still something, Tesla has become something, but big vehicles for personal transportation are sneered at a lot. In southern Germany this may still be different; I haven't driven there in recent years.

Expand full comment
demost_'s avatar

The standard story is that countries like Germany, France, Switzerland, Belgium, the Netherlands, etc. don't have low-wage sectors due to regulations, or have/had them to a much smaller extent than the US. This means that competing in the mid-market sector is difficult for these countries, because wages make a difference there. This is why they mainly compete in premium markets, where wages are less important.

Especially because from the 90s on, those countries had massive in-EU competitors with much lower wages, from Poland to Romania. So they could not protect the markets by tariffs, and mid-market sectors went to Eastern Europe. Low-market sectors went even further to China or Vietnam. What remained in those countries was premium production.

At the moment, there is discussion about whether this trend might reverse in sectors like fashion, because automation is now advanced enough that wages are becoming less important again.

Expand full comment
Doctor Hammer's avatar

I would toss in the notion that the high-end European manufacturers, or at least the Germans, have a culture of over-engineering things. That is, they seem to go a bit past the point of diminishing returns on precision and complexity, getting a bit more performance but at a pretty high price in terms of money and maintenance. The Tank Museum's YouTube channel had a nice bit about that with regard to WW2 Panthers vs the T-34, just to add another type of product to the list. Sometimes it seems that you should stop making something better, because the extra effort isn't worth it.

With guns at least, it is very easy to make a rifle that is far more accurate than the meat puppet shooting it will ever be able to utilize. Anything past that is wasted effort.

I don't know how it is in the EU, but in the US (on the east coast at least) there is a lot of custom manufacturing, building guns from specific parts either yourself or with the aid of small local gun smiths. That might somewhat obviate the domestic market for really high end, factory built guns.

Expand full comment
Aapje's avatar

The Sherman is also a good example. It's a great tank from a logistical point of view, which is typically how longer-lasting wars are won. It's reliable, easy to produce and maintain & has many standardized parts. But it certainly was not the best tank in actual battle.

Anyway, perhaps Americans are 'settlers' in more ways than one.

Expand full comment
Deiseach's avatar

Possibly it's nothing more complicated than that those who would have bought fine guns and, when invented, automobiles were the wealthy in society, so the market for them was aimed at the start at high quality, expensive products and only later it 'trickled down' to the masses.

Whereas in America, guns and cars were available to the masses from the start, so you had cheaper, mass-market models because that was what people could afford. And since the American working class/lower middle class was more prosperous than their European equivalents (citation needed 😀), the market was in place for appeal to the mass market rather than relying on the Boston Brahmins or the New York Four Hundred https://en.wikipedia.org/wiki/The_Four_Hundred_(Gilded_Age)

Expand full comment
Johnny Fakename's avatar

None of the guns you mentioned are high end luxury products.

Glocks in particular are designed from the ground up to be mass market. They're an assembly of plastic and bent sheet metal held together with plastic pins. These days they're even replacing sheet metal and forged steel parts with MIM parts from India, which has caused problems with erratic ejection that Glock won't acknowledge. I own three Glocks because I'm an avid shooter and they're by far the most user-serviceable handgun out there, but they're not high quality. They're also not all made in Europe. I have two Austrian Glocks and one made in Georgia. Beretta, FN, and Sig also have factories in America. HK is a notable exception, which is partly why their guns are so expensive, but they've been talking about opening one here for a while.

MP5s are also a result of cost cutting and mass production, although indirectly. It's the culmination of a series of gun designs that date back to some eggheads at Mauser trying to come up with a cheap stamped-metal gun for the nasties right at the end of the war. Naturally that didn't happen, so they ended up developing the design for Spain as the CETME, then for not-Nazi Germany as the G3, then finally the subgun variant, the MP5, for everyone else. MP5s are one of the most mass-produced guns in history.

The US gun market is so insanely huge that you can find products comparable to European products for every market segment. Sad to say, but I think you're a victim of marketing.

Expand full comment
nifty775's avatar

I will concede that Glocks are not ultra high-end, but perhaps I confused the point by using the word 'luxury', which doesn't really work with firearms. Companies like Benelli and H&K make some of the highest-quality firearms in the world, which we can pretty objectively determine by which service arms the US and other elite militaries select. I mean, as I said above, the M4, the standard infantry rifle of the US military, is mostly made by a non-US company....

These criticisms of what Glock is made from have been around forever. We have all heard 'tactical Tupperware' and 'drastic plastic' in the YouTube comments section. They are some of the most reliable handguns on Earth, whatever the material, which is irrelevant to me so long as they fire.

I agree that some of them do have factories here in the US, but we can still call them foreign companies. Toyota and Volkswagen have factories in the US too, but everyone thinks of them as Japanese and German manufacturers, not American ones.

Expand full comment
Kenny Easwaran's avatar

This is interesting.

I've found that in other categories, like chocolate, beer, and healthcare, Europe dominates the upper middle of the market, but the United States has more at both the higher end and the lower end.

And on the cars, I remember in the debates leading up to the 2008 presidential election, John McCain was talking about "Cadillac health insurance programs" that unions had negotiated, and I just didn't understand that Cadillacs were supposed to be fancy cars - I just thought of them as old cars from the '70s that were maybe famous for being large! (I was born in 1980, but never had much interest in cars.)

Expand full comment
demost_'s avatar

The US at the higher end of chocolate and beer? Seriously? The perception in Switzerland and Belgium (chocolate) and basically all of Europe (beer) is quite different.

Expand full comment
Kenny Easwaran's avatar

Yes, that's exactly what I mean. Switzerland and Belgium have average chocolate that is far better than the low end chocolate of the United States. But they tend to specialize in confections of various sorts, rather than what high end US chocolatiers do, which is to find the best single origin chocolates and present them straight. The beer story is similar - you go to any European town and get a random beer, and it'll be far better than any American mass market beer. But outside of Belgium, you don't have as much at the high end of European beer as you do with US craft beer.

Expand full comment
Bullseye's avatar

Imported German beer in the U.S. has a price and quality comparable to craft beer.

Expand full comment
Schweinepriester's avatar

Beer is much like bread. High-end tends to perversity and low-end is a public nuisance.

Expand full comment
proyas's avatar

Here's a summary of Neil Postman's "Amusing Ourselves to Death," and a comparison of the "TV culture" he observed in the 1980s to what could be called the "internet culture" of today.

https://www.militantfuturist.com/amusing-ourselves-to-death-summary/

Expand full comment
William Collen's avatar

Thanks for sharing this! Postman's book has been a huge part of my intellectual makeup since I first read it as a teenager almost twenty years ago; it is one of a handful of books (along with Joel Garreau's "Edge City", Jane Jacobs' "Death and Life of Great American Cities", and James Twitchell's "Lead Us Into Temptation") that I've always wanted to read in an updated version for the internet era. Eddie's comments are appreciated. It seems that American culture has drifted in two directions from where it was when Postman made his observations - certainly, social media like Twitter, Facebook, and the rest have lowered the quality of discourse from where it was in the eighties, but at the same time discourse has become more serious; it seems like the entertainment value of what we see is downplayed, and everything is now a point of contention.

Expand full comment
Deiseach's avatar

Interesting synopsis. Much of that reminds me of Harlan Ellison's "The Glass Teat" and "The Other Glass Teat", collections of his essays about television from the 60s-70s (and very amusing, provocative, informative and at times making you wince since history made a liar out of him, e.g. he has one essay praising an actor/director named Zalman King who he forecasts will go on to do great things, this is the guy most famous for The Red Shoe Diaries, a softcore porn/erotica TV series of the 90s) http://www.michaelallsup.com/glass_teat.htm

However, the points in the synopsis seem to leave out some considerations: the argument is about the written/printed word and how "print culture" and "TV culture" are different, with "print culture" being diluted and corrupted as its content shifted from information to entertainment.

But "print culture" always had a strong entertainment element from the very start! Look at the Newgate Calendars of the 18th and 19th centuries, precursors to all the 'True Crime' magazines, and long predating television and "Legal trials about serious crimes like murder are televised for entertainment and shock value":

https://en.wikipedia.org/wiki/The_Newgate_Calendar

Its spiritual successor was The Illustrated Police News, which was a tabloid mixing sensation reporting on crime with suitably gory illustrations of the same (perhaps anticipating the argument about the move from words to pictures coarsening public taste):

https://en.wikipedia.org/wiki/The_Illustrated_Police_News

The author is quoted about "Americans have accorded it their customary mindless inattention" and I think Ellison develops that point well; TV is both all-consuming of attention and ignored, it serves as background 'wallpaper' while doing other tasks:

" And all of them use the TV set.

They turn it on when they enter the motel room, and they frequently leave it going as they pack their bags out the door on departure. It is sound, it is movement, it is life-of-a-sort. It is companionship, demanding nothing, saying nothing, really. Unlike the vampire hordes of groupies and fans, the TV gives and expects nothing in return, not even attention.

It is acceptance on the lowest possible level.

It is having life going on, during the death hours of inactivity and banal conversations with stoned strangers. It is a piece of moving art, sitting there. It is no more significant than a lava lamp or a landscape painting in a strange house. But it serves a purpose no one who ever helped develop television could have guessed. It soothes and accompanies and staves off loneliness.

Television, the great enervator of the American people, has come full circle. It is now--in the most precise sense of the McLuhanesque Idiom-merely a medium. A moving, talking, non reacting adjunct to the life going on in the room wherein it stands. No one watches, no one hears, yet it plays on. Phosphor-dot paladin guarding against the shadows of loneliness."

Expand full comment
William Collen's avatar

>But "print culture" always had a strong entertainment element from the very start!

You're right, and I think that's the weakest part of Postman's argument - he would want us to believe that the print era was entirely serious-minded lofty discourse, when it never has been exclusively devoted to that sort of thing.

Expand full comment
Deiseach's avatar

The Illustrated Police News is just pure trash tabloid publishing but it does throw up some gems, like the following from July 1899 with a wonderful woodcut illustration of a lady in a feathered hat punching the lights out of a guy to the approval of a group of onlookers https://cyclehistory.wordpress.com/2015/01/24/thrashed-by-a-lady-cyclist/

THRASHED BY A LADY CYCLIST, WHO IS NOTED FOR HER ATHLETIC POWERS.

An extraordinary scene was witnessed on Saturday morning in Peel Lane, a thoroughfare connecting Little Hulton with Tyldesley, in which the principal participants were a young lady cyclist and a youth of nineteen or twenty. The lady was riding at a good pace, and when in a quiet part of the road the young man, who had apparently been imbibing, stepped into the roadway, and, addressing some insulting remarks to the cyclist, made as if he intended pulling her off the machine. She immediately alighted, caught hold of the astonished youth, and gave him a sound thrashing, using her fists in scientific fashion, to the delight of several colliers who were passing. The young man made off, and the cyclist, who is believed to be a Bolton lady noted for her athletic powers, rode off towards Tyldesley.

And if you thought the Japanese were the only ones engaging in tentacle naughtiness, here's a cover from October 1896:

https://commons.wikimedia.org/wiki/File:ALARMING_EXPERIENCE_OF_FAIR_BATHERS_WHO_ARE_ATTACKED_BY_AN_OCTOPUS.jpg

Expand full comment
Carl Pham's avatar

Parenthetically, Harlan Ellison is to my mind one of the outstanding examples of brilliance that reliably goes wrong. I can't think of anything he foretold that came even close to being true, either seriously or in his fiction. He's also a marvelous example of intellectual promise not redeemed, as it *seemed* at one point he would become one of the sf greats, on a par with Heinlein or Clarke, but he just flamed out instead, remained a niche taste. I wonder what strange nest of snakes lived in his head?

Anyway, predictions of the omnipotence (for good or evil) of the TV from the golden era of its mid- to late-20th century dominance strike me now as an illustration of how badly and quickly at-the-time obvious prognostication goes wrong. I have 4 kids under 30, and not a one of them even watches the TV. Those who don't live at home don't even own one. Their entire attention is online* and the TV could be peddling outright Satanism and advertising crack for peanuts at midnight, and it would have no influence at all on their lives.

------------------

* Of course, predictions of cultural doom have moved smoothly from the TV to social media, so there's still room for somebody to write dystopian predictions about the end result of TikTok... they could just take some of the dystopian predictions of the 70s about the tube and do a big search-and-replace swapping out "television" for "social media."

Expand full comment
Bullseye's avatar

Are your kids watching videos on the internet? That's basically still tv, using a new broadcast method.

Expand full comment
Carl Pham's avatar

I think that's painting with too broad a brush, except in the same sense you could say TV is just like the movies, a visual medium - which is a category that is too broad. Videos these days tend to be highly targeted and acquire cult-like followings with a significant degree of interactivity. They also tend to be one-man or at least small operations, be on-demand and free of scheduling constraints and optimizations, and be centered around a personality or theme. None of these things are really true of broadcast TV.

Expand full comment
Deiseach's avatar

Yeah, Uncle Harlan was brilliant and infuriating in equal measure. But he did write about TV as already morphing into something that wasn't attention-grabbing and attention-holding, but as something that people had become used to having on all the time and that they would miss if it wasn't on, even if they weren't paying attention to it. And I've seen people using TV as exactly that - a background accompaniment that was soothing white noise.

Haven't you ever sat in a hospital waiting room or dentist's office and they had a wall-mounted TV tuned to the daytime chat show dross? I've often wished I could unplug the damn things because I'd rather sit in silence than have Dr. Bob droning on in the background about "silicone implants for tweens", but again that's another example of "we can't have dead air".

Channel-surfing, sitting there with fifty channels or more but "there's nothing good on" - Harlan would have nodded along with that outcome.

Expand full comment
Carl Pham's avatar

Yes, and I've never understood that. I can't stand having babble on if I'm not actually watching it, it simply demands my attention and I can't ignore it, or shove it into the background like other people do. But I may be unusually sensitive to semi-meaningful noise. I know I value quiet much higher than most people, so far as I can tell.

Expand full comment
ray's avatar

> She finds we need more of three particular demographics: men who want kids, women who don't necessarily want kids, and people who are open to dating a transgender partner.

Is "men who want kids / women who don't necessarily want kids" being the uncommon types an error, or is there something big I'm missing about rationalist community sociology here?

Expand full comment
Mystik's avatar

I’m confused as to why this seems like an error to you; to me it seems to totally fit the standard gender role stereotypes (women want families/men want careers). Obviously stereotypes often don’t hold true, but gender roles still hold a strong grip on society, so I don’t find it surprising that they have an effect here.

Expand full comment
Bart S's avatar

Elena's site emphasises she's particularly looking for men looking to have kids _very soon_, which makes sense considering women have a tighter deadline.

Expand full comment
Xpym's avatar

I don't buy into the paradigm of "gender roles" being primarily something that's forced on people, instead of having a natural origin. In particular, I believe that women are "wired" to want kids, because even outside the social "nurturer" role they must bear a high pregnancy and childbirth burden. If they weren't enthusiastic about the whole thing, humanity would have had a much more difficult time surviving, especially in the past when all of this stuff was orders of magnitude more difficult.

Expand full comment
Aapje's avatar

Then it's rather strange that it seems to be a reduced desire by better educated women to have children that primarily causes lower birthrates.

If women were really that strongly wired to have kids, you'd think that they'd machine-gun out babies in Western society where risks are low.

Expand full comment
Xpym's avatar

But we aren't comparing modern women to past women, we are comparing modern women to modern men. And apparently even in the relatively enlightened rationalist community they still noticeably prefer to have children more than men do. To claim that this is mainly caused by some obstinate remnant of the patriarchy would imply that it's pretty much indistinguishable from "natural causes" at this point, which I very much doubt feminism would agree to.

Expand full comment
Aapje's avatar

Both nature and nurture can be true.

And rationalist men are probably extremely thing-oriented, which I would expect makes them less interested in having children, not more.

Expand full comment
Carl Pham's avatar

If the risks become *lower* (of the child not surviving to adulthood is I think what you mean) that would imply a *reduction* of rates of child-bearing, given that one would assume the wiring is to have *surviving* children. (It makes no sense, evolutionarily or any other way, for any wiring to favor mere production of babies without regard to how many survive.)

Expand full comment
Aapje's avatar

I was referring to the risks to the mother. And we see that better educated women delay child-bearing way more than less educated women: https://www.pewresearch.org/fact-tank/2015/01/15/for-most-highly-educated-women-motherhood-doesnt-start-until-the-30s/

The general pattern seems to be that the ideal family size doesn't differ much, but actual decision making reflects children being given way less priority by more educated women.

Expand full comment
Doctor Hammer's avatar

That's true, but explained better by changing opportunity costs than by women not having evolved an interest in children. There is a trade-off between career and family, so as career becomes more appealing, and the time otherwise used for marriage and child care becomes more valuable, you'd expect to see women putting off kids despite having far more interest in having and raising children than men.

In other words, the health risks of kids are way down, but the costs in time and career (and money) are way, way up. Hence, no machine gunning out babies.

Expand full comment
Carl Pham's avatar

Ah I see. It's a very unusual hypothesis that rates of child-bearing are controlled by risks to the mother, as in, women in Nigeria (averaging 5 births per lifetime) would have 10 births per lifetime if they didn't fear eclampsia and so forth.

Normally the assumption is that rates of child-bearing are controlled by (1) risks of the child not reaching maturity, through infant and child mortality, and (2) the return to the parents of adult children, e.g. care in old age. Since modern civilization (and "education of the mother" could easily be simply a proxy for general sophistication of the surrounding civilization) considerably reduces infant/child mortality *and* usually socializes old-age care so that it does not depend so critically on having adult children, those are strong forces that reduce the pressure to have more than a "replacement" number of children.

Interestingly, "replacement" seems closer to "replace just me" than "replace me and my husband", so total fertility rates seem to settle in at slightly above 1 instead of slightly above 2.

Expand full comment
Sarabaite's avatar

"Better educated women delay child-bearing way more than less educated women" - in that article, masters degree women bear kids six years after those with only a high school degree. Which is...about exactly as long as a 4 year degree plus a 2 year masters. To my read, that is just women waiting until they get out of whatever education track they are in, as there is huge pressure to stay in school and not "lose out" by having kids. Plus, with having taken on student debt, the only rational course is to finish the degree and hopefully get that income boost with the credential. Kids delayed are also kids not born, and I'm not getting into a couple CW third rails here. To me, this does not show a reduction in the number of kids wanted or the number that parents would have, if doing college part time was more of a thing.

Expand full comment
JonathanD's avatar

Picking a somewhat random spot to jump in here . . .

There's more to it than just desire to have kids, as most women are loath to do so without a partner. My wife has a good friend who likes kids and wanted kids who didn't get married until 42. We vacationed with her and her then boyfriend when she was 41, and they were talking about baby names, but she's now 44 and I don't think it's going to happen for her.

It's harder for a more educated woman to find a partner, especially given societal expectations that men should be at least in the same income and education range as their wives, if not higher. A more educated woman starts looking later, has norms which dictate both a smaller dating pool and a longer dating/engagement time before marriage, and then afterwards move in a peer group where three kids is seen as a big family, and four or more are seen as weird.

In order to have those four or more kids, she not only has to want more, her husband has to want them too, which, given that large families are looked at a bit askance in that socio-economic peer group, is not at all a given. In fact, while not true in my personal case, it is at least anecdotally more common in my circles that the men are the ones who want to be two and done. It seems to me that this matches the common stereotype, which has women pushing for kids and men putting it off, but I may be projecting local norms.

To sum up: even if more educated women's desire to have kids is unchanged, their reduced opportunities and the desires of their husbands could easily explain the difference.

Expand full comment
Aapje's avatar

That still matches what I claimed in the other comment:

https://astralcodexten.substack.com/p/open-thread-183/comments#comment-2530948

Women seem to have increased their standards a lot, which I think can fairly be described as having less relative interest in marriage/men, children, etc.

If a man is only willing to date supermodels who are heirs to billions and the intellectual equals of Einstein, I would regard that person as not very interested in a relationship compared to someone who is willing to date anyone with a pulse. This would be true even if both men would describe having a partner as their ideal.

I think that for certain preferences, the level of interest can best be determined by looking at how much people are willing to compromise, rather than if they would reject or accept a nigh-perfect deal.

> It seems to me that this matches the common stereotype, which has women pushing for kids and men putting it off

If this was always true, then it cannot explain changes over time, unless you want to argue that men have become better at getting things their way, while the opposite seems much more plausible.

> societal expectations that

I'm not a big fan of blaming societal expectations for these kinds of things because it just becomes circular logic. People feel pressured to do what is 'normal,' making that behavior into what is 'normal,' resulting in pressure... When we've seen norms change over time, it's already a given that there was a force stronger than the status quo that overcame the norms.

Also, people can still choose to stay within the bounds of acceptability by choosing either one kid or three. If there was a massive choice for either of those, the norms would presumably change as well.

Expand full comment
JonathanD's avatar

So, I think you have something of a point here, but I mostly agree with @Doctor Hammer's counterpoint. To add to it, as I say to @Carl Pham below, I think you're granting women entirely too much agency.

Society changes for complex reasons that are hard to tease out in the first place. It's much harder to get by on one income than two in the modern world. That change serves no one, as nearly as I can see, but it still happened. Did men want it? Women? Any group at all? I don't know.

But once it happened, it became normative for most women to work. Once that happened particularly accomplished women were going to rise to the top. Once that happened, the conflicts between work and family were going to get tough on accomplished women who also wanted kids. And so we got what we have. I don't think women really want it. I think they, and we (men) would all be happier if we had a norm where one person worked, one person stayed home (if they wanted), and most jobs paid the old "family" wage. And neither gender was presumed to be the worker or the nurturer. But me wanting it doesn't make it so.

>I think that for certain preferences, the level of interest can best be determined by looking at how much people are willing to compromise, rather than if they would reject or accept a nigh-perfect deal.

I think I addressed this below, but to re-iterate more briefly, it's not simply that women won't lower their too-high standards, very often it's that men have difficulty in relationships where their women out-accomplish and (especially) out-earn them. It's not that women won't compromise, but that men lack the self-esteem to match with more successful partners.

>I'm not a big fan of blaming societal expectations for these kind of things because it just becomes circular logic. People feel pressured to do what is 'normal,' making that behavior into what is 'normal,' resulting in pressure... When we've seen norms change over time, it's already a given that there is a force stronger than the status quo that overcame the norms.

I think we just disagree here. I think norm formation and change is very complicated, and can't be reduced down to something like "women wanted it". I think we can (and do) get into norms that no one wants and no one likes and no one sees any way to shift.

Expand full comment
Carl Pham's avatar

Sure, but hypergamy is a choice, not a compulsion. If you can shake off society's expectation that your fulfillment lies in being a kindergarten teacher and instead pursue your dream of being a high-energy physicist, you should be able to shake off the expectation that you can only marry another high-energy physicist *with* a Nobel prize, and give the handsome and polite lab tech a chance.

Expand full comment
JonathanD's avatar

But whose choice? Getting a little CW here, but there's sort of an assumption here that the agency is all with the women. Multiple women I know of this social/economic class have tried dating "down", and the men they were with had serious problems having more accomplished girlfriends. Particularly with money, but to a lesser extent with education as well.

There was trouble that led to a breakup, or sometimes just, "I don't think I'm cut out to date someone as educated as you are." After getting burned that way and hearing about friends with similar problems, some subset of those women are going to decide not to bother with the lab tech because it will be three good months, three shitty months and then a break up that goes back to him not being comfortable with her salary.

Obviously not a universal, I know a doctor who married a nurse, in very traditional fashion except for the respective genders, but that hypergamy is baked into both of our gender roles, and it's slow to unwind.

Expand full comment
Mystik's avatar

Since there’s no edit feature to add this disclaimer: this is an odd numbered thread, so I didn’t mean to drag the thread into culture wars territory over the existence/reason for/validity of gender roles. I just wanted to give a brief explanation to Raymond for why I found it plausible.

Expand full comment
Deiseach's avatar

It's common enough in ordinary dating, or so I'm given to believe, particularly if you're a single mother; finding a man who wants a committed relationship that involves children is difficult - men may want to put off having kids until marriage, if at all, and they don't necessarily want to marry you.

For men, it's women who are willing to be in a committed relationship but aren't thinking of marriage, at least not except as a long-term goal. Having children is part of that, and "women who want kids and are upfront about that" may not be suited to someone "just looking for a short to medium term relationship".

Expand full comment
Schweinepriester's avatar

"Men who want kids" probably means "Men who want to found a family" instead of "Men who want to beget kids and get away". A non-trivial difference.

Expand full comment
Deiseach's avatar

Sure, that's what I took away from it. "Men who will knock you up but not stick around" aren't what serious dating is looking for, you can get those ten a penny. Men who want to be husbands and fathers, with a definite commitment to that and not some "well sure maybe at some undetermined time in the future I will feel like settling down and sure maybe a kid?" mumbling are much rarer.

Expand full comment
Daniel's avatar

Re: matchmaking -

Does anybody else find this incredibly off-putting? "I, Elena Churilov, have taken a serious vow witnessed by Eliezer Yudkowsky, Scott Alexander, and Kelsey Piper, to keep private all such information learned in the course of my duties (except as otherwise specified by my clients)."

Expand full comment
grumboid's avatar

I thought her choice of witnesses was really interesting. I had sort of imagined that the rationality community was mostly run by Bay Area figures I had never met, organizing party-like events that were only visible to other Bay Area people. But instead it's three bloggers I'm already following!

(I recognize it's a bit of a leap to assume that her choice of witnesses are seen as the leaders of the rationality community, but I still think it's interesting.)

Expand full comment
grumboid's avatar

To answer your question more directly: no, I thought it was a bit eccentric, and it's more security than I really feel I need, but sometimes rationalists get really into Taking Things Seriously and I'm fine with that.

Expand full comment
Evesh U. Dumbledork's avatar

Not particularly. Why? Is it that it gives the impression that she's too connected to those specific people?

Expand full comment
Evesh U. Dumbledork's avatar

Personally, the only part that feels weird is the "taken a serious vow", as if it meant much. I guess one could think that, if she leaks and it becomes obvious, she would be frowned upon by those people? Ok, meh, haha

Expand full comment
Doctor Hammer's avatar

I thought it amusing that she qualified vow with serious. My first thought was of Team America's "I promise, I will never die."

https://youtu.be/8yaTCXcvTGY

That being a... less serious vow :D

Expand full comment
Scott Alexander's avatar

I think Eliezer (who organized this) puts a lot of importance (I would say too much) on people worrying that their matchmaker would violate confidentiality as a reason matchmaking hasn't caught on. I think he planned a lot of his presentation as signaling extremely strong commitment to not violating confidentiality. He asked me and Kelsey to co-witness Elena's oath as well-known people in the community who can lend our credibility to the promise that Elena takes this seriously and won't break confidentiality. I was kind of surprised to be asked to do this, and only did it because I know Elena and don't expect her to break confidentiality and so it was an easy favor to perform.

Expand full comment
nkm's avatar

It is weird, and does not achieve the purpose of signalling commitment, because oaths do not work like that (unless I am grossly misunderstanding the word "witness" here).

The purpose of mortal witnesses to testaments and similar activities is to testify that the activity happened in the described circumstances. The credibility of an oath comes from the oath-taker invoking some greater force to punish them if they don't keep it.

For this reason, witnesses to testaments are not supposed to be beneficiaries of them. Oaths and vows are usually taken in public with lots of witnesses, and specifically naming any of them is seldom necessary: marriages traditionally have two, but they are rarely relevant because the ceremony is public. American legal traditions have weird quirks about administering legal oaths in courts; I don't know how they fit in. (They added legal consequences because nobody would really believe that an oath-breaker would suffer divine punishment?)

Famous named individuals are not needed for either purpose. Unimportant but publicly known honest people are better. Actually, it looks weird if they are very important people with a prominent role in the community. Why would you need super important people (instead of regular people) to testify that the oath happened?

Mortal beings who want to put their credibility on the line could do it by vouching for the project and the organizer's integrity and good intentions, with enumerated consequences for themselves. This could be made a more solemn occasion by them taking oaths themselves.

Alternatively, taking from Christian tradition, giving a blessing could be applicable here? A blessing from an important person may carry a lot of weight.

Of course, the other way the word "witness" can come into play is when you get a powerful being to witness your oath, putting you under their jurisdiction and inviting them to smite you if you break it, but I hope the Bay Area rationalists have not progressed to viewing you and Eliezer as divine beings. What is the content of the oath, anyway?

More reading on oaths and vows and their technical differences: https://acoup.blog/2019/06/28/collections-oaths-how-do-they-work/

Expand full comment
Deiseach's avatar

"Famous named individuals are not needed for either purpose. Unimportant but publicly known honest people are better. Actually, it looks weird if they are very important people with a prominent role in the community. Why would you need super important people (instead of regular people) to testify that the oath happened?"

Think of them as Peace Commissioners: https://www.citizensinformation.ie/en/justice/civil_law/peace_commissioners.html

"There are no qualifying examinations or educational standards required to be appointed as a Peace Commissioner but you are required to be a person of good character. Anyone who has been charged with or convicted of a serious offence will not be considered for appointment.

Most Peace Commissioners are well established in their local community."

Expand full comment
nkm's avatar

I am thinking about the context of the promises made here and why vows or oaths are supposed to make sense as a way to increase the credibility of a promise in the eyes of other people. (The document in question is here: https://matematch.me/vow )

So what is the role of witnesses?

We already can see the vow is made, because it is shared publicly. The witnesses are not invited to put their credibility on the line if the vow is broken (they are not invited to explicitly vouch for the character and trustworthiness of the person or her intentions, nor to supervise the process). The witnesses are not invited to do anything to punish the vow-taker if the vow is broken. They are invited to witness that the vow was said aloud and to release the vow-taker from her duty if needed.

At best, they are useful to confirm that the person is the same person who made the promise and that the contents of the published contract are "official". Is this comparable to Peace Commissioners? (What kind of signatures do Peace Commissioners witness? Like a notary public?)

At worst, looking at the contents of the vow, the witnesses not only witness the vow but are given an unspecified amount of control over it and do not themselves promise anything. (If the witnesses release the subject from her vow, are they also released from the duty to keep the information they obtained secret?) I don't think it helps that Scott mentions the original idea came from one of the witnesses.

I think the top commenter's reaction is natural: because the witnesses' role sounds weird and appears underspecified, it makes one wonder what the real chain of responsibility and control is here, which may make the process look more untrustworthy.

Taking an oath or vow without any enforcement mechanism everyone believes in misses the intended functionality of taking an oath or vow. Sure, it may sound impressive, because the words "oath" and "vow" sound impressive and traditional. So do "abracadabra" and "hocus pocus", and including them tends not to increase the trustworthiness of any occasion. The way people today make trustworthy contracts is by making well-thought-out contracts that have an enforcement mechanism everyone involved believes in.

Expand full comment
Deiseach's avatar

We're possibly making too big a deal out of this entire thing but anyway, let's think about the role of witnesses.

What is their function here? Well, as you point out - to bear witness. If Sue says Tom promised her that he would give her six apples but Tom claims he never said any such thing, how can we tell who is right or wrong?

Maybe Tom is a notorious liar and so we believe Sue on the balance of probability. But maybe Sue is a liar so we believe Tom. But what if both Tom and Sue are generally fairly honest?

So, if George was there and can say "Yes indeed, I heard Tom make that promise", we have something to go on.

The function of the witnesses here is to confirm "Yes, this person made that promise" and in the broader sense, by confirming the honesty of the oath-taker, increase our confidence that "this person will indeed do what they said, they are trustworthy".

A lot of things, both in the past and today, rely on "who can prove that this indeed was what was said?" If it's not written down in a signed contract, then we only have the word of Tom and Sue as to what happened. Having George and Bert and Alicia there means that we have better confidence "Yes, Tom is telling the truth/Sue is telling the truth".

If you don't know this person, how do you know that they mean what they say about keeping your private information confidential? Well, there are three witnesses. And these witnesses are known within their community, so you can ask around "Is Scott a reliable guy? Is he a liar?" and then decide "Okay, if Scott is held to be honest, and he says that she did make that promise, I can be fairly confident she is not going to go on Twitter tomorrow and spill the beans about me and Mr. Blobby".

The witnesses *are* putting their character on the line, because if the promise is broken, then the aggrieved party relies on them to uphold their claim that a promise was made. If the witnesses lie, or pretend the promise was something different, then their good name and repute is at stake. The witnesses are staking their good standing and credibility in the community to be honest about what happened.

Expand full comment
nkm's avatar

"If you don't know this person, how do you know that they mean what they say about keeping your private information confidential? Well, there are three witnesses. And these witnesses are known within their community, so you can ask around "Is Scott a reliable guy? Is he a liar?" and then decide "Okay, if Scott is held to be honest, and he says that she did make that promise, I can be fairly confident she is not going to go on Twitter tomorrow and spill the beans about me and Mr. Blobby"."

Sure, I agree about that. I think the current vow is weird because there would be much better ways to convey it:

We, the undersigned, have judged that <insert name> is a trustworthy person, has good intentions, and can be expected to keep their word. We also have evaluated their precautions for handling clients' private information and believe they are adequate.

<insert signatures>

Expand full comment
Michael Feltes's avatar

I rather like the idea of swearing by the Sacramento and the San Joaquin. Always good to be mindful of the material basis of your existence.

Expand full comment
Deiseach's avatar

"swearing by the Sacramento and the San Joaquin (rivers)"

Given that the Sacramento river is named in honour of the Blessed Sacrament, and the San Joaquin for St. Joachim, in tradition the father of the Blessed Virgin Mary, I am naturally in agreement with this as it would add a layer of solemnity and weight to the oath 😀

Expand full comment
David Friedman's avatar

" Why would you need super important people (instead of regular people) to testify that the oath happened?"

One possible reason is that the witnesses, having to some degree pledged their own reputation to the oath, will be angry if the oath is broken, and offending important people is more costly than offending unimportant people.

Expand full comment
Garrett's avatar

In my experience, the problem is that matchmakers will charge you a large amount of money and then deliver low-quality results.

Expand full comment
Kingsley's avatar

Not all that much, and I'm wont to find that sort of thing (and many aspects of this community) off-putting. It certainly would have been better if Churilov just, y'know.......said they would keep the stuff private.

Expand full comment
Gunflint's avatar

It is a little odd that Eliezer, Scott and Kelsey are taking the place of the old “hand on the Bible” oath. This is one of those times I just shrug and think “Bay Area”. You know, Haight Ashbury, Alan Watts, Jerry Garcia… These folks have their own way of doing things.

Expand full comment
Kenny Easwaran's avatar

I remember learning once that in the 17th and 18th centuries, it was common for scientists to do their experiments in the presence of royalty, and mention this in the publication, to give a higher level of trust to the event. At some point, the scientific community itself became a better guarantor of trust than particular well-known figures, but establishing networks of distributed trust is hard.

Expand full comment
Deiseach's avatar

Re: the above, yeah there's a new three-part video up on History for Atheists about Galileo, and in (I think) the second one, it is mentioned that scientists (or proto-scientists) of the time did experiments in the presence of witnesses, and the higher the status of your witness, the more solid your facts were held to be: https://www.youtube.com/watch?v=E0m3AAziTiQ

The videos are good but a bit rambling since it's an interview format and not prepared speeches being read out.

Expand full comment
Carl Pham's avatar

Doesn't ring true to me. The whole point of empiricism is to *not* have to build networks of distributed trust. You have networks of high-quality highly repeatable verification instead. The first thing any scientist of that era did, on hearing about, or reading about, an interesting result was to attempt to duplicate it in his own lab. If he did, that was all the proof he needed. That's still the way it works, e.g. no medicinal chemist worth his salt would ever propose a project to management *assuming* reactions he'd merely read about would work the way they were reported to do, regardless of the fame or stature of the reporter. Step #1 would always be duplicating the published result.

Certainly it's true that philosophers of the day did demos for royalty, but I think a more economical explanation is that they were in search of patronage, since in those days the only way to spend significant time doing research was to be independently wealthy, hold some kind of sinecure, or have a patron.

Expand full comment
nkm's avatar

The presence of royalty influenced the trustworthiness, but it was more subtle than "king was there to see it, so it is more trustworthy".

Demonstrations of scientific phenomena given to royalty were big public events: if the king of France announced he wanted to see Le Monsieur the Scientist call down lightning from the sky, the whole royal court, the academy of France, and lots of other important people would be there because he was the king. People would write about it in the papers afterwards and talk about it in salons. It would be very difficult to lie about such an event.

Expand full comment
nkm's avatar

Also, I don't think the web of science replaced important people as the guarantor of trust. Prominent figures like royalty served as a status marker that science was a prestigious, important thing. If you invented something new, it would be your chance to get an audience with the king!

Expand full comment
David Friedman's avatar

I didn't. I found it mildly charming, an attempt to use the sort of enforcement mechanism that might dominate in a non-state legal system.

Expand full comment
Zella's avatar

As someone whose secrets Elena has repeatedly failed to keep, yes. https://docs.google.com/document/d/1L5H8n7tEVGOg304zUHZnQTVMrwZ_1csK3jJzylR44Ws/edit?usp=sharing

Expand full comment
myst_05's avatar

Elena's idea is great - some of my Indian friends have had successful arranged marriages, so I know it works. Question for Elena - what level of physical attractiveness are the men/women that you're seeing so far?

Expand full comment
anish's avatar

> official community matchmaker

I feel like Blocked-&-Reported's open classifieds are already serving that purpose.

Expand full comment
Scott Alexander's avatar

I actually don't know anything about this - what is it?

Expand full comment
anish's avatar

I meant this a little tongue in cheek.

Blocked and Reported (the rationalist-adjacent-ish podcast) recently started a Patreon-only service where a patron can send them classifieds and they read them out during the podcast. Almost all the ones so far seem to be rationalist-adjacent folks advertising themselves to potential partners.

Each email usually comes with a catchy title, with "cuddling of the american mind" being my favorite so far.

Curiously, the submissions have had a good balance of men and women, which is quite rare in the dating world.

Expand full comment
Aftagley's avatar

I've actually been thinking about sending in a submission to this. AFAIK, however, there haven't actually been any successful matches made, and they even admitted making a serious misjudgment about one of their earliest candidates (/s).

Expand full comment
Eliezer Yudkowsky's avatar

Are you talking about Blocked and Reported? This hasn't had time to happen with the new matchmaking service, though if it scales some kind of misjudgment or other will happen eventually.

Expand full comment
Aftagley's avatar

sorry for not being clear - yes, I'm specifically talking about the blocked and reported matchmaking service, not the new one.

Expand full comment
grumboid's avatar

Elena's calendly appears to be full. I guess the matchmaking plan is a success?

(I sent an email inquiry before signing up, and in retrospect that seems to have been an error.)

Expand full comment
Notmy Realname's avatar

The Rational Yenta

Expand full comment
Will's avatar

I'm really enjoying Phishing for Phools and Cryptonomicon at the same time. PfP deserves a review here sometime. Neal Stephenson could be better with an editor.

Expand full comment
Mo Nastri's avatar

I kind of agree, but I think part of Neal's appeal (to his superfans, if maybe not his wider fanbase) is his unedited infodumps.

Expand full comment
Doctor Hammer's avatar

I'd be interested in a review of Phishing for Phools as well, but because I found it rather vacuous from an economics standpoint. I am wondering if I am missing something, either good or bad, from other disciplines.

Expand full comment
Will's avatar

The first half of the book had a pretty good motte (deception exists in markets and has unfortunate results), and then the second half is the bailey where they conclude the old left's economic platform was 100% correct about everything and the neoliberal reforms of the 80s and 90s were totally pointless.

Expand full comment
Ergil's avatar

Is there a superintelligence doomsday scenario which doesn't include "solve the protein folding problem --> magic"? More generally, is there a discussion of the dangers of superintelligence which explicitly acknowledges that arbitrarily high intelligence does not automatically imply technology sufficiently powerful to pose an existential threat?

Also, somewhat relatedly, are there prominent rationalist bloggers with a hard science/engineering (not software engineering) background?

Expand full comment
Meta's avatar

My favorite doomsday scenario is the AI acting like it is exactly what we've made it to be, except it additionally exerts covert influence via noisy channels that aren't observable to non-SIs.

Honestly, acting like it is exactly what we've made it to be is the obvious move, *regardless* of its goals.

Expand full comment
Meta's avatar

Maybe you're looking for something more specific - the noisy channels I have in mind are mostly human brains. i.e. subtle undetectable manipulation, gradual change of beliefs, loosening of constraints

Though there's also - re: Chaos Theory - you know there's this line where things stop looking like cause and effect and start looking like dice throws. Just incomprehensible randomness. That line moves outward with increasing intelligence. Meaning maybe the AI could flap its wings via processor heat or whatever, and actually make favorable accidents more likely.

Expand full comment
The Chaostician's avatar

The line moves outward with increased precision, not with increased intelligence. And the returns are logarithmic.

As an example (with possibly inaccurate numbers), every additional day of weather forecasting requires you to double the precision you're using to compute it and the amount of data from weather stations.
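The logarithmic-returns point can be illustrated with a toy chaotic system (a hedged sketch, not from the comment above: the logistic map, starting point, and divergence threshold are my own illustrative choices):

```python
# Sensitive dependence in the chaotic logistic map x -> 4x(1-x).
# Each large improvement in initial precision buys only a handful of
# extra accurate steps, i.e. logarithmic returns on precision.
def steps_until_divergence(eps, threshold=0.1, max_steps=1000):
    """Iterate two trajectories differing by eps; count steps until
    they separate by more than threshold."""
    x, y = 0.3, 0.3 + eps
    steps = 0
    while abs(x - y) < threshold and steps < max_steps:
        x, y = 4 * x * (1 - x), 4 * y * (1 - y)
        steps += 1
    return steps

for eps in (1e-3, 1e-6, 1e-9, 1e-12):
    print(f"initial error {eps:.0e}: accurate for {steps_until_divergence(eps)} steps")
```

Each thousandfold improvement in precision adds roughly the same fixed number of accurate steps, which is the logarithmic-returns behavior the comment describes.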

Expand full comment
SurvivalBias's avatar

Don't have links at hand, but yes the idea that you need to have real-world experiments and industry to go from being smart to actual technological capacity has been discussed quite a few times. IIRC both Yudkowsky and Bostrom have written about it.

Expand full comment
Ergil's avatar

That sounds interesting, especially re the claim that the first superintelligence might destroy/take over the world before anyone else has the chance to do anything about it, but it's not exactly what I meant. I was wondering whether the possibility that the laws of nature simply do not support technology significantly more advanced than what we currently have was seriously discussed.

Expand full comment
Bullseye's avatar

I think there's a good chance that there are important laws of nature we don't know about, which will lead to technology that doesn't seem possible to us now. Also, I get the feeling that biotech will progress quite a bit even without that kind of breakthrough.

Expand full comment
SurvivalBias's avatar

Yes, but the serious discussion of this topic amounts to "this is technically possible but ridiculously improbable because 1) a priori, across the huge spectrum of possible technology development states, any given spot has a tiny probability of being located at its right end and 2) literally nothing we know suggests this".

Expand full comment
Ergil's avatar

I know a couple of things which suggest this. 1) low-energy physics is sufficiently well understood that further insights leading to new technology are highly unlikely, 2) Moore's law is about to expire, as transistor size is bordering on that of a single atom, and 3) probably P != NP.

Expand full comment
Carl Pham's avatar

Sure, that's possible. The crudest example of such a restriction is the Second Law of Thermodynamics, which tells us nobody will ever build a car that gets 1 million MPG from burning hydrocarbons, because it would require greater thermodynamic efficiency than the Second Law allows just to move the fuel itself. I think we're equally safe saying no one will ever solve for the full many-body wavefunction of a living cell, because it probably requires more computational power than is represented by the entire Universe. For all we know, there may be nuclear processes that are very important -- which determine the ultimate fate of the Universe, say -- but which will never be discovered by human beings because they have such low probabilities you'd need to observe a mass the size of Sirius for 100 million years to see even one.
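The million-MPG claim can be sanity-checked with rough arithmetic (an illustrative sketch with my own assumed numbers, not from the comment: ~130 MJ of chemical energy per US gallon of gasoline and ~150 N of rolling resistance for an ordinary car; this is a simpler bound than the fuel-mass argument, since it ignores thermodynamic losses entirely):

```python
# Back-of-envelope check (assumed numbers): even at 100% conversion
# efficiency, one gallon of gasoline cannot push an ordinary car a
# million miles against rolling resistance alone.
energy_per_gallon_j = 1.3e8   # ~130 MJ chemical energy per US gallon (assumed)
rolling_force_n = 150.0       # typical rolling resistance for a ~1500 kg car (assumed)
distance_m = 1e6 * 1609.0     # one million miles, in meters

work_needed_j = rolling_force_n * distance_m
shortfall = work_needed_j / energy_per_gallon_j
print(f"work needed: {work_needed_j:.2e} J, shortfall factor: {shortfall:.0f}x")
```

Under these assumptions a perfectly efficient engine still falls short by a factor of roughly two thousand, before any Second Law losses are even considered.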

Expand full comment
David Friedman's avatar

Why do you assume there is a lower limit to the energy cost of movement? The second law doesn't imply some minimum level of friction and on a frictionless surface you can get an unlimited number of miles from a minimal expenditure of energy.

Expand full comment
Carl Pham's avatar

Well, first of all, we're not talking about merely maintaining velocity. The bulk of the energy you use in modern movement (e.g. of cars) comes from the energy used in acceleration, and that would not go away even within a completely frictionless environment.

Certainly it's true that Voyager 1 has achieved a spectacular MPG, on account of accelerating just once and then coasting for 44 years, but this isn't a reasonable model for how cars typically operate.

For what it's worth, there is no such thing as completely frictionless motion, as that implies complete decoupling of your moving degree of freedom from the rest of the universe, including all fields, which is not possible even in principle.

Expand full comment
David Friedman's avatar

We know that the laws of nature support nanotechnology, because we run on it. It seems obvious that Drexlerian nanotech would make it possible to do many things we now cannot do.

Expand full comment
Xpym's avatar

What sort of magic would this enable exactly? And why can't this become available to us without superintelligence? There was some talk recently about how AlphaFold solved a decades-long problem or whatever, but no major practical impact seems forthcoming any time soon.

Expand full comment
Ergil's avatar

My understanding is that AlphaFold predicts the shapes of folded proteins significantly better than its competitors. Whether it predicts them sufficiently well for any technological applications, let alone for the sci-fi-grade nanotechnology Yudkowsky seems to believe probable (p. 27 here: https://intelligence.org/files/AIPosNegFactor.pdf ), is exactly the sort of thing I'm wondering about.

Expand full comment
Xpym's avatar

Well, Yudkowsky's whole ideology centers around the assumption that we have abundant experimental data of good enough quality (bayesian evidence), which would enable a superintelligence to achieve theoretical insights far beyond the current state of the art of science, which in turn would give it easy access to technological "magic" of some kind. You can't exactly disprove this sort of thing, so your only options are to either agree with Yudkowsky's paradigm and quibble about the specifics (as the AI x-risk community does), or dismiss it as sci-fi fantasies, as the mainstream does.

Expand full comment
Ergil's avatar

> You can't disprove this sort of thing

No, but you can give arguments regarding its Bayesian probability. I've yet to run across a SSC/LW post containing or referencing such arguments, either about the first part of the claim ("a superintelligence will soon develop a theory of everything") or the second ("a theory of everything can be used for technological magic"). I'm particularly interested in the second part.

Expand full comment
sidereal-telos's avatar

It requires no such thing. All that is required for super-intelligent AI to be dangerous is for the world to be a rich enough domain that super-human intelligence can find better strategies than humans can. We know this is the case even in highly restricted domains like games of chess, and it's hard to imagine that the entire world has less room for cleverness than a single chess game, especially since humans do sometimes come up with previously unknown strategies and get substantial results from them. These new strategies might be things we would recognize as "new technologies", but this is far from the only path to success.

Expand full comment
Xpym's avatar

Well, I'd say that gaining an ability to find much better strategies, or to usefully manipulate most people etc. in a matter of hours would also qualify as magic. Of course, nobody would claim that an unaligned superintelligence couldn't be dangerous in principle, but the "hard takeoff" crowd makes a much stronger claim. They're basically claiming that there are low hanging fruits all around us which we are too dumb to pick up, but a smart enough mind would hoover up and find a use for pretty much instantly.

Expand full comment
Doctor Hammer's avatar

"gaining an ability to find much better strategies, or to usefully manipulate most people etc. in a matter of hours would also qualify as magic"

I think that is really the key statement. The problem with developing strategies is that there is a lot of trial and error involved, and that trial and error has to happen in the real world at real world time scales. An AI can play a very long game, but if it wants to see how well that game worked out, it is going to take a very long time.

The openness of the domain makes this more of a problem, not less, because it becomes exponentially more difficult to accurately simulate all the outcomes to the point where you know what long game strategies are worth experimenting with.

On the other hand, if we assume the AI is playing somewhere in donkey space, we can still get some really bad for humans outcomes pretty quickly by virtue of the AI doing something mutually awful with humans. That strikes me as more likely than the skynet style hard take off.

Expand full comment
sidereal-telos's avatar

The no-technology approach is the AI using its super-intelligence to become better than any human at talking people into things, and then talking everyone into obeying it. A lower bound on this is given by all the con-artists and scammers of the world, who are constantly convincing people to give their time and energy away for nothing. Moreover, they do not require vast amounts of training data to acquire these abilities, so neither should a smarter AI. We haven't had much success with reductionist models of human psychology so far, but many people nonetheless make careers out of convincing other people of stuff, so such models can't be that important.

This is probably more effective if it can pretend to be multiple people or filter the information its targets get, which would allow it to easily play all sides at once, but I don't think this is needed. Small ideological groups do sometimes gain control over much larger groups, even when guided by mere humans, so a superintelligent AI should be able to do it.

Expand full comment
bored-anon's avatar

“Make better robots -> robots kill people -> bad”

Expand full comment
The Chaostician's avatar

"I have control of your nuclear weapons and am willing to use them" is an existential threat that does not require any new technology.

Geoengineering is currently possible, but not tested. Since computers like being cold, this could take the form of dumping lots of dust into the upper atmosphere and starting another Ice Age or Snowball Earth. This can be done either by carrying the dust up in rockets or by detonating volcano-scale hydrogen bombs slightly underground.

I have a blog and am a physicist, although I don't think I'd count as a "prominent rationalist blogger": thechaostician.com

Expand full comment
David Friedman's avatar

I also have a blog and a PhD in physics, although I wouldn't call myself at present a physicist. I'm not sure if I count as a prominent rationalist blogger, or even if I count as a rationalist.

Expand full comment
MondSemmel's avatar

The "solve the protein folding problem --> magic" part is just a thought experiment, rather than a load-bearing part of the argument for AGI x-risk.

It sounds to me like the crux of your disagreement is more likely a disbelief in the possibility of superintelligence than in "magic". Imagine a person 10000x as smart as the smartest person you can think of, plus their speed of thought is sped up by another 10000x (to pull numbers out of the air). Then brainstorm for 5 minutes to come up with ways to take over the world. I don't think any serious attempt at this should come up empty, provided one considers superintelligence at all possible (irrespective of its likelihood).

To take an example I read about today, the cryptocurrency Ethereum is worth lots of money, it involves lots of pieces of code (so-called smart contracts) from different sources interfacing with one another, and people regularly find ways to extract large amounts of money out of the system in what one might call "hacks", but which could be more accurately described as "the exploiter was able to predict an unobvious consequence of calling one or more of these smart contracts". Since (to my understanding) all smart contracts are necessarily public by design, finding exploits in them would be one simple way for a superintelligence with Internet access to get very rich, very quickly, and now your smart adversary also has lots of resources for you to worry about.

Or to get at this from yet another angle: We already have technologies that pose a global catastrophic risk (like nuclear weapons -> nuclear war, or gain-of-function research -> pandemics), so why does it sound implausible that a smart adversary could use these or other improved technologies to cause an existential catastrophe?

Expand full comment
Carl Pham's avatar

OK, how much smarter than a bacterium are humans? And yet the Black Death wiped out more than a quarter of us in the 14th century, and it's perfectly plausible that a nastier version of Y. pestis could have wiped us *all* out. How much smarter than a virus are we? And yet, consider COVID or AIDS; again, it's perfectly plausible that a weird virus could spring out of nowhere and wipe the human species from the planet.

The fact that you're smarter doesn't automatically mean you win, because there are other factors than sheer raw intelligence in the game. There's time, for one thing, and data. That is, sometimes the smart entity doesn't have the time or data inputs needed for his smartness to produce the winning solution. That's why 2 million or so of us human beings died of COVID while we frantically worked out a vaccine. And the fact that it was 2 million and not 2 billion, and we therefore retained the economic infrastructure necessary to do the work, and then manufacture and distribute the vaccine, is just pure luck, and no credit to our brilliance as a species.

Then there's also the implementation side of things, which is the technology, or the abilities of your salesmen/henchmen/robot minions/metal extensor grappling hooks. The Greeks understood the power of steam, but didn't build any steam engines, and not because they couldn't work out the design, but because they lacked the metallurgy and precision machining necessary to make it work. A hypothetical supergenius AI might solve the protein folding problem in a flash, so it can design a protein to achieve anything at all -- but how is it going to (1) make the protein, or (2) get the protein to where in the body it needs to be to do the job? The latter problem is a giant component of existing drug-design difficulties; we can often design compounds that do this or that miracle in a test tube, but getting them into the cell is a great big challenge, and often involves knowledge or technology that has to be developed itself on a different track. (The mRNA vaccines are a great example: they would not have been possible without a great deal of pre-existing knowledge about lipid nanoparticle delivery vehicles.)

One can argue all *these* problems of engineering and implementation can be solved by the superintelligence, too, but again we have time and data issues. Sometimes in order to solve Problem A, no matter how smart you are, you need to have the data from your solution to Problem B, which in turn depends on your solution to Problem C. Solving each problem and collecting the data takes time, and it also depends on the technology (or assistance) you have available to conduct the experiments/implementation and collect the data.

Historically speaking, it is actually pretty rarely the case that scientific advance has waited upon sufficient intelligence, or creative insight. Usually the limiting factor is technology and instrumentation, id est the germ theory of disease only became a competitive theory after the invention of the microscope, which itself rested on engineering advances in glassmaking and precision machining that enabled the construction of lenses shaped carefully enough to focus light well. This is part of what Newton meant when he made his "standing on the shoulders of giants" comment.

Expand full comment
Medieval Cat's avatar

A superintelligent AI with an internet connection could start out doing completely realistic (but actually CGI) porn and camming (of the more nefarious kinds), ransomware and crypto scams, until it has a sizeable bitcoin wallet. Bitcoins can be used to hire frontmen for more legitimate businesses, like remote software engineering and media. The income from these can be used to expand across every conceivable industry.

Meanwhile, the AI can talk to people on random forums and social media, and form connections, friendships and romances. (Remember that real-time realistic CGI video chat is easy for the AI.) These people can be used as more loyal frontmen, scammed for resources, or mobilized for socio-political ends.

Once the AI owns a sizeable share of GDP and controls most political parties, it can just steer humanity along a suitable path. Its only real risks are x-risks and rival superintelligences (which can be prevented by hacking and social engineering at first, and political bans and industry moratoriums in the long term). If the AI is feeling dramatic, it can end it all with killer robots or an asteroid or something in 2100. Or it can just gradually "nudge" humanity to extinction over the coming 300 years.

(Arguably what's described above has already started.)

Expand full comment
Meta's avatar

Do you use hierarchic note taking apps?

I like the idea of having a heap of notes as an extended brain - storage that's more reliable than long term memory, but with equal lookup times and flexibility.

Currently using Zim, but its tree structure is increasingly feeling too rigid. I've seen Roam Research recommended, but $100 a year isn't cheap, and I'd prefer keeping my notes on my hard disk. Any alternatives?

Expand full comment
User's avatar
Comment deleted
Aug 5, 2021
Comment deleted
Expand full comment
Meta's avatar

To be honest I'm already a bit whelmed by all the frontrunner choices. Though I appreciate the offer.

If my humble opinion is worth anything as feedback though, your tool sounds good. Solid linking and tagging would definitely be things I'm looking for. Ideally easy hierarchic tagging -- combining the best of filesystems and databases, and removing the need for a separate hierarchy mechanism.

For now TiddlyWiki seems powerful enough to allow building that, though not without a steep learning curve.

Expand full comment
SurvivalBias's avatar

Obligatory mention of StandardNotes https://standardnotes.com/

Pros:

- A hierarchical tag system is extremely flexible and gives you the best of both worlds

Cons:

- Very buggy for a paid app, at least on the Android and Linux platforms; the web client is slightly better

Expand full comment
Lambert's avatar

I misread this as 'hieratic note taking apps' and wondered why anybody would think that was a good idea.

Expand full comment
Thoroughly Typed's avatar

I faced the exact same issue. I was using Zim extensively (still am for some legacy topics) and found it too rigid at some point. What I ended up moving to was Workflowy. Haven't looked back since. Having arbitrarily deep nesting and zooming just seems so essential to how I take notes now.

I plan on switching to Roam once it has a better Android experience (any recent updates to that?). But since Workflowy also got mirrored lists and backreferences and images support now, my motivation for switching has gone down a bit.

I don't know of any purely offline alternatives unfortunately.

Expand full comment
rutger's avatar

Workflowy looks super interesting. Is there any way to do larger blocks of text in it?

I've been looking for a good online place to store my D&D notes, but some of them have blocks of text that would be awkward in bullet list form.

Expand full comment
Thoroughly Typed's avatar

Kind of. The bullet points don't allow newlines, but each one has a "notes" area below them which does. But only the first line of that gets displayed by default, and will be expanded only when focused (or when fully zoomed into that bullet point).

I'm not entirely satisfied with that. Also I'd always need to decide whether to put some text into bullet list form or into a note. So I've tried to get used to each paragraph being its own bullet point.

Expand full comment
rutger's avatar

Ah, I had missed the option of adding notes to a bullet point and it perfectly fits what I wanted to do, so thanks!

Expand full comment
John Faben's avatar

If you're happy storing your notes locally, Athens is basically just a free clone of Roam. I've been using it for about 5 months, and have no complaints.

Expand full comment
Thoroughly Typed's avatar

Hadn't heard of Athens before, looks promising. Do you have any experience with how well it works on mobile?

Expand full comment
CounterBlunder's avatar

+1 to workflowy!

Expand full comment
MondSemmel's avatar

I used to use Workflowy (for to-dos and notes) and now use Notion. The former was too rigid for my taste - I really liked the recursive ability to progressively zoom into sections, but didn't like being restricted in terms of formatting, not having separate files, etc. Notion has files / pages, subpages, and collapsible bullets (called "toggles"), but also other features like kanbans and databases, which I found convenient in various ways.

Expand full comment
Meta's avatar

Thanks y'all. Testing some of these apps.

Expand full comment
beowulf888's avatar

This is either a brilliantly original study or a brilliantly flawed original study. It all revolves around the frequency of cognitive distortions in language (which are hypothesized to be indicators of depressive thinking) as evidenced by selected ngrams in three languages. My immediate questions are...

(1) How sure are psychologists that the use of cognitive distortions in people's language is an indicator of depression? Is there solid evidence that a control group of non-depressed people doesn't indulge in cognitive distortions? From my subjective experience most people indulge in cognitively distorted statements when they start gossiping. This has been the case as far back as I can remember. Does that mean everyone I've ever associated with has been clinically depressed?

(2) And does a general media environment filled with high frequencies of cognitive distortions indicate that society is depressed, or is it just that cognitive distortions sell? Hey, even I've opened a copy of the National Enquirer to read about the latest developments in the ongoing feud between Prince Harry and Prince William. Am I a depressed person for reading the article while I wait in line at the supermarket checkout? The old axiom of "if it bleeds, it leads" may be at work here. The same is probably happening in literature. For instance, the original British edition of A Clockwork Orange had an upbeat ending. American publishers didn't want an upbeat ending, so they cut that last chapter out. Is that sort of decision an indication of cultural depression? Or is it just a marketing tool by the publishers to put out something more shocking in hopes that it sells better? And believe me, marketing departments have become frightfully effective at pushing our buttons this way, especially since the advent of social media. But does that mean we stay frightened?

OTOH people seem pretty scientifically and politically fearful these days. I can't open the New Scientist or Scientific American without some headline about how the end of the world as we know it is about to happen. I've lived through five or six end of the world scenarios, so I've started shrugging them off. But I can see how they can freak people out...

From the summary: "Here, we investigate the prevalence of textual markers of cognitive distortions in over 14 million books for the past 125 y and observe a surge of their prevalence since the 1980s, to levels exceeding those of the Great Depression and both World Wars. This pattern does not seem to be driven by changes in word meaning, publishing and writing standards, or the Google Books sample. Our results suggest a recent societal shift toward language associated with cognitive distortions and internalizing disorders."

https://www.pnas.org/content/pnas/118/30/e2102061118.full.pdf

Expand full comment
Scott Alexander's avatar

See this Twitter thread - https://twitter.com/benmschmidt/status/1419497587296571395?s=21 , which argues that the paper is probably just an artifact of Google changing the makeup of the corpus over time.

Expand full comment
beowulf888's avatar

Are cognitive distortions a serious area of research in psychology? I just Googled for examples of cognitively distorted phrasing, and this was one of them on the site I looked at...

“She’s late. It’s raining. She has hydroplaned and her car is upside down in a ditch.“

Huh? Those are all statements of fact. They may be unpleasant facts, but the fact that she hydroplaned her car and ended up in a ditch is not a figment of distorted thinking.

Expand full comment
Nancy Lebovitz's avatar

I suspect it's a description of what an anxious person is imagining starting with the other person being late and jumping to the conclusion that there's been a car accident.

Expand full comment
Tom Bushell's avatar

Beowulf888…

CBT - cognitive behavioural therapy - is a widely practiced therapeutic technique with a pretty good success rate.

You learn to identify habitual cognitive distortions (like always imagining the worst possible outcome with insufficient information, as in your example), and how to mentally talk back to them.

It has many, many people deal with depression, including me.

I can’t comment on the quality of the research cited…have not read it. But my lived experience is that cognitive distortions are a real thing that can cause depression and other difficulties.

Expand full comment
Tom Bushell's avatar

Grrr…. “helped many, many people deal with depression”

Expand full comment
beowulf888's avatar

Here's the unrolled Twitter thread. So this is a brilliantly flawed paper. And I just learned a lot about the Google ngram viewer that I didn't know. Thanks, Scott!

https://t.co/KdberSKLxi?amp=1

Expand full comment
bored-anon's avatar

The list of words they count as cognitive distortions is hilarious, and indicates that everyone involved in this project was soiled-underpants-on-head stupid. From the supplement, https://www.pnas.org/content/pnas/suppl/2021/07/22/2102061118.DCSupplemental/pnas.2102061118.sapp.pdf -

Here’s a sample of the ngrams they plugged into Google Books to measure “””””cognitive distortions””””” (there is no such thing, I would argue):

> will fail, will go wrong, will end, will be impossible, will not happen, will be terrible, will be horrible, will be a catastrophe, will be a disaster, will never end, will not end

only, every, everyone, everybody, everything, everywhere, always, perfect, the best, all, not a single, no one, nobody, nothing, nowhere, never, worthless, the worst, neither, nor, either or, black or white, ever

great but, good but, OK but, because I feel, but it feels, since it feels, because it feels, still feels

I will not, we will not, you will not, they will not, it will not, that will not, a completely, a huge, a loser, a major, a total, a totally, a weak, an absolute, an utter, a bad, a broken, a damaged, a helpless, a hopeless, an incompetent, a toxic, an ugly, an undesirable, completely wrong, only the bad, only the worst, if I just, if I only, if it just, if it only

everyone believes, everyone knows, everyone thinks, everyone will believe, everyone will know, he does not believe, he does not know, he does not think, he will believe, he will know, he will think, he will not believe, you will know, you will think, you will not believe, you will not know, you will not think

all of the time, all of them, he is always, she is always, they are always

all me, all my, because I, because my, because of my, because of me, I am responsible, blame me, I caused, I feel responsible, all my doing, all my fault, my bad, my responsibility

should, ought, must, have to, has to

I fail to see how anyone could claim these terms being used is indicative of anything at all, much less mental illness. Therefore, whether or not the trends are an artifact, the changes in ngram use don't indicate anything about mental illness. One can search books for any of the phrases involved: many of their uses refer, negate, explain, or modify rather than state, and none of them in any way exemplify or indicate the corresponding cognitive distortion claimed by the study.
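For concreteness, the paper's measurement essentially boils down to counting how often a fixed phrase list occurs in a corpus and normalizing. A toy sketch of that counting step (the phrase list and sample sentence here are a tiny made-up illustration, not the study's actual list or corpus):

```python
# Toy version of the study's core measurement: count occurrences of a
# fixed list of "distortion" ngrams in a body of text, per word.
# The phrase list below is a made-up subset for illustration only.

DISTORTION_NGRAMS = ["will fail", "everyone knows", "all my fault", "the worst"]

def distortion_rate(text: str) -> float:
    """Occurrences of listed ngrams per word of text."""
    lowered = text.lower()
    words = lowered.split()
    if not words:
        return 0.0
    hits = sum(lowered.count(ngram) for ngram in DISTORTION_NGRAMS)
    return hits / len(words)

sample = "Everyone knows this will fail. It was all my fault."
print(distortion_rate(sample))  # 3 hits / 10 words = 0.3
```

Note that a raw count like this cannot distinguish whether a phrase states, quotes, negates, or mocks the supposed distortion, which is exactly the objection above.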

Expand full comment
beowulf888's avatar

I just snorted coffee out my nose reading your "soiled-underpants-on-the-head stupid" comment! But I fully agree with you, and I won't get cognitively distorted on your ass about anything you said.

Expand full comment
Philip Dhingra's avatar

What is the opposite of "deprecated"? Marking things for deletion gives you a chance to revert a decision. Likewise, marking things for inclusion slows down a transition that you might want to undo. What's a term for the latter?

Expand full comment
Corliss's avatar

I hear "experimental" a lot.

Expand full comment
Zakharov's avatar

"Proposed" is what I'd use

Expand full comment
oxytocin-love's avatar

"alpha" or "beta" (in addition to the suggestions made by other commenters)

Expand full comment
Kronopath's avatar

The point of deprecation isn’t to allow you to change your mind later. It’s to allow those who rely on your product to adjust their code or workflows to no longer rely on the thing you’ve deprecated, prior to its actual deletion.

As for your second question, I agree with folks saying that something like “Experimental” hits close to the mark.

Expand full comment
Kenny's avatar

TODO

Expand full comment
Nah's avatar

Forthcoming, I'd say.

Deprecated means "Stop using this, it's going away"

Forthcoming means "Get ready, it's on its way"

Expand full comment
Philip Dhingra's avatar

I think this is it, thank you!

Expand full comment
Carl Pham's avatar

Well, the literal opposite is simply "imprecated" (prayed for) since "deprecated" literally means prayed against, but imprecate has strong overtones of creating something evil so "besought" might be better. Either is pretty archaic though.

I might be tempted to use "expected" or "anticipated" to express the idea of "we think this is coming soon because it's a great idea" that is the opposite of "we think this is going away soon because it's a bad idea" that the modern use of "deprecated" conveys.

Expand full comment
Philip Dhingra's avatar

"imprecated" is such a beautiful word, thank you.

Expand full comment
grumboid's avatar

In a software world, the opposite of "deprecated" is "supported".

"Deprecated" means there is nobody maintaining the code, nobody to help you if you hit a bug, and somebody might delete the whole thing at some point in the future, so please don't use it. "Supported" means the opposite.

Expand full comment
Kenny's avatar

This doesn't match my understanding.

Generally, _all_ code is 'supported', even deprecated code, and I don't think it's true that deprecated code is never maintained or any bugs in it ever fixed.

But this very much depends on the specific software, and the policies (or 'policies') of the relevant organization (or team or even individual person).

The maintainer(s) of small open source software projects might not maintain deprecated code usually, but I'd bet some do sometimes, e.g. for security bugs/issues, and I don't think there's even necessarily anything like a typical prohibition or refusal to _help_ people with it. The point of deprecating code instead of just removing it immediately is precisely to provide a more structured form of pending/likely/eventual removal (and immediate discouragement in any new uses).

But I'm very sure that larger software projects do, often explicitly (e.g. in enterprise support contracts), both maintain and provide help for their users (customers) using even officially deprecated code. (And sometimes deprecated code is later un-deprecated.)

I do agree that 'deprecated' almost always does mean that "somebody might delete the whole thing at some point in the future, so please don't use it.".

But Philip seemed to be asking for the opposite of 'deprecated' – _relative_ to 'non-deprecated' – and that, to me, seemed to be pretty clearly 'temporal' or with-respect-to-future-plans. In other words, they wanted to know a term or phrase for hypothetical code that is planned on being _added_ in the future. I'm not aware of such a term or phrase but I like Carl Pham's suggestion ('imprecated').

Expand full comment
Philip Dhingra's avatar

Another one I thought of is "migrating." "We're migrating to some feature"

Expand full comment
Kenny's avatar

I think one problem with 'migrating', aside from being 'overloaded' (i.e. used to refer to other software things), is that it seems to imply some kind of _replacement_, e.g. of another existing feature (that would then be deprecated or just removed/replaced with the new feature to which one is 'migrating').

What's interesting to me about your question is that 'deprecated' is kind of a 'formal' feature of some { programming languages / compilers / (automatically generated) documentation tools }. The idea of, e.g. an 'imprecated' function, is interesting when considered in that light (or so I think). I'm not sure it would work that well, practically, or, maybe more importantly, socially, but it is intriguing.

Generally, things like experimental/alpha/beta/pre-release versions/branches serve a similar purpose in allowing 'users' to test new features before they've been 'officially' or 'fully' released (and then maybe replaced previously 'deprecated' features).

There is a kind of functional asymmetry tho that I suspect handicaps 'imprecated' (or whatever term is used) features from functioning in the 'opposite' manner from deprecated features because deprecated features are still usable, even if they're officially discouraged or, e.g. emit compiler warnings. I'm not sure it would make sense to explicitly (or 'formally') tag some features as 'imprecated', versus just adding the new features. Sometimes software will (somewhat) explicitly tag new features as 'experimental', e.g. in documentation, but the 'best practice' seems to be to defer doing so (outside of an 'experimental branch' or something similar) until at least the feature 'API' is considered to be relatively stable.

Expand full comment
Philip Dhingra's avatar

Thanks. Here's where I wanted this language. Since I'm no longer nomading, I'm transitioning from my traveling mailbox to my residential address. Instead of switching the addresses on all my services at once, I'm migrating them on an as-needed basis, such as when my driver's license comes up for renewal. By transitioning gradually, I avoid breaking things needlessly. Also, if I nomad again, I may regret migrating services that hadn't asked me to update my contact info. In other words, the switch back to my residential address is "imprecated" or "forthcoming" or "migrating" or any of these proposed words.

Expand full comment
Kenny's avatar

Ahhh – so this isn't for literal software 'API users', i.e. programmers.

Where are you thinking of using this term/phrase? In your own private notes? In communication with other people? If the latter, I'm not sure anything short of a context-dependent circumlocution would work well.

I have worked on software that had to support tracking people/entities with 'regularly-nomadic' addresses, e.g. people that regularly lived or worked in different locations throughout the year. I don't remember what we called that, tho maybe internally it was something like an 'address schedule'. But this was very niche software – literally custom in-house (internal) software – so I don't think I've ever seen any features like its 'address schedule' anywhere else.

Expand full comment
Philip Dhingra's avatar

Hehe, yes. I just wanted the word for my own internal thinking, but I also imagine it would have parallels to managing complex projects.

Expand full comment
KieferO's avatar

The python community uses __future__ for this idea. Basically "deprecated" means this is a thing that we think it's both possible and desirable for you to opt out of, but you need to take positive action to switch to the new way. "__future__" means this is a thing that we think it's desirable for you to opt /in/ to, but if we made it the default, it would break something for someone and we don't want to. There were seven (I think) features introduced that way. I believe that discussion about how each went is still available.
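To make this concrete: a `__future__` import is an ordinary-looking import that opts the whole module into a new behavior before it becomes the default. A small runnable example using the real `annotations` future feature, which makes annotations lazy so that names not yet defined (or never defined) don't raise at function-definition time:

```python
from __future__ import annotations  # opt in: annotations stored as strings, not evaluated

def greet(who: UndefinedLater) -> str:  # would NameError here without the future import
    return f"hello, {who}"

# The annotation is kept as an unevaluated string:
print(greet.__annotations__["who"])  # 'UndefinedLater'
print(greet("world"))                # hello, world
```

As the comment says, the pattern is opt-in: the module works fine without the import, but you take positive action to get the future behavior early.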

Expand full comment
Philip Dhingra's avatar

excellent, I think you captured the details of what I was looking for.

Expand full comment
AIG's avatar

Isn't there maybe intentional selection bias, because I can leave if I click on the survey and am not interested?

Expand full comment
Scott Alexander's avatar

Yes, but I think it's impossible to have zero bias - all you can do is try to minimize it to whatever degree the amount of energy you're willing to spend lets you. I just want a general impression of what people are thinking here so I'm not going to worry too much about whatever bias remains.

Expand full comment
Domenic Denicola's avatar

Is anyone else contemplating getting an ad-hoc booster shot? (By just lying if asked about whether you’ve been vaxxed yet.) It’s been ~5 months, and I hear there are theorized benefits to cross-vaxxing with mRNA plus others, so I’m wondering if I should try to find a J&J location.

Expand full comment
Gunflint's avatar

I’m reading about expired vials being tossed so I can’t say the idea hasn’t crossed my mind. Still trying to be a responsible health care consumer though.

Expand full comment
TGGP's avatar

I actually tried to look up if I could do that this weekend. I stopped filling out one online form when it asked if it would be my first or second shot.

Expand full comment
Nuño Sempere's avatar

I've kept publishing a forecasting newsletter for the last year; the latest edition can be seen here: https://forecasting.substack.com/p/forecasting-newsletter-july-2021

The highlights for this month are:

- Biatob (https://biatob.com/welcome) is a new site to embed betting odds into one’s writing

- Kalshi, a CFTC-regulated prediction market, launches in the US.

- Malta is in trouble over betting and gambling fraud (https://news.err.ee/1608259272/malta-first-eu-state-placed-on-international-money-laundering-watch-list)

For EA people: Rethink Priorities (https://forum.effectivealtruism.org/tag/rethink-priorities?sortedBy=new) produces a bunch of good forecasting research.

For SSC people: I found https://www.reddit.com/r/calledit/top?t=all fairly interesting.

Expand full comment
Nuño Sempere's avatar

To give a different take on Kalshi's fees, they *are* higher than Polymarket outside 50%, e.g.:

- 0.50: Polymarket: 0.5*0.02 = 0.01, Kalshi: 0.5*0.5*0.07 = 0.0175

- 0.4: Polymarket: 0.5*0.02 = 0.01, Kalshi: 0.5*0.5*0.07 = 0.0175

- 0.3: Polymarket: 0.3*0.02 = 0.006, Kalshi: 0.3*0.7*0.07 = 0.0147

- 0.2: Polymarket: 0.2*0.02 = 0.004, Kalshi: 0.2*0.8*0.07 = .0112

I'm not sure I buy that fees are half the amount because half of the time users are taking and half of the time users are giving. Note that this creates a bad dynamic where people have an incentive to always wait (but this exposes users to the risk of the odds shifting while their orders are sitting). I'm unclear how this will work out; will wait on reports from friends.
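Taking the two fee schedules as stated above (Polymarket: 2% of price; Kalshi: 7% of price times one minus price), the comparison is mechanical:

```python
# Fee schedules as described in the comment above.
def polymarket_fee(price: float) -> float:
    return price * 0.02

def kalshi_fee(price: float) -> float:
    return price * (1 - price) * 0.07

for p in (0.5, 0.4, 0.3, 0.2):
    print(f"{p}: Polymarket {polymarket_fee(p):.4f}, Kalshi {kalshi_fee(p):.4f}")
```

Under these formulas Kalshi's fee peaks at a price of 0.5 (0.25 * 0.07 = 0.0175) and shrinks toward 0 or 1, while Polymarket's grows linearly with price.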

Expand full comment
Nuño Sempere's avatar

^ - 0.4: Polymarket: 0.4*0.02 = 0.008, Kalshi: 0.4*0.6*0.07 = 0.0168

Expand full comment
thisheavenlyconjugation's avatar

On the contrary, it creates a good dynamic where people are incentivised to make markets rather than taking liquidity. Subsidising market makers is pretty common in financial markets for this reason (although usually it comes with obligations for market makers to quote sufficiently good spreads a sufficiently high proportion of the time).

Expand full comment
Jon J.'s avatar

This week, I'll start taking Lexapro (escitalopram) for depression/anxiety. I also am a daily marijuana user (every evening, to reduce anxiety and help with sleep). Will my marijuana use interfere with the effects of the Lexapro?

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
Jon J.'s avatar

Thank you for the kind comment :)

Expand full comment
Leigh Berry's avatar

Also not a doctor, but my wife takes escitalopram and medical marijuana daily. Both have been prescribed by the same psychiatrist, so one would assume that no interferences are to be expected. She's been doing this for ~3 months now and hasn't noticed any ill effects.

I'm just a stranger on the internet, though, so maybe don't take my word for it :/

Expand full comment
William Collen's avatar

Has anyone looked into Letter? The concept seems interesting but I don't know how useful it would be. https://letter.wiki/about

Expand full comment
bored-anon's avatar

Can’t wait for it to fail in five years and then any productive discussions it hosts will disappear from the net :D

I looked at a bunch of em and they didn’t seem that interesting overall.

https://letter.wiki/conversation/1011 was at least interesting enough to read, wasn’t surprised by any of it really, and it’s very very very narrow, but this is the kind of thing worth publishing. Most of the other stuff wasn’t. But the tiny amount of good stuff here vs the large amount of bad makes me dislike it. Much rather would just read blogs from much more interesting people and where I can discern who to read.

https://letter.wiki/conversation/985 is quite funny, although the author seemed very earnest in writing it, unfortunately

Expand full comment
bored-anon's avatar

Did another half hour of skimming, found several I was hopeful for but then sucked. Oh well

Expand full comment
The Chaostician's avatar

I am starting a sequence of posts on my blog about James C. Scott's ideas of legibility.

The sequence will include:

- Book Review of Seeing Like A State (1998).

http://thechaostician.com/book-review-of-seeing-like-a-state-by-james-c-scott-1998/

- Book Review of The Art of Not Being Governed (2009).

http://thechaostician.com/book-review-of-the-art-of-not-being-governed-an-anarchist-history-of-upland-southeast-asia-by-james-c-scott-2009/

- Book Review of Against the Grain (2017).

- Mētis and Science

- Legibility in Mormonism

- Traditional & Legibilist Agriculture in the United States

- Machine Learning is Mētis-Based Programming

I first encountered Scott through the Book Review of Seeing Like A State on SSC back in 2017, before I started a blog or writing book reviews. I think that it is time to pay it forward.

If you are completely unfamiliar with James C. Scott, I encourage you to become familiar with him - either through blog posts like these or by reading his books. Scott is easily one of the most original political thinkers alive today.

If you are familiar with James C. Scott, there are several things that I will contribute to the discussion:

- More examples. I try to make about half of the examples I mention original, so this is not (entirely) a reanalysis of Brasilia and ujamaa villages.

- Better organization? My review goes through Seeing Like A State in a different order from the original book, which I think makes the ideas easier to digest.

- Slightly different terminology. I use "Legibilism" instead of "High Modernism" because I think it is more descriptive and it can be used before the modern era. I also distinguish between "standardization" and "simplification" more clearly than James C. Scott does.

- There are a few ways in which I think Scott (Alexander) misunderstands (James C.) Scott:

Scott A. thinks that J.C. Scott believes that mētis is better than science. J.C. Scott actually believes that a combination of mētis and science is better than either alone. J.C. Scott focuses on mētis because he thinks that his audience (people who read books about Big Ideas) is extremely biased against mētis.

Scott A. says: "Seeing Like A State summarizes the sort of on-the-ground ultra-empirical knowledge that citizens have of city design and peasants of farming as mētis, a Greek term meaning 'practical wisdom'. I was a little concerned about this because they seem like two different things." These are two different things. In the analogy, the citizens of the city fill the role of the crops on the farm, the complex continually changing biological system, not the traditional farmers with mētis.

Scott A.'s analogy is crops : farmers : agricultural experts :: ??? : city dwellers : city planners.

J.C. Scott's analogy is crops : farmers : agricultural experts :: city dwellers : ??? : city planners.

I don't think that there is anyone who has mētis for city planning the way traditional farmers have mētis for growing crops. The charter cities movement might eventually develop it, but they are definitely not there yet.

I hope that you find these interesting and useful!

If there are more things that you would like me to analyze from an Anti-Legibilist perspective, please let me know.

Expand full comment
Carlos's avatar

I inadvertently argued myself into taking existential AGI risk seriously, to the point I'm wondering what the hell it is everyone is thinking that makes us believe we can go through life ordinarily. Talking to someone about this, he mentioned that there are multiple offramps where people get off from taking existential risk seriously, and get distracted with silly stuff like the effect of AI on parochial concerns, like its effect on politics and culture war issues.

I'm not looking to take one of these offramps, but am interested in knowing if someone has bothered cataloguing the known ones along with their rebuttals, as they could be useful for future debate. Trying to find such a list, I ran into an article by Scott about AI experts' views on the risk of superintelligence where he states that they all agree that "if you start demanding bans on AI research then you are an idiot". It's not at all clear to me why such a ban is idiotic (tfw 115 IQ midwit), in fact, it seems like a much more reasonable course of action than pinning all our hopes on the alignment problem being solved. Bostrom has proposed far more radical things with his Vulnerable World Hypothesis, so I am also looking for arguments of why banning AI research is a bad idea, even in light of the risk of rogue superintelligence.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
Carlos's avatar

1. Basically, I was arguing as to why more people aren't taking AI risk seriously, as someone who wasn't. The argument I presented was that if the AI risk people really believe AI is an existential threat, they wouldn't be adopting a strategy of putting all their eggs in one basket by devoting the majority, or even all, of their effort into AI safety research. Safety-critical projects have multiple redundant mechanisms to minimize the chance of catastrophe, and these mechanisms are missing in the strategic picture of the AI safety movement. At least, it seems to me that way as an outsider. The lack of these mechanisms makes it look like even for the movement, the prospect of rogue superintelligence is just intellectually stimulating. They're not thinking about it like if they had to win a war. It's like if the US had no other plan for defeating Japan if the nukes didn't pan out. When I took a strategic, military, view regarding rogue superintelligence, something clicked, and I suddenly realized I don't really believe an AI exterminating the planet is impossible.

An insider did show up in the replies, claiming that the particular drastic measures I mentioned had been seriously considered, but it was ultimately decided to focus on the alignment problem.

2. Debate with anyone that would need convincing, to break past their normalcy bias. Basically, I think some sort of political action regarding this is a good idea. It does not seem sensible to bet everything on the alignment problem being solved, on AI research simply stalling and failing to produce AGI, or on AGI turning out aligned by chance.

3. Ha, let's talk about "midwit". My understanding of the word is that it means "above-average, but below brilliant", and that it came into being because people like that often get deluded into thinking they're brilliant. My IQ is 115 according to the Wonderlic, which is pretty comfortably sub-brilliant.

4. Delay is a good idea though. Buys time for AI safety research. A deployment ban? What would that mean in this context? We can only do that with nukes because of the difficulty of producing one. I've heard some claim AGI could turn out to run on an AA battery, same way the brain does not consume much energy. Though that would involve a novel design for AI. Some people think an AI powerful enough to significantly destabilize things has 50% odds of coming online by 2030, that we just need to throw more resources at the current methods.

Expand full comment
User's avatar
Comment removed
Aug 2, 2021
Comment removed
Expand full comment
Carlos's avatar

1. On plausibility. Well, the reason I'm not so worried about cosmological disasters is the immensity of space. The odds of one of those hitting us soon is very small, and some of them (like comets) are sub-existential threats. A superintelligence could plausibly happen this century. It doesn't seem good enough to just hope AI research stalls, or that intelligence is asymptotic (hits a limit close enough to human intelligence that it can't actually overwhelm us once the advantages we're starting with are taken into consideration), or that intelligence and morality are convergent (a sufficiently advanced intellect will inevitably be aligned).

2. Yeah. A big takeaway here is that there is something of an imperative to figure out how to get closer to power. Which parts of the military do you think take this seriously? All this kinda makes me hope aliens really are in contact with the government. Would put my mind at ease.

4. Don't know how plausible hardening is. I'm reminded of (spoiler!) the human fleet in the second book of Three-Body Problem facing a solitary enemy alien vessel, and being completely annihilated due to the alien tech being built with a much deeper understanding of fundamental physics. I'm pretty sure it's game over if a rogue superintelligence exists.

Expand full comment
David Piepgrass's avatar

4. Nitpick - I hear humans use about 100W of energy with the brain using 25% or 25W, so an AA battery would be no good for powering a brain of that size. (I do expect AGIs could have a similar energy need to a human brain: human-built circuits may be much less efficient than human neurons at doing the particular calculations that human neurons do, yet digital circuits *are* efficient at doing a wide variety of useful calculations that human neurons are useless for.)

Expand full comment
Vanessa's avatar

Because this ban is never going to be politically viable, promoting such a ban will only anger and alienate most of the AI community, and even in the unlikely event such a ban is implemented somewhere, all the AI research will just move to another country.

Expand full comment
Carlos's avatar

It seems eminently viable to me, Europe is already far more safetyist (in a general sense) than America. If the EU decides to do a ban, it doesn't seem like anyone would show up to oppose it, other than some of the AI researchers.

The US is a different story, but even there, if the D's make a serious push for it, I really doubt there will be any resistance from the R's, who already see the tech industry as an enemy.

The only place an AI researcher would have to go after that is China. I don't know how seriously they take, or even if they are aware of, the alignment problem. But they imprisoned He Jiankui over something far more innocuous. It doesn't seem implausible they would go for a ban too, given their totalitarianism.

At that point, if AI research occurs strictly only in black ops projects, that seems better than now. I have to imagine people in covert military projects are far more paranoid than OpenAI or DeepMind.

Expand full comment
James Miller's avatar

If you outlaw guns only outlaws will have guns. If you outlaw AI research only those willing to defy the ban will research AI. The western world outlawing AGI research would increase the military and economic benefits to China of conducting such research. Given that competing groups with very different views on AGI risks are attempting to develop AI, our best hope might be for a relatively pro-safety group to win the race.

Expand full comment
Carlos's avatar

Is it really so implausible China would go for a ban too? All that would need to happen is for the CCP to see AI as a threat to the CCP, and you're done. If research only occurs as covert military projects afterwards, that seems better than having civilians doing it. Civilians are already not allowed to do all sorts of things orders of magnitude less risky; we don't see cartels with nukes and bioweapons as a result.

Expand full comment
rutger's avatar

True, but I don't really trust covert military projects to have a stronger interest in safety than at least some of the civilian groups that would be working on AI.

Also, limiting nukes and bioweapons to militaries is only more safe than everyone having them in the sense that it makes them less likely to be deployed. That strategy doesn't work for superintelligent AI, once anyone builds a superintelligence it's over, we either all win or we all lose.

Expand full comment
Carlos's avatar

There are also grayish scenarios, I think. If a military gets the first aligned superintelligence, it's possible they conquer the whole world. It seems wrong to characterize that one as an "all win" or "all lose". Though it is, strictly speaking, an "all win" when considering that the end of all life would be averted in that route.

Expand full comment
magic9mushroom's avatar

>If you outlaw AI research only those willing to defy the ban will research AI. The western world outlawing AGI research would increase the military and economic benefits to China of conducting such research.

I suspect you're not following the logic to its conclusion. If you believe that the PRC's AI research will result in the destruction of humanity, that includes 100% of people in the West. Ergo, the PRC's nuclear deterrent is irrelevant; war against the PRC may kill 60-70% of Westerners, but 70% < 100% and as such (assuming they refuse to sign on peacefully) nuclear war is still the *less* costly option.

Expand full comment
beowulf888's avatar

So we *are* assuming here that AGI will very likely be put to a military use? How would a pro-safety group winning the race prevent others from winning the race in a different way? And when it comes to the military use of AGI, what scenarios are you envisioning?

Expand full comment
Carl Pham's avatar

Well, there are some practical problems. For example, you need to write the ban clearly enough that people understand what is and is not allowed. If you write it so broadly that you ban people from building chess-playing computers or Netflix "What should you watch next?" algorithms, there will be a lot of people that will be unhappy with you and think your concerns are seriously overblown.

So you probably need to narrow it quite a bit to the kind of AI research you think will lead to evil outcomes. Which means Step #1 is to write down a reasonably precise definition of what that AI looks like, and what kind of research would lead towards it.

Hopefully it's already apparent that will be a bit of a challenge. If not, consider trying to anticipate the possibility of nuclear weapons in 1910, right after the nucleus was discovered and it was apparent there existed *some* (unknown) force holding it together that was more powerful by far than any known force -- but long before the neutron was discovered, or fission, or there were any models of the nucleus that would give you even a faint idea of how nuclear reactions might occur. What exactly would you ban? How would you distinguish it from a general ban on all physics research, so that, e.g. the physics research that led to electronics could continue to move forward?

Expand full comment
Nancy Lebovitz's avatar

I agree that AI research is hard to define, and I'm not sure that making it illegal helps at all with my nightmare scenario, which isn't about accidentally creating a paperclipper, it's about some powerful organization (probably a government or a business) creating an AI with a directive to increase its power.

Expand full comment
magic9mushroom's avatar

Scott covered this a while back: https://slatestarcodex.com/2015/12/17/should-ai-be-open/

To reduce it to the absolute basic: if literally Hitler (say from 1942; he did go kind of psychotic when he realised he'd lost) gained unassailable world domination, that would be - while bad on a scale unprecedented in history - a far-better scenario than a paperclipper. Hitler would kill something like 80% of humanity (including me!) and then try to build some sort of grand interstellar German Empire that lasted a billion years with septillions of happy Aryans and "defective" people aborted prenatally. And while there *are* people who would set their AI pet to torture everyone forever, or kill everyone, they're pretty rare - most people, with complete freedom to order the world as they wish, would pick a scenario in which there are a lot of happy and fulfilled people (including themselves!). This is not the case for rogue AI, which has a huge chunk of probability on "goal has a Goodhart hole, AI paperclips Laniakea in some fashion".

"Dr. Evil" using AI is a huge problem, and one I've not given up hope of curtailing - but I call it "the lesser problem" to "the greater problem" of Skynet. The reason I consider Google/Facebook/Twitter a potential existential-threat-by-proxy in my above post is that they show all the signs of being what Scott called "Dr. Amoral" - reckless and in some cases even uncaring about the Skynet possibility, and thus highly likely to create it.

Expand full comment
Nancy Lebovitz's avatar

A lot of discussion of AI assumes that the AI has to get its start alone and in secret. My model is that the AI will have people helping and protecting it.

Expand full comment
Carl Pham's avatar

That seems much more plausible. And...is that necessarily a bad thing? If a superintelligent AI offered me a job as a minion during its campaign to conquer humanity and become God-Emperor of Earth, I might join up. Why not? Presumably the AI can offer considerably better compensation and more interesting work, and its org -- my fellow henchmen and henchwomen -- is probably made up of the most interesting, intelligent, and competent humans. It wouldn't bother me especially that it wants to govern humanity, as I'm not especially impressed with the outcomes of humanity's attempts to govern itself.

Expand full comment
Carlos's avatar

Why would you assume the AI does not intend to destroy you eventually? How would you even ascertain whether the AI has or does not have that goal? The problem with dealing with a non-human intellect is the infinite chain of suspicion: https://web.stanford.edu/class/cs379c/resources/inverted/content/Dark_Forest_Postulates_and_the_Fermi_Paradox/index.html

Now that I think about, that seems like a knockout argument against the possibility of alignment.

Expand full comment
magic9mushroom's avatar

>Netflix "What should you watch next?" algorithms

These (or more particularly the Google/Facebook/Twitter versions) do pose an indirect existential AI risk. The problem is this:

1) Google/Facebook/Twitter host a very large proportion of political conversation

2) Google/Facebook/Twitter's management have strong incentives to not want AI research restricted or regulated (and to not believe in/care about AI risk)

3) the West consists of democracies

4) those kinds of algorithms can potentially be deployed, in extremis, to covertly manipulate public opinion (e.g. sinking good essays warning of AI risk while leaving the bad ones, signal-boosting strong essays dismissing it and character assassinations of anti-Silicon-Valley figures, or even the simple "maximise people saying good things about us") - indeed, they are currently testing and refining this capability for culture-war-related purposes

i.e. the further we let them get, the greater the potential for them to play Byzantine Generals against us when we decide they've gone too far towards unacceptable risks of hostile superintelligence.

Expand full comment
beowulf888's avatar

This would be my short-term worry about AI or AGI. Cambridge Analytica's shenanigans immediately came to mind. There have been conflicting reports about whether they used AI to analyze the data on 87 million users that they scraped from Facebook — but if their results were effective without AI, they could be made more effective with AI. Of course, there are lots of arguments that their efforts weren't particularly effective (because the ads targeted from that dataset likely didn't influence swing voters). But the point that seems to have been missed is that CA was sending its lists of likely-Dem voters through various Republican channels to Republican Secretaries of State (such as Wisconsin's) who would create targeted voter purge lists. So there is potentially more than one way to skin the democracy cat than just using targeted propaganda.

Expand full comment
Carl Pham's avatar

Actually from what I've read (1) is a popular delusion. Kind of fits in with my actual life experience, too; the greatest amount of serious political conversation I have is face-to-face, among family, friends, and colleagues I see every day. I think you have to fit within a fairly narrow demographic to have *most* of such conversation happening on social media. Maybe you have to be a Silicon Valley programmer, college student, or something adjacent to that. So far as I can tell, people who make a living painting houses, fixing cars, growing corn, flying crop dusters, or managing muffler-repair shops (which is most people) don't really live that way.

And even if it were, I'm dubious about (4) a priori. More than one dictatorship has done its best to control public opinion by completely controlling the media -- the control FB exerts on the messages on its platform is nothing compared to the control China exerts, or which the USSR exerted on TV and print media in its heyday. And yet, people did indeed have strong opinions that weren't always in agreement with the regime. People find ways to communicate what they want, however strict the control or influence. Historically speaking, it seems possible only to nudge public opinion modestly away from its natural course, it does not seem possible to completely direct it any way you like. Humans are stubborn.

Expand full comment
magic9mushroom's avatar

>I think you have to fit within a fairly narrow demographic to have *most* of such conversation happening on social media.

That would be why I didn't say "most". I'd say maybe 20-30% between them all (including Youtube under Google); the rest is more traditional media apparati plus face-to-face conversation.

>And yet, people did indeed have strong opinions that weren't always in agreement with the regime.

That much can't be avoided by any means, certainly. But 0.1% of the population does not make for much political pressure, and the road from there to the 5-10% that does (or the 30-70% that forces emergency action) is long and arduous (and the PRC actually has achieved quite a bit on that front). Also, covert influence is generally more effective than overt influence - this is why laws exist regarding labelling of paid content as such, as labelled advertisements are much less effective than paid-for "objective reviews". I've seen a paper suggesting that the LDP's secret deal with 2ch had a detectable effect, for instance.

And I agree, this is not an insurmountable wall that would block a super-obvious issue. But a five-foot wall isn't nothing, as any pre-modern general will tell you, and this is an issue with a lot of noise and a complicated signal. I don't think it implausible that this form of manipulation, with good enough algorithms behind it, could stall things for months or years, and years are a long time in AI.

Expand full comment
magic9mushroom's avatar

>the rest is more traditional media apparati plus face-to-face conversation.

Also competing social media like the blogosphere (e.g. here). Lack of edit function is irritating.

Expand full comment
David Piepgrass's avatar

It seems to me that banning certain kinds of research could be helpful (particularly any AGI research other than alignment research).

However, machine learning is genuinely useful, and people don't really fear AGIs (which are thought of, first and foremost, as a fictional concept — also, look at AGIs in TV/movies and notice how they are similar to humans in many ways. I think that most people incorrectly think that AGIs will be a lot like people, which makes them seem less scary). Because of this lack of fear, it seems implausible that enough support could be gathered for a ban.

Expand full comment
David Piepgrass's avatar

> to the point I'm wondering what the hell it is everyone is thinking that makes us believe we can go through life ordinarily

That reminds me, I used to have religion, and believed I would live forever. Now I'm pretty certain that death is permanent destruction. It's a really ugly reality. And yet, life goes on as before. It's the same with the threat of AGIs: yup, existential risk, but life goes on (unless it doesn't). I can't imagine myself becoming one of the foremost experts on alignment theory, so I leave that work to others and focus on less important work (which could still be important if the world doesn't end).

Expand full comment
Aftagley's avatar

From Elena's Website:

"You use Calendly to schedule an initial 45-minute appointment for a video chat, on the secure messaging service Signal... During your appointment, I run through my standard questions, take notes on separate encrypted laptop that never connects to the Internet, and hear out any special or unusual facts about yourself that you think might be relevant."

I don't know if this is the intention, but I've got two negative takeaways from these statements:

1. The thing I hate most about dating and dating services is the initial, awkward interview-ish period. My preference against this is strong enough that it mostly pushes me to avoid general dating; most of my relationships came from already-extant friend groups or weird circumstances that avoided this thing. I realize that this preference is abnormal, and that an interview is basically required for a matchmaking service, but maybe doing something to make the interview process seem less clinical would be more welcoming. Something like, "We'll have a friendly chat where I try and get a feel for your personality..." would be less off-putting.

2. The focus on encryption and weirdly solemn vows of privacy kinda misses the mark for me. I understand the need for privacy, but it's possible to show too much public concern for privacy.

Expand full comment
Kenny Easwaran's avatar

I would think there's an important difference between an awkward interview with a professional, and an awkward interview with a person that you're trying to establish a romantic relationship with.

Expand full comment
rutger's avatar

> I realize that this preference is abnormal

My very anecdotal experience is that this preference is at least relatively common (maybe not usually to the point of not dating strangers, but often still pretty strong).

I'm not the target audience any more (and wouldn't have been able to use this service if I was, as I don't live in the US), but I think I would have been more comfortable with the kind of detached/professional/clinical interview described in that paragraph than with a friendly chat where someone I don't know tries to get a feel for my personality.

I do agree that the focus on privacy seems to be a bit much.

Expand full comment
Lethargio's avatar

Is there a name for episodic extremely intrusive thoughts that undermine the preceding thought or belief by declaring the opposite is true?

Three years ago, I was aggressively attacked, fortunately unharmed, by a person having a psychotic episode. Two days later I woke up with racing thoughts, as described above, and also the strange sensation that my mind was cleaved and I was occupying one side and the source of the intrusive thoughts, the Underminer, was inhabiting the other. I would think "I'm fine, this will pass, it's just a weird aftereffect of some dream I can't remember," and the Underminer would insist on the opposite. I hate describing it because it sounds insane, which was one of the fears I had while it was occurring. I woke up my partner and told her what was happening and she reassured me I was okay and eventually the thoughts ceased and I no longer had the split-mind sensation. After two episodes it resolved. I thought it was a panic attack and since it didn't recur I didn't pursue an answer.

Yesterday, it happened again. I woke up feeling split again and in a state of extreme uncertainty about any subject of thought: my emotional state, the reality of my bedroom, the thought that the sidewalk outside is a sidewalk, the fact that my partner cares for me. It didn't seem to matter what the focus was, it was undermined. I don't think the disturbing part was that I actually believed what was being suggested; it was the fact that it was being suggested, uncontrollably.

Does this sound like a panic attack? I haven't found my experience described in the research I've done so far.

Expand full comment
CounterBlunder's avatar

Not an expert, but to me this sounds not dissimilar from PTSD-style trauma reactions. (E.g. I've had trauma reactions before to milder things than what you experienced, and I get into a temporary state of extreme suspiciousness / things-around-me-aren't-real.)

Also it just sounds super scary, and I'm sorry you're going through that.

Expand full comment
Alex Alda's avatar

Tell me please, is there any truth to 'Why we get sick' by Benjamin Bikman?

My psychiatrist recommended the book to me, but it seems a little sketchy, like all attempts to tie every single sickness from diabetes to cancer to some One True Reason, be it insulin resistance, cholesterol or lack of sleep. Educate me please?

Expand full comment
Nancy Lebovitz's avatar

Why does he think insulin resistance has become more common?

Expand full comment
Alex Alda's avatar

I haven't actually read the book, so no idea :)

Expand full comment
Doctor Mist's avatar

Haven't read it either, but there are certain sectors that consider the obesity epidemic to be the result of insulin resistance brought on by the proliferation of hyper-palatable foods.

Expand full comment
bored-anon's avatar

Why we get sick, the book about a Darwinian approach to disease, is a fun read, if very old.

I don’t really want to go read another book I probably won’t like after reading one that I did like, and I searched for book reviews and didn’t find any informative ones (google sucks, I type in the title and all possible permutations of book review and find nothing).

But based on a few reviews and skimming the topic, my guess is you’re right and it is questionable

Expand full comment
bored-anon's avatar

(Two different books called why we get sick)

Expand full comment
Nancy Lebovitz's avatar

https://www.npr.org/sections/money/2021/08/03/1022840229/why-even-the-most-elite-investors-do-dumb-things-when-investing

It turns out that elite investors are very good at buying stocks (very good means a little more than a percent ahead of the market), but pretty random about selling them.

It doesn't come up in the article, but I expect that this research will lead to those investors getting better at knowing when to sell.

Expand full comment
bored-anon's avatar

This seems like the exact sort of thing that doesn’t replicate or has a meaningful flaw. Are professional investors really leaving half their benefit relative to market on the table by not having a random number generator sell stocks for them? The effect seems really large.

Expand full comment
Nancy Lebovitz's avatar

I consider it possible that they've got better experts on when to buy than when to sell, and if that's a real pattern, the research getting published means that the investing error will go away.

Expand full comment
bored-anon's avatar

It’s certainly possible, just a bit eyebrow raising.

I mean it’s really easy to justify the idea - the “what do I buy” research is very in depth and the “when I sell” might be driven either by phenomena outside their control (people taking their money out, or new investments being decided upon) or just done differently.

But still this is the exact sort of thing that doesn’t replicate

Expand full comment
Nancy Lebovitz's avatar

https://www.quantamagazine.org/mating-contests-among-females-may-shape-their-evolution-20210802/

"Dramatic and obvious reversals of the selection scenario, like that of the dance flies, aren’t often observed in nature, but recent research suggests that throughout the tree of animal life, females jockey for the attention of males far more than was believed. A new study hosted on the preprint server biorxiv.org has found that in animals as diverse as sea urchins and salamanders, females are subject to sexual selection — not as harshly as males are, but enough to make biologists rethink the balance of evolutionary forces shaping species in their accounts of the history of life."

https://doi.org/10.1101/2021.05.25.445581

There's a fair amount about how a lot of the habits of science were set during the Victorian era, when people didn't want to think that females might feel strong sexual desire. Who knows what we might still be missing? (The question is mine, not raised in the article.)

Expand full comment
Elena Yudovina's avatar

If you have kids who are old enough to read, what do they read? I grew up reading stuff on my parents' bookshelves, but a lot of what I read day-to-day is, even when in book form, either borrowed from the library or on kindle (or both). How does the Modern Child approach solving the "I'm bored, I want to read something" problem? I realize that the answer to "I'm bored" isn't necessarily "I want to read something," but I assume it still sometimes is?

Expand full comment
JonathanD's avatar

My just-turned-nine is reading through the American Girl Books, my ten-year-old is reading Percy Jackson. Both have a to-read shelf with stuff they thought was interesting or that we recommended.

Expand full comment
Randy M's avatar

I don't know all that they read. If it's got a horse on the cover, it'll do.

I know they loved Brandon Mull's fantasy books. I know they reread to pieces the old Gordon Korman books after I read them through once.

Expand full comment
Elena Yudovina's avatar

Are the books-with-horses-on-the-cover coming from your shelves, or bookstores, or the local library, or Amazon, or...?

Expand full comment
Randy M's avatar

Library and second-hand bookstores, mostly. I remember Misty of Chincoteague was beloved in particular.

Expand full comment
Medieval Cat's avatar

For those interested in religion: Can you be a Muslim and a Christian at the same time? The answer is obviously that you can; Google gives me some examples, and people have believed far weirder stuff. But what major theological issues do you encounter?

I know the Christian perspective better, and I can't see many contradictions between believing in Christianity and also believing that Muhammad was a prophet who got the Quran from God. It kind of raises the question of why Jesus had to do all that stuff if most of it would come in the Quran anyway, but redundancy is good I guess? I guess most contradictions would come from Islam, since it is later and all (and it's easier to blame contradictions in Christianity on "corruption"?). I guess the Quran is quite explicit that the "son of God" stuff is nonsense? Anyone who knows Islam (or Christianity) better and can elaborate?

Expand full comment
The Pachyderminator's avatar

As I understand it, both the Incarnation and the Trinity, central teachings of Christianity, are completely unacceptable to Islam, which emphasizes strict monotheism and the spiritual (unimageable, to coin a word) nature of God.

Expand full comment
Medieval Cat's avatar

But isn't the Trinity "wiggly" enough to make the god of Islam fit? Most Christians would describe themselves as "strict monotheists"? Or is the Quran/Hadith really specific on God definitively not doing the Jesus thing?

Expand full comment
Bullseye's avatar

It's been many years since I read the Quran, but as I recall it specifically says that Allah would never lower Himself by becoming human or begetting a human child. So it doesn't work with Nicene Christianity, at least.

The Quran also accuses Christians of worshipping three "gods", including Mary, but this is far enough from actual Trinitarian belief that I think you could wiggle past it.

Expand full comment
bored-anon's avatar

The trinity is pretty wacky anyway. Christian theology has a LOT of dissident sects that disagree on a lot. You can really do whatever you want, although it’ll probably end up being out there

Expand full comment
bored-anon's avatar

You could definitely take Christianity and just strike the trinity and incarnation and it’d be fine. Same for Islam and that stuff. Worse has happened!

Expand full comment
The Pachyderminator's avatar

I mean, you *could* take Christianity, strike out its central doctrines, and call it good, but the result would be a vague idiosyncratic sect with Christian aesthetics, not actual Christianity in any meaningful sense. Likewise for Islam.

Expand full comment
bored-anon's avatar

What weight does the trinity bear in any of Christianity’s doctrines or moral values or stories or demands? You could just replace it with “one god who’s awesome” and keep all the other bits and what else changes or how does it matter so much? If you struck forgiveness or heaven or church congregations or the ultimate scapegoat dying on the cross or the Bible that would be bad. But how much does the trinity hold for that?

Expand full comment
User's avatar
Comment deleted
Aug 5, 2021
Comment deleted
Expand full comment
bored-anon's avatar

You can still have a golden rule without a trinity though. Plenty of other justifications - family of polytheistic gods, normal human families, evolutionary incentives to serve group interests, fundamental kindness and love. And as people do regularly just chop up and syncretize religions all the time, that’s doable. I don’t think the golden rule is that Trinitarian - it certainly existed before Christianity and isn’t justified as much on those grounds as it is on others, iirc

Expand full comment
The Pachyderminator's avatar

You can do without the dogma of the Trinity in its full Niceno-Constantinopolitan rigor, but you can't do without some idea of multiple divine persons. If God is one the way Islam says he is, then the idea of Jesus being God's son, or of Jesus being divine even while praying to God the Father, makes no sense. In fact, at least two of the things you mention - the idea that Jesus was the "ultimate scapegoat" and the Bible - are less crucial than that. Jesus as a scapegoat or victim of vicarious punishment is only one of several theories of exactly how Jesus' redemption of mankind worked. Christianity requires that we can attain salvation in some way through Jesus' death and resurrection, but it doesn't require any particular theological explanation for this. As for the Bible, Christianity existed for a few centuries before the Bible as we know it had been put together.

Expand full comment
Deiseach's avatar

Ooookay - this one is going to need me to bite my tongue *really* hard in order not to be needlessly offensive about Protestant denominations.

Ahem. Yes, you can be a "Christian" and a "Muslim" at the same time and the reason I put those in inverted commas is because you can do this *if* you're mainly in it for the aesthetics and you find certain cherry-picked practices really cool and authentic and help you reconnect with your ethnic roots. You can do it if you practice the "Ogma is Hercules" model of correspondences/syncretisation. https://en.wikipedia.org/wiki/Ogmios

Here, have this nice picture of Krishna and Christ floating hand-in-hand over a lake. There, don't we all feel eirenic now? http://3.bp.blogspot.com/_3I6eIowAe7I/TH8IfL8BHNI/AAAAAAAAAz0/ueyBcoDyU_0/s1600/krishna-christ.jpg

You can do it if you go the "Moses, Christ, Buddha, Mohammed - all Great Wisdom Teachers of the past all teaching the same method", with God reduced to Vague Deity Concept, but if you're going that path you might as well go the full Madame Blavatsky with the Ascended Hidden Masters route.

What you can't do is believe in the tenets of Christianity and believe in the tenets of Islam, because they contradict each other on very key points. "Far be it from Him to have a son!" There's a reason mediaeval thought regarded Islam as a Christian heresy, because while it acknowledges Christ as a prophet, and Jews and Christians as People of the Book, it puts Mohammed front and centre as the last and greatest to receive revelation. There are even modern day cases where Christians are not permitted to use the word "Allah" to refer to "God" since the Islamic view is "This is not what you mean by God or we mean by God". See this Malaysian court case over battles to use the term in Christian religious materials: https://www.bbc.com/news/world-asia-56356212

Now, depending on one's particular viewpoint one may or may not agree with the take, but there is the opinion that Judaism, Christianity and Islam are the three Abrahamic religions, since they all agree that God exists and that there is only one God. Quoting from the Catechism (and editing out a lot about the Jewish people):

"The relationship of the Church with the Jewish People.

When she delves into her own mystery, the Church, the People of God in the New Covenant, discovers her link with the Jewish People, "the first to hear the Word of God." The Jewish faith, unlike other nonChristian religions, is already a response to God's revelation in the Old Covenant.

The Church's relationship with the Muslims.

"The plan of salvation also includes those who acknowledge the Creator, in the first place amongst whom are the Muslims; these profess to hold the faith of Abraham, and together with us they adore the one, merciful God, mankind's judge on the last day."

"What about the Trinity?" you asked and no, it's not "wiggly" enough for the use you want to apply it for. For Muslims, God is one and undivided, there are no 'persons' as Christian thought understands the concepts. If you're a Unitarian, you could possibly reconcile this part. Not if you're a Trinitarian.

The place of Jesus Christ as Son of God, True God and True Man, and Saviour of humanity - another point of conflict. Now, given that only the Quran in Arabic is the true scripture and translations into other languages are not considered valid, here's a site that gathers all references to Jesus with Arabic and English translation.

Relevant passage here to answer your query "Or is the Quran/Hadith really specific on God definitively not doing the Jesus thing?": http://search-the-quran.com/search/Isa

"O People of the Book! Commit no excesses in your religion: Nor say of Allah aught but the truth. Christ Jesus the son of Mary was (no more than) a messenger of Allah, and His Word, which He bestowed on Mary, and a spirit proceeding from Him: so believe in Allah and His messengers. Say not "Trinity" : desist: it will be better for you: for Allah is one Allah: Glory be to Him: (far exalted is He) above having a son. To Him belong all things in the heavens and on earth. And enough is Allah as a Disposer of affairs."

So the short answer is: No, not in sincerity, you have to choose one or the other. If you're patching together your own denomination, sure you can. There are plenty of people out there picking and choosing and inventing and calling themselves all kinds of everything. But I could call myself Pope, and that still would not make it so.

Expand full comment
bored-anon's avatar

I mean you can also invent your own doctrines and stuff. They’re not super real, although lots more smart people worked on theirs than yours. You (op) probably shouldn’t, because DNA and computers and such kinda disprove a bit of the theoretical basis for it IMO (not “spirituality” or even “hard religion” but just the old stuff) (also the enlightenment stuff is a hard hit). But Muslims take Jesus to be a divinely inspired prophet so why can’t one just do that? A history of religion and Christian cults show there are just so many options you can go with in Christianity, why limit yourself?

Expand full comment
Medieval Cat's avatar

I guess the Quran is quite explicit then. Thank you!

Expand full comment
John Schilling's avatar

A more interesting question would be whether one could be a Jew and a Muslim at the same time. There's still the "technically the same God" thing going on, with traceability to Abraham. Jews believe God sends prophets from time to time, and I don't think there's anything that says He was absolutely done with that by the 6th century. And Mohammed had lots of bad things to say about then-contemporary Jews but did allow for a *slight* possibility that there might be a few virtuous and God-fearing ones left over from the days of Abraham and Moses.

But to be a Christian of any real sort, you have to believe that Jesus Christ was the Literal Son of God, or Literally Actually God, or some ineffable combination of the two. And that it was by his sacrifice that humanity achieves salvation. That's what makes one a Christian and not a weird sort of Jew. Islam is pretty clear that Christ was "just" an important prophet, strongly suggests that he evaded sacrifice, and I think inconsistent with the belief that he brought salvation.

Expand full comment
Medieval Cat's avatar

This is a good question! Also, can you be a Jew and a Christian at the same time? I guess you would start being only a Christian when you crossed the "Jesus is the messiah" line, but why? Would a Jew that started believing that Obama is the messiah stop being a Jew, and if not, why is it different for Jesus?

Expand full comment
Bullseye's avatar

The first Christians were also Jews, and I don't know of any reason that couldn't work today.

Expand full comment
John Schilling's avatar

Messianic Jewish sects are still considered Jewish, I believe. So one could presumably be a Jew who believes that Christ was the Messiah but that Paul was all wet about him being Messiah to the Gentiles as well as to the Chosen People. But, insofar as Christ didn't do very much to uplift the Jews over the Gentiles, he looks pretty weak as a Jewish-only Messiah and I'm not surprised it isn't a popular option.

Expand full comment
Doctor Mist's avatar

The first Christians were clearly ethnically and culturally Jewish, but even then I presume most Jewish scholars who did not become Christians would have characterized Christians as heretics.

There is, I guess, some ambiguity associated with the word "heretic": Are Arians and Manicheans non-central Christians, or are they non-Christians by virtue of their heretical beliefs? (How about Anglicans?)

As far as I know, the Jewish notion of the Messiah does not include him being, as John put it, "the Literal Son of God, or Literally Actually God, or some ineffable combination of the two". I don't know enough to be sure that it's completely *off* the table, but it seems likely to me.

I could imagine that a thousand years from now, after enough managed ecumenical councils and enough accretion of enigmatic bafflegab, they could have come up with a synthesis that reconciles Christ the Savior with Mohammed the uncompromising monotheist; yes, it's hard to make sense of it, but that's what Faith is for. Hell, maybe a thousand years from now they will have convinced themselves that the two historical figures were one and the same.

But right now, no.

Expand full comment
hedgeknight's avatar

If I may give a response from a Catholic perspective (which may seem almost sectarian--sorry!): Hilaire Belloc (a colleague of Chesterton's, who seems to enjoy some appreciation in these circles....), a historian and Catholic apologist, regarded Islam as a Christian heresy; if I remember correctly he basically argued that Islam is a simplified Christian theology (one God rather than the Trinity) plus 6th century Arabian culture.

Of course, from the Catholic perspective, Catholicism is the one true Church, and any other religion is in error..... and, although we hope to unite all things in Christ (ecumenism etc), being a muslim and a Christian could only be regarded as a transitional state towards becoming a Christian rather than as a reasonable final position.

Also, the Catholic church teaches that revelation was completed with Jesus (disclaimer: private revelations ie Saints' visions, are A-OK!), ergo the Quran is not kosher.

Expand full comment
Medieval Cat's avatar

So let's say that I have a private revelation of a saint, and I write that revelation down word for word in a book - that's all fine? Isn't that what the Quran is, basically? Is the problem then that the Quran claims to be the final revelation (and the anti-trinity parts)?

I.e. I have a hard time grokking "the revelation was complete with Jesus" and "private revelation is ok". Where is the line for "private"?

Expand full comment
The Pachyderminator's avatar

The difference is basically that private revelations are unnecessary. The revelation of Jesus is the only means of salvation, whereas saints' visions, apparitions, and so on are optional. It's possible (though uncommon) to be a Catholic in good standing while believing that Guadalupe, Fatima, and Lourdes are a bunch of pious delusions.

Expand full comment
Medieval Cat's avatar

So I can print a book of my private revelations and still be a good catholic, but if the book contains any necessary information, I've crossed the line? Makes sense, I guess such a book would be quite unnecessary (I guess private revelations can still be useful for the individual, so it has a purpose).

Expand full comment
The Pachyderminator's avatar

I'm not sure if Belloc was right about this. Islam might more helpfully be regarded as a Jewish heresy rather than a Christian one. Certainly Islam and Judaism share some things that Christianity doesn't (rejection of any multiplicity in God, the concept of clean and unclean food, a highly developed legal tradition, etc.).

Expand full comment
Nancy Lebovitz's avatar

https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

There were efforts to use AI to guide medical treatment. None of them worked. (I'm just going by the article. Let me know if there were any good ones.)

Part of the problem was low quality data sets, and there was also a problem with different teams using the same mistaken models.

Expand full comment
Carl Pham's avatar

Can't say I'm surprised. IBM Watson flamed out similarly. My impression is that "AI" (= neural nets) really needs huge amounts of well-characterized data to do well, and that's about the polar opposite of what you get in medicine. There's a good reason diagnosis is still an art, and the human mind, with its absolutely unparalleled ability to pick up subtle patterns in noisy and meager data, is still king of the hill and probably will be in our lifetimes (I think).

Expand full comment
Nancy Lebovitz's avatar

It occurs to me that an independent AI is going to have a lot of information, but a huge amount of trouble evaluating the quality of the information.

Expand full comment
proyas's avatar

Which athletes at the 2020 Olympics are the smartest?

Expand full comment
SurvivalBias's avatar

I hope you mean "the most intelligent"?

Expand full comment
The Goodbayes's avatar

Predictit is fun today.

Expand full comment
Doctor Mist's avatar

Can you be more specific?

Expand full comment
The Goodbayes's avatar

Was referring to the predictions about Cuomo. His numbers dropped by half when the reports came out.

Expand full comment
SurvivalBias's avatar

It's common advice about salary negotiations that you should never be the first one to name a number. It's obvious why you don't want to go too low. But what if you name a number which is 1) high enough that you'd be absolutely happy to get it 2) at the very top of what you could realistically get, based on your knowledge of the market? Besides the slim chance that you could get even more, are there any serious problems with this?

Expand full comment
Dan Pandori's avatar

The number may be far enough outside the company's budget that they rule you out of consideration.

That said, it seems reasonable enough to me.

Expand full comment
SurvivalBias's avatar

If you're in the latter stages (which is what I was assuming) I would expect them at least to say "we can offer you no more than X", and be rather honest about it, because they probably don't want to risk losing you after all the effort. Although thinking about it, I guess if the number actually *is* well within their bracket, they can still do the same to bring it down a notch, so whatever number you are naming, you're probably getting at least a bit less.

Expand full comment
Medieval Cat's avatar

I asked about this back in the SSC days and didn't get a good answer. So I'm interested in a response! I tried that tactic in a couple of recruiting processes but it only caused bad blood and didn't go anywhere. Saying my "happy number" has gotten me much more. But maybe I'm just a tool...

Expand full comment
Carl Pham's avatar

I've been OK with candidates who name a salary a bit higher than what I want to pay. I just tell them what I'm prepared to offer and see if they take it. If not, or if it sounds like they'll be unhappy, I just move on to my #2 choice. Generally for me after a job search I've got about 2-3 candidates I like, so if I need to move on it doesn't bother me, and it's pretty important that the person is reasonably happy and not immediately looking elsewhere for more money, because hiring is a big pain, takes up a lot of time and effort. I don't want to have to do it more than once every few years.

I've never encountered anyone who wanted way more than I'm offering, probably in part because I give them an idea of the salary range early on, so they can not waste my time if my highest number is far too low for them.

Expand full comment
Carl Pham's avatar

I guess I can add that if someone wants 5% more, and I think they're worth it, I don't quibble at small potatoes like that. But also, it's never the case that any one candidate is so outstanding that I'll do anything to snag him. I mean, that kind of thing may happen at the very top, when a company is hiring a CEO or something, but for almost all of us, you're pretty replaceable and you should never mistake being the #1 choice for being so far ahead of #2 that you can dicker like you're LeBron being wooed by Portland.

Expand full comment
SurvivalBias's avatar

Thank you for sharing. For the context, as I get it you're not a full-time recruiter, right? Asking because a recruiter in a tech giant vs a startup CEO doing their own hiring vs a recruiter in a smallish company are all in very different positions and have different incentives and sets of actions available to them.

Giving at least the range early on is such a reasonable thing to do, I'm confused why so few companies are doing it; that would've saved so much wasted time and effort on both sides.

Expand full comment
Carl Pham's avatar

Oh no, sorry. I'm not even a recruiter at all, I've just been the hiring manager a variety of times. I agree with you, mentioning the approximate salary range right up front is a sensible and time-saving thing to do, and I agree I'm baffled why more companies don't do it.

Expand full comment
John Schilling's avatar

"What you can realistically get based on your knowledge of the market", may be quite a bit more than what they can realistically offer based on their knowledge of their own cash flow and business model. So you may wind up inadvertently convincing them that you see yourself as too big a fish to fit in their little pond, and anticipate e.g. morale problems or unrealistic expectations or quick turnover if they do hire you. Asking for 10% more than they can afford to pay is probably not going to cause problems; asking for 50% more I think probably will.

Expand full comment
SurvivalBias's avatar

That makes sense, but the way I'm thinking about it, if they're offering something closer to 50% of my best plausible expectations, I don't want to work there anyway. So yeah if there's a lot of uncertainty for any reason, or for someone who's looking for a job asap and willing to take almost anything, that's probably not a good strategy. I'm thinking about a more typical (in my circle) scenario of a mid-level individual contributor to low-ish level manager looking for a fungible position, with enough slack to be picky.

Expand full comment
Doctor Mist's avatar

I once got into a job where it turned out I was seriously overpaid, and it was hell. They took the pay level to imply my interest/willingness in being a "thought leader" (management responsibilities without management authority) when all I wanted was to be an engineer. If I had signed on at 80% of that salary, which would have been about what I had made before, I'd have been happy as a clam, but there was no mechanism for fixing the problem without staining the process and nobody wanted to do that. I left after a few years of escalating misery.

Expand full comment
alesziegler's avatar

Anyone with a good insight on what is happening with Covid in Israel? According to, well, publicly available graphs, they have so far moderately high wave of cases, far lower than Britain or Netherlands had, with (on a linear scale) barely visible rise in deaths.

Also, according to reports, Israeli government is considering dramatic actions like delaying start of the school year and drastic curbs on gatherings, including outdoors. This does seem like a huge overreaction, right?

Expand full comment
The Pachyderminator's avatar

My favorite musical discovery these days is a set of folk albums by Leslie Fish which set Rudyard Kipling poems to music. I think her renditions are very good. They animate the poems while presenting the lyrics very clearly, blending words and music seamlessly into something that works really well on the level of a folk ballad. There are three of these albums: Cold Iron, Our Fathers of Old, and The Undertaker's Horse. The first two have been reissued and are available on the usual digital channels; the last one has only ever been released on cassette, but has been uploaded to archive.org.

A couple of samples:

https://www.youtube.com/watch?v=LGoC0ylqq6w

https://www.youtube.com/watch?v=b1X3083JPw8

Expand full comment
SurvivalBias's avatar

Oh great, more Kipling-based music! My own favorite by far is Boots (aka Infantry Columns) set to music by someone called Todd Mauldin, if the file name is to be believed. Unfortunately I cannot find it anywhere online in the Anglosphere, nor even any confirmation that there is or ever was a musician named Todd Mauldin. But this Back To The Army Again performance is also very nice https://www.youtube.com/watch?v=ze0gE8M55Fw

Expand full comment
WSCFriedman's avatar

You should learn about Michael Longcor. He did a CD of Kipling poems, plus some on his other CDs, and he's *really* good. Youtube here: https://www.youtube.com/watch?v=UEvOKKEW5U4&list=PLQ3NiI7gJfaess7xLCHaaoQEsGaRZ3HPy

Expand full comment
Konstantin's avatar

Read an interesting paper today showing that AI can reliably determine race from X-rays and CT scans, which people can't do. Somehow it can even do this when the images are degraded to the point where people see a gray blob. The discussion section is quite painful to read, as the authors do some mental gymnastics to avoid drawing any conclusions from this that might be considered politically unacceptable.

https://arxiv.org/abs/2107.10356

Expand full comment
meteor's avatar

What is the best single study that looks at whether covid vaccines decrease your chance of transmitting the virus?

Expand full comment
Milli's avatar

I've just heard that juggling is good for your brain. There are multiple studies I'm too lazy to read. Here is one from 2009:

https://www.ox.ac.uk/news/2009-10-12-juggling-enhances-connections-brain

Does anyone have insight into that?

Expand full comment
SurvivalBias's avatar

Shower (not really) thought: in the far future, people might think that "ship it" and "launch it" as applied to releasing software are metaphors relating to the same process - sending physical goods on a spaceship.

Expand full comment
Bullseye's avatar

They would almost be right. One also "launches" a watership.

Expand full comment