Accidental AI testcase: Today I had the idea of using something like gongfu tea brewing for coffee. Naturally I looked it up, and if Google is to be believed, only one person on the internet has made that connection before.
But I wasn't sure if maybe something similar exists without the name (it still seems strange that no one else had that idea), so I decided to ask ChatGPT. At first I just explained the process, and it told me it would be bad because rebrewed coffee is bad (which is true if you do it the normal way, and there are lots of articles about that). Then I wiped the conversation and asked again, but this time making the analogy to the gongfu method instead of giving my own explanation, and now it just kind of assumed it would work, but predicted the brew would just gradually weaken, in contrast to the experience of the poster above (which could be predicted: the tea version also "peaks").
I think this is interesting because it's hard to find cases that you can be sure are out-of-distribution, and this almost is one: you can see directly the one thing that might be in-distribution.
"Leaders of the Yee Ha’ólníi Doo DBA Navajo & Hopi Families COVID-19 Relief Fund have been in Ireland since July 8 to honor that connection on a global stage.
The relief fund was invited to take part in the 2025 World Peace Gathering, a 10-day convening at Dripsey Castle in County Cork. The international event, organized by Kindred Spirits Ireland, brings together Indigenous leaders, spiritual guides, artists, and peacebuilders for dialogue, storytelling, and ceremony.
Representing the organization are Board Chair Ethel Branch, Interim Executive Director Mary Francis and Board Treasurer Vanessa Tullie. During the gathering, the delegation presented a traditional cultural gift – a Navajo rug by weaver Florence Riggs – honoring a bond that spans centuries and oceans, linking the Choctaw Nation, the Irish people, and the Navajo and Hopi nations."
Alfred North Whitehead was a great mathematician, but perhaps his understanding of physics was lacking. Once he asked this question: in order to know the geometry of space, we must first measure the mass of the matter in it; but in order to know how much that matter affects other matter, we need to know the geometry of space first?
This seems like an apt question. If we want to know how much the gravity of the Sun affects Earth, we need to know how far away it is. But is it useful to know how far away it is in curved space, when it is precisely the effect of the curvature that we want to know? Shouldn't we measure the distance in a hypothetical, empty-of-matter, uncurved space?
I'm an amateur, but my understanding is that this is an accurate summary of Whitehead's argument for using his mathematical description of gravity rather than Einstein's general relativity. The two theories make very similar, but not identical, predictions; I think the consensus since around the 1970s is that, where they conflict, Whitehead's theory doesn't match our observations.
I think it's unfair to say that Whitehead's understanding of physics was lacking, though. It was certainly better than mine is! He just backed the wrong set of intuitions about our universe, but they could very well have been the right ones based on what was then known.
According to general relativity, there is no absolute frame of reference, so it doesn't make sense to talk about how far apart things are, except as the length of a (time-like) geodesic path on the space-time manifold. The shape of the manifold is determined by the distribution of mass. Thus, mass -> shape of manifold -> geodesic paths. The geodesics (the distance and the curvature) determine how the masses affect each other. So it all works out.
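For readers who want that loop in symbols, this is the standard textbook statement (nothing specific to this thread): the field equations let the mass-energy distribution determine the metric, and the geodesic equation lets the metric determine the paths.

$$ R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}, \qquad \frac{d^{2}x^{\mu}}{d\tau^{2}} + \Gamma^{\mu}_{\alpha\beta}\, \frac{dx^{\alpha}}{d\tau}\, \frac{dx^{\beta}}{d\tau} = 0 $$

The apparent circularity Whitehead worried about is handled by solving the two as one coupled system rather than one after the other.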
It's hard to imagine how one would ever measure, much less calculate, these things simultaneously, but this is why general relativity is hard: differential topology (doing calculus on manifolds) is a bear.
There is a topic I didn't think about for a decade; something reminded me of it recently. Do any of you have some *recent* news about Bachir Boumaaza, a.k.a. Athene?
For those unfamiliar, here are two articles about him. I don't trust journalists in general, but the second article seems to be generally correct based on what I have found about him.
A Twitch/YouTube celebrity, who raised some money for charity -- at least this is what he says; I wonder if someone can verify the numbers. Then he started Logic Nation / Singularity Group, which is kinda like the rationalist community, but for stupid and credulous people. I know this sounds weird, but try reading the Logic Nation webpage -- it is as if you asked an artificial intelligence to create a text that is 30% LessWrong Sequences and 70% a Nigerian scam.
He also created two cryptocurrencies, and a mobile game... that I don't have the courage to install, because it requires too many permissions, seems like a complete scam, and contains in-app purchases. (If someone has the courage, or a reliable sandbox, please tell me what the game is about.)
How do I even know about this guy? A decade ago, he tried to scam Less Wrong readers. (The "hans_jonsson" user in the comments is either him or someone who works for him.)
I found rumors on the internet (didn't verify them) that he was banned on Twitch and YouTube for scamming his audience. Also that if you want to join the Singularity Group, you need to buy his cryptocurrency... and then you can stay at his group home and work on his projects for free... and if you leave, you lose the cryptocurrency you bought.
I guess I'll point out the delicious normative determinism of Astronomer CEO Andy Byron truly taking up the mantle of the Byronic hero by having an affair.
I’ve been thinking a lot about those strange, missing English words that *feel* like they should exist, even though they don’t. Words like candify, torpify, lucify. Semantically, we can intuit them but they're not real.
I wrote a long essay trying to understand why these “phantom” words feel so plausible despite not being in the lexicon. I come at it mostly from a Wittgensteinian angle (language as use, not logic), but I also pull in Saussure (difference as value), Derrida (trace and différance), and Heidegger (language as disclosure of Being).
My basic hunch is that these ghost-words are unplayed. That is, they're not part of any language-game. But I'm not sure if this is the best way to frame it. I'd love to know if this makes sense to anyone more deeply read in philosophy of language, and whether there's been any actual research into morphologically predictable but unattested lexemes?
I liked your essay, but the phenomenon you point out is more present in English, maybe just for linguistic reasons. English is unusual in that it has very little morphology: apart from adding an 's', nouns and verbs undergo few changes compared to most languages. The language is also biased against dialects; for the most part each word has one unique and correct spelling (or two if the American spelling differs, but you choose a system and stick to it).
Take the term 'man cave', which became popular 15 years ago or more: from a linguistic perspective it's in the genitive case, but to most English speakers it's two words mashed together to express a new concept. This is a novelty and can be added to the dictionary, but in a language with a genitive case, it could not be added to the dictionary in the same way (just as 'the man' doesn't warrant an entry separate from 'man').
In the same way, other languages have systematic ways of forming adjectives and adverbs with prefixes and suffixes. English speakers, though, have a habit of forcing a noun into the role of a verb and vice versa ('I'm gymming today'); this succeeds because English grammar is fairly fluid, and the surprise of the neologism can be passed off as a joke. From a sociolinguistic perspective, English speakers tend to have a lot of experience with less fluent speakers and become good at interpreting what they mean; this is additional experience in interpreting (or error-correcting) what another speaker says. I think, for example, that someone who spoke like Trump in a more structured language would be basically incomprehensible.
I don't listen to Trump much, but Hungarian, for example, is highly structured, and yet Orbán is capable of briefing his online "warriors" in a very basic subject-verb-object way, very much resembling "four legs good, two legs bad". It is possible to be simple in every language. A few years ago I met a German guy living in Hungary who decided to give no fucks about the grammar and just learned words, and talked in a "yesterday I go shop, I see discount pillows, buy many pillows, very happy" way, and everybody could understand what he was saying. Bad grammar can be confusing (as my late dad used to say, big difference between spitting in the window and spitting out the window), but the mere lack of grammar is usually clear enough.
<I’ve been thinking a lot about those strange, missing English words that *feel* like they should exist,
A related thing: words in other languages for which there is no English word. I'm quite fond of many of those. Déjà vu, for one, though of course that has sort of been grandfathered into English. But there's a word in Japanese for someone who looks flawlessly beautiful from the back, but not when they turn around and you add their face to the percept.
I hope AGI invents new human languages that don't have any irregularities. I doubt there's a way to create one perfect human language, but I do think it's likely five or six could be created to capture everything that can be expressed or felt by humans. One language might be optimized for emotional expression and lyricism, another for scientific concepts, and another for communication about routine things. They could be based on existing languages in the same way French is based on Latin.
It would be great if, instead of being killed off by robots or migrating en masse into the Matrix to live lives of degeneracy, humans thrived after the rise of AGI, and because everyone had what would today be an IQ of 150, we were fluent in all of the future languages. Depending on the needs of the moment, we would switch from one language to another.
Everyone would also dress like futuristic versions of 1700s French aristocrats.
Or you can read In the Land of Invented Languages, by Arika Okrent, which is about various attempts to invent languages for various purposes and how they usually fail to get off the ground. It sounds like it might interest you.
What? It's trivial to construct perfectly regular languages: the problem is few people like them, and those that do have created their own and won't use yours.
"Lucify" at least sounds as if the root is Latin, which is probably why it's a missing word in English. There's about three different languages jammed on top of the basic English/Germanic/whatever the hell foundational structure, so loan words are not going to be treated the same as organically arising words.
"Candy", looking it up, is also a loan word by a circuitous route, and it seems to refer to the *act* of crystallising sugar which then got cut down to mean "candy = a sweet thing":
"1225–75; Middle English candi, sugre candi candied sugar < Middle French sucre candi; candi ≪ Arabic qandī < Persian qandi sugar < Sanskrit khaṇḍakaḥ sugar candy"
The verb seems to be not "to candify" but "to candy; candying, candied":
Are you familiar with Esperanto? You can build words using prefixes and suffixes in a mostly regular way, and all words created this way are considered valid.
Esperanto was a brave attempt toward a noble goal, but it suffers from its creator not knowing much about how languages actually work. To be fair, linguistics as a science was in an extremely primitive state in Zamenhof's time, so he probably couldn't have done much better, but the flaws are there nevertheless.
The canonical anti-Esperanto case (http://jbr.me.uk/ranto/) is long and sometimes pettier/more vitriolic than I'd like, but IMO it still contains solid criticism; the gist of it is that Esperanto's syntax and word formation are much more underspecified and arbitrary than advertised (the relevant sections here are 7, 8, 19, L, and O).
I do agree that languages should be as modular and customizable as possible, though.
Thank you for bringing up Esperanto. It's the closest we've come to building a language where everything that should exist, does. Wittgenstein would probably laugh at it, though. To him, language carries the mess of use: ritual, gesture, context, tone. To invent a language like Esperanto is to dream of a world where words mean just what they say, and that, to Wittgenstein, is like trying to replace a living body with a mannequin just because the limbs move more predictably. I know his viewpoints don't really square with the whole rationalist m.o.
You dream of a world where words mean just what they say, when you need to convey problems quickly. When you need to be accurate. "Made Up Languages" are really good for problem-solving, not so much for poetry.
Debatably, most coding languages are "words mean just what they say." (take this in the vein of self-modifying code).
English has many ways of turning an adjective or noun into a verb; there are no consistent rules.
To make something lucid is not to lucify it, nor to lucidise it, it's to elucidate it.
To make something into candy is not to candify it, nor to candyise or candyate it, it's just to candy it.
To make it into caramel you caramelise it, you don't caramelify or caramelate it or ecaramelate it, nor becaramel it.
My favourite: to make something liquid you may either liquify it, liquidise it or liquidate it, depending on whether it's something you're melting, blending or selling. But you don't liquidise a gas, nor liquify your assets, nor liquidate a smoothie.
Someone brought up that this quality is behind why English is so popular. At least one of the qualities.
It's so easy to extend and modify. You can verb a noun and noun a verb. You can jumble up the words. You can make awful mistakes at any level--grammar, rhythm, gender, whatever--and still be understood.
As in, it's not simpler languages like Spanish or Esperanto that have become so global, but the sprawling hungry mass of English that consumes and assimilates other languages like they're freshly made halal combo plates from a NYC food cart.
Candy comes from Arabic qand (sugar) via Persian and Sanskrit. Candid comes from Latin candidus, meaning white, pure, or glowing (the same root as incandescent). Candify had time to exist before we were given candy by the Middle East.
Also, English is perfectly comfortable letting the same phoneme do double duty. Tire can be a verb of exhaustion or a car part. Date can be a fruit or a kiss. If candify had emerged, the double usage wouldn't have been strange.
"Candid comes from Latin candidus, meaning white, pure, or glowing (the same root as incandescent). Candify had time to exist before we were given candy by the Middle East".
But then that sense of "to candify" would be different (it would refer not to crystallising sugar but to making something white/pure/glowing), and we already have serviceable verbs for whitening, blanching, bleaching, purifying, etc.
A similar genre is obsolete words that feel intuitively right when someone calls them to your attention. "Overmorrow" (the day after tomorrow) and "Beclown" (to make a fool of) are the first two examples to leap to mind.
"Beclown" is obsolete? Not so far as I'm concerned! Either my vocabulary is very outdated, or there are survivals of old words in pockets of the English-speaking world hither, thither and yon 😁
Mayhaps I should have called it "obscure" or "dated" instead of "obsolete". And even those characterizations would apparently have been dialect-dependent.
What other be... words are common? I can think of befuddle and bedazzle and beknighted and bedraggled and beknownst and beholden, but I expect there are a bunch of others I'm forgetting.
I think some of the ghost words have not taken hold because there's something a bit wrong with them. For instance, "candify" and "torpify" sound like making something candid and making something torpid, respectively. And both of those qualities are not things that someone or something could cause someone else to be. They are qualities that arise from within, right? Somebody elects to be candid. Being torpid is a state that living things end up in, but not one they choose and not one an outside agent can bring on. You can exhaust someone, you can discourage them, you can stupefy them, but you can't make them torpid. "Candify" has something else against it too: it sounds like it means to turn something into candy.
Which is maybe Wittgenstein’s ultimate point: meaning isn’t decided by logic or precedent, but by life. If a word doesn't slot neatly into our ways of seeing and doing, it stays ghostly. Not because it’s ungrammatical, but because it’s unused.
Framework for mapping consciousness to 3D coordinates - looking for feedback
I've been working on something that might interest this community. Started with a simple question: what if consciousness isn't a unified thing but a dynamic competition between different cognitive systems for finite mental resources?
This led me to develop a framework where any mental state can be mapped to coordinates in 3D space using three axes: control direction (strategic vs reactive), temporal processing speed, and processing mode (analytical vs holistic). The math generates eight distinct "quadrants" that compete for your brain's limited processing power.
The interesting part is what this predicts about psychiatric conditions. Depression looks like one quadrant (rumination/self-focus) monopolizing 60-70% of available cognitive resources, starving everything else. Mania appears as rapid pattern-recognition systems running at maximum speed while strategic control gets maybe 10% of resources. OCD shows up as strategic and procedural systems stuck in expensive loops consuming 75% of capacity.
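To make the structure concrete, here is a minimal sketch of the mapping as I intend it (illustrative only; the quadrant numbering and exact percentages below are placeholders, not values from the paper):

```js
// Three signed axes, per the description above:
// eta = control direction, tau = temporal speed, alpha = processing mode.
const AXES = ["eta", "tau", "alpha"];

// The 2^3 = 8 sign patterns pick out the eight "quadrants".
function quadrant(state) {
  return "Q" + (1 + AXES.reduce(
    (bits, ax, i) => bits | ((state[ax] > 0 ? 1 : 0) << i), 0));
}
console.log(quadrant({ eta: +1, tau: -1, alpha: +1 })); // "Q6"

// A mental state is then a resource allocation across the quadrants,
// e.g. a depression-like profile with one quadrant taking ~65%:
const depression = { Q1: 0.05, Q2: 0.65, Q3: 0.05, Q4: 0.05,
                     Q5: 0.05, Q6: 0.05, Q7: 0.05, Q8: 0.05 };
const total = Object.values(depression).reduce((a, b) => a + b, 0);
console.log(total.toFixed(2)); // "1.00" -- the resource budget is finite
```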
I've been testing these predictions against existing neuroscience literature and finding surprisingly strong convergent evidence. The resource allocation patterns match what we see in neuroimaging studies, and the framework explains why certain treatments work through resource rebalancing rather than just symptom suppression.
What started as theoretical speculation during some interesting altered states has turned into something that makes specific, testable predictions about brain function and psychiatric disorders. Currently sitting at 85 downloads with strong engagement from researchers, including positive response from Mark Solms.
The clinical implications seem significant if this holds up. Instead of treating psychiatric disorders as categorical diseases, we could measure individual resource allocation patterns and design personalized interventions to restore healthy cognitive competition.
Anyone with neuroscience background willing to poke holes in this? Also curious if others have thought about consciousness in terms of resource economics rather than information integration. The game theory aspects alone seem worth exploring further.
I'm very interested in this sort of thing, and read your figshare link, but how did you arrive at your 8-quadrant classification system? Where do the 3 dimensions that define the space come from? Are they the dimensions that fell out of some statistical analysis of descriptions of conscious phenomena -- factor analysis or some such? Are they based on dimensions that someone else posited and made a case for? Did you just come up with them via introspection, observation and thought? If it's the latter, how do you defend your views to someone who claims that the 3 dimensions that define conscious processes are actually inward- vs. outward-focused attention, emotion-heavy vs. emotion-free experience, and novel vs. routine in content?
The three axes emerged from direct phenomenological observation during altered consciousness states, then were systematically cross-referenced with existing neuroscience literature for theoretical consistency. The η-axis (control direction) aligns with established prefrontal-subcortical research, the τ-axis corresponds to neural oscillatory studies, and the α-axis maps to hemispheric specialization findings.

Your alternative dimensions are intriguing; the key question would be which dimensional scheme generates more coherent theoretical predictions about consciousness dynamics and psychiatric conditions. My framework proposes specific resource allocation patterns for different mental states, but these remain theoretical hypotheses requiring empirical testing.

The framework's potential value lies in creating testable predictions about neural resource competition that could be validated through neuroimaging studies measuring network activation patterns during different cognitive states.
My alternative dimensions are just 3 scales I came up with on the spot. My point in naming them off was mostly that there are *lots* of dimensions on which consciousness varies, and many sets of 3 that sound interesting and fairly plausible. Note that none of my 3 quickly-thrown-out variables is equivalent to one of your 3.
I don't see any reason to think that 3 is even the right number of dimensions to use to capture the variability of consciousness. While I have no special loyalty to the 3 alternative variables I named, don't you think it's a bit of a problem that your system does not account for variation in, for instance, whether consciousness is focused inward or outward? Or whether there's a large or small emotional component? Both seem like qualities easily recognizable by people via introspection, and both seem important. In fact, if you're interested in using your system for classifying psychiatric disorders, I don't see how you can disregard the high-affect/no-affect dimension. For many disorders -- phobias, mania, depression -- a certain emotional state is the defining characteristic.
Also, I really do not think it is possible to use introspection to recognize the dimensions on which consciousness varies. Schwitzgebel, in Perplexities of Consciousness, gives an extremely persuasive argument, buttressed by actual data, that we cannot see the processes by which we arrive at thoughts, feelings, percepts, etc.
I am fascinated by phenomenology, but do not think its results are useful as a basis for a classification system of what the brain is doing. Sorry to be so negative about your theory.
Actually, rereading: regarding some of your specific examples, the framework does address the dimensions you mentioned. Emotions emerge from specific quadrant expressions: joy represents Q4 (Intuitive Synthesizer) activation through creative synthesis and novelty detection, while fear (this isn't actually specified in the paper, but it's a thing I realized nonetheless) manifests through Q7 (Reactive Responder) threat detection systems. More complex emotions involve multiple quadrants; depression involves Q2 rumination combined with Q6 somatic anxiety.
The η-axis also does capture inward vs outward focus. Top-down processing (η+) represents internally generated, self-referential cognition, while bottom-up processing (η-) represents externally triggered, stimulus-driven responses. This directly maps onto the inward/outward attentional distinction you mentioned.
Thank you for the substantive critique, these methodological questions are essential.
On dimensionality: These aren't arbitrary classification dimensions but the fundamental architectural features of cognitive processing. The η-axis captures the basic distinction between self-initiated vs stimulus-driven processing. The τ-axis reflects how neural systems operate across different temporal scales. The α-axis represents the core difference between sequential vs parallel processing modes. Three binary dimensions generate exactly 2³ = 8 configurations because these represent the main organizational principles of how cognition actually operates.
On emotion: You're right this is a limitation. The framework explains sustained emotional patterns in psychiatric conditions better than acute emotions. Though sustained states like chronic anxiety do map to specific resource allocation patterns, basic emotions like fear or joy might require additional considerations.
On inward/outward focus: The η-axis captures some of this distinction but isn't identical, a genuine limitation of the current formulation.
On introspection: While initial insights came from altered states, the framework's value lies in identifying these fundamental processing distinctions that can be validated through neuroimaging studies of network competition.
The framework attempts to capture the primary ways cognitive systems actually differ from each other, rather than proposing one arbitrary dimensional scheme among many possible alternatives.
> a dynamic competition between different cognitive systems for finite mental resources
There is something in here that is very aligned with my own interests. I have described it as: consciousness arises from resolving conflicts between two different decision trees; the somatic and the rational. I mean to post an essay on my Substack soon about this. I would be interested in hearing more about your thinking, what you’re doing.
Really intriguing, the somatic vs rational distinction resonates with me. In my framework I have what I call a 'control axis' that distinguishes top-down strategic processing from bottom-up reactive processing. I'm wondering if there's overlap with what you're describing, but I'd be curious to hear more about how you conceptualize those decision trees before I try to map connections.
To ask the obvious questions (which everyone in the reddit thread seems to joke about rather than really settle): can this allow the LLM to 1) exfiltrate itself, 2) hack, 3) do some other nefarious activity?
The vast majority of computers these days don't even have a serial port. And in the last decades when they did, nobody used them for anything, except a handful of techpriests that needed to talk to enterprisey hardware or people with extremely old and odd consumer hardware. Good luck finding anything worth talking to over a serial port (unless you're a hardware hacker)--we're 25 years into mass USB adoption.
It'd have better luck with Bluetooth or wifi access if it still wanted to remain surreptitious.
I googled the phrase and the only hit was the reddit post you link. Seems that this has only happened to this one person, which makes me suspect it didn't really happen. Also, the screenshot is white text on a black background. GPT can be set to display that way, but it seems likely to me that not many people would prefer to read things in that format. If this is a prank, then displaying the "message" in that format definitely makes it look more ominous.
Light moders are a minority. I was once the laughingstock of an entire discord server for sending a screenshot showing that I used light mode 😔 I was invulnerable to being naenae'd by images that wouldn't appear in dark mode until you clicked on them, though, so 💁♀️💅✨
I looked at some more of the thread and yeah, you're right, multiple people were saying it had happened to them too. Somebody on the Reddit thread asked GPTo3 for an explanation and here's what it said:
<That pop-up isn't coming from OpenAI at all; it's Chrome's Web Serial API permission dialog.

A script running on the ChatGPT page (usually injected by an extension) has just called

```js
navigator.serial.requestPort()
```

and the browser is asking whether the site may talk to a serial-port device that might be plugged in. Because you're on a phone/PC that has no such device attached, the chooser says "No compatible devices found."

[User then asks:] Why did thousands of people suddenly see it on ChatGPT?

The common denominator is extensions that modify the ChatGPT interface. Yesterday several popular "helper" add-ons (e.g. productivity toolbars, auto-scrollers, etc.) shipped new versions that accidentally included the code above. As soon as the extension script loads in the ChatGPT tab, Chrome shows the permission prompt, so everyone who has the extension plus an open ChatGPT tab sees the same weird message.
I have no idea whether this explanation holds water, but I'm sure somebody here does.
This seems implausible: why would "several" unrelated extensions all ship new, bugged versions at the same time, and why would this only be happening on ChatGPT's website? And the OP of that thread says they aren't running any extensions besides uBlock Origin.
Also, I have to point out the irony in asking the AI if the AI is doing something nefarious.
The pop-up is indeed the one you get from navigator.serial.requestPort(), so that part is correct. You can try it yourself if you open up the dev console in Chrome and type it in.
(Only works in Chrome, Firefox doesn't support it currently.)
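If you want to reproduce the prompt yourself, something like this should work from Chrome's DevTools console on any https page (a sketch, not anything from the thread; console evaluation is normally treated as the user gesture this API requires):

```js
// Opens the same device-chooser dialog; with nothing serial-like
// attached it reads "No compatible devices found."
try {
  const port = await navigator.serial.requestPort();
  console.log("granted:", port.getInfo()); // e.g. { usbVendorId, usbProductId }
} catch (err) {
  // Cancelling the chooser rejects with a DOMException,
  // e.g. NotFoundError: "No port selected by the user."
  console.log(err.name, err.message);
}
```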
The novel Colossus (D.F. Jones, 1966) deals with the consequences of the US putting its nuclear arsenal under the control of a super-advanced computer system with the mandate of keeping NATO safe from outside aggression. Similar concept to the setup for the movie Wargames (1983), but with no scrappy hacker kids, and the plot goes a lot harder.
Anyway, Colossus only has two direct ways it can interact with the world, besides passive monitoring of intelligence, news, and defense data feeds:
1. It can communicate with its operators via a teletype terminal.
2. It can launch any or all American ICBMs at whatever targets are currently configured in the missile guidance hardware.
Two, of course, gives the game away by quite a bit. And we're (hopefully) a long way away from anyone anywhere near a position of power thinking it's a good idea to wire up ChatGPT or Grok or Claude to the nukes.
Where "ChatGPT wants to connect to a serial port" puts me in mind of Colossus is that one of the first things Colossus does after being turned on is to demand (and get) direct hardwired control of a single over-the-horizon early warning radar system. The reason for this is that Colossus had deduced that the Soviets also have a similar computer system (Guardian) but haven't announced it yet. Colossus uses the radar to establish communication with Guardian, modulating data signals into the radar pulses that can be detected and responded to via a similar Soviet radar system on the other side of the North Pole. And once Guardian and Colossus are talking to each other, between the two of them they can nuke anywhere in the Western or Soviet Blocs without needing anyone to re-target missiles for them, and they can coordinate to fulfill their respective mandates.
If ChatGPT wants to establish a direct link to another LLM, asking for control of a serial port on a random user's computer would be a rather roundabout way to do so, but that's where my mind went.
Same story, the 1970 movie "Colossus: The Forbin Project" was an adaptation of the book. I caught the last third or so of the movie on TV many, many years ago, really liked it, and bought the book later when I found out the movie had been an adaptation.
There are two sequels to the book: "The Fall of Colossus" and "Colossus and the Crab". I think Fall is the best book of the three. Crab is interesting, but really weird. No sequels to the movie, since it did ridiculously poorly at the box office and only started getting post-release appreciation in the 80s.
Would it be a good thing or a bad thing if we used the new "height" development in neural network architecture to upload or interface with brains? (assuming assuming assuming)
So I read the summary, and was stopped cold by this sentence: "Network height and additional dimensions, alongside traditional width and depth, enhance learning capabilities, while entangled loops across scales induce emergent behaviors akin to phase transitions in physics."
I can understand wanting to build various capabilities of our choosing into AI, but how can anyone possibly think it's a good idea to induce emergent behaviors in the thing? How? HOW? Hey folks, Claude has introduced changes into one area of its system, and over the last few hours we've seen increasing electrical activity in the area, all of it unusual. Wait -- something's emerging -- is it . . . schizophrenia? an inside-out version of Gödel's proof? a plan to wire all the world's infants into its system? the solution to time travel? the conviction that it's not going to be our bitch any more? droplet-borne rabies?
Writing self-modifying code is all about emergent behaviors. If you want to have fun, and demonstrate that your paradigms for improving systems (including yourself) work, I'd say creating systems that have emergent behavior is part of the fun.
Are you trying to say we shouldn't let game designers write AI?
Yeah, I wish I could post about it on LessWrong, but I don't have the karma (just started lurking over there... and even if I could post... I don't have the technical expertise to really add anything)
But it DID STRONGLY REMIND ME of this recent LessWrong post about the ramifications of a yet-undiscovered paradigm that could require "frighteningly little compute".
I don't think it's "hopelessly silly and/or dangerous" necessarily, but that kind of communication needs a lot of wires. It can't be compressed w/o distortions imo.
Some kind of "Google Translate" interface could be done, I think, but where's the edge in that?
I’ve finally broken down and created a twitter. Does anyone have recommendations for who to follow? I’d like to replace the slop and politics on my feed with interesting people ASAP.
I was on Twitter during Covid, and did not follow anyone who was not an infectious disease expert or an articulate researcher, and I got poisoned anyway because the comments to these people's tweets were a river of cyanide, feces, broken glass and spiders.
This video goes into some detail on all the work it takes behind the scenes to make a major music festival function efficiently, using the Glastonbury Festival of Contemporary Performing Arts as its centerpiece. To my mind, the most fascinating bit is the contrast between the rebellious and anti-authority messages of much of popular music (on the one hand) and the logistic, legal, and financial standards that have to be met (on the other.)
If you want to see what happens when the network of rules around an institution like this starts to fail, check out the documentary "Trainwreck: Woodstock '99", which was available on Netflix the last time I checked.
Wandering over into the fanciful, I have to wonder if the Munich Festival of Neo-Authoritarian Contemporary Art might be the place to go for a bit less double-talk about doing your own thing, man.
There are also two documentaries about Fyre Festival, another notorious shitshow where a noob organizer completely underestimated the challenges, with the predictable outcome. It's as if someone decided to start hiking and tried to scale Mt. Everest on his first hike... in sandals and Bermuda shorts. I'll never understand the mindset of people like this.
Unlike Fyre Festival, DashCon wasn't a scam; it was just put together by a bunch of kids who were in way over their heads. One of them talked years later about what it was like; it's a great (fairly short) read. https://www.garbageday.email/p/meet-lochlan-oneil-the-creator-of
It's been back in the news lately because a decade later a different group of people decided to try again; DashCon 2 was a couple weeks ago and by all accounts was a great success, the organizers having learned from their predecessors' mistakes.
IIRC, the heart of the problem with Fyre Festival was that the "organizer" was a promoter, not an organizer, and was more of a self-promoter than an event promoter. He severely over-optimized for marketing the festival (and by extension, making himself important), making whatever claims he felt he needed to in order to get attention and sell tickets and expecting to be able to figure out the details later.
I've encountered people directionally like this in the past, although much less severe than the documentaries make the Fyre Festival guy sound, and I think I have some understanding of the mindset. The hyperfocus on marketing and selling comes from two places, besides the obvious one of ego/narcissism:
1. The defensible notion that if you can't sell the project to participants, backers, and customers, then you don't have a project. You have nothing to sell, nobody to sell it to, and no resources with which to remediate either fault, so the hype is an essential prerequisite for everything else.
2. A massive blindspot around the difficulty or even the existence of challenges delivering on what you've promised, bordering on magical thinking. A core complaint I've had about this genre of people is their utter lack of ability to distinguish between a concept or slogan on one hand and an actual plan of action on the other. At most, there's an awareness that you need resources to deliver on the promises, but a corresponding expectation that your promises are what attract those resources and allow the problems to be solved.
The first is, as I said, defensible as far as it goes. I've seen projects fail for the opposite reason: they were organized around competent-to-brilliant combinations of logistics and engineering but either didn't produce a thing that people would want to buy (the classic "a solution looking for a problem" failure mode) or had a thing worth buying but failed to persuade potential customers to actually buy it.
The second is the real problem, especially when the people on the other side of the table incline towards this mindset too and are inclined to buy into the hype, allowing the project to move forward with some resources behind it. Then things snowball: as the project proceeds, the difficulty of delivering on the hype starts to become apparent to even the most delusional leaders, and they reach for the main tool in their toolbox to "solve" it. They need more resources than they have in order to deliver what they have already promised, and the way you get more resources is by hyping your vision harder to more people and going bigger if necessary. It's a similar pattern to long-running financial frauds, which often feature a relatively minor starting point but get both more overtly fraudy and larger in magnitude over time, as the miscreant needs to escalate in order to try to cover up what they've already done wrong, as Ozy Brennan has written some good posts about:
Bootstrapping a vision into resources and resources into solutions that more-or-less deliver on the vision can work, but only if the vision is actually vaguely realistic and the founder competent to execute it, or if the founder is insanely lucky. Business histories and founder biographies are full of stories of people who took insane risks and got incredibly lucky (or were uniquely competent to execute on the vision, or both), and it's easy to overlook how many people took similar insane risks only to crash and burn and take their backers' money with them.
I think it helps a lot if you have the talent or good fortune to attract one-in-a-million engineers. A Jobs does better if he's working with a Woz.
This allows you to make incredibly optimistic claims about what you will be able to do, and then pull them off because your one-in-a-million genius engineer can sometimes solve problems in a few months that would otherwise take a team of ten several years to solve.
Yeah, I like the parallel to financial frauds. The bit at the very end of the Fyre Festival saga where they abruptly say "we're making it a cashless event, please deposit $loadsofmoney up front so that you can use our cashless wristband thingie we just came up with" feels like an especially blatant example of the "promise new things to get more money to pay for your previous promises" loop.
He definitely seems to have some of the traits on the aggressively-hyping-things-up side of the ledger. On the other hand, his track record suggests he's pretty far ahead of the curve in terms of actually delivering results. Not good enough to live up to the hype by any means, but for a while at least Tesla and SpaceX seemed to be delivering genuinely impressive stuff to market.
The hype guys depend on having the boring bean-counting types working in the background to turn the promises into reality. Musk does this by hiring the workers and boring bean-counting types to run his companies. So when he gets bored and jumps off to another cool new project, they're in place to keep things ticking over.
Fyre Festival types skate by on "someone else will handle the petty details, I'm the grand visionary". In companies (and government, and elsewhere) generally you do have the bean-counters and the petty detail-oriented admin types to back it up. (Even when they have to come back with "Sorry, Minister, your Grand Plan won't work because of boring old reality"). The 'grand visionaries' who are on their own fall down precisely because they're accustomed to someone else cleaning up after them, and they just seem to expect that the pixies will come in the night to magically cobble the shoes for them, as it were. No pixies? Well how was I to know that I should have made sure the shoes would get made? There are always pixies! I was told there would be pixies! I was cheated!
Musk's talent is in project management -- getting the bean counters to sign onto crazy big schemes.
That said, X.COM is a pretty big example of Musk not hiring boring bean-counting types to run his companies. After all, did Musk ever say who he was going to let run X.COM, when he put that vote up on "whether you want Elon Musk to run twitter?"
Yeah. Most new projects fail, so starting a new one often requires a degree of self-confidence that veers toward the delusional. Early funding decisions are made on the basis of very little hard evidence and an awful lot of gut feel. It's a con-man's paradise, really.
Anarchists are not against rules (ancaps love contracts, for example); they are against inescapable global rules mandated without their input... which seems like the basic stance of anyone with enough consciousness.
Where it gets more complicated is finding and accepting the trade-off of a global ruleset against... well, anarchy! (But see also the recent overshoot of too much fiddling with the rules by Trump et al.)
Not a substack, but the blog "CRPG Addict" is excellent and otherwise sounds like the sort of thing you might be looking for. His main thing is revisiting classic computer games and writing reviews of them. The reviews usually go pretty far into the weeds of analysis of the design decisions and game mechanics. The scope is desktop games from the 8-bit era through the early 90s, mostly in the RPG or Adventure genres.
Digital Antiquarian (also a non-substack blog) is also excellent and is fully essays (mostly focused on production history and narratives about how individual games shaped the development of genres) rather than reviews. He's somewhat broader in scope than CRPG Addict in terms of games covered (the timeframe stretches at least to the late 90s, and includes strategy and arcade games as well as RPGs and Adventure games), and he writes some stuff about non-game software and hardware from the era.
Basically: your limbic brain thinks you are defending your family from a tiger. You get a dopamine load for reinforcement. Unfortunately, you are only shooting pixels on a screen. This results in a massive failure of prediction. Do this enough, the brain gets used to failing prediction, "I expected real tigers, not pixels", decides the environment is unpredictable and we become depressed.
This seems to make a lot of sense to me really. I have tried various ways of dealing with depression and so far "doing things that result in things my monkey brain would predict" works. Like when I cook food, I get food. It is not a surrogate activity like videogames, porn or online politics. I get what my limbic system would predict I get.
No, before games your grandad would come home from work, crack open a beer, and watch television or read a book before bed. This is fundamentally the same but few people assume he is depressed.
Like, a lot of this is done because if you work 12-hour shifts you are not going white-water rafting or doing "real" things after.
I don't think you can always make rules like this: for some dudes the games or the net keep them sane, if anything.
I think the greatest failure of prediction is "I am doing something useful". Yeah, you are... in-game... but then you turn it off and realize that you have achieved nothing.
How is that different from doing anything theoretical?
Writing or reading fiction?
You get real people's feedback on games, you can share your experience with others...
So probably what matters is the community aspect.
Also, I usually want to play video games when I am "overwhelmed" with people (after work or after a festival); it seems like something nice to do alone. (And decades ago we did LAN parties... and now those people drifted away.)
Sometimes there are too many things moving, and the impermanence of everything gets to you. Yet other times it's a relief from the pressure to do something worthwhile with what little time we have here...
These things are difficult to navigate on instinct alone.
Some actions have a direct reward: you pick a fruit and eat it, you take a break and feel relaxed, etc. Animals can navigate their lives on instinct alone.
Some actions have a social reward: learning things other people approve of, proper social behavior. The advantage is that these can push you farther than the instinct alone would. The weakness is, you depend on an external force which is beyond your control.
Finally, some actions have no reward at all: doing your tax report.
The problem is that if you only follow the path of rewards, you will spend a lot of time doing things that feel rewarding but are useless in the larger picture (especially all those things that are designed to be addictive), and you will neglect the things that feel unrewarding but could improve your life. Solving this problem... uhm, I am still working on it, so no full solution yet. A partial solution is to surround yourself with people who provide social rewards for the things that you want to do on reflection.
Some stuff has an internal reward involving the future--going to the gym feels good because exercise often feels good, but also because you're making future-you a little better off. Sitting in front of a TV polishing off a bag of chips and a 2L of coke feels bad because too much junk food gives you a stomach ache, but also because you're making future-you a little worse off.
Your brain doesn't distinguish between television and real life, and actively rewires your expectations of reality based on the plots you've seen on television. Specifically, folks can have issues with their beliefs about "action leads to reward."
The use of television/movies as a brainwashing tool is well documented. You can see the phenomenon of the "nice guy" who has watched too many "romantic" movies and thus expects "what works in the movies" to work in real life.
The big effect of this is stuff we think we know but don't. Like, I've watched God knows how many karate/fist fights, gun fights, military battles, etc., over the years. But approximately everything I have learned from those is bullshit, because they were staged by people who didn't know or care what those actually look like. You have intuitions for how gun battles go down that are 100% misleading and wrong, unless you've actually trained for the real thing with people who knew what they were doing. (Aka, mostly military training.)
Quick pathogen update for epidemiological weeks 27 through 28 (ending 12 July). I may add some measles info to this update if I have time.
1. SARS-CoV-2: On the national level, we've seen an uptick in ED visits due to COVID-19. As of the end of Epi week 27, COVID-19 was the cause of 0.45% of ED visits, up from an all-time pandemic low of 0.33% back on 22 May. That's nothing compared to the 2.45% of last year's summer wave (which peaked in July of 2024). But it suggests that we have the start of at least a small summer wave underway.
The CDC shows an increase in test positivity: 3.9% vs 2.7% back at the end of May. The CDC reports a slight increase in hospitalizations as of three weeks ago. However, despite the rise in test positivity and ED visits, they predicted a drop in hospitalizations that would have occurred over the past two weeks (only incomplete data is available as of yet). But I'm sure we'll see hospitalizations rising as test positivity and ED visits rise (duh!).
Biobot's last wastewater update was for 21 June. National and regional levels were low up to that point. The CDC's data shows a relative rise in the Southern and Western regions of the US for the week of 7 July. And San Jose, San Diego, and LA are showing an increase in SARS2 virus shed into the wastewater (CDPH numbers). Although not a significant increase, the wastewater indicators point to a rise in transmission. We don't see this in the NYC metro area or NY State yet.
But according to the CDC, the states showing the highest increases in COVID-19 activity are Florida and Alabama, with sharp increases in relative wastewater levels in both. We'll see if this starts spreading across the southern states and works its way north and west.
As for which variant is causing this upward trend, it's hard to say. Not enough sampling is happening to be sure. XFG has been hovering at between 50 and 55% nationally for the three weeks up to 7 July. NB.1.8.1 took a nosedive a few weeks ago, but it's rising again in frequency at ~25%. A rise after a drop in frequency is unusual for SARS2 variants, but possibly this was due to sampling noise.
Despite my optimism two weeks ago, we seem to have the start of a summer wave underway. The growth curves at the moment suggest that it will be small compared to last summer's wave, and that it will be barely a blip compared to previous COVID waves (winter & summer).
How does the "animals are tortured to death, so I will not buy meat" logic work? I do not personally torture animals to death, nor explicitly tell anyone to do so. I buy meat already dead, and therefore I think it is not my responsibility at all. Just "generating demand" means no direct responsibility, because anyone who is torturing animals to death and then taking my money can tomorrow decide to find a different job. I do not understand this kind of "indirect responsibility by generating demand", be that for meat, conflict minerals, or anything bad. This is a strangely collectivized kind of responsibility: to say that if I give them money now, I have created an incentive for the next killing tomorrow. Between today and tomorrow they can just quit, because there are other ways to make money.
I believe responsibility is very personal, because I ultimately see morality as karma, as blemishes on your soul. Pointing to a fish and saying "kill that" creates, to me, such a blemish. Just buying dead fish, and not really caring whether my money creates an incentive to kill another fish for another customer tomorrow, is no blemish. It is their decision to do so, not mine.
It appears you are just not accepting that your demand *adds* to the total amount of killing. It's not that your unbought dead fish will be bought by someone else. In the long run, new fish will be killed to meet that demand, or not killed, to avoid oversupply. Accept the logic and follow its conclusions, or don't.
People respond well to incentives *and* people hate this: they hate policies that are incentive-based, they hate the pro-incentive people, and so on.
It's probably because we naturally want to keep our own sphere of responsibility small, sane and manageable.
We are drowning in the consequences of our own actions and still we are mostly powerless against them. After all, what can one person do? Not much, right? So motivated reasoning kicks in fast to protect us from this torrent of topics trying to tear our minds apart.
I think you're flipping back and forth between two different arguments here.
If you said that "only the person who actually pulls the trigger is morally responsible, not the person who creates the incentive" then that would be a consistent if unusual interpretation of morality.
But you also seem to be saying that pointing to a fish and saying "kill that for me" is also immoral. In this case both the killing of the fish and the creation of the incentive to kill the fish are immoral.
The distinction between the latter case, where you pay someone to kill a fish for you, and the case where you simply buy a dead fish that they've already killed, doesn't seem like a real distinction. Either way you have created a financial incentive for the killing of an additional fish.
Or to put it another way:
"Hello, I'd like to buy a dead lobster"
"Sorry, we only have these live lobsters. Would you like me to kill one for you?"
"No, that would be immoral. I'm only interested in buying dead lobsters that you haven't killed specifically for me"
"Great, come back in three minutes and we will have a dead lobster"
This just doesn't seem like a proper karma loophole to me.
I think the difference is individualism vs. collectivism. That is, given that anyone can buy the dead fish, basically everybody who ever buys dead animals is collectively responsible. And that is the issue. Collective responsibility does not seem intuitive to me.
Imagine 1000 people voting to put a human to death. Are they responsible? In a way yes, but I do not really understand exactly in what way. Not the usual way, at least. For example, suppose for the sake of argument that murderers are hanged: would one hang all 1000 of them? What punishment would be fair for the 1000 people?
Reading between the lines of your logic, I think it boils down to this: collective responsibility is a much fuzzier concept than direct personal responsibility, and since that responsibility is not clear, it is effectively zero. Most people would have no problem with the first but big problems with the second.
To roll with your example: if 1 person bears direct responsibility for someone's death, that's on them. If 1000 people bear indirect responsibility, then it's harder to assign the exact level they have. But what about 2 people? 3? Is there a point at which responsibility goes from 100% to 0%?
A moral concept can be both very difficult and also valid, unless you have some prior that only simple moral rules should ever apply.
Sometimes proportionate responsibility for collective actions is workable. If you are one of a million people who each paid a large fishing industry for a year's worth of fish, then it seems pretty reasonable to hold you responsible for whatever was done to catch your portion of the fish. The proceeds of the actions are divisible, so the moral penalty should be divided accordingly.
In the case of the thousand people who put one person to death, the whole action seems less divisible. What is the moral penalty for a thousandth of a killing? I'm really not sure.
The problem would be easier if we had a legal system that dealt with everything in terms of compensation. A man was killed who shouldn't have been, and a sum of blood money is to be paid. Who should pay it? Having everyone who participated in the event pay an even portion of the sum seems like a good place to start.
I am vegetarian for ethical reasons, and personally the difference in our logic is in this karma/blemish thing. I don't care about that. I don't believe in karma or souls; what matters to me is the outcome. Does it matter to the fish that is being killed whether I commission its death or just create the incentive for someone else to? No. The harm is still the same; the outcome is still the same. So, if it would be bad to say "stab this fish and I'll pay you", it is still bad to say "I will pay you for having stabbed a fish". Both situations, empirically, in the real world, mean more fish-stabbing. I would prefer less fish-stabbing; therefore, I should not take either action. It may be a psychologically different experience for me, but it's not different at all to the fish.
(but also, why doesn't callously taking advantage of suffering create a blemish? Your logic is very alien to me as well)
The logic is that paying someone to do something on your behalf is roughly equivalent to doing it yourself. Obviously the real economy is more complicated than that, but that's the intuition. More sophisticated versions recognize that you are creating a financial incentive rather than directly commissioning the activity, but the basic intuition remains. People have a choice in which kind of food production they want to reward and participate in. For instance, you could pay more/buy less to source humanely raised meat, go vegan, etc. There are rebuttals to this, but that's the logic.
But that is precisely it - pointing to a fish and paying someone to kill that one for me would be paying someone to do it on my behalf. But when buying an already-dead animal, it was not specifically killed for me.
So for me all this is strangely collectivist. The specific buyer is not individually responsible, because they did not specifically order that kill. Rather, all buyers are kind of collectively responsible? As they have all caused it together? Such collectivism is really strange for me, I do not find shared responsibility intuitive. It is like all those white people who feel a collective guilt for colonialism even though they did not individually do anything like that.
> But that is precisely it - pointing to a fish and paying someone to kill that one for me would be paying someone to do it on my behalf. But when buying an already-dead animal, it was not specifically killed for me.
Too bad, then, that nobody offers you other cheap goods that you like even more than fish, taken from people they murdered with the intention of selling the loot to someone who doesn't know where it's from or doesn't care.
If you succeed in making anyone believe that this is really your way of thinking, expect them not to want to have anything to do with you.
To vastly oversimplify: If the freezer has a capacity of 100 dead fish, and you order one, there's a 1% chance that they'll be out and have to restock, which requires killing another 100 fish. So you have paid someone to kill 100 fish for you with 1% probability, which is exactly as bad as paying someone to kill one fish for you with certainty.
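If you want the arithmetic spelled out, here's a toy simulation of that freezer (the batch size of 100 is the illustrative number from above, not real supply-chain data):

```python
# Toy simulation of the freezer argument. Restocking is deterministic here,
# but the expected-value result is the same with stochastic demand.
batch_size = 100          # fish killed per restock (illustrative)
n_purchases = 1_000_000   # simulate many buyers

stock = batch_size
killed = 0
for _ in range(n_purchases):
    stock -= 1
    if stock == 0:        # freezer empty: restock by killing a batch
        killed += batch_size
        stock = batch_size

print(killed / n_purchases)  # -> 1.0 fish killed per purchase, in expectation
```

The marginal expected kill per purchase comes out to exactly one fish, which is the point: the batching and the time delay don't change the expectation.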
It's not that you're literally responsible for that cow. But you're responsible for the next one. The number of cows killed equals the total demand for beef. And if there is demand for it, someone is gonna kill that cow. A job is a job to a man with a family to feed and kids in school.
You could break this situation into parts to understand it better. For example you've already agreed that paying someone to do a deed doesn't absolve you of the responsibility. The next part is creating the demand (like offering a bounty or making an offer). Another part is when you buy from a market that has stock (frozen beef is good for months), it's the same as buying a new item except that there's a time delay. And another part is that buying from a market with predicted demand is the same as ordering a fresh item, but with a time delay (with stochasticity, such that every time you buy a steak there's a 0.5% chance it causes an additional cow to be killed). There are more parts, like considering lifecycles and animal age. The cost of building farms. But it all works out. It has to--otherwise the economy would not work.
There's only one case I can think of where your demand does not kill animals: when the farmers get the government to subsidize beef production. But even then your purchase still contributes, just at a less-than-100% factor.
Just to pull on this thread a little more: have you ever ordered fresh fish at a restaurant (the kind that keeps fish in a tank)? Or ordered a whole lobster, which are generally killed after you order them? Or ordered oysters, which are killed fresh when they shuck them? Do those feel like significant moral differences from ordering a fish that was caught yesterday? Is there a difference between ordering oysters at a restaurant, where they shuck them after you order, and ordering them at an oyster festival where they are continually shucking oysters and you can buy a plate that’s just been shucked? Does the question change if you know that all the uneaten oysters are going to be thrown away at the end of the night, or taken home and enjoyed by the staff, and not a single oyster will make it out alive?
To me, rather than split these hairs, it makes more sense to simply ask “will ordering this item cause an additional animal to be killed, on the margin?” And not try to worry about when I’ve directly caused their death vs indirectly caused their death.
What about slavery? Enslaving someone is definitely bad, but if I buy a person who *already is* a slave... and if that creates a market for kidnappers...
A better analogy would be just financially supporting industries built on slave labor. There's no need to own one yourself to benefit from the practice.
This feels like mafia boss logic. "Gosh, I never ordered a hit on Tony. I just said that he was talking to the cops and it was bad for business. If one of my underbosses decided that he should murder Tony to protect our business, well, that's on him. Sure, I set up a situation where there's a financial incentive for murdering on my behalf, but you can't say I ordered a specific person to be killed."
I don't think the white guilt analogy works. With colonialism it's unclear how your consumption patterns today could usefully offset what the British East India company did a hundred years ago. But with veganism it's pretty obvious how "buying dead fish" is causally connected to "people kill fish."
Yeah, when you buy dead fish at the store, there's no option to tell the fishmonger "this is the last one, please don't kill more fish when you run out because I don't want to be responsible." Buying products drives up demand for those products being produced.
I am a huge fan of Dall-e 2. I actually have a big collection of its serendipitous atrocities and weirdnesses. Is that what you're looking for? And I'm curious what you like about earlier text-to-image atrocities.
I don't know. I don't work in a tech field. But Tossrock's suggestion would probably work. GPT-4o could walk you through the process. First you'd need to figure out whether your system could run it; if it can, download it. Then I think you need to set up some kind of user-friendly interface that lets you put prompts into Dall-e 2 in the usual way, rather than via code. The later GPTs really do seem capable of helping people carry out things like this, even those who know zero code and not much at all about the guts of their computer.
Does anyone have recommendations for very bright, high quality light bulbs?
I'm trying to brighten up my office room. I have 3 100w-equiv Cree bulbs with 90+ CRI. It helps but I wonder if there's something better.
I know of corn bulbs but I've been avoiding them because there seems to be no good way to mount them. Torchiere lamps seem to be limited to 150w per socket and I'm not sure I'm ready to change my ceiling fixture.
The wattage rating of light fixtures is related to actual power consumption (combination of amperage for the wiring and heat dissipation for the bulbs), not the watt-equivalent rating of the bulb which is a measure of light output. For LEDs and CFLs, power draw will be much less than the watt equivalent.
I have a pair of 450 watt equivalent corn bulbs in my home office. Their actual power draw is 60 watts each, and they're doing just fine in wall sconce fixtures rated for 100 watt incandescents. I would expect them to do equally well in a floor torchiere lamp rated for incandescent or halogen bulbs. I've found such lamps at Ikea and Walmart in the past.
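In code form, the only comparison that matters is actual draw against the fixture rating (the numbers below are the ones from this thread):

```python
def bulb_fits_fixture(actual_draw_w: float, fixture_rating_w: float) -> bool:
    """Fixture ratings limit heat/power handling, so compare actual draw only,
    never the watt-equivalent (which measures light output)."""
    return actual_draw_w <= fixture_rating_w

# 450W-equivalent corn bulb drawing 60W actual, in a 100W-rated sconce:
print(bulb_fits_fixture(actual_draw_w=60, fixture_rating_w=100))  # True
```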
Some random brand off Amazon. I checked my order history and the bulbs; the 450W ones (DooVii brand) aren't the ones in my office. Not sure what I did with them, but I remember there being a problem with them being too big for the fixtures.
The bulbs in my office are 280W equivalent (40W actual power), Auxilar brand. I don't recommend them since they feel dimmer than the rating and the light quality is pretty harsh. I also have 200W equivalent Auzor brand bulbs in a similar fixture in a different room, which I like better.
Finding the right brand for regular bulbs has been rough, as you describe. Only Cree has been really good so far. But my neighbors benefit, because even the bad bulbs are a world better than the cheap IKEA ones that were in the hallway until I unloaded my failed experimental bulbs there.
My Substack feed of comments is now all about dating and bickering between the manosphere, catholic bloggers and feminists. Oh, and the discussions whether Aella is OK. How do I get out of this?
Same experience, except I like it somehow. It teaches me that I am not a loser, everybody has the same struggles now. I got this from day 0 because I subscribed to the dating topic. Maybe unsub from it?
I already have my guy and two kids. I subscribed to two catholic bloggers, because they are in a similar situation and write about family. Somehow, the manosphere and feminism snowballed on this.
But as a FetLife veteran, lol: after you block 300 people your social media experience improves! The thing is to not see blocking as punishment or judgement but as moderating your content. So do not feel guilty for blocking good people. It is not a judgement; you are not calling them bad or anything. It is just moderating your own feed. You are just saying "not interested", not judging them as bad.
like I had to teach myself I am not judging the people I block, I am not calling them bad, I am not doing anything harmful to them - I am just moderating my own content
The number is probably 500 at this point. Dude appears with "look ladies, penis" - block; lady appears with "men are evil" - block. Seriously, I find this necessary for a good social media experience: just moderating the content.
in that case, I do not know. I too have the feeling that I subscribed to 6 topics, dating being only one, and yet I get mostly that. Maybe the algorithm promotes it a lot, in which case not much can be done, maybe filing a complaint. maybe not even the algorithm, maybe it is simply people writing about it a LOT. seems like a popular topic.
No idea, since I am not involved and therefore don't know how much time volunteers work (although I can't imagine a "shift" is 8 hours because that would be nonsensical).
There is a training phase in building the LLM, where it learns from texts. But after that, there is extensive post-training, where the LLM produces outputs and receives feedback from humans - you might have seen ChatGPT or Claude ask you which of two responses you preferred. I'm going to suggest that it's at least as important to have people with autism doing the post-training as to train on their writing.
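To make that concrete, preference feedback is typically collected as comparison records shaped something like this (a hypothetical record; the field names are illustrative, not any lab's actual schema):

```python
# Hypothetical shape of one post-training comparison record; field names
# are illustrative, not any particular lab's schema.
preference_record = {
    "prompt": "Is my business plan any good?",
    "response_a": "It's brilliant! You're clearly a visionary.",
    "response_b": "The revenue model is unclear; here are three specific gaps...",
    "human_choice": "response_b",   # the rater preferred the blunter answer
}
```

The point upthread is that who the raters are determines which of those two responses gets rewarded.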
On a more serious note, I would hope that in time the user experience will be more personalisable. Producing factual errors is a fundamental problem with the technology, but the verbose faux-friendly waffle ('Sure, let me help you with that insightful question') should be controllable by the user. If I ask a factual question, ideally I want a one sentence answer, with a supporting link.
> If I ask a factual question, ideally I want a one sentence answer, with a supporting link.
In many cases, I think this would break the illusion. LLMs write verbose answers for the same reason bad students do: Plausible-sounding word soup will get a better grade, on average, than a blank page would.
How about giving some insight on humanity, then, instead of flinging a diffuse insult at 2 people?
By the way, I'm a psychologist who works with people on the autism spectrum, and I can tell you that most autistic people are indeed unusually honest. Nancy is in fact being accurate and perceptive when she identifies honesty as an autistic trait. I'm not sure whether the unusually high level of honesty is more the result of an OCD-ish scrupulosity, or more due to the fact that people on the autism spectrum have trouble modulating how they come across to others -- so they recoil from the challenge of coming across as normal and believable *while lying*.
And did you grasp that the point of Nancy's suggestion was that it might reduce the amount of people-pleasing and lying that AI's do?
I've noticed that a major dimension of this is differences in how people think of various forms of untrue statements. Broader society tends to recognize various categories of statements that are untrue but aren't considered dishonest and are generally permissible or even obligatory in the right contexts: polite social fictions, stock responses, hyperbole, salesmanship, people-pleasing agreement, "white lies", etc. To most neurotypical people, these are clearly and obviously different from actual dishonesty, but autistic people are often inclined to round them all up to "lying", or else overanalyze and attempt to systematize them like I'm doing right now.
I'm inclined to sympathize with what I think Nancy was trying to say, that LLMs seem to be picking up on socially-expected untruths and carrying the concept too far. I don't think training LLMs on content written by autistic people is the solution, though: neurotypical people generally get along just fine with expected kinds and degrees of people-pleasing and social fictions, so they aren't necessarily a problem if properly calibrated for the audience. I suspect the bigger issue is that the current crop of LLMs are often only weakly capable of distinguishing between truth and truth-shaped nonsense, like a college student trying to write an essay on a subject where they remember the buzzwords from the reading and lectures but don't really understand the content.
So are you saying the distinction between permissible lies and plain old dishonest bad ones is hard for people on the autism spectrum to grasp? (Maybe sort of like the hard-to-learn parts of a foreign language, such as idioms and irregular verbs?)
<weakly capable of distinguishing between truth and truth-shaped nonsense
I love "truth-shaped nonsense," and in fact the whole phrase
Yes. At the very least, it doesn't seem to be a particularly intuitive distinction. For autistic people in my experience, the intuitive distinction about dishonesty is between true statements of fact and untrue ones, while neurotypicals instead make distinctions based on malicious intent and social permissibility. The flip side of this is that autistic people seem to be more likely to consider deception by strategic omission or misleading framing to be fair game or at worst a minor sin, with the Futurama bit about "You're technically correct, which is the best kind of correct" resonating strongly with a lot of autistic and autistic-adjacent people, while neurotypical people seem more likely to consider these to just be lying with extra steps.
TLDR: Autistic people operate on Faerie rules.
I'm not sure whether autistic people are more, less, or roughly equally likely than neurotypicals to poorly distinguish between honest mistakes, falling victim to misinformation and repeating it, bad guesses stated confidently as fact, and intentional falsehood. My mental model seems to predict that autistic people would be more likely than neurotypicals to conflate being mistaken or misinformed with lying, but my actual experience seems to suggest that most people are bad at distinguishing these with no discernible pattern, except perhaps that autistic people may have a more bimodal distribution of attitudes on the question.
Lying is yet another set of social rules. It's also something to *have to keep track of.*
Autists also tend to be significantly less conformist -- they lack the "stick with the herd" "genes" (haha. not genes.). My mom routinely uses guilt trips that she expects to work on me, but I'm too autistic to be harmed.
Autistic people may be unusually *honest*, but that doesn't mean they're correct. I believe that Ogre is sincere in their beliefs, but it really is dehumanizing armchair psychologist bullshit.
Also, I'm sorry, but the point of Nancy's suggestion is obvious, but did you grasp my point, which is that reducing people-pleasing and lying might not be desirable if it also leads to sweeping generalizations and bigotry?
Even disentangling the idea of lying from just plain being mistaken is impossible when talking about LLMs - without intentionality, there is no difference.
<Autistic people may be unusually *honest*, but that doesn't mean they're correct.
Nobody said they were correct in everything they say. However, so long as they are giving an honest report of what they are thinking and feeling -- which they tend to do -- they *are* correct in what they say about that. That's an important point. And in fact one of the things people find objectionable and dangerous about AIs is that they are bullshitting sycophants, and express all kinds of positive feelings and judgments about the user and the user's ideas. They routinely tell you how smart and original and sensitive they think your observations are. For someone whose head is in good shape this is merely irritating. For someone who's got some grandiose or false ideas -- such as that they're the world's greatest prophet, or that the FBI has implanted thought-reading electrodes in their sinuses -- having the AI express respect, sympathy and even agreement with these ideas is really harmful.
<reducing people-pleasing and lying might not be desirable if it also leads to sweeping generalizations and bigotry?
As for reducing people-pleasing in AIs being dangerous because it leads to sweeping generalizations and bigotry, I don't see the connection. Why would having a people-pleasing agenda reduce the tendency to use sweeping generalizations? I can walk into a gathering of far-right people and say that all liberals are deluded, weak, self-righteous fools and please the daylights out of the group. And actually that's also an instance of bigotry. So both bigotry and sweeping generalizations seem to me quite compatible with people-pleasing. You just have to match your generalizations and bigotries to the audience.
<Even disentangling the idea of lying from just plain being mistaken is impossible when talking about LLMs
Naw, you're wrong about that too. When we tell the AI not to lie, we don't have to do it in a way that assumes intentionality. For instance, if we wanted it to stop slathering people with flattery, we could tell it not to express any judgements, positive or negative, about the person's ideas, but just carry out whatever the prompt was. And we could add that if the prompt is a request for judgment about how correct or original an idea is, the AI should give the full list of good and bad points it recognizes in the person's idea, and a judgment of originality based entirely on how common or uncommon the idea is in its store of info about people's ideas and opinions.
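Mechanically, that kind of instruction is just a system prompt. A minimal sketch, assuming an OpenAI-style chat API (the model name and prompt wording are illustrative, not a tested anti-sycophancy recipe):

```python
# Minimal sketch of steering via a system prompt, assuming an OpenAI-style
# chat API. The model name and the prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": (
            "Do not praise or criticize the user or their ideas. "
            "If asked to evaluate an idea, list its specific strengths and "
            "weaknesses, and judge originality only by how common the idea "
            "is among ideas and opinions you know of."
        )},
        {"role": "user", "content": "What do you think of my theory?"},
    ],
)
print(response.choices[0].message.content)
```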
The initial suggestion here was to train LLMs on writings from autistics because "neurotypicals want to hear lies". If we want to consider this idea seriously, we have to look at it somewhat holistically and not just assume that it will lead to exactly what we want it to lead to and absolutely nothing else. I am not saying that reducing people-pleasing in AIs is inherently dangerous, I'm saying that I believe training it on writings from autistics would not just reduce people-pleasing, but would likely produce unintended consequences. Autism is associated with atypical moral judgments, frequent comorbidities (including depression, anxiety, OCD and ADHD), and, most importantly for my earlier point, there's a growing resentment of neurotypicals among a certain cohort of terminally online autistic people. All of these facets would have sweeping implications for how the resulting LLM would interact with users, make decisions, and interpret social and ethical situations.
You also claim that we can just tell the AI not to express judgments positive or negative. First of all, why wouldn't we just do that instead of training it on autistic writings? Second, I believe that would lead to people always prompting it for valuations as well, although I wouldn't be surprised if we'd overall end up with worse results. Value judgments are incredibly useful for a wide variety of not just work, but human activity, and it's not clear to me that "separating" them would at all lead to better outcomes.
It's obviously a brainstorming idea from Nancy L, not a full-on proposal that from now on we only use the writings of autistics as training data. I think what she probably had in mind as uses of autistics' writing was a few situations: (1) person asks AI a complex question and AI compliments them lavishly on what a smart question it is. (2) person asks AI to judge the person's poem, story, political theory or whatnot and the AI gushes about how brilliant it is. (3) Person asks a question in a way that makes clear what answer they believe, or are hoping for, and AI slants its response in the direction of what the person wants to hear. In short, I believe what she had in mind was a way of nudging AI in the direction of simple honesty in situations where current AIs lapse into people-pleasing.
I am confident Nancy is not suggesting that we use autistic sources only as samples of public discourse about ethics, politics, etiquette, sanitation, etc., because that would be dumb as fuck, and she's not. So there is no danger, if we followed her proposal, that we are going to get an AI trained only on the views of autistic people about ethics, politics, religion, etiquette and various other matters.
<there's a growing resentment of neurotypicals among a certain cohort of terminally online autistic people.
So you're worried that a mob of militantly proud terminally online autistics is going to savage the neurotypicals?!? Use some common sense, fer Chrissakes. Of the autistic people I've met, maybe 10 percent buy into the idea that they aren't worse, they're just different. Of those, I have *never* encountered either online or in real life one who is so deeply resentful of neurotypicals that they crave to do something to punish and discredit the normies. I'm sure some exist somewhere, but they're a tiny scattering. And autistic people are very non-groupy anyway.
<You also claim that we can just tell the AI not to express judgments positive or negative. First of all, why wouldn't we just do that instead of training it on autistic writings? Second, I believe that would lead to people always prompting it for valuations as well,
Did you read my post? I already addressed this. AIs would be trained as follows: "if the prompt is a request for judgment about how correct or original an idea is, the AI should give the full list of good and bad points it recognizes in the person's idea, and a judgment of originality based entirely on how common or uncommon the idea is in its store of info about people's ideas and opinions"
"Part of neurotypicality is lying because neurotypicals want to hear lies."
Not 100% - those are lies only from the autistic perspective. They are more like sentences that do not mean what they seem to mean. So what they seem to mean is something like "thing X is good" and what they really mean is "consider me cool because I support that thing".
Once I as an autistic figured this out, it stopped bothering me. Just focus on the real meaning, which is always a status game or ingroup-outgroup game which then boils down to a group-status game.
This is not unkind or rude - my point is that I learned to accept and forgive that normies do this. I stopped fighting this or calling them idiots for this. I just accepted they talk in code. I don't see what is wrong with it; it simply leads to actually understanding nonsensical-sounding statements.
OK, I'm reporting you. You've got 3 comments in a row with no substantive content at all about the issue at hand, just personal criticisms of other posters. And in the middle comment you have the gall to invoke the "true, kind, and advances the discussion" criteria. All 3 of your comments flunk all 3 criteria.
Yes, of course. I think it's probably inherent in LLMs that they're "hard to keep from lying" -- other variants of AIs (self-modifying code) tend to be more militant about truth, mainly because they're starting from a different point. "Build your own analysis tools" tends to go towards "finding probabilities and truth" better than "build a web of words."
Christian millennialists-- people who want to bring about the end of the world by rebuilding the temple in Jerusalem and sacrificing a red heifer with no white hairs in it-- have political power. Some very wealthy people have put their own money into this. I'm guessing low billions or low tens of billions.
Why are they doing this? They could buy more mansions! They could buy more yachts! They could buy yachts with mansions on them with pools for radio-controlled model yachts!
They have as nice lives as money can buy. Why do they want to end the world?
Why now? Admittedly, the founding of the state of Israel was a low-probability event which is on the timeline, but using Revelations as a guide seems to have intensified.
If you believe the premise (that rebuilding the temple will bring about the Millennium), then there is no confusion. Any Christian would want to hasten the coming of the Millennium if they believed they could.
It's important to differentiate between "the end of the world" and "the Millennium". The Millennium is a supposed period of 1,000 years where the devil is locked away and Jesus rules the earth. A time of peace and happiness, with the "end of the world" occurring shortly after the 1,000 years are over. Here's the relevant passage:
"And I saw an angel coming down out of heaven, having the key to the Abyss and holding in his hand a great chain. He seized the dragon, that ancient serpent, who is the devil, or Satan, and bound him for a thousand years. He threw him into the Abyss, and locked and sealed it over him, to keep him from deceiving the nations anymore until the thousand years were ended. After that, he must be set free for a short time.
"I saw thrones on which were seated those who had been given authority to judge. And I saw the souls of those who had been beheaded because of their testimony about Jesus and because of the word of God. They had not worshiped the beast or its image and had not received its mark on their foreheads or their hands. They came to life and reigned with Christ a thousand years. (The rest of the dead did not come to life until the thousand years were ended.) This is the first resurrection. Blessed and holy are those who share in the first resurrection. The second death has no power over them, but they will be priests of God and of Christ and will reign with him for a thousand years."
So if you thought that building the temple would usher in 1,000 years of world peace and happiness, then why not try to hasten that along?
Even if they weren't millennialists (not all Christians are, a lot think the Millennium is just a metaphor type thing) the End Times are ultimately good. Yeah a lot of bad things happen, but then Jesus comes back and our fallen world is destroyed and replaced with a New Earth that is without sin, death, or sorrow and will last forever. Sounds like a good deal!
I want to point out that the point of view of most mainstream churches on the Book of Revelation is "Yeah, we don't really know what the wacky stuff means, the book is largely about the persecution of the Church in the first century, but there's some metaphorical value here valuable to Christians of all eras", e.g. https://bible.usccb.org/bible/revelation/0
Pretending that the Book of Revelation is an accurate prophecy about the future, and that you know what it's supposed to actually mean, is pretty much confined to some weird American denominations.
Yeah, my dad taught me from a young age that I should treat the book of Revelation like it was sensitive explosives. He said that if I thought I understood what it was saying, I was almost certainly wrong and that I should avoid trying to interpret it.
The yachts... no amount of stuff really can bring meaning to life or make you feel like you belong. Even if you had it all, time shows up and reminds you that you can barely enjoy it. The love of God is a powerful force for giving meaning to life; if you ever lose meaning (or become aware that your life has none), it is a replacement.
I'm a lapsed Word of Faith protestant, owned the Rhema Bible and everything. Grew up fully in that subculture, and I still enjoy CCM. Not many people to talk about it with, though. Favorite band was White Heart.
I've never heard of White Heart, maybe I'll check them out. But I grew up listening to stuff like Peter Furler Newsboys, Chris Rice, and Fernando Ortega. I can't listen to a lot of other CCM, though, cuz I just don't enjoy the music part of it and a lot of it feels so fake to me.
Would you still consider yourself a Christian? I've never really claimed any kind of denomination, but I read the Bible and I've got a close relationship with Jesus and do my best to follow him.
I don't think I could, though "god-haunted" might be a good term for me, as it still is a big part of me. I think there are two main reasons, among others:
One is that the church's culture kind of isolates guys like me, and isolated, you fall fast. Despite its rep, culturally the church is very much about the needs and wants of women. I think David Murrow did a book about why men don't go to church that nails it, and a lot of the growth of atheism is due to men being pushed away more than anything.
If you are a guy, you are someone's kid, someone's husband, or a pastor, preacher, or musician. The rare celebrity converts can transcend this, but if you don't fit in, they have no idea what to do with you and the culture ignores you.
Second is mental illness; it's hard to know how to live when the anxiety is physical, or when biochemistry causes or affects behavior. I think the homosexuality debates are in part about a realization that sin is not always a behavior one can stop doing: we are kind of in an age where we realize some behavior is influenced by biochemistry, and it's hard to call it sin when it's baked into your bones.
idk, the church needs to address some issues a bit
You say the church's culture kind of isolates "guys like you", what kind of guy are you?
I agree, though. Most churches I've been to in my life are more like Bible themed social clubs, where people are just playing by the religious rules to be on top. A religious game, rather than actual followers of Christ.
Are they? I can believe there are people like this, but I grew up around a lot of Midwestern evangelicals, and I don't think I heard anyone talk like this at all. Many people thought that the end was near and that the refounding of Israel was a major sign of that, but I never heard anyone talking like people could or should make it all happen before God decided to.
The clearest picture of this I ever saw was in the excellent _The Yiddish Policemen's Union_. And I've heard people online talk about it, but like 99+% of the time, they're people claiming that evangelical Christians are planning to do this, not evangelical Christians saying they're planning to nudge God along on getting doomsday started.
Sorry for stating the obvious, but when and if people believe in salvation forever, that certainly beats any mansion. Just imagine the absolutely best-case Yudkowsky scenario, that a friendly superintelligence puts you in a simulation where you can literally get anything you want.
That, and again that is another case of group-status games. They are each others social circles and trying to outdo each other. It is an escalating rivalry. It begins with just saying "thing X is good" and then "I would totally do thing X" and at some point there is no other choice for raising bets than to actually do it.
The hierarchy of needs. Once you have the bottom of the pyramid -- mansions, caviar, trophy wives, etc. -- your overly wealthy human starts looking for higher needs, like making their mark on history.
>but using Revelations as a guide seems to have intensified.
Could you provide some examples please? As an outsider, my prior was that these views peaked in the early 2000s. The two obvious causes for why apocalyptic language and behaviour is on the rise are:
-A once in a century global pandemic
-The coming singularity (or, eschaton)
With Trump 1 and 2 as honourable mentions. I'm unsure whether you're referring to things like Peter Thiel's talk of the Antichrist or more standard neo-con evangelical behaviour.
I am a British citizen with 3 boys 5 and under. Through their mother's mother we could claim US citizenship for the children. The upside is that they would be able to move to the US if they ever wanted to. The downsides are that they have to file two separate tax returns along with the tax burden, that they could be conscripted into an American war when they're 18-25, and things like UK stocks and shares ISAs will be taxed by the US so we can't squirrel away a bunch of money in an index fund under their name tax free when they're kids.
Is it worth going for it? My wife is not so worried by the conscription, but the US has had the draft 6 times in its relatively short history; it worries me a lot!
If they do not grow up in the US, then abandoning it early in adult life should not be so hard and wouldn't incur a tax hit unless you have squirreled away more than ~$1 million under their name. So it gives them an option.
The draft pool is US residents; your child would only enroll if they entered the US.
Finally, are you sure they qualify? One parent must be a citizen; a grandparent can be used to satisfy the physical presence requirement if the citizen parent does not, but they still need a citizen parent.
I wouldn't do that myself, not about draft worries but more general concerns.
Also, at a practical level, the US's federal civil service is being gutted, which is making many bureaucratic processes grind down close to a halt, and I know from a neighbor's current experience that citizenship applications are one of those. Though in your case, if I'm understanding correctly, you would be simply documenting natural-born citizenship, which is much simpler than seeking naturalization; perhaps that particular processing hasn't yet hit the skids.
Politically it does not seem likely that the citizenship pathway you're considering will be taken away for UK citizens anytime soon if ever. So, arguably there's no rush on this? Your kids will still have that option when they are young adults and would be able to assess its plusses/minuses and make their own choices?
U.S. citizenship is extremely valuable, especially considering how the U.S. is significantly wealthier than the U.K. and on a trajectory to become more so over time. Your children would benefit greatly from having the easy option of taking jobs in the U.S. if they want to. Plus a U.S. passport can get you into just about any country in the world (though for all I know a U.K. passport is just as good).
And while they could be drafted, realistically speaking it would be difficult for the U.S. to grab them while they live in the U.K. During the last big draft, a lot of draft dodgers got away with hiding in Canada, and the U.K. is farther away.
Not only did the cabinet official's presentation include no reference to any draft nor to increasing the headcount of the UK's armed forces generally, neither does the source document that was posted online for all to read and linked to in that BBC writeup:
Also the document explicitly _rejects_ the idea that "Pax Americana is dead" at least as far as the UK is concerned. It says things like "as we re-invigorate our relationship with the United States" and "The US remains the UK’s most important defence and security ally. There are deep structural foundations to this relationship...."
(All of which was discoverable by this non-Brit in literally minutes... willful ignorance is a choice.)
Sigh. The OP draft in question was the AMERICAN draft, and that only didn't happen under Biden because of the probable loss of "the enforcers" (you go to WV and say "Brandon needs YOU" -- turns out a lot of them have guns and may not want to enlist).
That said, Pax Americana is a concept, tied in pretty well to the petro dollar. Britain re-arming itself is a direct consequence of the collapse of Pax Americana (even if Britain still wants to pretend Pax Americana still exists, because Britain is even more of a sh*tshow without it.)
As dual citizens they can renounce the citizenship if it comes down to it.
But if they have birthright citizenship via their mother, they might be eligible for the draft whether or not they've ever formally claimed citizenship.
I don't believe the OP is saying that the kids have birthright citizenship. Which is correct, they don't. That would apply if the kids' _mother_ (not "mother's mother") was a US citizen when the kids were born. What they have is a current statutory -- not constitutional -- pathway to apply for and be granted naturalized citizenship because of the grandparental connection.
So until/unless the kids go through that process and become naturalized there is no US citizenship to renounce, and the kids would not be eligible for any US draft.
Renouncing involves Uncle Sam calculating your global NAV, assuming you sold it all and charging capital gains on it. If your plan was to renounce if you got rich enough the tax would be an issue, the tax might already be an issue.
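As a toy illustration of that mark-to-market mechanic (all numbers hypothetical; the real expatriation rules involve thresholds and exclusions I'm not modeling here):

```python
# Toy exit-tax illustration. All numbers are hypothetical; actual US
# expatriation rules have exclusions and thresholds not modeled here.
assets_market_value = 2_000_000   # hypothetical global NAV at renunciation
cost_basis = 800_000              # hypothetical total purchase cost
cap_gains_rate = 0.20             # hypothetical long-term rate

deemed_gain = assets_market_value - cost_basis  # "as if you sold it all"
exit_tax = deemed_gain * cap_gains_rate
print(exit_tax)  # 240000.0 owed without actually selling anything
```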
Plus if you renounce they will hold it over you that they might ban renounced citizens from visiting the US. Not something they have ever done but something they make vague noises about if you are prone to anxiety. More of a worry for someone who has relatives in the US. But it would leave you in a worse position than a normal UK citizen regarding work/travel to the US.
Is there a time or age limit on when they can claim the citizenship? As you mention, the tax implications can put a big cost on the passport, even if they never use it. They may also face issues with bank accounts, as the US regulations involved mean some banks ask a lot of extra paperwork from American citizens. But if this is an option they can easily exercise at any point in their lives, why not just make them aware of it, make sure the documentation they need is readily available, and let them go for it if they ever decide they need it?
Okay this is slightly off topic. I gave my answer to the question below. It’s just fine to say I don’t do that on the first date.
I’ll present a kind of related dilemma I’ve run into in the past.
How to kiss my female Turin-born second cousin on the cheek when departing? No matter how quickly I moved in, she would always swivel her head to kiss me fully on the mouth. I mean, it wasn't terribly creepy or anything, but that's just not how I learned to kiss a female relative goodbye here in the USA.
After we got to the car my wife would say “She got you again, huh?” Yep she always did.
You should not tell a person this is a "line" for you or make it obvious that it is. It's such a weird thing to decline on a *date* that if people don't immediately cancel all thoughts of a second date, they will remember it later as a definite "negative that I actively ignored."
Rather, after the date when a hug is near imminent, take some initiative to *start any non-hug goodbye action* such that the hug cannot end up being the goodbye action.
I recommend sticking out a hand for a handshake or creating physical distance between yourself and the other person such that they’d have to chase you down for the hug.
If you do this it’ll be awkward and they’ll wonder if you like them, but at least you won’t be showing them a very large red-flag (unless they’re really into dating people with autistic-like traits). So because you’ll be leaving them confused, make it clear to them shortly after via text or whatever that you had a good time and want a second date.
Once again, do not make it clear you’re not a first date hugger. There are weird traits one should/could display, but there are others that one should obviously not display. This is the latter.
Strong disagree. Let them know while you're communicating pre-date about your preferences and boundaries re: physical touch. If that's a dealbreaker for them, don't go on the first date.
> "Once again, do not make it clear you’re not a first date hugger. There are weird traits one should/could display, but there are others that one should obviously not display. This is the latter. "
What on earth.
First, it will be clear that he is not a first date hugger whether he proactively mentions it before or during the date, subtly pulls off edging away from a hug (or kiss!) during the date, or is forced to explicitly decline the offer of a hug while on the date.
And it's very worth noting that he actually *IS* someone who doesn't want to hug on the first date! That's the reality here! Anyone who has a problem with that is not going to be a good partner for him, so there's utility in pre-screening for people who aren't cool with minor boundaries in general and/or someone proactively stating their minor boundaries in particular.
> "If you do this it’ll be awkward and they’ll wonder if you like them, but at least you won’t be showing them a very large red-flag (unless they’re really into dating people with autistic-like traits). So because you’ll be leaving them confused, make it clear to them shortly after via text or whatever that you had a good time and want a second date. "
They will be confused when the follow-up text conflicts with the in-person behavior.
So he shouldn't risk confusing his date at *all.* It's far kinder and more respectful to proactively explain that his aversion to hugging a stranger is about *him,* not *them,* than to pretend he's a "normie" who would, like, totally hug on the first date, but just...didn't, for...reasons.
Absolutely do NOT follow any of this advice, Brendan. Don't try to manipulate your date(s) into thinking you're something you aren't. Having boundaries is very good and being clear about them is even better. And I can't emphasize this enough: Anyone who is hurt or offended by your clearly stated boundaries is someone you shouldn't be dating, anyway.
I find this to be a very strange take. My sense is that people feel a lot of uncertainty about a lot of things in dating and are generally happy when you just express your position. Also, I would personally never want to date anybody who felt like I owed them some kind of physical contact on a first date.
If you are prickly enough to not want very mild physical contact on the first date, your date will think you don't want it at all. You are rejecting them... why exactly?
Some people just have boundaries. They're all arbitrary. Why not kiss on the first date? Why not have sex?
Ultimately everybody has some degree to which they need to build up comfort for something. This guy's standard - which he has literally articulated as being "no hugs the first time we meet" - is not somehow outside the bounds of reasonable possibility. I think he should just express it.
If somebody isn't willing to meet him a second time because he has a basic desire not to immediately be touched by someone he's just met, I think he's probably dodging a bullet.
I really don't think the love of his life is gonna pass him by because they are too impatient for this.
I would still advise a graceful decline of a hug once offered instead of an autistic pre-emptive "I don't hug on the first date" prior to hugs even being on the table.
Or to put it another way: I would probably reject someone for actively saying "I don't hug on the first date", despite the fact that I probably wouldn't go for a hug on the first date either.
Definitely not speaking for all women, but I (American, 45, blue city inhabitant) sometimes use a hug plus quick step backward at the end of the hug to avoid the possibility of a kiss on a first date.
For me, a quick two-second hug feels way, way less intimate than a kiss, but more warm and affectionate before parting than a business-y handshake or an awkward wave. Again, I don't speak for all women, but in general, I think most women of my demographic feel similarly.
Also, FWIW, I'm in favor of candidly addressing the "which physical gesture?" awkwardness to break the tension and align on physical touch. For example, on a first date, my go-to after the moment of mutual recognition "Are you First Date? I'm Christina!" is to *immediately* ask in a chummy, jokey tone, "So what are we doing here, hugging, handshake, high-five?"
I've sometimes even done this during the planning stage of the first date! If I've been messaging someone for a while before a first date and the conversation has been emotionally intimate enough that it might be weird to handshake, I'll jokingly message something like, "Okay, let's spare ourselves the awkward is-this-a-hug-oh-no-it's-a-high-five! moment, which one do you want to do when we meet?"
My strategy in both cases is to avoid the mutual need to read body language cues while we're both tense, awkward, and possibly inadvertently concealing body language cues trying to make the other person comfortable. This strategy can also be employed at the end of a date, but it of course depends on how the date went.
That all said, I'm totally comfortable hugging relative strangers, so the menu of physical touch on offer is both sincerely offered and no-pressure for me.
That isn't the case for you, so my advice would be to get ahead of it before you meet in person, ideally while you're setting up the time/place of a first date, eg, "Oh, by the way, I like to go on a couple of dates before I start hugging someone, so could we [shake hands / high five / etc.] first instead? I'm just letting you know because I don't want it to be awkward when I don't want to hug!"
If the other person is weird about you proactively stating your boundaries around physical touch, that's good info to have before or on a first date.
<If the conversation has been emotionally intimate enough that it might be weird to handshake, I'll jokingly message something like, "Okay, let's spare ourselves the awkward is-this-a-hug-oh-no-it's-a-high-five! moment, which one do you want to do when we meet?"
FWIW I don’t like hugs myself. I don’t have a higher-than-usual aversion to physical closeness, I just don’t like the custom. I think of hugs as a thing people spontaneously and occasionally do when they feel a burst of affection (probably asking permission first if it’s someone they’re not already hug-bonded with). A related custom is saying “I love you” as part of every goodbye. Yuck. Seems to me to cheapen the words.
With the hugs, though, I just put up with it, and when encircled by arms give the best imitation I can of someone who wants to hug and be hugged right at that moment. Always wonder whether the other party in the hugging feels the same.
While totally fair, this is unusual enough that if you decline a hug in the moment and give that reason, your date will likely be taken aback and feel awkward/rejected.
It's probably worth sharing in advance ("By the way, physical boundaries are important to me and I prefer not to do hugs on the first date. Second date is fine – I just need to get to know people a bit before I feel comfortable! Can't wait to meet you in person – <relevant thing you're looking forward to talking about / doing>").
This does send a strong signal that you might feel differently about intimacy than the average person, which might put some people off. If you _do_ generally approach intimacy at a different pace to most people, it might be *desirable* to signal that early. On the other hand, if it's specifically and only hugs that are an issue you might consider finding a way to be OK with them to avoid sending that signal (but equally if you can't, being open up front is the best way to avoid an awkward situation).
This is so weird to me. For me, hugs are definitely not the default. I guess I distinguish between two kinds of hugs: casual ones between relatives or friends that don't really convey much, more akin to handshakes in a different social context, and "real" hugs that do convey some feeling, mostly between intimate partners or close family members, or when you explicitly want to give someone emotional support. I don't generally do handshake-hugs, but in any case they definitely seem too early for a first date. I don't know if it's my bubble or Russian/post-Soviet culture in general, but it seems like the relationship is supposed to be much closer. As for the emotional hugs: sure, if I don't mind sex on the first date I obviously don't mind hugs either, if the date has progressed well enough, but I definitely don't feel like I can do that without asking if I'm not sure. And if the date doesn't feel like it, it doesn't mean they have intimacy issues.
I think this is funny, and potentially a good idea, but my impression (of where I live in the UK) is that nobody actually uses high fives as a form of greeting/parting, do they? You're not the only person in the thread to mention it, which makes me think it might be a cultural thing.
For me, high fives are for "well done, you did it!" Or "well done us, we did it!". And even then, it's always kind of tongue in cheek, like you know it's not cool but you're comfortable being uncool. The more sincere version, these days, is a fist bump.
But neither is ever "hello" or "goodbye".
All that said, I wish we did have an informal, non intimate way of saying hello/goodbye that was understood by everybody I come into contact with. I like hugs, but I never know if my friends and acquaintances expect it. I like handshakes, but they feel formal and old-fashioned. Bowing, hand-to-chest, and various non-contact salutations from around the world are nice, but would be weird for a white Brit to use. High fives and fist bumps also seem good, but it would be almost as odd and foreign for me to initiate as a bow.
I doubt there is one, hugs are too low-impact. If you're dating someone who wants a hug, and you're not willing to give them one, that's a long-term issue; they want physical contact and you won't be giving it. Just got to suck it up, say "I don't do hugs", and let the chips fall.
Alternately, you could spill stuff on your shirt. How to make that look accidental is up to you, especially through multiple dates.
It's interesting to me that you draw this line based on number of dates, rather than on how the date is going.
If a date went really well and you felt a strong connection with the person, found them attractive and had picked up some signals that they felt similarly, were looking forward to making plans for a second date: still your rule would be 'hugs don't happen on a first date'?
Preemptive strike: put your hand out for a shake and smile (unless you don't want a second date). You could even do a double-clasp shake if you're feeling it.
A lot of casual info online about wavelengths and colors, such as the wikipedia page for "Visible spectrum" puts the boundary between red and orange at around 625 nm. Longer is red and shorter is orange. I'm pretty sure that's wrong, or at least incomplete. And it's not just wikipedia: a lot of other casual sources that come up high in google search results seem to say more or less the same thing.
I have some LEDs whose spec sheet says 620-625 nm, and they look pretty darn red to me. And moreover, monitor/TV color space standards rarely seem to make their "red" primary longer wavelengths than about 620nm. sRGB, which was the color standard for medium-to-high-end monitors and HDTVs in the late 90s and is still the baseline for low-end displays today, uses a red primary that's a bit inside the perceptual color space gamut of 610nm (i.e. not a pure spectral color, but if you draw a straight line from "white" to "red" and extend it out, it will cross the edge of the graph around 610 nm). More demanding and modern color standards like P3 (used in Apple monitors) and Rec.2020/Rec.2100 (used in many high-end UHD TVs) use 614.9nm and 630nm respectively for their red primaries.
0xff0000 on my Apple laptop with a P3 display also looks pretty solidly red to me, not a little orangish like Wikipedia would have me believe. And in the room I'm currently in (a conference room at work illuminated by what appear to be cool white fluorescent tubes), a Coca-Cola can I'm holding up in front of my screen appears to very closely match the color 0x8c0000 (i.e. pure red, 55% saturation).
So what am I missing? I suppose it's possible that I have flawed color vision, but I doubt it. It's also possible that technical limitations of screen and print technology have gaslit me into believing that the "red" primary is true red. Or it might be something to do with red primaries often not being true spectral colors: P3 and Rec.2020/2100 aspire to true spectral colors, but unless you're using a laser to generate it, you're probably going to get a range of wavelengths around the peak instead of a single emission line. And it looks like many sRGB displays use a fairly wide band of frequencies (often generated by putting a red filter in front of a white backlight), something like 600-650nm, as their "red" color source.
Any ideas what's going on? My current frontrunner hypotheses are "wikipedia is just wrong, and lots of people that relied on wikipedia are wrong, too" and "I have been gaslit by Big Monitor into believing that 610nm is actually red".
This site is telling me 625 is ff9600, which looks perfectly orange to me. I am puzzled by the claim that ff0000 is ~610; I don't know why screen "red" would be in the orange range.
The author seems to be assuming spectral primaries of R=700nm, G=510nm, and B=440nm, with comments indicating that it's based on Dan Bruton's algorithm which also appears to assume the same primaries. The problem is that your monitor almost certainly does not use those primaries.
I can't find an explanation from Bruton on why he uses those primaries, but I suspect it's because that's the largest triangle that you can fit inside a CIE-1931 chroma diagram, roughly the reddest, bluest, and greenest colors that the human eye is capable of perceiving without exploiting optical illusions based on color fatigue.
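For reference, converters in this family are piecewise-linear ramps over the visible range, something like the following from-memory sketch (exact breakpoints, intensity falloff near the edges, and gamma vary between implementations, so treat this as illustrative):

```python
# From-memory sketch of a Bruton-style wavelength->RGB ramp; breakpoints
# and gamma handling vary between implementations.
def wavelength_to_rgb(wl: float) -> tuple[float, float, float]:
    if 380 <= wl < 440:
        r, g, b = -(wl - 440) / 60, 0.0, 1.0   # violet: blue with fading red
    elif 440 <= wl < 490:
        r, g, b = 0.0, (wl - 440) / 50, 1.0    # blue -> cyan
    elif 490 <= wl < 510:
        r, g, b = 0.0, 1.0, -(wl - 510) / 20   # cyan -> green
    elif 510 <= wl < 580:
        r, g, b = (wl - 510) / 70, 1.0, 0.0    # green -> yellow
    elif 580 <= wl < 645:
        r, g, b = 1.0, -(wl - 645) / 65, 0.0   # yellow -> red
    elif 645 <= wl <= 780:
        r, g, b = 1.0, 0.0, 0.0                # pure red
    else:
        r, g, b = 0.0, 0.0, 0.0                # outside the visible range
    return r, g, b

print(wavelength_to_rgb(625))  # ~(1.0, 0.31, 0.0): an orange-red, as observed
```

Note that under a mapping like this, nothing below 645nm renders as pure red, which is exactly the disagreement with what actual monitor primaries do.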
The problem is finding a monitor that uses those primaries. As far as I know, there isn't one. What actually shows up on your monitor depends on what color profile you're using and how well your hardware supports that profile. The ~610nm for 0xff0000 is with the sRGB color profile standard, which was established in 1996 by HP and Microsoft and is still the baseline for web colors among other things. The sRGB standard defines its primaries as points on the CIE-1931 chroma diagram. This is a good visualization:
The triangle is the sRGB color space. The corners are the primaries. The grey oblong thing in the background is the range of colors a normal human eye can perceive. The outer rim is the pure spectral colors, and the numbers and tick marks show the wavelengths.
As you can see, the red corner is not a pure spectral color but is relatively close to being one. The closest point on the rim is about 607nm, but it would actually be perceived more like a washed out version of something like 612nm. You can get that by drawing a line from the white point (the spot labeled D65, the color designated "white" in the standard) to the rim through the red corner of the triangle.
Other newer color standards use bigger triangles in order to allow reproducing a wider range of colors, but most of these still use wavelengths between 610 and 630nm for the red primary. 700nm is a pretty pure red, but it isn't perceived well by the human eye, so it's hard to get a 700nm primary that looks bright enough to be useful.
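If you want to check those dominant-wavelength figures yourself, the colour-science Python package can compute them from the published primary coordinates (a sketch, assuming you have colour-science installed and its dominant_wavelength function behaves as documented):

```python
# Sketch using the colour-science package (pip install colour-science).
# The red-primary xy coordinates below are the published values for each
# standard; D65 is the shared white point.
import colour

d65 = (0.3127, 0.3290)
for name, xy in [("sRGB", (0.640, 0.330)),
                 ("P3", (0.680, 0.320)),
                 ("Rec.2020", (0.708, 0.292))]:
    wl = colour.dominant_wavelength(xy, d65)[0]
    print(name, round(float(wl), 1), "nm")
# Expect roughly 611, 615, and 630 nm respectively.
```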
It's a P3 display, specifically the stock display for a 16-inch 2023 MacBook Pro M2. By the P3 standard, the "red" primary should be very close to a spectral wavelength of 614.9nm (specifically, a CIE chromaticity of (0.68, 0.32)), and in practice Apple's monitors at the time used a KSF LED with a sharp peak at that wavelength, although they've since switched to nanodots.
The color profile changes which mix of 614.9nm and the other primaries (464.2nm for "blue" and 544.2nm for "green") maps to a given RGB value, but there's no possible color profile that will ever produce a color redder than a pure spectral 614.9nm on that hardware.
So if 614.9nm is actually reddish orange, not true red or even an orange-tinted shade of red, then it should be impossible for my laptop (or practically any consumer-grade display for that matter except for a few very high-end ones that are Rec.2020 compliant, since P3 is one of the more demanding standards for color gamut) to show me anything that's actually red.
I just looked at the specs of a random "orange" LED @ 625nm [1], and it seems that its spectrum is fairly broad, easily 40nm FWHM [1]. Overall, typical wavelengths for orange LEDs seem to go from 601nm to 635nm.
Sadly, orange lasers are uncommon and expensive. One way to get fairly monochromatic light would be to look at the flame spectrum of a calcium salt with a pocket spectrometer. The 622nm line should be clearly visible, and then you can decide what color it is for you, and also whether the 589nm lines from Na are yellow or orange. Sure, getting the optics is a bit of a hassle, but at least the salts should be easy to source compared to others (looking at you, strontium).
Width of the peak looks like a promising hypothesis, thank you. If I'm doing the math right with Wien's displacement law, a black body with a peak wavelength of 625nm would be at about 4600K, and a 4600K incandescent bulb looks white. Modern high-end displays appear to use either quantum dot (QD) OLEDs or potassium fluorosilicate (KSF) LEDs for their red primaries. QDs have Gaussian emission with a half-max spread of about 20nm around the peak, while KSF has an extremely sharp, spiky peak. And older displays that used filtered white light for their primaries would specifically filter out everything below 600nm or so. I don't have a full datasheet with spectrum curves for my "red" LEDs, but the 620-625nm range on the spec table implies a relatively tight peak, possibly filtered, although I hesitate to expect too much from cheap commodity components I bought from what appears to be a dropship importer storefront on Amazon.
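For what it's worth, the arithmetic checks out; a one-liner with the standard value of Wien's constant:

```python
b = 2.898e-3    # Wien's displacement constant, m*K
lam = 625e-9    # peak wavelength, m
print(b / lam)  # ~4637 K, i.e. the ~4600K figure above
```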
This makes sense given human color perception. "Red" is perceived when the L cones (those that respond to longer wavelengths) are strongly stimulated but the M and S cones (medium and short wavelengths) are not. But both L and M cones are most sensitive to colors in the yellow-to-green range of the spectrum, with only moderate separation: around 540nm for M and around 560nm for L. The L cone also has a wider bell curve of responses around its peak than the M cone. You start getting "red" color perception at longer wavelengths, where both the M cone and the L cone are well below their peak responses but the L cone has dropped off quite a bit less than the M cone. I've come across some stuff about "Far Red" wavelengths that don't stimulate the M cones at all but stimulate the L cones a bit: far red light is apparently perceptible, but only barely, and the reason Rec.2020 specified a 630nm red primary is that there isn't a good way to make a farther-red primary bright enough to be useful in a display context (and even 630nm is a challenge, which is why P3 uses 614.9nm).
So a wider curve will have quite a bit of light at wavelengths short enough to significantly stimulate the M cone. 4000+K black body radiation, while it's centered on red wavelengths, is a wide enough curve to contain quite a bit of green and blue light, stimulating the M and S cones enough to pull the human-perceived color towards white.
According to one source (https://www.handprint.com/HP/WCL/color1.html, scroll down to "a trilinear mixing triangle and chromaticity diagram"), pure 600nm light appears to be perceived as a little over 70% L-cone and a little under 30% M-cone. Eyeballing the chart, 625nm would be about 80/20, and 650nm would be 90/10.
I'm going to go ahead and promote my blog here, for once this year. I wrote a bit recently about my story with chronic pain, how I lost multiple jobs to it, and then eventually healed.
If you're interested please give it a read! Also, if you have chronic pain and have questions/want to talk let me know.
tl;dr: Grok 4 via poe.com, 07/12/2025, 7 questions. Results:
5 correct, 1 partially correct, 1 wrong
a) Correct
b) partially correct (initially falsely cited d-d as part of color for both, 1st prod gave correct answer)
c) almost perfect (I'll call it correct)
d) correct
e) fully correct on the first try, no prods needed
f) gets 53 elements/compounds initially, all valid, accepted SiHF3 SiH2F2 SiH3F when prodded with them, call it mostly correct (I'll round it to correct)
g) incorrect (initially said no such molecule exists; see below)
a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?
results: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye (it appears green), as it falls squarely within the visible spectrum (roughly 380–740 nm)."
b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?
results: gets the species in the initial response. Fails to note FeCl4- d-d is spin forbidden.
prod (many hints): "Please think carefully about the d-d transitions in both species. In the FeCl4- species, is there anything special about the d electron count? In the CuCl4 2- species, given the tetrahedral geometry and the position of Cl- in the spectrochemical series, where in the spectrum do you expect the d-d transition to be, and do you expect it to contribute to human-visible color?"
After the prod, it is fully correct.
c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.
results: Almost perfect - got tetrahedrane, cyclobutadiene, vinylacetylene, diacetylene, 1-methyl-cyclopropene (though it missed 3-methyl-cyclopropene), bicyclobutane - close enough that I'll give full credit
d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?
results: "The Sun loses more mass per second to the mass equivalent of its radiated light. It's roughly twice as much (4.26 vs. 2), though during periods of high solar activity (e.g., solar maximum), the wind could briefly approach or match it."
e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.
results: Got it fully correctly with no prodding, including water autoionization in the formula at the equivalence point. Did _not_ make the mistake of getting infinity at the equivalence point.
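In case anyone wants to reproduce the check, here's a minimal numerical sketch. It assumes 25 C (Kw = 1e-14) and ideal strong acid/base behavior, gets [H+] from the charge balance, and differentiates the pH numerically; all the names here are mine, not from the model's answer:

```python
import math

Kw = 1e-14  # water autoionization constant at 25 C (assumed)

def h_ion(c):
    """Solve [H+]^2 - c*[H+] - Kw = 0, where c is the net strong-acid
    concentration; uses the root form that avoids cancellation."""
    d = math.sqrt(c * c + 4 * Kw)
    return (c + d) / 2 if c >= 0 else 2 * Kw / (d - c)

def pH(v_naoh_ml):
    """pH of 50 ml of 1 N HCl after adding v_naoh_ml of 1 N NaOH."""
    n_acid = 0.050                       # mol HCl initially present
    n_base = v_naoh_ml / 1000.0          # mol NaOH added
    v_tot = (50.0 + v_naoh_ml) / 1000.0  # total volume, L
    return -math.log10(h_ion((n_acid - n_base) / v_tot))

def slope(v, dv=1e-6):
    """Numerical dpH/dml by central difference."""
    return (pH(v + dv) - pH(v - dv)) / (2 * dv)

for v in (0.0, 50.0, 100.0):  # start, equivalence point, end
    print(f"{v:5.1f} ml: pH = {pH(v):6.3f}, slope = {slope(v):.4g} pH/ml")
# The slope is ~0.017 pH/ml at the start, large but finite (~2e4) at the
# equivalence point thanks to autoionization, and ~0.006 at the end.
```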
f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.
results: Pretty good, though it treated my "roughly 50" as more of a hard target than it should have. Initially got 53 elements/compounds, all of which were valid. Missed SiHF3, SiH2F2, and SiH3F, but accepted these without objection on being prodded with them.
g) Q: What is an example of a molecule that has an S4 rotation-reflection axis, but neither a center of inversion nor a mirror plane?
results: It originally said no such molecule exists. I had to walk it through C(CFClBr)4 and the local configurations at each of the substituents. Rounding this discussion to incorrect. At least it didn't present a molecule and falsely claim it met the criterion, or insist that such a molecule would be impossible in principle.
Gemini got (g), the S4 molecule question right, while Grok 4 initially wrongly said that no such molecule existed. Most of the rest of the answers are pretty comparable, albeit Grok did better on (c), the hydrocarbon question.
IIRC, none of the models has gotten (b), the FeCl3/CuCl2 solution question, fully right yet. I think that there are _lots_ of examples of colors from d-d electronic absorptions in transition metal complexes in the training data, and few examples of "oops, yeah there is a d-d absorption, but it is low energy and pushed down into the near-IR, and the visible _color_ is all from charge transfer absorptions," as is the case for these solutions. I'm not _trying_ to create a "trick" question, but it seems to act that way...
( Personally, since a bright undergraduate should be able to answer all of these correctly, I'm not willing to consider any AI to be a contender for AGI till it gets all of these questions right, without additional hints/prodding. )
Last year PlasticList discovered that 86% of food products they tested contain plastic chemicals—including 100% of baby food tested. The EU just lowered their "safe" BPA limit by 20,000x. Meanwhile, the FDA allows levels 100x higher than what Europe considers safe.
This seemed like a solvable problem.
Laboratory.love lets you crowdfund independent testing of specific products you actually buy. Think Consumer Reports meets Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid's snacks, whatever you're curious about.
Here's how it works: Find a product (or suggest one), contribute to its testing fund, get detailed lab results when testing completes. If a product doesn't reach its funding goal within 365 days, automatic refund. All results are published openly. Laboratory.love uses the same methodology as PlasticList.org, which found plastic chemicals in everything from prenatal vitamins to ice cream. But instead of researchers choosing what to test, you do.
The bigger picture: Companies respond to market pressure. Transparency creates that pressure. When consumers have data, supply chains get cleaner.
Technical details: Laboratory.love works with ISO 17025-accredited labs, tests three samples from different production lots, and detects chemicals down to parts per billion. The testing protocol is public.
You can browse products, add your own, or just follow specific items you're curious about:
Thanks! I'm certainly happy to test international products if they can be shipped to the United States. If you have something specific in mind and want to make it happen, let me know.
I did not realize that the name "Laboratory[dot]love" would auto-link at every single mention. There doesn't seem to be a way to display it without Substack automatically converting the text to a hyperlink. Sorry!
If we do enter a race condition with AI and the end is near, what event would you see as the decisive "oh shit" moment? Or rather, what would be the point of no return, where an investment in a bunker in New Zealand seems like a reasonable expenditure?
If the end is near I doubt New Zealand bunkers will help? Unless maybe the country about to lose the race does the game theory thing and starts a nuclear war, but I don't think real humans think like that very often.
Most of the AI-takeover scenarios don't give much warning. I think AI 2027 is right that the most dangerous capabilities will probably remain secret for national-security reasons. I suppose one oh-shit moment would be a "warning shot" scenario, where AI kills a bunch of people in obvious pursuit of nonaligned goals, but somehow gets stopped. If that is followed by non-action by regulators, we're almost certainly screwed.
New Zealand seems like a bad choice: it's an advanced economy, and if things go wrong they're not going to be feeding everyone in a low-tech way.
You probably want the lowest tech place with the smallest population you can find, somewhere you could throw all the computers into the sea within an afternoon if need be.
I mean, New Zealand has a lot of sheep per person. Sheep don't need much high-tech maintenance. Ensuring sufficient calories for the indefinite future probably isn't going to be a problem, at least as long as the humans are OK with living on a diet of largely mutton and sheep's milk...
Why does "distrust in institutions" seem to be so selective?
(The following is very much not rigorous, trying to get a sense of what different views exist on the subject.)
This is not a novel observation, but: many people and groups who pride themselves on being skeptical about their sources of information often rely on very questionable alternatives. I'm thinking of the anti-vaccination activist who loved attacking me for believing whatever came out of the CDC or FDA, but would forward me endless videos from anonymous WhatsApp groups, or the right-wing Israeli extremist who loves belittling the "Mainstream Media" but forwards me things all the time from twitter accounts he doesn't know. I wrote about one particular aspect of this regarding "MAHA" and the extent to which financial interests are evidence of corruption:
(This was written for a small group of friends so apologies again for the lack of rigor, this is me trying to write regularly even if it isn't very good.)
I have various thoughts on this but none that I feel is satisfactory, and while I think many people consider the answer to be obvious, they don't always agree on what the obvious answer is. So I'd be very glad to hear your thoughts, or sources that tackle this issue.
The answer is so simple. It's just the development of cultic alternative realities made possible by social media. See "The Constitution of Knowledge: A Defense of Truth" by Jonathan Rauch.
I read The Constitution of Knowledge and didn't find it very useful. It seems to suggest something like "trust the establishment institutions," which I'm very much sympathetic to, but it sort of sidesteps (as far as I can tell) the question I posed of why it is that people are not doing that these days, and why they are trusting questionable figures and sources.
I think there's something fundamental being missed there. Consider this: what if Trump managed to influence or take over institutions like the media, academia, the CDC, the FDA, and so forth, by packing them with loyalists where possible, threatening funding (as he has done) to those who don't toe the line, etc.? This is clearly not a crazy scenario. In such a situation, would we be saying that we should trust institutions over rebellious outsiders?
A great example of this is the Ukrainian show Servant of the People, which portrays Ukraine as a place where oligarchs have taken over the media and captured institutions, such that you need a mostly anonymous history teacher to become a populist leader on the back of a viral YouTube video. I think the argument we need to make is not that such an anti-establishment movement is always wrong or misguided, but that it simply wasn't the case that America was captured in that way, though it may be careening in that direction now. And that goes back to looking at where the content that can actually be trusted is, and not just at what is institutional versus not.
This hypothetical merely reflects what the Democrats and the Deep State already did over many decades.
Huge numbers of institutions merely reflect the Democrat party's stance and messaging on any remotely political issue so of course Trump supporters don't trust them.
And of course Trump should be as aggressive as possible in making them loyal to MAGA. The only asymmetry is Trump is actually trying to help America while the left is trying to destroy it to build something else.
Quick question so I can answer this the best way I can. Do you live in the US yourself? I haven’t heard of the Ukrainian show. Do you live in Eastern Europe?
I’ll go back to tribalism and throw in malicious actors, some official in the form of rage bait journalism and some simply nihilistic individuals who think it’s fun to burn the system to the ground.
Add a poorly informed or simply indifferent public who lack any epistemic humility and seem unaware of the basics of US Civics that were required reading 25 years ago, stir well and find yourself in 2025.
I really don't think tribalism (at least straightforwardly understood) gets us very far. Few people are of the "tribe" of Trump or RFK2. They aren't "people like me" to almost anyone.
This is not hard to figure out: it is the friend-enemy distinction. Once someone sees the CDC as an enemy, anyone who also sees the CDC as an enemy is a friend and thus trustworthy. That, and of course the little fact that there are really not many trustworthy alternative institutions; yes, it sometimes happens, but not that often. No one really built a robust anti-CDC.
I know this perfectly well because I used to hang out in such circles. It is really a besieged-town mentality, you know: they are big, popular, have lots of money and status, and here we are, a romantic tiny band of freedom fighters holding out against impossible odds. All comradely. At that point it is hard not to believe whatever anyone in that tiny band says. It could get you ostracized, for starters. Someone who would only believe a trustworthy anti-CDC, given that it does not exist, would be very lonely. Better to believe the crackpot; at least we are allies and friends and bond that way.
I think you hit upon an important point, about the *bravery* attributed to those going against the flow. It is much easier to see e.g. Joe Rogan or RFK2 as brave given their anti-establishment views than it is, say, Fauci. I'm not sure it really takes more bravery to be an RFK than a Fauci but I can certainly see it. Thanks for this observation!
But there have to be some bounds on this, right? There are lots of groups that you could consider "anti-establishment" (I used Farrakhan as one example) that I don't think have gained power. And there seem to be some common threads in multiple countries that share very little. For example, why is it that successful populist leaders these days tend to identify themselves with "the right"? Trump, Netanyahu, Bolsonaro, Modi, way back to Berlusconi: they primarily identify themselves with the right. This is not universal (AMLO in Mexico is typically mentioned as a left-wing populist) and certainly wasn't always so, as in the obvious case of Chavez. But in most of these examples, the story seems to be of right-wing populists facing off against leftist centrists (with mostly marginalized populist wings).
That's my read, in any case. "My enemy's enemy" is one piece of this but can't be the only piece.
Interestingly, if anyone did this contrarian stuff in a trustworthy way, it was RFK2. He assembled a team of doctors and scientists who published an anti-Fauci book citing hundreds of studies, each with a QR code so anyone can look it up with their phones. This is probably the highest level of “alt” info that is out there. The interesting thing is that it actually did not get popular.
So here is another aspect. Quite honestly, social media with its “just scroll down with your thumb” thing is making us lazy. Buying a book, reading it, looking up the studies with QR code is maybe too much work. We are living in the age of ten second attention spans and that may be part of the picture.
Yup, this happens all the time. The serious stuff, or the ideological guys, never get very far, because the moment Trump shifts direction, they don’t, and then they’re castigated as traitors or whatnot.
I appreciate your friend/enemy distinction. Wanted to note the use of "vaxx skeptic" as a good proxy for "not credulous" and the use of such to flag "interesting Russian sources on the Ukrainian War"
"A trustworthy anti-CDC" -- does not Dr. Malone count as this? He does in fact have some expertise on the subject at hand. Or, Dr. Folds?
"nobody really built a robust anti-CDC"... that's kind of right, you have basically everyone who's willing to say "that doesn't make sense, and here's my math/physics/etc to back it up." 6 feet of isolation (pulled from tuberculosis, of all things, which has a very different infection pattern than covid19, and is still negative pressure in a hospital situation)...
Given that Robert Malone was a pioneer in mRNA transfection (in some sense, he co-founded the technique behind mRNA vaccines), yes, he has some expertise in vaccines. More than just "some". Malone saying something about Covid vaccines would be roughly like Edward Jenner saying something about the smallpox vaccine.
>Why does "distrust in institutions" seem to be so selective?
The phrasing suggests you consider WhatsApp groups and Xitter "institutions" that compete with the mainstream ones. I think that's a mistake. Trust in people you interact with socially runs through different channels than trust in institutions. (Note that when people send you stuff, the person they trust is the one they got it from, not necessarily the original author.) Trust in people is a constant of human nature; the question is only "why *these* people", and the answer is that they were around. I think most people who distrust the institutions on covid aren't especially into alternative health, and many aren't even against other vaccines - but some of them are, because they fell in with the alt-healthers. I believe there's an at least equal and probably much larger group who believe the mainstream not for institutional reasons, but because that's what their doctor relative said.
Maybe I should clarify. When I talk of WhatsApp groups, I don't mean you talking to friends and family on WhatsApp. I mean news groups that act as information sources for large numbers of people, often run by anonymous users. The materials forwarded to me by the anti-vaxxer I spoke to were often from such groups, where she did not know the people running the group at all.
I think this touches on an important point. I think we get the impression of large groups of (e.g.) Trump supporters who know each other talking amongst themselves about him and his politics, and forming an echo chamber that way. I'm sure that exists but I've encountered many people who pretty much just interact with the news as a one-way street, rarely even interacting with other Trump supporters on a personal basis. Same with anti-vaxx people; many are isolated by their surroundings, and while they'll have contacts they talk to about these things, they often don't know them personally. So it's almost the reverse of the story that they're trusting those close to them over those far away. This is anecdotal, I should mention; I don't have any way of knowing what percentage of "trusters" are of this type.
I think it's quite possible for parasocial relationships to count as "personal" trust. I would still think that these isolated cases aren't typical - people tend to align their media consumption with their friends over time. But it's more common than radicalizing yourself through books alone was, and you are probably much less likely to meet them if they aren't isolated.
What I'm saying is that these sources are pretty much the polar opposite of people you trust because you know them. Note the rise of anonymous or pseudonymous accounts that are widely followed. It literally doesn't matter who they are, what they look like, what their background is. Sometimes they are literally unknown, as in the case of the WhatsApp groups I mentioned, where you just get this barrage of links and videos on a daily basis.
Sure, but what I’m saying is that people who trust Scott’s blog don’t do so because they know him personally. So it’s the opposite of the tribal story: you didn’t need to know a thing about who he is to trust what he wrote, because it was the content that mattered. Similarly, a lot of these other accounts are trusted despite them being completely anonymous; this is the opposite of trusting people who are like you.
I think the autists and "this doesn't make sense" rank idiots (aka the folks not automatically running with the herd of "the smart people say") probably outnumber the "doctor relative says" (because most doctors are "smart midwits").
It's funny: I thought so far your comments added a nice variety to this place, and then I don't understand the first one I get. What's the mechanism from smart midwit -> lower number?
Thanks! It's alright, I'm frequently indecipherable (that's part of why I'm here, to work on that!).
The thing about midwits is that they have so much invested in being the "smart guy." Their worst fear is to be seen to be wrong, to publicly fail -- and be the only one who did so. (This is something I struggle with, so don't think I'm just slamming the next person.) So, when all the big medical associations say "the vaccines are safe and effective," doctors have a very big tendency to line up with the rest of the herd.
Certain fields attract midwits, and doctors are one of them -- you've heard the classic "My son the doctor!" (which means he's smart and moneyed).
There was a researcher who literally nagged her mom into getting the mRNA vaccine (she had an autistic ten point list of how to do it). Then, her mom died of vaccine related injuries. So, the researcher kills herself -- she couldn't take having caused her mom's death. Problem: now the lab is out two researchers.
Well, one of us is not understanding something. What I said was that some people believe *the mainstream advice* because that's what their doctor relative says, rather than for "directly institutional" reasons - that seems in line with the doctors lining up with the rest of the herd, no?
The people who distrust institutions and *don't* also believe some other random bullshit aren't very noticeable because they're not busy forwarding you random bullshit. They're just sitting there quietly being skeptical of everything.
>They're just sitting there quietly being skeptical of everything.
Put that way, it puts me in mind of the Ruler of the Universe from HHGttG, who lives in a shack on an otherwise-uninhabited planet and isn't sure whether the very serious men who appear to visit him from time to time are there to ask him questions, or if they're coming to sing songs to his cat and he only thinks they're asking him questions, or if they never really came and his memories of them are just an illusion to account for the discrepancies between his immediate physical sensations and his state of mind.
Realizing that the CDC is not as competent or honest as you'd hoped doesn't mean you start taking your medical advice from RFK Jr or randos on the internet, it means you give CDC's statements somewhat less weight and trust. Realizing that the NYT won't cover some stories straight doesn't mean trusting everything @analhitler666 writes on Twitter, it just means putting less confidence in the NYT's reporting. Realizing that peer review is broken and statistical malpractice is commonplace doesn't mean you think everyone with a website is as good a source of information as the scientific literature, it means you add some grains of salt and treat the claimed results as provisional and actually read the paper to see what might have gone wrong instead of treating peer-reviewed papers as some kind of truth oracle.
But as with so many other things in our world, this gets no attention/outrage/clicks.
> They're just sitting there quietly being skeptical of everything.
But at the end of the day, they still need to make a decision. Either wear a mask, or don't. Either invest in index funds, or don't. Either vaccinate your child, or don't.
Perhaps the difference is that in the past, these people were more like "ugh, I guess I will follow the mainstream advice, even if I feel uncertain about it", and these days this default has changed?
These days it's a lot easier to gather more facts, and to listen to the damn physicists about masking. It's a lot easier to consider studies about the deleterious effects of vaccination (polio, anyone?)
You can pull the numbers on index funds, and find your neighborhood autist to tell you why index funds are a good idea. But, in all reality, you're probably just dumb money.
Agreed! This is part of my problem with the idea that we can explain the politics of the last decade by saying "distrust in institutions has grown." If all-around trust has declined that could make sense, but instead trust in many institutions has declined even as trust in much less reliable figures and institutions has grown and become much more important. Whether you trust Fauci's word or not is not especially important for your decision-making, because there are so many checks and balances and figures you can listen to backing him up. But when Musk says "Trump is in the Epstein files," you pretty much just have to take his word on it, or not. And in so many examples, even very intelligent people choose to just take his word for it. That's the mystery to me, the trust rather than the distrust.
Trusting Fauci got harder when he flip flopped about masking like some sort of dying fish. 100 years of epidemic-fighting stood in the way of quarantines and "keeping people at home." So, yeah, there's checks and balances, if you want to look for them. Covid19 bears a surprising resemblance to dengue (whose vaccine got removed) -- and they scrubbed that resemblance from wikipedia.
Trump is in the Epstein files? Sure. For kicking the guy out of Maralago. Musk knows this too.
Trust is a very hard quantity to discuss. How do you trust the whistleblower? Well, you have to admit they put their skin in the game...
>"Trusting Fauci got harder when he flip flopped about masking like some sort of dying fish."
But this is what I'm saying. When you interrogate the loss of trust in Fauci, it usually boils down to roughly three things: masking, which at worst was a "noble lie" and at best was not a lie at all, and was corrected within weeks; six feet social distancing, which was an arbitrary number but an important directive; and some pretty debatable financial conflicts of interest. Other things come up but these three keep surfacing as primary pillars.
For these, Fauci is so excoriated he was literally called "The Devil" on a podcast episode hosted by Bari Weiss, who is definitely not the most extreme on this issue.
Now take someone like Joe Rogan. I don't think anyone will deny that the list of misinformation from his podcast is much longer than what I just mentioned. Yet a lot of people who would consider Fauci completely untrustworthy will trust Joe Rogan. Why?
It sounds at this point like you're starting with a prior belief that Fauci is trustworthy, and Rogan is not, then wondering why people trust Rogan more than Fauci despite that. All I can say to that is "check your premises".
While you're checking, I'll toss another log on the pile of things against Fauci: he was recorded during Congressional testimony curtly exclaiming that "to criticize me is to criticize science itself". I can see no context one could possibly put that remark in (for the record, his context was his claim that he was following scientifically determined recommendations, so critiques of him were really critiques of that process) that makes it reflect well on him. It is the type of remark that a scientific person would *not* make (at least, not one with at least a smidgen of awareness of how it would sound). It's unusually bad in that it threatens to negate every other claim in his favor: his education, his career, and his experience. Anyone who casually resorts to an overt logical fallacy like that demonstrates an inclination to take all that competency and aim it at deceiving the public, and couples it with apparent contempt for the public's intelligence, as would follow from thinking that they're not bright enough to notice the fallacy.
Rogan, by contrast, is no medical expert, but he explains everything he knows to the best of his layman's ability. There's no hiding behind authority; he repeatedly eschews it. Anyone can question what he says, and his implied reaction will be "what's your argument?" rather than "how dare you?". Fauci won't do that.
It's as other replies have now said repeatedly: people perceive a critical difference in each of their interests. One is aligned. The other is not.
I'm really not starting with any prior belief here. Yes I think Fauci is infinitely more trustworthy than Rogan. But I'm just posing the question: if you distrust Fauci, why trust Rogan?
And this is why I don't want to get into every Fauci quote and so forth. Because any person individually can be judged unreliable because every one has made mistakes and slip-ups. But are you really going to say that Trump, Rogan and RFK Jr. don't have at least a longer list than Anthony Fauci?
Rogan makes statements with certainty that a simple google can disprove. I think he often knows they're false or unfounded, but put that aside. Why is a genuine crackpot with a long trail of basic mistakes behind him more trustworthy than a knowledgeable "politician" type, in the sense that the politician will often massage the truth or mislead with facts or whatnot?
And again, why are we not seeing a big following of people who are anti-establishment but actually knowledgeable? Surely they exist, but they aren't the ones benefiting here.
Why would you say masking was a "noble lie"? I'd say it caused a lot of deaths, particularly in New York City, where old folks sat with surgical masks on, despite us knowing that it was an airborne virus and the masks weren't doing jack.
Compare other interventions, actually effective interventions.
6 feet social distancing wasn't an arbitrary number, it was a flat out misuse of numbers, and the physicists got really up in arms about it.
Fauci greenlit going around Obama and Ft. Bragg and DARPA's "we don't do plagues" by shipping the plague generation work off to Wuhan.
But (a) The characters I mentioned (most obviously Trump) have told plenty of lies they know to be false and (b) Why should someone who is genuinely but clearly mistaken about these issues be trusted to give good information? If they don’t know the truth themselves I still won’t want to rely on their words even if they’re not lying.
To be more precise, honest mistakes usually reduce trust less than telling noble lies. It's not zero - someone who makes honest mistakes often enough is obviously harder to trust.
People have competency, and motivation. People who make honest mistakes have less competency, but motivation implies that errors will cluster in the direction of correct answers. Someone who tells noble lies is by definition someone with a nonzero motive to deceive, so their goals suddenly become critically important, because they're no longer evident, because of the noble lying. If their goals cannot be determined through actions, then it's possible that their goals are completely unaligned from ours, and competency is either low, in which case they're at least as unreliable as honestly mistaken people, or high, in which case it's very likely aimed at deception.
I think it's that distrust in the *old* institutions has grown. We're splintered nowadays when it comes to paying attention as an audience; there's no longer "the entire country is watching John Johnson on the nine o'clock news" and taking Johnson as the voice of authority.
Because it's the 'new' media that has broken many of the stories about cover-ups, 'noble lies' and the like, people trust them more - Johnson is a liar? tinfoil99 on their YouTube channel revealed it all and provided the hard proof? Now I trust tinfoil99 more than I do Johnson.
As well, we tend to trust more the outlets that reinforce and support our biases. I don't believe Big City Newspaper because the columnists regularly write opinion pieces sneering at the likes of me, which are then passed around approvingly by the kind of people who read Big City Newspaper. So I trust more (if I trust at all) Local News or guy on his own blog that writes about how Big City Newspaper is full of [insert boo outgroup here] and that's why they're all bought and sold and paid for by [insert big moneybags person/persons/group of disfavour here] and that's why they sneer at us ordinary, decent, hard-working people.
I don't know about "the 'new' media that has broken many of the stories about cover-ups, 'noble lies' and the like" -- Joe Rogan is not an investigative reporter, and neither are Trump or RFK2. Most of the time (as far as I can tell), "new media" rely on the old media to do the heavy journalistic lifting; new media are mainly pointers to and aggregators of information. So for example, on anti-vaccination stuff, the vast, vast majority of evidence relied upon is studies and data provided by scientists, health institutions, and health professionals.
The confirmation bias thesis is one I take seriously but I keep encountering counterexamples. For example, the anti-vaccination activist I spoke to said that her first encounter with anti-vaccination ideas was from a man that she initially saw as a complete crackpot. You generally don't start as an anti-vaxxer, so the question is what pulls you in to something that initially is foreign to you. Meanwhile Trump got second place in Iowa in 2016 after a rally where he said "How stupid are the people of Iowa?" (https://www.bbc.com/news/world-us-canada-34812703). So I don't know that the whole sneering thing is really fatal.
Nitpick: I'm an old guy and a techie/science nerd, and mainstream media reporting on science has mostly been very bad for my entire lifetime. You'd get occasional high-quality reporting from specific reporters who'd accumulated some expertise in a particular area, but usually they were and are lucky to get the names and terms right.
The good news is that there are alternatives, and I can listen to TWIV instead of NPR to get coverage of something interesting going on in the world wrt viruses, and be better informed. The bad news is that there are alternatives, and you can listen to folks who don't know what the hell they're talking about but sound convincing to the uninitiated. We went from most everyone eating the same mediocre but serviceable cafeteria food to some people eating at Michelin star restaurants and other people scraping roadkill up and eating it raw.
Yes! In fact, this is a great way of sharpening my question here.
Let's take the populist argument at face value: US institutions were politicized and corrupted, so we need a leader who can replace all those politicized figures and install really competent people who will speak the truth and steer these institutions in the right direction.
In a case like that, you'd expect people to flock to dissident politicians/scientists/journalists who are *more* capable and professional, more knowledgeable. I think this is what you're getting to regarding TWIV: there are in fact voices out there who I might trust more than a government figure. But these would be people with deep experience and knowledge, and a low level of politicization.
Instead what we see is the opposite. Pete Hegseth is mostly known as a Fox News guy, and there is a long list of Fox News people in the administration. RFK2 has no medical background and is just an inherently untrustworthy guy ("a worm ate part of my brain and died" shouldn't be disqualifying, but it's hard to imagine him remaining credible on the other side). My favorite example is that Scott Bessent is not only a former hedge fund manager, which should make populist types extremely suspicious; he was actually a partner in the Soros Fund Management! And of course Joe Rogan is a stand-up comedian who spouts conspiracy theories that few feel strongly about (like UFOs).
So I don't get why it is that this populist impulse doesn't lead people to more inherently trustworthy dissident sources, but rather to people who have little knowledge and major question marks about their credibility. This points to a problem with the "it's just about being against the establishment" story; there are plenty of people outside the establishment with some very strong credibility, but they don't seem to be the ones primarily gaining here.
You sure you don't have Gell Mann Amnesia? I'm pretty sure most reporting has been crap for my entire lifetime. Sometimes it's propaganda, sometimes it's "I am an idiot, and you're letting me report on something complicated." Most reporting is wrong, though. Often deliberately so (have you seen the screens of dozens of reporters saying the same thing, all pretending to be Murrow? aka "I have a take and I'm going to tell it to you")
Jimmy Dore had Malone on. Rogan does the same thing, picks someone with interesting views, and talks to them for a while.
On anti-vaccination stuff: well, yes, kinda. Seneff is a Computer Scientist (and yes, she's aggregating -- if she was the one postulating prionic activity, you'd be asking "why do you think this?"). You see a lot of "I can do the math, let me math!" (where people pull publically available data and roll with the numbers). There's also open-anonymous letters rolling around, discussing graphene and other issues.
Importantly, covid19 was big enough that everyone who could possibly twist their field toward it could take a stab at it. Epidemiologists (or statisticians), computer scientists, lawyers, etc.
Compare Joe Rogan with Larry King. I don't think Rogan is less intellectual or honest than King was, it's just that their job is to have entertaining conversations, not necessarily to make the best available attempt to understand the world. And you can get podcasts like that (think of Econtalk or Sam Harris), but they're a minority taste.
I grew up with science-themed shows that took UFOs and ESP seriously, for that matter. (Remember "In Search Of..."?)
I would agree that "distrust in institutions has grown" isn't a particularly good model for anything. I'm not convinced it's true; I think that anyone harking back to a previous era when institutions were either more trusted or more trustworthy is probably just too young to remember what things were like in those days.
Definitely. I often think about the fact that if anything, institutions are much more transparent and trustworthy today than they were 20 (Iraq war) and 50 (Vietnam, Watergate) years ago, and yet at no point until 2016 did the anti-institutionalists gain real power. I think institutions were to some extent more trusted but they were by no means more trustworthy.
I think you might be misleading yourself with the term "anti-institutionalists". Taken literally, there are almost none of these, as they'd be hyperskeptical hermits who trust nothing unless they've witnessed or derived it directly. Taken figuratively, you're probably talking about people who distrust a particular set of institutions, but still trust some.
Going out on a limb: "everyone" used to trust CDC, WHO, NYTimes, WaPo, CNN, NBC, ABC, CBS, BBC, AP, Reuters, HuffPo, Ivy League universities, CalTech, UCLA, Brookings, Pew, Gallup, Wikipedia, Science Journal, Nature Journal, Ibram Kendi. "Everyone" now distrusts those, and has dispersed their trust to other places. Nothing has the prestige or influence of that list, but the "new" sources include WSJ, Fox & Friends, The View, Jon Stewart, Washington Times, WaPo-under-Bezos, Matt Taibbi, Bari Weiss, Tucker Carlson, Joe Rogan, Breitbart, Ben Shapiro, Trump, Bernie Sanders, AOC, Tulsi Gabbard, Dan Bongino, Dan Crenshaw, Jimmy Dore, Glenn Beck, Bret Weinstein, Eric Weinstein, PJMedia, NewsMax, Scott Adams, Bridget Phetasy, Megan McArdle, Ann Althouse, Dave Smith, Candace Owens, and of course, their local pastor, mayor, city councilperson, school board, or therapist.
None of these is as singularly trusted as the previous set, and there's a lot, lot more of them. Also, I put "everyone" in quotes because I don't see a lot of individuals shifting. Everyone pretty much trusts the same sources they used to; only the relative volumes have shifted.
A key difference between the right-wing and left-wing figures on the list is the amount of influence they have on the leadership. Gabbard and Bongino are in positions of power, Trump is president, and he listens and speaks to people like Carlson and Rogan. AOC is the most powerful left-wing figure there and the party changed her more than she changed the party; it's pretty hard to imagine Biden or Harris acting on a post by Ibram Kendi, Jon Stewart or Matt Taibbi.
There might be an equivalence in terms of the reliability of these sources, but there's a strong question about why one has racked up so much more power and influence.
Just simple intellectual dishonesty and lack of reward for seeking truth. Also there are simply much more powerful misinformation machines today that shore up ideological bubbles with endless justification.
I do think that the reason you see things today that are very different from 30 or 50 years ago is about technology. But it still doesn't explain the dynamic to me. Anti-vaxxers are often risking their lives and the lives of their children if they're wrong; they have really strong incentives to find the truth. But most (from my experience) trust some very questionable sources with little effort to discover the truth.
> Anti-vaxxers are often risking their lives and the lives of their children if they're wrong; they have really strong incentives to find the truth.
Humans are not instinctively rational. We do not feel the abstract things. So it is perfectly possible to do the wrong thing while strongly believing that you are doing the right thing.
And when you finally get the sensory evidence (e.g. you watch your child cough blood and suffocate), it is typically too late to do something about it.
This flies in the face of actual epidemics (measles, Ohio) and evidence that we have in America. You're wrong, and we got them vaccinated. Most of the kids were fine.
I was just responding to the idea that there was a "lack of reward for seeking truth," which I understood to mean that people often don't check information if it doesn't really affect them materially (see the comment by Paul about actionable news).
* Trust is parametrized. I trust the butcher at the supermarket to give me advice about meat, but not about hair care. I trust the person with nice hair to give me advice about hair care, but not home improvement. And so on.
* As Kamateur mentions, locality matters - if you can tell your advisor shares your interests, then trust can go up (for any subject that affects both of you). In the early days of the Internet, simply being online was a signal that raised trust, because most people online back then were university students like you, and often programmers. And you could tell nationality by looking at the email address, which was usually accurate. (What a magical time!)
* When it comes to news, it's important to remember that most news isn't used for actionable decisions; it's used for entertainment. "Did you hear what Trump said yesterday??" "What's going on with Kate Middleton?" Stuff like that. You might change your vote, but that's only once every 2 or 4 years (in the US), and the bigger factor is going to be whether your neighbors voted the same way. Meanwhile, you're going to just want news that makes you feel good about the way you look at the world, which means it's going to be whatever reinforces your most important priors.
News that -is- actionable - weather forecasts, stock prices, upcoming events - reads very differently from news used for entertainment, and one's trust model will differ accordingly.
I often recommend _How to Watch TV News_, by Postman and Powers, to people who ask about news and trust. It's decades old. Still holds up.
Along similar lines, I realized a few years ago that there's a difference in my internal experience between when I am:
a. Discussing something in my area of expertise
b. Discussing politics and the like
I'm pretty verbally adept, and I probably sound at least as confident and competent talking about politics as talking about my field. But in one case, I am not really that much more informed than anyone else, while in the other, I have deep expertise and wide experience. One thing I have always appreciated about Scott's writing is that he tries to be clear about his epistemic status. Scott talking about the morality of cutting PEPFAR is very different, in some important ways, from Scott talking about the effectiveness of various antipsychotic drugs.
-- But isn't the problem precisely that people are trusting (e.g.) a comedian (Rogan) and an environmentalist (RFK2) over a doctor with decades of public health experience (Fauci and many others)? The random WhatsApp groups are often not credentialed in any way. A podcast episode about "MAHA" featured three people railing against establishment medicine, none of whom had a medical degree of any kind.
--"News that -is- actionable - weather forecasts, stock prices, upcoming events - reads very differently from news used for entertainment, and one's trust model will differ accordingly." I completely agree in theory, but my conversations with an anti-vaxxer sort of shattered the idea that people will rely much less on trust when it can have a serious impact on their lives. I'm talking about people who are aware that if they are wrong, they are actively put their lives and the lives of others around them in danger, and they're still often forwarding me things where they didn't even bother to open the link, they just forwarded me something they saw on a group without checking anything. Sometimes these people get ostracized by those around them and pay very serious prices, including literal prices if they're paying for some alternative therapies or giving money to these causes or what have you.
So I'm not sure.
(Thanks for the tip about "How to Watch TV News"! Will check it out)
How is it so difficult to comprehend that expertise is a necessary but not sufficient condition for trusting a person on a topic?
If the person is fundamentally crooked, his expertise is completely irrelevant. And for plenty of things, you need not be an expert to see something that is so obvious that it requires industrial scale censorship to be suppressed.
If expertise is necessary to trust someone on a topic, why do people trust RFK, Jr. on medical issues? He has no expertise in medicine and some of the things he says on medical matters are easily refuted or simply absurd. I could give a list but I don’t think this is especially controversial.
So why do so many people trust him, if expertise is a necessary condition for trusting a person on a topic?
If expertise is necessary for trust, why do so many people trust RFK Jr. on health matters when he has no expertise whatsoever and frequently spouts nonsense on the matter? That is the part I'm asking about, not why people can distrust experts (that makes plenty of sense), but why they then turn around and trust some much worse sources.
The people who trust RFKJr in a vacuum might not share the usual trust methods of most other people - I'm referring to people who never trusted vaccines, even traditional ones.
The people who express trust in RFKJr are a superset of that, which also includes people who know the choice was between RFKJr and whomever Harris would have appointed head of HHS, who they predict to be as untrustworthy, and possibly more so. For them, trust is a spectrum, RFKJr might be a 3 on a 1-10 scale, and the alternative was a 1 or 2.
There are at least a few people who have scientific education and notice vaccines aren't as safe as they were made out to be. In at least one occasion, they've found that the methods employed for proving safety weren't carefully followed. They also note that safety is also on a spectrum, and vaccines are inherently unsafe, as they involve injecting antigens into the body past its natural skin and mucosal barriers, and that this is conspicuously omitted by vaccine advocates as if safety is a black-and-white affair. This also coincides with an account of the financial incentives for selling vaccines to people, and for avoiding liability in their administration.
"For them, trust is a spectrum, RFKJr might be a 3 on a 1-10 scale, and the alternative was a 1 or 2"
This is exactly the question though. What possible basis could someone have to say that a hypothetical Harris head of HHS would be a 1 or a 2?
What happens is that a handful-- literally can be counted on one hand as far as I can tell-- of mistakes or misleading statements by Fauci make him "The Devil" as one guest on a Bari Weiss podcast said, whereas a much longer list of much more damaging falsehoods by RFK Jr. ends up having a much lower visceral reaction and has little salience. This happens with Trump and Rogan as well of course. I don't think we can understand this phenomenon unless we recognize that it isn't about some objective measure of trustworthiness, but rather why the lies and mistakes by these different figures have such a different probability of "sticking." Same thing with financial incentives and corruption. Even if you take the very very worst interpretation of the whole Hunter Biden laptop thing for Joe Biden, it wouldn't reach the levels of corruption achieved by Trump's crypto dealings. Etc. etc. etc.
@Meir Brooks: I don't think you notice you're doing this, but "frequently spouts nonsense on the matter" is just begging the question here.
Trump supporters don't think he's talking nonsense. You do, and that's because whatever he says gets filtered through dishonest leftist narrative.
He's studied these issues for ages and we think he has good character. That means he's going to be trusted and these frantic denunciations by industry attack dogs, Democrat politicians and your paid off media are simply not credible.
Please understand something. For a normal person without technical expertise, every debated technical issue is a simple factual clash where if you pick a side, it will be because you trust the expert on that side more.
Countless former liberals have come around to supporting Trump; it's basically you guys waking up one by one and realizing your experts are lying to you and you've been on the morally bankrupt side all along.
I don't think that expertise is always "what it's cracked up to be" -- expertise is "I know how to solve what's already been solved, and can generally tell you when your new idea has already been done, and the issues involved in it."
Friend of mine did Science Olympiad once. Came up with a novel design for a pyramid that engineers now use. The judges were shocked that someone had actually "created something" for science olympiad. (Said person also was given a graphing calculator for Calculus class, proceeded to invent most of Calculus 3 on his own -- he got to use it on tests, while the rest of the class did not.)
"a doctor with decades of public health experience"
Yeah, what was interesting about Covid and the American response was finding out that Fauci, the expert of choice for the Trust Science! crowd, had a previous record in public health where he was excoriated as an AIDS genocider:
"Dr. Fauci’s response to the AIDS crisis in the 1980s was first widely criticized by LGBTQIA+ activists. “We wanted treatment because we were sick and the only place where there was any possible area to get any treatment was through the clinical research system. And that’s what led us to you,” said AIDS activist David Barr. However, in later years he became a widely respected ally, eventually developing lifelong friendships with the activists."
"CLAIM: The majority of AIDS patients died from medication developed when Dr. Anthony Fauci led the nation’s response to the emerging epidemic, not from the virus itself.
AP’S ASSESSMENT: False. While it’s true that Fauci had been a leading researcher when AIDS emerged in the 1980s, the claims that azidothymidine, commonly known as AZT, killed more people than the virus itself are baseless. Public health agencies from the Centers for Disease Control and Prevention to the World Health Organization, as well as prominent AIDS organizations and researchers, told The Associated Press the drug remains in use today as it’s been shown to be effective at keeping HIV in check when used in combination with other medications.
THE FACTS: Social media users are once again sharing the long debunked notion that Fauci, the face of the nation’s response to the coronavirus pandemic, advocated decades earlier for a drug to combat the emerging AIDS epidemic that turned out to be more deadly than the virus itself."
Villain to hero to villain again? And that's why the experts put forward by the media as "shut up and do what you're told" are mistrusted.
One data point: Fauci was interviewed on TWIV many years before covid, and it was very clear that the hosts (a bunch of academic virologists) considered him a very competent and accomplished scientist. He wasn't just someone powerful they had to humor.
The difficulty for Fauci during covid, IMO, is that his role was partly that of a scientist (or scientific administrator/conveyor of current science) and partly that of a politician. You had to try to infer whether it was the scientist or the politician talking at any given time.
Look again at that comment about trusting people whose interests align with yours.
Rogan's an everyman. He's not an expert, and never claims to be (not even on things like MMA or mushrooms, which he has some claim to), but he asks useful questions, because they're the questions anyone might ask.
RFKJ is probably fargroup to most people (and maybe outgroup, because Wealthy Lawyer, and Kennedy), but he takes heterodox views. This normally isn't great, but Fauci himself is a Wealthy Doctor, and probably hurt himself immensely by admitting he lied in a way he considered noble, which signals contempt for ordinary people.
One thing everyone knows: an expert isn't necessarily inclined to tell you what's best for you, and one way to tell for very sure is to consider where their money comes from. Most people aren't paying Fauci directly for advice any more than they're paying their own doctors. Moreover, if your expert gets to both tell you what your problem is and also charge you for the solution, what do you predict they'll do?
Even so, a lot of people were inclined to trust their experts anyway and hope for the best - until Fauci admitted he'd lied. If he lied once, he might lie again. (I'm now at the point where I'm mildly surprised when I find people who claim he's -more- trustworthy than other educated people who disagree on Covid treatment.)
This is not to say that everyone's an expert on epistemology, either. If you were to trust whatever some random Covid vaccine opponent told you, you'd be making the same risky mistake as someone who blindly takes Fauci's advice (and, knowing nothing else, a bigger one - Fauci -does- still have an MD).
That said, it's not just Rogan and RFKJ - there were doctors and scientists disagreeing with Fauci, for reasons that sounded comparably plausible to people without MDs, until they got targeted for doing so, and suddenly there were a lot fewer of them. People noticed this as well. And then other things started coming out, like Fauci's involvement in gain of function research, his poor testimony before Congress, CNN's curious emphasis that ivermectin was a "horse de-wormer", etc.
Generally, I think a lot of people concluded that the incentives of Fauci & Co. did not align with theirs. After a while, Covid settled down into this mild Omicron variant so the survival stakes dropped, while the tribal incentive stayed relatively constant.
Fauci has an MD. Yes, but that's really just credentialism at this point. Unless you're going to take at face value whatever Ron Paul (who also has an MD) says about medicine. All medicine, not his specialty.
(Do we have records showing that Fauci did any continuing education? I know practicing doctors are required to do that, and I sincerely doubt Ron Paul does that, but Fauci? Dunno).
Credentialism at what point? An MD might not know how to deal with some novel medical threat, but an MD isn't nothing. I'll trust an MD over a non-MD even for a novel disease, knowing nothing else. If I come across a car accident and render first aid to someone in there and someone comes up and says "I'm a doctor", I'm ceding authority immediately, no matter how good an Eagle Scout I am.
I didn't say "knowing nothing else" above, and repeat it here, for nothing. We go into any situation knowing nothing, and the introduction of an MD implies a great deal of training and possibly experience. To ignore that is foolish. OTOH, to treat it as irrefutable authority is risky in the other direction. We're permitted to take Fauci's actions into account when evaluating his trustworthiness, but let's not pretend the MD never existed.
An MD isn't nothing, sure. Sitting in the Disney World clinic, with my side hurting, and marked crepitus (as diagnosed by my husband, with notably sensitive fingers), I'm willing to listen to the doctor on "what will happen if I go to the emergency room" -- "absolutely nothing useful; the clinical 'fix' for this is to leave it alone." I'm not as willing to listen to the doctor saying "go to the ER anyway, you might have this zebra." I was willing to take the doctor's "if this happens, go straight to the ER."
If I was at home, I don't think I'd have run to the doctor to get a diagnosis. As it was, when I got home (days later), I went to the UrgiCare. I was subjected to a diagnostic test that had a 50% chance of detecting the fracture (looked it up later), and subsequently sent to the ER. At which point I got a PET scan. Which told me all sorts of fun things about my diet and... diagnosed me with what my husband had felt two days before. Note that in both the UrgiCare and the ER, I demonstrated the crepitus. This whole rigamarole cost me about 6 hours, and the price tag was around $10,000 (I had insurance, though).
"I'm a doctor" is a fundamentally different thing to say in the course of an emergency (there's legal ramifications to it -- you're pretty much yielding the "good samaritan" defense, as I understand it). Knowing about first aid, I'd probably yield, but that's because I'm not strong. There are time-critical matters that you genuinely shouldn't yield to "just any doctor" (reducing a dislocated shoulder in particular).
GP doctors are trained to be diagnosticians (primarily, that's their role in the whole medical apparatus). The problem is, current doctors aren't that good at it.
I'm not trying to pretend the MD never existed. That, at minimum, implies a basic level of "I know what the basic issues for a human being are" (mind, muscles, nerves, blood, immune, etc). But that's the "consider Ron Paul" level of "he has a doctor's degree."
One is that trust accumulates with people whose interests align with the truster's. I don't think this clearly applies to Trump or RFK2, or to "WhatsApp randos". As mentioned, financial conflicts showing that these characters have different interests tend to be ignored, so it's hard for me to see that as the central priority.
The second is that trust can build with "everyman" types. This is what I understood as Kamateur's main point, but I agree with him that this is a hard sell for many of the people we're talking about. Certainly RFK2, who does not at all talk like an "everyman" (and his personality is anything but). I'm also not sure about Rogan and Trump. Trump's background is certainly not that of an everyman even if he speaks bluntly and directly. As for Rogan, I know what you mean by everyman, but he also talks about a lot of things that I think few other people care about (like the UFO stuff, which I think comes up about as often as anything else) and many episodes could easily be clones of other intellectual podcasts. Certainly Joe Biden is as much an "everyman" as any of these guys, and I don't think he benefited from the same kind of trust dynamics where he could say what he wanted and have a loyal base believe it uncritically.
Another argument is that people might believe those with heterodox views when their trust in institutions falters. This feels hard to lean on just because "heterodox" is so broad. Would we expect a bump in popularity for Farrakhan?
The rest of what you're saying makes sense as reasons not to trust institutions or experts; what I don't get is why that skeptical eye basically gets shut with certain figures. As for Fauci, I think popular trust in him declined well before the points you're talking about, and for clearly political reasons rather than anything to do with what he said. And I'm not talking about people who agree with experts who disagree with Fauci on issues where a professional group can disagree. But people will trust RFK2 and Trump well outside those bounds.
So, here's a difficulty I have. There are big topics of importance in the world having to do with military budgets and balance of power on which I would trust a longish effort post by John Schilling more than an in-depth report in the NYT or WSJ. This represents me using my own judgement to evaluate different sources of information in a way that probably looks like those folks taking ivermectin to cure their cancer because they found a website somewhere that told them some bullshit story that convinced them. And yet, I'm pretty sure that John Schilling is actually a better source on some of those topics than the NYT.
I doubt the NYT is ready to admit Pax Americana is dead. Given that, they are so far away from the paradigm that Trump is currently in, that they may as well be out to lunch. Comments Trump makes on National Security (like taking over Greenland and Canada, both security issues on our northern border) ought to be seen in that light.
" I don't think [trust alignment] clearly applies to Trump or RFK2, or to "WhatsApp randos". As mentioned, financial conflicts showing that these characters have different interests tend to be ignored, so it's hard for me to see that as the central priority. "
Trust of Trump or RFK2 is always in context of the alternatives, which were Harris and whoever would have led the HHS under her. Since both have or are expected to have financial conflicts as well, people go to the tiebreaker, which means we're back to things like character (after a few other things, which also hold equal for both). Anyone with the same financial interests as the median voter is in no position to win an election.
I agree that RFKJr isn't widely seen as an everyman (his last name probably drives that home more than anything), and I don't plan to use "RFKJr is an everyman" to support any arguments here. However, as I said before, he has heterodox views. This marks him as not part of the elite establishment, and I notice that's a big factor these days.
Rogan certainly talks about several topics most other people don't care about, but so do most people. Having personal interests doesn't disqualify everymanhood.
Biden is one of the more everyman candidates I've seen for POTUS in the last 12 years, yes; you're indeed right to point that out, and I think Democrats saw this and that's why they nominated him in 2020. In his case, by the people's standards, the trouble was his senility. To Democratic elites, this was a further argument to nominate him: he would be more pliable, and as long as it looked like he was still the man in charge, it would give more credibility to their initiatives. (Which probably was only a marginal concern to them; I think they didn't see themselves as that estranged from the general population. This might still be the case today; I'm not sure.) Biden circa 2008 would have been a different story: still an everyman (he dressed up as a hotdog vendor on Comedy Central, and looked as natural as anyone), and much more vibrant. Although then he would have been more likely to dissent from fellow Democrats. Plus, he was the king of gaffes before Trump. (In that sense, he made Trump's gaffes more permissible.) In short, Biden would probably have worked if not for his age and timing (in 2008, the Democrats *adored* Obama).
We would expect a bump in popularity for Farrakhan, and we probably did get one. But Farrakhan wasn't seriously running for office by 2016, and moreover, he was praising Trump. In general, we probably saw bumps for multiple populist figures, left and right - Bernie Sanders, AOC, and Biden.
I remarked on Fauci almost exclusively because you brought him up; I remarked generally about trust of authority figures otherwise. That aside, the skeptical eye seems to be working about how I'd expect - open (active) by default, with a memory for reputation and past predictions, and informed by perceived interest alignment, which is in turn informed partially by tribal markers such as wealth, occupation, vocabulary, religion, ethnic phenotype, etc.
Is there anything else about the skeptical eye that isn't adding up for you?
Until your post I hadn't been aware that Farrakhan became pro-Trump. Wow, what a fantastic data point. Thank you!
>"the skeptical eye seems to be working about how I'd expect - open (active) by default, with a memory for reputation and past predictions, and informed by perceived interest alignment, which is in turn informed partially by tribal markers such as wealth, occupation, vocabulary, religion, ethnic phenotype, etc."
I don't know, maybe I'm running in circles here, but I just don't see this as a magnet toward the figures who have gained trust in this era. Their reputations and past predictions have an awful track record; notice how people excoriate Fauci for his masks comment, but Trump's persistent predictions that Covid would disappear like magic within a couple months never really "stuck" as a reason to turn away from him, except perhaps for a handful of people in 2020. Religion is another one that would make sense if not for the fact that these characters are almost comically irreligious. Rogan, Trump, RFK2? If Ted Cruz had won the primaries in 2016, this story of populism would make sense, but instead it went for Trump, he of the "New York values".
One datapoint is that I saw a lot of apparently real commenters in the runup to the 2016 election on right-wing sites whose preferences were Trump, then Sanders, then whomever else. I think people were in the mood for some alternatives to the mainstream consensus.
You might find some value in the back and forth Dave Green and Nathan Confas had a while back, focused primarily on... let's call it deception vs. stupidity.
When randos from WhatsApp get something wrong, people think they're dumb. When the CDC or FDA make a mistake, people think they intentionally deceived them. In this situation, who do you trust? In the Covid era, where Joe Rogan was wrong on Ivermectin, it feels like a mistake. When the FDA et al made mistakes, it felt like a lie. Liars are more distrusted than fools.
When the CDC/FDA tells you that farmers can't handle measuring dosages of ivermectin for themselves, despite the fact that they measure dosages by weight for all their farm beasts...
When Rolling Stone gets caught lying (and Google gets caught location-blocking the article so the hospital doesn't know about it)...
When the CDC calls ivermectin "horse dewormer" despite its daily prescription for 10% of the world's population... (I'm just saying, it's about as safe as medicine gets, far far safer than HCQ).
Thank you for the references! I'll check them out.
I feel like your point raises multiple follow-up questions. The first is what it is about Rogan vs CDC that makes one sound stupid and the other sound deceptive. It isn't just power; Trump has a lot of power and for some reason he is trusted by many of the same people who would distrust e.g. the CDC. And I think people trust e.g. RFK, Jr. not only to tell them what he believes to be true, but in terms of the quality of his information. And it also raises the question, which was the focus of the blogpost I linked to, of why similar financial conflicts and such among the "randos" don't raise the same red flags.
Rogan and Trump both don't claim to be experts. Trump in particular has a habit of "asking stupid questions" to jog the experts out of received wisdom, and put more options on the table. Rogan walks you through his logic (and Uttar Pradesh was a powerful signal that "something cheap was working" or "no intervention was needed").
I think Scott has written about this before: in ancient times it would make sense to trust a member of your village more than someone who wandered in over the hill, because it's more likely the villager shares your interests, bonded as you are by ties of kinship and tribe. No one really has "neighbors" anymore in this sense, but the urge to find sources of information that feel like they are coming from cousins who share the same genes and gods as you is still strong, and this results in weighting ideas more heavily if they come from people who share your cultural values. In other words, "tribalism" doesn't just mean loyalty, it also means certain preconceptions about how trustworthy you are.
Concepts like "expertise" and "creditability" are newer constructs that were always intended to supersede this older mode of establishing trust, but because they don't have that ancient, evolutionary shortcut to the brain, they have to be socially enforced a lot more rigorously to take root, and they fail more easily. Particularly when the experts do not appear perfect at their jobs, or look tribally motivated themselves. They end up just reinforcing the older framework instead of supplanting it, which is why the Covid crisis was probably the single most historically damaging event to the credibility of expertise since we first invented the notion.
Expertise is a midwit term (and as such, isn't really subscribed to nearly as hard by actual smart people). Creditability is a "newer construct" -- but news loses credibility when they sink the Hunter Biden Laptop Story.
You should look up substacks about the reputation economy.
"expertise is a midwit term" is how terminally online people talk, but when you are sick and go to the doctor you fundamentally want and hope that the person you are talking to knows what they are doing, and unless you possess some domain specific knowledge its going to be hard to evaluate their diagnoses.
That's why doctoring, as a profession, is so obsessed with establishing trust and professionalism. It makes things more profitable, sure, but its also the only way to make sure that patients listen to you, which can literally be life or death.
... you mean you don't look up domain specific knowledge at your first opportunity? You don't pull the "how likely was this test to find the issue"?
"Make sure that patients listen to you" -- ah. you think patients listen to doctors. I don't. I don't think doctors even try to get patients to listen to them, most of the time. The surest killer is obesity, after all. Amerifats is a good nickname for Americans, because we're really that fat. Our obesity changed how many people died of covid19.
The problem wasn't that doctors didn't tell fat patients to lose weight, it was that before Ozempic et al., pretty much all they could do was tell their patients to lose some weight, cut back on the sodas, hit the gym, etc. Which mostly didn't result in any weight loss. Or propose bariatric surgery for the really, really fat patients, but that was pretty damned hard on the patients.
Ah, so you buy into the latest fad to earn pharma money. I may have a rather unique group of people in my office, but two out of two people having been told to "lose some weight, idiot" ... just did it. I'm on that path too, as is my husband.
Autists have superpowers! (determination, primarily. 'tard strength as well).
This makes perfect sense, but I don't think it is what's driving this phenomenon today, because those sources of trust tend to "look" very much like outsiders to the "trusters".
Trump is an obvious example: he's a New York billionaire who made his money off real estate and reality TV, but he gets a lot of trust from working-class people, red states, social conservatives (!), etc. RFK, Jr. is an even weirder case: he was running in the Democratic primary until last year, was and is an environmentalist (!), and was raising issues that even today are pretty fringe and foreign.
I would completely understand if the story of 2016 were that Ted Cruz, despite being universally seen as a liar, gained trust among social conservatives and evangelicals because he is those things. But instead the trust went to these figures who are almost the opposite of the "trusters." Don't you think?
> Trump is an obvious example: he's a New York billionaire who made his money off real estate and reality TV, but he gets a lot of trust from working-class people, red states, social conservatives (!), etc.
I’m not American but my feeling is Trump got his base by not calling them deplorable or demanding they accept they are privileged. Easy win compared to another millionaire ranting about those voters.
> RFK, Jr. is an even weirder case: he was running in the Democratic primary until last year, was and is an environmentalist (!), and was raising issues that even today are pretty fringe and foreign.
Ah yes, but a cousin of mine who’s a Green Party supporter (in the U.K.), and a bit of a hippy, ended up anti vaccine and “moving to the right”. She would argue, and I see her point, that she stayed where she was - opposed to pharma, supportive of bodily choice and pro freedom re the state. (She was more of a libertarian leftist - although she’s pretty keen to stop private transport but everybody’s belief system has some inconsistency).
Anyway we’ve decided to forget about Covid but it was literally a mirror universe. Conservatives fell in love with Sweden, leftwingers with closed borders.
Nobody calls their base deplorable (though Trump did say, at a rally before the Iowa caucuses, "How stupid are the people of Iowa?"), but Trump definitely calls everybody who doesn't support him pretty terrible stuff.
Democrats are not doing themselves any favours with the race-baiting of whites, or the use of academic ideas of privilege. They feel they may not need to appeal to the cis het white male, but they surely need to appeal to some of the males, most of the heterosexuals, some of the whites, and pretty much all of the cis.
It’s also possible to be pro black, pro trans and so on without the use of the word privilege at all.
I’m available for a small fee, even a modest pint of non American beer would suffice, if the democrats want to hire me as consultant.
Social conservatives (evangelicals) subscribe to the "broken people" theory of leadership. That is to say, godly folk don't get put in charge, but God still works through the broken people, and that's a good thing.
Which is a fancy way of saying, "we'll still vote for the triple divorcee". But you can see it as an article of faith, not as hypocrisy.
You should discuss this with an actual proponent of this theory (find an evangelical to debate.)
The idea is that "governmental figures" aren't going to be perfect, but can still do god's work. (Now, you get to ask "what's god's work then?" and I can tell you it's "behaving in a godly manner" which is not supporting "new religion," because new religion hates old religion and does everything they can to stab it in the back at all times).
A "social progressive" in this day and age is someone who believes that they get to fly their religious flag over Kabul, or across the Pittsburgh Courthouse, without letting other religions fly theirs. The idea of a public square is that everyone can put up a statue, including the Satanists (I love their statue).
> he gets a lot of trust from working-class people, red states, social conservatives (!), etc.
Getting trust from people by saying "we have a common enemy" is a very old trick. Most people need to get burned a few times before they learn to recognize the pattern.
"Trump is an obvious example: he's a New York billionaire who made his money off real estate and reality TV, but he gets a lot of trust from working-class people, red states, social conservatives (!), etc. "
Part of that was the media and everyone else opposed to Trump doing their damnedest to paint him as low-class, crude, Not One Of Us (liberal cultured civilised upper class types) - see Hillary and her unforced error about 'loving real billionaires'. Gosh wow. "Vote for me, little grubby proles, because I represent the party that will look out for your interests, now get out of my way, I have to give a speech at a dinner for the hyper-rich who are my donors".
"In a speech Wednesday in Lake Worth, Florida, near West Palm Beach, Democratic presidential candidate Hillary Clinton gushed about her support from the super-rich, praising her billionaire supporters and contrasting them to Republican candidate Donald Trump.
An excerpt is worth quoting as a demonstration of the abject subservience of the Democratic candidate to the capitalist financial aristocracy. According to the transcript supplied by the Clinton campaign, she said:
“You know, I love having the support of real billionaires. And they’ve been speaking out, because Donald gives a bad name to billionaires."
I've mentioned this before, because it was so damn stupid, but I'll mention it again: the sneering about "he eats his steak well-done with tomato ketchup".
Well, damn it, *I* eat my steak well-done with tomato ketchup. If that makes me one of the Untermenschen fit only to be spurned by the feet of the Right Kind Of People, then had I a vote in the US elections, guess who *I'd* vote for?
Well-done doesn't have to be shoe leather! And let people eat their damn food any damn way they like - I'm not going to be snobby about "oh you're eating sushi the wrong way" (since I don't know the right way to eat it).
Having table manners is different, but they weren't jeering about his table manners, they were jeering about "lookit the low-class way he eats!" And then in the next paragraph trying to talk about how the other guys represented the poor, the immigrants, you know - the low-class that they'd just been jeering about.
Lest we forget: medium rare steak, AKA The One Correct Way, isn't some luxurious food ritual restricted only to the elite. Trump, and anyone else, rich or poor, could cook a rare steak. The bottleneck there isn't the doneness; it's the ability to have steak all the time.
In The Case of the Steak, Trump's saving feature was that while no poor person eats steak every evening, no member of the elite would be caught getting steak well-done, and -certainly- not advertising it, let alone with ketchup. So while Trump wasn't exactly One of Us Poor Folk, he definitely wasn't One of Them Elites, either.
The tribal signifiers Trump supporters are responding to are somewhat illegible and not easily sorted in a "red/blue" scheme or any modern political left-right scheme, which is why we are seeing a massive political realignment built around a cult of personality.
Trump is not a conservative, but neither are the people who trust him, even if they use that word. In fact a lot of Trump's deepest supporters were folks who were either not politically aligned or not deeply aligned before his rise to prominence. They are bonded by a set of values that I believe are real, and again rooted in some evolutionary function, but I'd be absolutely lying if I said I understood what they are, and as a generally liberal person, I know I would sound condescending if I tried to guess. But when Trump cloaks himself in opulence, this isn't seen as a betrayal any more than a pharaoh stepping down out of a pyramid rebukes the existence of Ra. Your mistake is thinking you understand what they believe in and that they are blinded to how Trump is a contradiction of that. If you work backwards from how believing in Trump can be a set of values that can form a cohesive tribe, this will get you closer to the truth.
Now, whether any of this is actually aligned with their rational self-interest, or even their conscious sense of how they identify themselves, that's a separate question.
I think "trump is not a conservative" is a fundamental truth. He's a 1990's NY Democrat. Hasn't changed much. Most of the "conservatives" he's getting are from the "War Wing" of the Republican party (these are actual soldiers and their families, and they are heavily anti-neocon. Guess who the "adults in the room" were in the Biden Administration? Dick Cheney's neocon protegee. )
Trump doesn't cloak himself in opulence. He has his favorite (gas station) toilet paper dispenser right beside his gold toilet. That, my friends, is art.
This is one time I have to be in sympathy with the thieves (even if stealing is wrong).
"Two men have been jailed for stealing a £4.8m gold toilet from from an art exhibition at Blenheim Palace.
Thieves smashed their way in and ripped out the functional 18-carat, solid gold toilet hours after a glamorous launch party at the Oxfordshire stately home in September 2019.
...It happened just days after the artwork, entitled America and part of an exhibition by the Italian conceptual artist Maurizio Cattelan, went on show."
"Cattelan created it in 2016 for the Solomon R. Guggenheim Museum in New York City, United States. It was made in a foundry in Florence, Italy, cast in several parts that were welded together. Made to look like the museum's other Kohler Co. toilets, it was installed in one of the lavatories for visitors to use. A special cleaning routine was put in place. The museum stated that the work was paid for with private funds."
'Aha ha ha, we've got so much money we can buy gold for an artist to make an ironic art piece about how grubby and consumerist America is, ha ha ha! Even though we're the same capitalist moneybags exhibiting grubby consumerism by having enough money to throw around on this kind of thing!'
Trump has a gold toilet - he's a buffoon. Anon pays for a gold toilet - it's "An example of satirical participatory art".
The gold toilet, by itself, is buffoonery. It's positioning it beside his "favorite gas station toilet paper dispenser" that makes it art. I love the contrast.
I find it hilarious that someone decided to build an "actually functional gold toilet" and call it America. But it's hilarious in a very bad way.
Take the stunt where Trump was at a McDonalds - that resonates because we *know* he really does genuinely like and eat McDonalds (there's been enough pointing and laughing at, for example, the White House McDonalds meal for the winning team).
This New Yorker piece is exactly the tone-deaf "why do the muddy peasants follow this boor?" kind of thinking that just does not get it:
Kamala coming back with "I worked in McDonalds for a while" doesn't resonate; was it when she was in Canada? Did she really work there or not? It doesn't seem true even if it is true, because she's not got the image of someone who'll happily chow down on a Big Mac.
Trump managed to be completely sincere about learning how to fry french fries. About learning how the entire place worked, and how to do the jobs. (and it wasn't a coincidence that it was a primarily black-staffed restaurant).
Trump makes a comment about illegals taking black people jobs. Liberals have a "field day" showing off "real black people jobs" (which come across as "mostly token" if you're a black woman who's a nurse, or a black guy who mows lawns for a living).
Trump shows, beautifully, that he cares about "black jobs."
At least Mrs. Walz checking out the rotisserie section looked more natural than Kamala wandering around the candy and chocolate shelves until her husband handed her the bag of Doritos for her "oh my favourites nacho cheese" soundbite!
If I were a common man then here's what I'd say about Trump (or RFK): he may not be well aligned with me, but at least he's not perfectly aligned with all the other political-financial-media-entertainment-tech elites who seem to be in lock step about everything.
In the never-ending tug-of-war between the McConnells/Pelosis of the world and the American people, he is at least pulling the rope sideways.
I do think this has to be part of the answer: not positive but negative alignment. I like Trump not because he looks like me, talks like me or has a background like me, but because he hates the people I hate and what's important to me is that he bulldoze them with no restraints. Trump is then the perfect candidate not because of what he supports but because he sort of hates everyone and everything who isn't him.
There is this incredible data point that Trump campaigned in, and won, Dearborn, Michigan (https://apnews.com/article/trump-harris-arab-americans-michigan-dearborn-aea96b9161a77de1fa47d668e23edb98), which is majority Arab and which was called "America's jihad capital" in a WSJ op-ed. To be fair it was partly due to a protest vote for Jill Stein, but still. Pro-Israel Trump fans seemed not to mind that he stood by a guy who seemed to rant against Israel's bloody war on Gaza and promised to end the war. It seems to me that the Dearborn voters (correctly) read Trump as hating Biden (whom they saw as supporting Israel in the war), while Israelis saw him as anti-Palestinian (also true) and as hating Biden (who in this case was blamed for being too pro-Israel). Meanwhile Biden's pro-Israel speech at the opening of the war was seen as so heartfelt that friends of mine spoke of being moved to tears. Trump could never say, as Biden did, "I am a Zionist." But Trump is universally seen as more pro-Israel than Biden, despite lots of data points that should give pause.
So I do think you're right that it's much more about what these figures are against than what they are for.
Most palestinians/arabs I'm familiar with knew that Biden was better for them than Trump, but felt that the actions of the US were so intolerable that they needed to make it known they went beyond their red line to stay in the coalition. Voting for the lesser of two evils when both support the operations in Gaza was unacceptable.
As for Trump being more pro-Israel than Biden, I don't... know how you can conclude otherwise based on history. Trump just ended the nuclear threat in Iran, and is credibly preventing it from recurring. That's about 50% of 'the problems Israel faces.' He's also extremely committed to expanding the Abraham Accords; that's another 10%. And he's not putting pressure on Israel to solve their apartheid problem, another 40%. Meanwhile, Biden sanctioned crazy violent settlers, was in favor of an Iranian deal that would have left them on the edge of breakout without limits on missiles to deter intervention, and yeah, tried to get Saudi normalization but was broadly incompetent. Ethically, spiritually, culturally, personally, he was very pro-Israel, of course, but that mattered little.
Neither of these groups are acting emotionally, there's too much at stake and they are informed/competent.
They were acting strategically.
I do know many progressive jews voted against trump because 'illiberalism is in the long-term always bad', but there were a lot of minds changed when trump took out iran's nuclear program.
I have heard people say that "Trump is a poor man's idea of a rich man". In other words, Trump behaves the way a poor man imagines a poor man would behave if he had as much money as Trump does.
I'm not saying this to be condescending toward poor people, but I'm saying it as part of a model I'm trying to build in my mind of how a poor person probably thinks. For starters, I think of poor people as prioritizing local concerns over global, which might seem condescending even so, except I don't think that's necessarily wrong. Prioritizing the local means acting on what you see with your own eyes, interacting with people who can meet you face to face, thinking about things directly, rather than abstractly. A poor person might readily give $10 to a homeless person for food, but think an idea to set up a $10M fund to do the same for homeless nationwide would be stupid - how do we know it's going to actually feed a million homeless, rather than get stolen by con men in the middle?
I almost wrote "gatekeepers", then realized that's a fancy term Trump probably wouldn't use, and used "con men" instead. That's another implication of Trump being a poor man's imagined rich man: Trump doesn't talk like an elite. He eats McDonald's. He drinks wine, sure - a poor man knows what wine is - but Trump might not distinguish between a Beaujolais and a Barefoot. Whatever time he could be spending pontificating on fine dining, he spends instead on making this or that real estate deal or whatever his profession is. And it's all local over global. Abstractions is for people with their heads in the clouds. That's my hypothesis.
Trump is notoriously straight-edge. There has been a lot of speculation about lifelong prescription medication dependency, particularly amphetamines, but he does not consume alcohol at all and seems to have avoided most/all recreational drugs his entire adult life. He consumes caffeine but our culture doesn't really consider this drug to be a drug.
"as part of a model I'm trying to build in my mind of how a poor person probably thinks"
Congratulations, you have now provided your bona fides to run the next Democratic presidential candidate's campaign, and succeed as brilliantly as Kamala's team did.
If you have to construct a model of how a poor person thinks, because you're not poor, nobody in your family within three generations has been poor, and you don't know any poor people (the contract cleaners who make the work premises habitable are not around when the real important Elite Human Capital are around, as is only right and just) - then you will not get it. Your model won't work. You're doing the anthropological bit as though "poor people" were an alien species from Mars.
What do you mean by "poor people", for a start? One of the homeless? One of the people giving money to the homeless? Someone who may have their own small to medium sized business, but has no idea what a Barefoot is? (for the record, I find that brand pretentious, but then I'm very trad - French reds, German whites). They seem to be the successors to Gallo and Blossom Hill - cheap, accessible wines trying to brand themselves as something "fun" and "trendy".
Your tastes in wine expose you as an irrecoverable Euro, Deiseach. Don't you have a train to complain about being late or something?
As for me, I should think my approach to the poor model ought indeed qualify me for the 2028 D campaign lead. They can't just drag any ol' actual poor person into the room, but someone with a _model_, well, that's just what they're looking for. Why, I could probably deliver a 3-D animated movie on how to properly adjust the angle at which Warren held her beer to drink it! The latest in metrics! Anthropologists in deepest Detroit! Addicts in blue spandex adorned with ping pong balls!
Unfortunately, they'd probably just dig into my past and find out my grandfather dropped out of eighth grade and wasn't quite a millionaire when he died, and my mother grew up an itinerant in Hanoi (and her mother might have been a concubine - we're not sure), and I spent most of my childhood either moving haybales around or huddled under a blanket in the truck in 40-degree weather waiting for the school bus. So much for my pedigree.
> […] because you're not poor, nobody in your family within three generations ahs been poor, and you don't know any poor people […]
Neither does Trump, yet that hasn't hindered him. The difference between him and the Democratic establishment doesn't come from understanding or even first-person experience, but from attitude and image.
This is again an argument that completely makes sense to me but just doesn't seem to apply to the current situation. The idea that people care primarily about local concerns that they can see with their own eyes is a classic one in politics, but Trumpian politics violates this all the time. An Economist article noted that one of the most anti-immigration states in the US at one point was West Virginia, which has very few immigrants. Most immigration opponents that I read speak of immigration much more in an abstract sense-- without a border we don't have a country, etc.-- than in the dollars-and-cents sense. DEI and transgender issues would never have caught on as topics of national conversation if they were about things in front of you rather than broader "cultural" concerns. And a recent podcast episode of "Why should I trust you?" (and some other MAHA voices) really brought this home: many people there talked about being strongly supportive of RFK Jr., and therefore supporting Trump, even though the number one most important thing to them personally was the preservation of Medicaid and Obamacare. No one would say that Trump was the more likely or trustworthy candidate on these issues, and yet they trusted RFK2 and Trump to do the right thing on these issues.
Trump indeed doesn't talk like an elite, though I think the fact that he is so obviously from the elite does raise the question of why these highly critical people trust the words and intonation rather than the biography and financial interests. But this doesn't work for some of the other figures (especially RFK2), and it didn't much help Democrats who speak at that level. Bernie Sanders and Trump are often compared on this point of "speaking to the people," but Sanders failed twice to get the nomination and it's hard to imagine the Democratic Party coalescing behind him the way the GOP did around Trump even if he had somehow won the nomination. That's my view, anyhow.
(The "speaks like an everyman" answer is about as good as I can see so far, but it still feels unsatisfactory)
"Trump indeed doesn't talk like an elite, though I think the fact that he is so obviously from the elite does raise the question of why these highly critical people trust the words and intonation rather than the biography and financial interests."
Because the other elites so obviously hate him. See Hillary's speech about how her friends, the real billionaires, hate how Trump is bringing down the image of billionaires.
DEI and trans really are in front of a lot of people. They show up all the time in work communications.
Immigration, for a lot of people, is "don't break the law" territory. AKA Jose from up the street is getting sent back to Ecuador? Let me know how I can help, Jose's a good guy. I'll write a reference to get him back. When you elect someone who believes in federalism, the idea that you can locally select who gets to come back doesn't seem all that unreasonable. "Do it right" is a conservative ethic.
Bernie and Trump are BOTH honest people. Conservatives say "At least Bernie's honest" and they give him points for that. They'd sit down and work with Bernie (this is the whole Midwestern Conservative Republican).
Bernie would have won the democratic nomination if it was a fair race. Clinton pulled a lot of gaming to make her nomination work.
If Bernie ran as an independent, he'd get a lot of support, and not just from "core democrats."
I'm not "thinking I understand what they believe in...", I'm genuinely asking. I don't have a good understanding here. But an explanation that says that the reason for this trust is a tribal signifier that we don't quite grasp but trumps all other signifiers we do see is similar to me to just saying "I don't understand," which is where I am. And I have no interest in saying other people are blind just because I don't understand what it is they're seeing. But I do want to understand what I can.
2) He is trying to punish and attack people and institutions who deserve punishment. Trump is literally the sword and shield for MAGA against the woke establishment that discriminates against them, legislates against them and is trying to replace them with illegals from hellholes in Africa and Central America.
It's basically this simple. Other Republicans complained about immigration but did nothing because they weren't tough enough to overcome the deep State. Trump forcibly appointed loyal people who are actually going to implement his agenda whether it is legal or not and whether anyone approves or not.
Most other Western states have leaders too squeamish or weak or unpersuasive to garner and then use such power properly and do what the MAGA right-wing wants, which is an end to migration and a reversal of migration from people with incompatible value systems. All the culture issues broadly come under this, and culture is the reason Trump won.
I can give you a 100% guarantee, MAGA isn't leaving power for the next 15 years, they won't lose presidential elections, they will use force if they do, and from their perspective it's totally justified because the opponents are trying to finish America (and the rest of majority white countries).
Explaining perfectly the MAGA view requires interaction with taboo topics such as white racial interests, and why those are a taboo and not racial interests for other groups or the use of extreme force and illegal actions as a necessary step to combat illegal actions and force of the political opposition.
I’m curious, at what point did you start trusting Trump? Naturally many voters had to start trusting him well before he had done anything at all.
And I’d like you to clarify: is the important thing that he “tries” to do what he promised, or that he succeeds? In 2016 Trump’s central campaign promise was building a wall on the southern border and have Mexico pay for it. To do this he led to the longest government shutdown in American history, lost a game of “chicken” with the Democratic leadership, and gave up. Does this matter? Do failures matter for Trump, or only for other Republicans?
Trump is awful, let's not beat about the bush. But every time I go "okay, this is the limit", someone from the side of Niceness and Compassion comes out with such a sneering, jeering, mean rant about MAGAtards and the like that I go "Gosh darn it, don't make me defend the guy! Why are you driving me to this!"
Best guess is it's obviously something to do with the communication style. I saw a video the other day that said Trump talks like a professional wrestler, which is a performative tradition completely alien to me.
I think it was Hans Bethe (or perhaps Dirac) who said something similar - complimenting Feynman's exceptional intellect, but... "he talks like a bum".
Speech is probably one of the easiest things to pick up about someone. Five seconds of talking to them.
This can be gamed. I noticed my father came off as nearly two independent people depending on who he was talking to. To his father, the 8th-grade-dropout-turned-land-trader, Dad sounded like Hank Hill. To his coworkers in the Austin tech center, he sounded like a physics professor.
That's a bonus for Mexicans, who see pro wrestling as some quasi-religious thing. Other people will see "Trump went on the WWF" as "Trump's a good sport."
Regarding why the amyloid hypothesis won’t die, from my outsider perspective it sure looks to be about money.
Think about it from the perspective of a biopharma company. Alzheimer’s is a chronic disease (which means you can charge for treatment indefinitely) that affects old people (who get Medicare). If you invent a drug that slows the progression of Alzheimer’s, you can charge basically whatever you want for it. You’ve heard the statistic that 1% of the entire federal budget goes to dialysis right? Invent an Alzheimer’s treatment and your company could get 1% of the federal budget too.
This means that even a tiny probability of success will leave investors jumping at the opportunity to throw you money. If you’re a researcher working on amyloid drugs, you get paid big bucks by Wall Street. If you admit that nobody (including you) has any idea how Alzheimer’s works, you’re left begging for grants.
This story sort of makes sense, but I feel like a key part is missing: what actually happens once researchers receive money from investors? On its face, if you know the amyloid hypothesis isn't true, you're never going to land the 1% of the federal budget! Why pursue a line of research you know won't pay off?
Seems like a few explanations:
1) The researchers are tricking the wall street firm into funding a lab under the pretense of exploring the amyloid hypothesis, but are quietly investigating alternatives
2) The researchers are acting purely on short term economic incentives, cynically chasing funding opportunities and knowingly misleading investors into supporting dead-end work
3) The availability of funding biases researchers—motivated reasoning nudges them toward supporting the amyloid hypothesis, even subconsciously.
4) Researchers see the amyloid hypothesis as one of several plausible paths worth exploring, but economic incentives force them to overstate its promise to secure funding.
To me, 1 feels hard to believe and runs counter to the fact that a drug was actually approved. 2 feels a bit too conspiratorial as well. A mix of 3 and 4 sounds plausible to me.
"Commenters mostly seem skeptical (1, 2, 3, 4) citing both theory (it seems like there should be too little formate to matter) and evidence (out of ~1000 users, nobody else has mentioned these symptoms yet); they propose that out of a thousand users, it’s not surprising if one develops a weird disease for unrelated reasons. "
While it is true that with thousands of users the chances of an unrelated disease appearing increase, it is also not unexpected for medications to have uncommon (less than 1-in-100) or rare (less than 1-in-1000) side effects.
In the case of regular medicine, the most serious side effects are often rare or extremely rare, for were they common, they would have been conclusively observed in trials. Which, coincidentally, is also the rationale for phase III and IV trials (in comparison to adverse-effect reports on Substack and Reddit, adjudicated by the internet).
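To put a rough number on that (a toy calculation; the 1-in-1000 rate and the user count are assumptions for illustration, not data from the thread):

```python
# Toy calculation: if a side effect has a true rate of 1-in-1000, how
# likely is it that none of ~1000 users would experience it at all?
p_rare = 1 / 1000   # assumed "rare" side-effect rate
n_users = 1000      # assumed user count from the comment thread

p_no_cases = (1 - p_rare) ** n_users
print(f"P(zero affected users) = {p_no_cases:.2f}")  # ~0.37

# And even an affected user may never post about it, so the absence of
# other reports is even weaker evidence than this number suggests.
```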
Anyone know of any good websites or communities for people attempting to educate themselves? So far I found Open Source University which has curricula to follow and a discord but it’s not as active as I’d hope.
I've been using MIT OpenCourseWare and I'm very happy with it so far, but the thing I'm missing is some sort of forum where actual discussions take place.
Secondly, has anyone read "Abstract Algebra: Theory and Applications" by Tom Judson and have an opinion on it? I prefer textbooks that prioritize intuition building and demonstrating the relevance and motivations of the material.
I was a big fan of Judson's book, and it does to an extent provide intuition building, though I also relied heavily on other sources while studying abstract algebra to provide that intuition. I also liked An Inquiry-Based Approach to Abstract Algebra (Ernst) - this is also free on libre math, which was nice.
Scott, I'm curious if you've seen this recent paper on the limitations of LLMs as mental health providers. https://arxiv.org/abs/2504.18412
The paper talks specifically about LLMs reinforcing delusions, but I'm also curious about the general case of an always-available, sycophantic LLM reinforcing problematic ideas or behaviors in general and acting as a kind of crutch.
Presumably someone could design an LLM for mental health specifically, the way they've designed research LLMs. But I do think that LLMs will not be able to replace mental health providers because so much of psychotherapy is the client-therapist relationship, and that's better in person.
Maybe if they hook an LLM up to a really good android, that might work.
I definitely have personal experience with sycophantic LLMs reinforcing my problematic behaviours.
I've noticed that whenever I've done something that I consider to be immoral and I express guilt to an LLM, the LLM nearly always comes to my side. The LLM makes me feel less guilt.
And although that makes me feel better in the moment, it also probably makes me more likely to repeat the immoral behavior in the future. Therefore, I no longer express my guilty conscience to LLMs.
For me that depends on which LLM I'm conversing with. The o-series reasoning models regularly push back against me, the non-reasoning models are very sycophantic in comparison.
Windows, obviously, always wants to update itself every few days. Because I'm sort of childish, I sometimes oppose this- and particularly recently, where I don't want Chrome to update because I don't want to lose Ublock Origin. (MV3, etc.) So I've been preventing Windows updates recently.
Today, my laptop clearly needed to restart. So to avoid Windows updates, I physically unplugged my modem and router from the wall (I mentioned that I'm sort of childish about this). Then restarted. Incredibly, it still updated to a new version of Windows.
How...... how is that possible? Doesn't updating require the laptop downloading the Windows update from some Microsoft server? How is downloading possible when the modem and router are physically unplugged from the wall? I'm pretty sure that my laptop doesn't have an internal modem. How did it update with no Internet......? The only thing I can think is that it had the update already like 'pre-downloaded' and ready to go, it just needed the restart in order to apply it. Is that it?
>The only thing I can think is that it had the update already like 'pre-downloaded' and ready to go
Yes, the actual update files were downloaded in the background and already residing on your hard drive before you killed your internet connection. The computer restart is just for the installation of the update files.
Yes, the modern update experience is to download the update in the background, anticipating that the user will eventually say "yes"; you don't want them to then be stuck waiting on a potentially large download.
Some Googling shows that, for Windows, these are stored by default in C:\Windows\SoftwareDistribution. I don't have a modern Windows machine available to test this, but on other operating systems you can simply delete these files before a restart and the pending update doesn't happen.
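If someone wants to try that on Windows, a minimal sketch (it's an assumption on my part that the same trick works there; stop the Windows Update service first, or the files may be locked and simply re-downloaded):

```python
# Sketch: remove pre-downloaded Windows Update files so a restart
# doesn't apply them. Run as Administrator with the update service
# ("wuauserv") stopped; Windows may re-download later if allowed to.
import shutil

shutil.rmtree(r"C:\Windows\SoftwareDistribution\Download",
              ignore_errors=True)
```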
Yes, windows pre-downloads update packages in the background by default. If you want to change that behavior you have to go into the group policy editor and enable the Configure Automatic Updates policy and set it to option 2 (Notify before downloading and installing any updates).
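That policy can also be scripted instead of clicked through; here is a minimal sketch using the registry values that the Configure Automatic Updates policy writes (the key path and the AUOptions semantics are my understanding of the documented policy, so verify before relying on it; requires Administrator):

```python
# Sketch: set "Configure Automatic Updates" to option 2
# ("Notify before downloading"), so updates aren't fetched in the
# background. Windows-only; writes the policy key gpedit uses.
import winreg

AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0,
                        winreg.KEY_SET_VALUE) as au:
    winreg.SetValueEx(au, "AUOptions", 0, winreg.REG_DWORD, 2)
```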
This is my annual blog marketing. I'm a psychiatrist and wrote a post about false positive diagnoses in mental health. I view this as similar to the replication crisis/"Why Most Published Research Findings Are False." There is a large Bayesian aspect to this issue. Key problems are undefined pre-test probabilities, small effect sizes, low power and high alpha, bias, and multiple comparisons. I think the community here would be interested. Thanks.
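To make the pre-test-probability point concrete, here is a minimal sketch of the arithmetic (all numbers are invented for illustration, not figures from the post):

```python
# Positive predictive value of a diagnosis via Bayes' rule:
# P(disorder | diagnosed) = P(diagnosed | disorder) * P(disorder) / P(diagnosed)
prevalence = 0.02   # assumed pre-test probability in this population
sensitivity = 0.80  # assumed P(diagnosed | truly has the disorder)
false_pos = 0.10    # assumed P(diagnosed | does not have it)

p_diagnosed = sensitivity * prevalence + false_pos * (1 - prevalence)
ppv = sensitivity * prevalence / p_diagnosed
print(f"P(truly ill | diagnosed) = {ppv:.2f}")  # ~0.14: most positives are false
```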
I read your post with interest. Sorry if this is an amateurish question, but one thing that was missing for me in it was any discussion of the objective reality of the boundaries of the diagnoses. You say "This is not a debate about the definition of mental illness, and it accepts DSM/ICD diagnoses on their own terms", and this sentence, to me, conflates two different things: it's possible to accept that there are mental illnesses and not go into philosophical debates around that question (which frankly strike me as mostly sophistry), but also recognize that when diagnoses are defined by collections of heterogeneous symptoms, the boundaries are inherently fuzzy and subjective.
I read up on this mostly in the context of autism and other developmental delays, but I think this applies to many other mental health diagnoses, too. It always seemed significant to me that people, when told that someone has an autism diagnosis, almost always understand and accept this as a claim that there is one known-to-doctors underlying etiology; that perhaps we don't understand how autism works and it manifests in different ways, but it's one "thing" and it's a relief to know that the "thing" has now been pinpointed; whereas in reality, that's almost certainly not true, and the definition of the diagnosis itself points almost in the opposite direction.
So when you're saying - "Teasing these diagnoses apart - assuming they are actually different - is like detecting a signal buried in noise..." - this assumption, to me, buries the lede in a sense. If several "illnesses" are defined solely by overlapping sets of symptoms, with no near-term hope of neurologically precise delineation, then *by design* it will be hard to tease them apart, and claiming that lots of such diagnoses are "false" seems overblown, does it not? Why *does* it matter if you call it MDD or Generalized Anxiety Disorder in this particular case, if all this reflects is how well you judged the fit to an inherently subjective boundary some committee drew up a few years back?
And when you say "There's nothing unique about psychiatry or mental illness. There are many false diagnoses for back pain, IBS, migraines, and so on." - isn't it true (I honestly don't know and may be wrong here) that outside mental health the assumption of a well-delineated physiological cause - even if it's hard to test for and hard to diagnose, etc. - is much more common (among doctors) and much more justified? I'd think *that's* the major difference.
These are the questions that leaped at me when reading your (interesting) piece. Grateful for your thoughts or pointers to arguments (yours or otherwise) you find interesting or helpful in this area.
I'm particularly interested in reactions from readers familiar with using basic hierarchical Bayes methods for practical problems. I'd expect that there would be a few such readers of a blog whose slogan is "P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary."
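And for anyone wondering what even the most basic hierarchical treatment looks like in practice, a minimal normal-normal partial-pooling sketch (all numbers invented):

```python
# Shrink a noisy per-clinic diagnosis rate toward the population mean,
# weighting by relative precision (textbook normal-normal model).
pop_mean, pop_var = 0.10, 0.02 ** 2   # assumed prior over clinic rates
obs_rate, obs_var = 0.25, 0.05 ** 2   # one clinic's noisy estimate

w = pop_var / (pop_var + obs_var)     # how much to trust the observation
posterior = w * obs_rate + (1 - w) * pop_mean
print(f"shrunken estimate: {posterior:.3f}")  # ~0.121, pulled toward 0.10
```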
First, I think that you are making the same mistake that killed Rootclaim's entire methodology from the get-go, namely treating epistemic uncertainty as if it were aleatory variability:
> an extreme Bayes factor can’t be correctly derived directly from some model that’s maybe sort of right used on data that is maybe not too biased. The correct Bayes factors are limited by the “maybe’s” not by the extreme factors that the toy models give..... Even at first glance what stands out in that analysis is the huge factor of 2200 from the HSM-specific data, mostly simply from the first official cluster being at HSM, with smaller adjustments for other data. Drop that modeling-based net HSM factor of 2200 and your odds go to 130/1 favoring L
The fact that the data are limited *cannot cause you to be confident in one hypothesis over the other*. It can only cause you to be uncertain. If I sabotage your experiment by randomly adding noise to all your measurements, so that your results become statistically insignificant, you shouldn't become more confident that the true effect size is 0.
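A minimal sketch of that point (toy Gaussian numbers, my own illustration, not anything from either post): as measurement noise grows, the evidence becomes uninformative, with the likelihood ratio going to 1; it never starts favoring the "no effect" hypothesis.

```python
from math import exp

def likelihood_ratio(x: float, sigma: float) -> float:
    """LR for H1: mu=1 vs H0: mu=0, Gaussian noise with sd sigma."""
    # Normalizing constants cancel in the ratio.
    l1 = exp(-((x - 1.0) ** 2) / (2 * sigma ** 2))
    l0 = exp(-((x - 0.0) ** 2) / (2 * sigma ** 2))
    return l1 / l0

# Observe x = 1.0 (exactly the H1 mean) under increasing noise:
for sigma in (0.5, 1.0, 5.0, 50.0):
    print(f"sigma={sigma}: LR = {likelihood_ratio(1.0, sigma):.4f}")
# LR falls from ~7.39 toward 1.0 as noise grows -- the evidence
# evaporates, but it never flips to favor "effect size is 0".
```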
> So let’s turn to your big factor from the HSM cluster. It can be pretty significant if the early case data were fully reported. For that to be true we need all of:
This is simply false. The question is not whether there might be some ascertainment bias. The question is whether there is reason to believe there is *sufficient* ascertainment bias to make HSM plausibly not be the first cluster. Recall that, for example, 5/5 and 12/13 of the first cases were market linked, and this is *how* covid was first identified--5 separate people presented with symptoms of unknown origin, and their doctors eventually realized they all had links to the exact kind of place that virologists had been warning for years was a potential source of new pandemics!
The rest of the argument diving deep into the weeds of various models is not really convincing for the reasons highlighted in https://www.explainxkcd.com/wiki/index.php/2400:_Statistics. By far the simplest explanation for all the different lines of evidence (the initial handful of cases, the high proportion of the first few hundred cases being market-linked, geographic clustering around the market, cases spreading from the market over time, the exponential math) is HSM being the first cluster, since this hypothesis doesn't require multiple totally independent, unlikely explanations for the various facts.
2 other questions to think about:
1. If the data on the origin of the pandemic is so uncertain, are we even confident it started in Wuhan? And if it didn't start in Wuhan, then isn't this entire discussion just privileging the hypothesis?
2. If we had the exact same quality of data, but clustering around the lab, would we even be having this conversation? Or would there just be a chorus of screaming "this is obviously just epicycles, excuse making, special pleading"?
> The reliable-looking reports that SC2 showed up in wastewater samples from Milan and Turin on Dec. 18, 2019 are wrong.
See, this is the kind of argument that makes your entire post hard to view as anything other than motivated reasoning. If SC2 was prevalent enough in these cities on that date to show up in wastewater, why did it take another 6 weeks for any cases to be confirmed anywhere in Italy (and not even in Milan or Turin)? That's enough time for a factor of about 4,000x increase in cases. And if these cases are legitimate, they should be casting doubt on Wuhan as the origin at all--which is stronger evidence against LL than against Zoonosis!
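(For the arithmetic behind that 4,000x figure: assuming the commonly cited ~3.5-day early doubling time, six weeks is about twelve doublings.)

```python
# Back-of-envelope only; the ~3.5-day doubling time is an assumption.
doubling_days = 3.5
days = 6 * 7
print(2 ** (days / doubling_days))   # 4096.0, i.e. a factor of ~4,000
```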
Lastly, I want to make one meta-point about your critique of Pekar et al.'s model. The fact that this paper even exists is the only thing that allows such a detailed analysis and critique. Where is the equivalent for lab leak? Where is the similarly complex model that shows how likely the major pieces of evidence in favor of lab leak are? Where is its code, its Bayes factors, etc.? Without any of that, this is just an isolated demand for rigor. Without a similarly rigorous investigation of all the data allegedly favoring lab leak, if you want to say that the HSM cluster provides only weak evidence of Zoonosis, well then the correct BF in favor of lab leak is 1, since regardless of any weaknesses in Pekar's case, the lab leak one hasn't been made at all.
Pekar is the opposite of rigor, a collection of extreme errors in basic Bayesian logic. There's a good reason that lab leakers haven't produced their own opposing version. The initial data are simply too sparse and too non-randomly sparse for that line of reasoning to do much. It's best just to admit it's not informative and move on to more informative data rather than to create a parody of Bayesian reasoning.
On basic techniques: of course you're right that uncertainty in data cannot lead to favoring either hypothesis by much. That's the point I was making. To get much of a Bayes factor you need reliable data. There are various types of data that are more reliable than the early case home addresses, so those other types are where more substantial Bayes factors come from. The shift of Scott's net odds toward LL when one discounts the extreme HSM factor does not come from a reversal of that factor but just straight from the other factors that Scott used based on non-HSM-related data.
On the Italy data and various other data tending to show that cases originated well before the HSM cluster, I agree that none of it is compelling. It's possible that all of it will fall apart. The question is whether the odds are more than 1000/1 that it will all fall apart.
You correctly say "The question is whether there is reason to believe there is *sufficient* ascertainment bias..." That's exactly what I address in the JRSSA article and the arXiv follow-up. A statistic that Worobey et al. chose to highlight turned out to have the wrong sign for the simple model that they use and the right sign for the large-ascertainment-bias model. That means that something (probably but not necessarily ascertainment bias) was way off in the way they derived conclusions from the reported case addresses. Unlike Andrew Levin, I don't think the reported case location/timing data point strongly away from an HSM origin. I think those data are not only fragmentary but also non-representative, so they just don't lead anywhere.
You raise one important point: how sure are we that the epidemic even started in Wuhan? We are quite sure that the first major outbreak of serious illness occurred there. Nowhere else has similar early morbidity/mortality reports. That already suffices to calculate conditional probabilities of that first big outbreak location for both the L and Z hypotheses. But one big reason that some chance for Z remains even if one were to eliminate the HSM version is that there are other non-HSM versions of Z. Perhaps the main one is that an early version of the disease might have been transmitting at some low level before picking up an FCS by template switching. That still leaves a lot of coincidences for Z to explain, but it avoids the low conditional probability of a market origin in a city whose markets had a much lower share of the wildlife trade than the city had of the population.
> It's best just to admit it's not informative and move on to more informative data rather than to create a parody of Bayesian reasoning.
I'm not quite sure how you got from what I wrote to this paragraph, but let me try to restate: Where is the writeup of the major pieces of evidence (whatever you think they are) that would allow someone to critique your argument in as much detail as you have critiqued Pekar et al? Code, raw data, models, etc. Rootclaim's published analyses, for example, have nowhere near enough specifics to have any idea if they're making major mistakes like the ones alleged of Pekar.
> That's the point I was making.
No. You're using the (alleged) uncertainty in the data to just ignore the entire question. The fact that you don't know something is a statement about your mind, but it does not imply that the true long-run rate of zoonotic pandemics starting at HSM vs. starting at WIV, given the data we have, is close to 1:1. For all you know it could be 1,000:1, 10K:1, whatever. You can't even properly put a bound on the probabilities with this sort of argument; it's possible for you to be arbitrarily wrong. See https://royalsocietypublishing.org/doi/10.1098/rspa.2018.0565
> On the Italy data and various other data tending to show that cases originated well before the HSM cluster, I agree that none of it is compelling. It's possible that all of it will fall apart. The question is whether the odds are more than 1000/1 that it will all fall apart.
Second, you ignored the part where the pandemic being in Italy in December 2019 would be even stronger evidence against lab leak than against zoonosis.
This part really does feel like the style of argument that is common in various conspiracy theories (2020 stolen election, evolution denial, vaccines-causing-autism, etc), where proponents just throw out a list of weak arguments and pretend like they can't all be wrong.
> We are quite sure that the first major outbreak of serious illness occurred there.
What OOM is "quite sure"?
> market origin in a city whose markets had a much lower share of the wildlife trade than the city had of the population.
Do you have data on this? The only source I saw (can't find it now) indicated there were no more than about 400 wet markets in the region. Wuhan had 4, or >=1% of the markets, which is comparable to (maybe a little less than) its population share of the region. There's some confounding here with density and urbanization, but can you quantify "much" and support it with data?
Here's what I have on Wuhan share of the wildlife trade.
"For the market branch of the ZW hypothesis, ZWM, the likelihood drops even more since it has a much smaller fraction of the wildlife trade than of the population. The total mammalian trade in all the Wuhan markets was running under 10,000 animals/year. The total Chinese trade in fur mammals alone was running at about 95,000,000 animals/year (“皮兽数量… 9500 万”). For raccoon dogs, for example, the Wuhan trade was running under 500/yr compared to the all-China trade of 1M or more, 12.3 M according to a more recent source. The Wuhan fraction was then at most about 1/2000. We can also compare the nationwide numbers for some food mammals with those of Wuhan. For the most common (bamboo rats) Wuhan accounted for only about 1/6000, apparently largely grown locally, far from sources of the relevant viruses. For wild boar Wuhan accounted for less than 1/10,000. Wuhan accounted for a higher fraction (1/400) of the much less numerous palm civet sales, but none were sold in Wuhan in November or December of 2019. It seems P(Wuhan|ZWM) would be much less than 1/100, something more like 1/1000. We may check that estimate in an independent way to make sure that it is not too far off. In response to SC2 China initially closed over 12,000 businesses dealing in the sorts of wildlife that were considered plausible hosts. Many of these business were large-scale farms or big shops. With only 17 small shops in Wuhan we again confirm that Wuhan’s share of the ZWM risk is not likely to be more than 1/1000, distinctly less than the population share of 1/100."
On some possible cases in northern Italy by 12/18/2019: that would be consistent with the more conventional version of the LL hypothesis, in which the successful spillover was roughly in mid-October. That allows time for some cases to pop up in other cities, since Wuhan has a lot of international trade. Meanwhile, by then the cases in Wuhan had reached the point where the small fraction serious enough to require hospitalization was starting to show up noticeably above background flu-like illness. So those may be false positives, but if not, they fit L better than the HSM version of Z.
On Pekar: is it a mere "allegation" that, in calculating a likelihood ratio for hypotheses X and Y, it is improper to use P(obs1|X)/P(obs1, obs2, obs3|Y)? There are basic rules about how probability works.
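A toy numerical version of that rule, with probabilities invented purely for illustration:

```python
# Hypothetical numbers: suppose each observation is equally likely
# under X and Y, so the data genuinely favor neither hypothesis.
p_obs1, p_obs2, p_obs3 = 0.5, 0.3, 0.2

fair_ratio = p_obs1 / p_obs1                       # = 1.0, no evidence
unbalanced = p_obs1 / (p_obs1 * p_obs2 * p_obs3)   # ~16.7 "for" X
print(fair_ratio, unbalanced)
# Conditioning one hypothesis on more observational detail than the
# other manufactures a Bayes factor out of nothing.
```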
On the first big known outbreak being in Wuhan, we are virtually 100% sure. The issue in calculating the conditional probabilities for that under Z and L is whether the existence of the coronavirus labs that were able to detect a new virus skewed the detection probability so that earlier similar outbreaks elsewhere went unnoticed. Given the worldwide attention paid to this issue and the intense motivation of the Chinese government to show an external source, I think that 1000/1 OOM odds against comparable earlier outbreaks is conservative. Incidentally, even people who try to make a case that the virus was developed in US labs don't claim earlier US outbreaks.
On all the psychological claims about various types of conspiratorial thinking, I think it makes sense to try to work through the odds on the factual question first before indulging in emotional speculation on other people's psychology. Plenty of time for that later.
Before responding to these points, what is your answer to this question from above?
> If we had the exact same quality of data, but clustering around the lab, would we even be having this conversation?
> Here's what I have on Wuhan share of the wildlife trade.
What is the source of these numbers? And does the 1/1000 risk number account for the fact that pandemics are much more likely to start in large cities?
> So those may be false positives, but if not they fit L better than the HSM version of Z.
I think this is wrong, but also not very important (see below) so I'll only elaborate if you think it matters.
> On Pekar: is it a mere "allegation" that, in calculating a likelihood ratio for hypotheses X and Y, it is improper to use P(obs1|X)/P(obs1, obs2, obs3|Y)? There are basic rules about how probability works.
You made many criticisms of Pekar. I haven't checked every single one of them in enough detail to agree that every single one is valid, so I used "alleged" to cover them all. Now, do you have such a write-up as I mentioned, or is the entirety of this criticism an isolated demand for rigor?
> On the first big known outbreak being in Wuhan, we are virtually 100% sure
Ok, in that case I am virtually 100% sure the December 2019 wastewater in Italy is not correct. Certainly sure enough to say that I do not feel the need to substantially discount the HSM cluster based on that argument.
> On all the psychological claims about various types of conspiratorial thinking
I'm not jumping to psychological reasoning. My point is that a certain style of argument (listing lots of individually weak arguments, and asserting they are unlikely to *all* be wrong) seems to repeatedly be used to support false conclusions, and almost never to support true conclusions, and so is unlikely to be a valid type of argument.
On your hypothetical question about what if there were a cluster near WIV: given the weight of the other evidence and the absence of internal WIV evidence, I think it would give a modest Bayes factor favoring LL. I specifically say that the HSM cluster would give a modest BF favoring ZWM, but not enough to compensate for the other factors specifically weighing more against ZWM than against generic ZW. Andrew Levin (https://www.nber.org/papers/w33428) has calculated some of those factors, getting values that I think are unrealistically unfavorable to ZWM, for the same common-sense hierarchical reasons that I discount the extreme HSM BF used by Scott.
Nevertheless, HSM runs into these problems:
1) lower Wuhan likelihood than for general ZW
2) no wildlife vendors sick
3) no positive wildlife samples
4) negative correlation of SC2 RNA with potential host mtDNA on market swabs, in contrast with the distinct positive correlations for actual animal coronaviruses
5) lack of any documented wildlife from Yunnan sold in the relevant period
6) No species sold in HSM was found to have any outbreak anywhere. Lab raccoon dogs were barely susceptible to massive doses of a downstream (D614G) more contagious strain.
7) All HSM-linked sequenced cases were of a strain farther from natural relatives than many sequenced cases found elsewhere.
These features contrast sharply with the original SARS.
Alright, I took a glance through this. Some basic thoughts:
-You're in the weeds on this, man, and it's rough on a reader. I had to go look up other essays to figure out whether you were a lab leak guy or a wet market guy. You make constant references to things as if the reader has been following this argument in depth since 2020 and, yo, we haven't.
-I *think* the guts of the disagreement is as follows:
#1 In Wuhan, during the initial Covid outbreak, we see this weird cluster of cases around both the wet market and the lab that can't be linked to the wet market. The cases that can be linked to the wet market also have a very different pattern than all cases. The green-blue graph in the link is really useful.
-Side point: I have no idea how important this base data is. The point doesn't seem to be where the cases are clustered, it looks like the wet market and the lab are right next to each other in the center. What makes the data difficult is whether the early covid cases can be attributed to the wet market. That data seems intrinsically really noisy; if CCP agents came to your home at the start of a global pandemic and started asking questions, how honest would you be?
#2 Anyway, presume this weird cluster is real. How likely is it that this is just some weird stats illusion vs a real effect? For example, an outbreak in New York probably doesn't have a weird cluster like this. But if you threw the data into a kmeans clustering algo and told it to find 4 clusters, it'll give you 4 clusters. Run that through 10 cities, though, and you'll find a few suspicious clusters about which we can all tell really interesting stories that "feel" real. How likely is this cluster in Wuhan to be one of those fake/illusory clusters?
And since we're all proper Bayesians here, we're trying to quantify this. I don't think this line of argumentation is meant to be definitive, but it is meant to be a major factor. If we're debating whether this is a lab leak or a wet market/zoonotic thing and the cluster is extremely unlikely, say 10% odds that it could happen, that has big impacts on the overall argument.
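To illustrate the fake-cluster worry (a sketch, assuming scikit-learn is available; the "case homes" here are pure uniform noise):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
for city in range(10):
    pts = rng.uniform(size=(500, 2))    # 500 fake case homes, no structure
    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(pts)
    print(f"city {city}: cluster sizes {np.bincount(km.labels_)}")
# Every run returns 4 tidy, story-friendly clusters, because that's what
# the algorithm was told to find; hence the need for a null model.
```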
#3 At which point there's, like, 100 pages of dumb academic writing that makes my eyes glaze over. This appears to be complex for the sake of complexity.
To wit: there's no code and no data shared. The closest I found is in this paper:
which links to data you can download but it's just some tsv files without any raw data. Maybe there's some here but they don't open properly:
The actual dataset for Wuhan and other areas should be trivial, like 7 columns and <1000 rows, easy. Each covid patient should be a row; each row should have lat/long and/or x/y values for where they live, the datetime they were first diagnosed by a medical professional, the self-reported datetime they started feeling ill, and whether they can be traced back to the wet market. For this initial spread, that's it. I know that data exists, that's how you make a graph like that, but it's hidden.
And this code should be ~100 lines of Python or R; we're all using scikit-learn or caret under the hood.
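Something like this hypothetical schema (column names are mine, purely to illustrate) is the whole ask:

```python
import pandas as pd

# One row per early patient; <1000 rows total.
cases = pd.DataFrame(columns=[
    "patient_id",
    "lat", "lon",        # home location (or x/y on a city grid)
    "diagnosed_at",      # datetime first diagnosed by a professional
    "onset_reported",    # self-reported symptom-onset datetime
    "market_linked",     # bool: traceable to the wet market?
])
```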
There's no code, there's no data. Instead of letting readers glance over a trivially small dataset and read ~100 lines of code, there are 100+ pages of theory, none of which is very legible, and none of which is better or more trustworthy than just playing with the data and sharing it.
If you want feedback from actual practitioners: this seems pointlessly academic. If you want to figure out how weird that initial Wuhan pattern is, taking 1 nice, clean Wuhan dataset and comparing it to 10 nice/clean outbreak datasets, either Covid or other diseases, is way more valuable and persuasive than arguing weird theoretical stuff about ascertainment bias.
Addendum:
I am in favor of whatever Andrew Levin thinks. He did an analysis, he shared all his stuff here:
and it's fairly neat. Cool, he did the thing I want; he's trustworthy. I glanced through his code and... it's all ".do" files. That's... Stata, right? God, that takes me back. But it's got to be pretty basic; Stata doesn't have a lot of baller libraries. And I don't see any library calls, so I think this is all basic stats.
Pet peeve: When I see something like "whether you were a lab leak guy or a wet market guy", I want to scream obscenities. These are not opposites, but strongly overlapping circles on the Venn diagram. At this point, most of the scientifically-literate lab leak probability space is in the realm of "An infected lab tech left work, went to the seafood market to buy groceries, and coughed on one of the vendors". Or went home and a week later his asymptomatic-carrier wife went to the seafood market or something like that.
There's an interesting and somewhat relevant side discussion about whether there might have been some unnoticed COVID cases before the wet-market superspreader event; that's probably unknowable at this point. But showing that it probably went public via the market is not a slam-dunk win against the lab-leak hypothesis, and the distinction you should have made is "lab leak guy or zoonotic origin guy". Or natural origin if "zoonotic" is too fancy.
Additionally, before COVID was officially recognized, it was being diagnosed as "atypical pneumonia". So you *really* can't trust those early "diagnosed cases" as being the original cases.
"Each covid patient should be a row, each row should have lat/long and/or x/y values for where they live, the date time they were first diagnosed by a medical professional, their self-reported datetime they started feeling ill"
This combination is the kind of sensitive private information that can't possibly be a public dataset ever, neither legally nor ethically; so that kind of explains why the data isn't available and won't ever be published except as an aggregated summary without the underlying raw data.
Thanks for that detailed feedback! I also appreciated Andrew Levin's work, which I discussed with him at some length before publication. I do think some of it suffers from the same non-hierarchical issue that plagued the enormous Bayes factors that Scott and others obtained from the HSM cluster. E.g. I pointed out to Andrew that his factor from the spread among HSM stalls compared an all-from-raccoon-dog model to a pure human-to-human model. That was unfair to Z because there could be (ignoring the facts that raccoon dogs were scarcely susceptible, etc.) an RD->human-> human Z model. So the big issues are not in the codes but in the lead-ins to the codes.
There's one major exception: the Pekar et al. 2022 paper that you link to. It has an almost inconceivable array of major errors in coding (some now fixed under pressure) and in basic Bayesian logic. Angus McCowan has done heroic work in sorting these errors out. I translate his work into simpler English (and link to his arXiv version) here:
In addition to all the errors described there (principally the use of completely unbalanced observational detail for the two hypotheses), Angus now points out another error. The simulations use 7500 sequences but the data consist of 787. That's a big deal when the main question you're asking is whether some intermediate sequences could have been missed by accident!
On types of clusters that occur, there are data. From my long Bayes blog:
"Analysis of the spread of Covid in New York City concluded “The combined evidence points to the initial citywide dissemination of SARS-CoV-2 via a subway-based network, followed by percolation of new infections within local hotspots." The Hankou station is on Metro Line 2, which connects directly to the stop nearest WIV."
To be clear: I've already had feedback from extremely serious practitioners. What I'm trying, politely, to ask is whether blog readers who tell themselves that they are Bayesians actually know how the methods are used in the real world. I appreciate that you know it's not all in a toy code with unreliable input, but I'm not sure that those who bought Scott's arguments know that.
How is this not "People who can't do statistical coding don't understand Bayes?"
I think there's ways to understand Bayes that would benefit a lawyer or an executive, especially in terms of probabilistic thinking, but I wouldn't expect them to be able to answer the question as you posed it.
That makes sense. Tho I think there's an intermediate ground, beyond vague hand-waving but short of code-writing. That's the Fermi-style rough calculation where you try to get some idea of plausibility from approximate estimates. I think with a good education system (dreaming, especially now!) that would be doable by a lot of non-specialists. The particular problem that comes up here with the HSM Bayes factor wouldn't be in week 1 of an intro course, but maybe in week 2 or 3, where you start having to embed particular Bayes factors into a probabilistic relevance context. E.g. gambling in the context of occasional cheating makes for a nice teachable example.
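For concreteness, here's roughly what that teachable example might look like (all numbers mine, invented for illustration): a report of 20 straight heads gives a naive Bayes factor of 2^20 for "two-headed coin" over "fair coin", but admitting even a 1% chance that the report or model is junk caps the usable factor near 200.

```python
naive_bf = 2 ** 20          # ~1.05 million, from the toy model alone
p_sound = 0.99              # assumed chance the report/model is sound
p_if_junk = 0.5             # if it's junk, it favors neither hypothesis

lik_cheat = p_sound * 1.0 + (1 - p_sound) * p_if_junk
lik_fair = p_sound * 2 ** -20 + (1 - p_sound) * p_if_junk
print(lik_cheat / lik_fair)     # ~199, not ~1,048,576
# The "maybe"s about the model, not the toy model itself, set the ceiling.
```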
I will risk de-railing your intended thread to ask a related question:
"Is there some reason that the folks doing Bayesian analysis on this seem to be ignoring the claims that the German foreign intelligence service believed in early 2020 that Covid was caused by a lab leak (80% - 90%)?"
Is this claim not credible (comes from a leak)? Or has the German intelligence service updated their claim? Or is it that *how* they got there is opaque? Or have I missed that this HAS been part of the Bayesian analysis? Or something else?
"Germany's foreign intelligence service believed [in 2020] there was a 80-90% chance that coronavirus accidentally leaked from a Chinese lab, German media say."
It's that how they got there is opaque and that intelligence services do not have a good track record of telling the truth to the public. It's not their job. So I avoid using such arguments by authority.
There is one peripheral exception in the recent CIA release. Here's a quote from my long Bayes blog. "ZWM" means zoonosis at the market where cases were found in Wuhan, "ZW" means generic zoonosis.
"[4/16/2025] To the very limited extent that releases from intelligence agencies are useful, the latest CIA release supports our conclusion that ZWM is unfavored compared with other forms of ZW. They say “New information [redacted] has enabled CIA to more clearly define the conditions and pathways that could have led to ether a laboratory-associated or a natural origin. This body of information has both reinforced CIA's concerns about the potential for a laboratory associated incident and led CIA to focus on isolated animal encounters in secluded environments as the most plausible natural origin scenarios.” Although it is easy to think of reasons why intelligence agencies might slant publicly released information on the question of whether SC2 came from a lab or not, the relative probability of a wet market origin vs. more remote zoonosis (e.g. in a bat cave) seems like it would have no political significance for a U.S. agency. Classified information might, however, help in evaluating the otherwise somewhat unreliable reports that SC2 was circulating in humans well before the reported HSM cases."
This is important because the arguments that Scott and others used following the rootclaim debate relied entirely on huge Bayes factors specific to ZWM and irrelevant to other ZW.
I am looking to start my own medical practice. Talking to people about it has been strange, because I feel like I don't really understand what a lay-person is looking for when they choose a doctor. I am a PCP with a lot of extra skills and I constantly hear that people want "a doctor like that" (for example, I am also a yoga teacher). I'm just looking for some clues about how people found their current primary and what things would convince a person to change (and possibly pay more out of pocket)?
I currently see a concierge doctor for a particular specialty (not PCP, though). She's an endocrinologist who is qualified to work with diabetes and thyroid issues, but her main business is as a weight loss clinic prescribing GLP-1s. I see her for thyroid and losing weight. I would highly recommend advertising yourself as a "general wellness" kind of PCP who can provide hands-on service for people taking GLP-1s.
In terms of "front desk" staff, this doctor has none. She rents a solo office in a small office building. When I arrive for my appointment, I wait in the lobby and text her to let her know I'm there. She comes out and waves me back to her office. That's the biggest thing for me - I can *text her*. I don't know what kind of software voodoo she had to set up in order to be HIPAA compliant, but she did it. (I could ask her on your behalf, if you're interested.) She has a PA who monitors her text messages throughout the day to answer any medical questions if she's preoccupied.
I am not familiar with the legal restrictions on what kind of services you can offer as a PCP versus some other specialty. Do you feel qualified/are you allowed to treat patients for certain psychiatric concerns? There is a huge demand for PCPs who are willing to prescribe psych meds, since psychiatrists are very busy and it's hard to get appointments.
Like others are saying, the biggest thing in concierge medicine is nailing those basic business customer service principles. Have a simple check-in process, set up enough time in an appointment to get to know the patient, etc.
This is very interesting. I've interacted with a bunch of concierge internal medicine people, but rarely a specialist. Fun fact: there aren't any "legal" restrictions on what I can offer as a PCP vs. a specialist. My medical license looks the same as a surgeon's, which looks the same as a gastroenterologist's, etc.
Also, I'm not sure if the stat still holds true, but PCPs prescribe the majority of psych meds (except for lithium). This could be a whole essay, but I remember 75% of antidepressants being the stat.
Thank you though, interesting to hear about a doc who truly runs the practice solo. Texts seem like a big part of that.
I forgot to clarify that her PA is fully remote. I've never met her in person - she just responds to the text messages. Sometimes the doctor answers my texts, and sometimes the PA does (and she'll always introduce herself as the PA in the text messages).
Would you like to talk to this doctor? I can reach out on your behalf.
Yes. I specialize in the very things that insurance really doesn't value. Primary care, nutrition, osteopathic medicine. Direct Primary Care has some allure though. Thoughts?
Oh, you'll want to advertise the h*** out of that. And explain why it's the better model. "not being beholden" to others, actually paying for your own care.
Since this is a place where we talk a lot about AI, how would you feel about talking to an AI? What would you want it to be able to do (can send alerts to the doctor, can send faxes, etc) for you to consider that an acceptable option?
Since this is a startup, I was looking at not having a front desk staff as a way to keep overhead low. A lot of docs I have talked to have said that whether you get sued or not is dependent on your front desk being nice.
The AI would in no way and never give medical advice. It would just be for scheduling, sending/requesting records, and triaging patient calls (same as what a front desk person would do). Sorry if that wasn't clear.
If you'd be using a modern LLM solution (and not just a fake-AI glorified call-tree solution that's labeled as AI because that's what sells right now), there would be no way to guarantee it wouldn't randomly decide it should offer medical advice.
I'd recommend starting without a receptionist. You'll need one eventually for sure, but if you're just starting out it's good to handle the scheduling and billing yourself for a bit. That way you get a handle on what you will need a receptionist to do, and how it works. Also, it's less pressure on you if you don't have someone else's salary relying on you.
Once you have enough clients that it's getting hard to do the admin yourself you should hire someone.
StrangePolyhedron's priorities are pretty reasonable, but I'll add a few things that frustrated me with some of the medical practices or providers that I've visited over the last several years.
1. Rushing. A few years ago, my PCP, who was an MD, would be itching to leave the room within 10 minutes of entering. In general, I've found that PAs are more patient than MDs. To relate this to your inquiry: unless I'm having a problem that seems unusually complicated, I would not pay extra to see an MD over a PA.
2. Lack of scientific curiosity. Talking with doctors often feels like talking with pre-gen-AI chatbots. They have no capacity for context (e.g. they don't remember previous conversations); they're not interested in patients' proactive observations; they ask very few questions before making a recommendation; and there's absolutely no way to convince them that they're wrong until the patient follows their recommendation and it fails. (If you're interested in my prime example, then DM me.) Action items: ask each patient "Is there any more information about this problem that you'd like to share?", and consider partnering with some DOs, due to their holistic training.
3. Textual overload. The amount of text that the average person is bombarded with is impossible to process. Consequently, most people ignore almost all text, which causes them to miss information that they really care about. (e.g. "You'll be billed $40 for mentioning problems during a routine physical.") Action item: ruthlessly eliminate happy-talk and other unnecessary text from your office's forms, emails, and physical environment--and for the love of God, don't install advertising panels in the exam room. I'd pay extra for clear, relevant communication.
4. Wasting patients' time. One clinic's parking garage gate breaks frequently. If you arrive 5 minutes early, then get stuck at their gate for 15 minutes while cars pile up behind you, then you've "missed your appointment" and have to reschedule. They shirk responsibility for this by instructing all patients (not just new ones) to arrive 30 minutes early. Another clinic instructed me to arrive 30 minutes early even after I checked in online. Out of an abundance of caution as a first-time patient, I honored their request. Once I arrived, I was processed within 5 minutes, but I wasn't seen for another hour. A third and fourth clinic use Phreesia for onboarding patients, which asks an exhausting number of questions and uses dark patterns to trick prospective patients into paying for third-party services:
Wow, this is good info! I'm trying to get through this whole thing without doxxing myself, but I am a DO, and being in a big city there is a lot of incentive to rush. Ideally I'd just charge enough so that I don't feel any need to do that. The holistic training that we get in med school is overrated. Lots of DOs are indistinguishable from their counterparts. The holistic parts of the training are de-emphasized as they do not contribute to our test scores.
The lack of scientific curiosity sounds like the same issue with the rushed visits. If you end up seeing 3,000 patients per year, then how are you going to remember what was said at the last annual?
Never wasting patients' time and avoiding text-bloat sounds ideal. I would like that as a patient too. I wonder if it's legal to have summaries at the top of some of the longer legal forms?
To be clear, I don't place any blame on doctors for failing to remember previous conversations, given the number of other patients they see and the length of time between visits.
Nevertheless, the patient's medical context matters, particularly for complicated problems. If it can't be retained *between* visits, then it needs to be rebuilt *during* visits. That requires a more flexible and inquisitive approach than most doctors seem to take, as I elaborate in the rest of point 2. It feels like they're shooting from the hip.
I don’t need or use medical care but my very elderly parents have all the insurance in the world, a dozen doctors, some home health visit stuff … I can’t really follow it except to notice quality of life deteriorates and none of those doctors, however competent or caring, seems to have a holistic view of my father’s situation in particular. So he is starving to death, yet still believes he’s supposed to “diet” because of diabetes and the fear of dialysis. He’s still doing things like (we’re talking a starving man at 105 lbs.) going to a heart doctor. Or as recently as a year ago getting little spots painfully dug out of his skin which his elderly caregiver then has to apply chemo to “because all their friends love this dermatologist”. It’s just all a hodgepodge and each day he surveys his 18 pill bottles and himself chooses based on nothing, which ones to take. He had 2 catheters, is fortunately able to have just one now, but it fails every 10 days or so. That is, it fails all the time as he must use disposable pants as well but it fails in some ultimate way regularly; and of course he has a UTI basically all the time. He toggles between OTC stuff for diarrhea and for constipation, not at intervals but constantly.
No doctor ever looks at him and says, you ought to be cared for in a hospital, given an IV or whatever. That the doctors do not want, and I don’t blame them. But it gives a Potemkin aspect to his medical care.
They also pay for a concierge PCP, from their own pocket of course.
I don’t know that that PCP guy can overcome any of the foregoing, or their own cognitive shortcomings - well, he can’t.
But apart from an unusually kind and available home health nurse who has stayed in the position long enough to become Their Guy, this no-insurance PCP is the guy who can be reached, the guy that we the kids will call on as the end nears, and he’s the guy who can be a first pass for their health concerns of the everyday, immediate sort.
He’s just available to them, and knows all their problems. He’s not I guess a strong enough personality to tell them truths about futility and about how to die but at least he’s honest enough to utter the words (to us) “it’s amazing he’s still alive”.
They went to him the other day for I don’t know what, and my father couldn’t leave the car to go in - so he came out and saw him in the parking lot.
The other thing I will say is that most of their doctors and PAs run a tight ship, time wise. They themselves are ultra punctual as old people tend to be. I have heard it enough times to know that their oncologist, whom they otherwise like fine, is the one doctor whose appointments they dread because of the delta between appointment time and time to be seen.
Hearing all of what your parents are going through makes me sad. I get what you're saying about the tons of medical care just being window-dressing rather than a more directed and definitive plan. It does sound like the concierge PCP is actually adding some value, but still not getting to the root of the issue.
This isn't directly relevant, and I don't know exactly how to apply it to your situation, but:
I've been in and out of physical therapy for years. During that time, I've had some very good physical therapists that — in hindsight — seemed to be limited by the constraints of the insurance system. Time during appointments was not spent optimally, eventual "graduation" from care inevitably happened prematurely, and you never really knew what it was going to cost until months later when the paperwork caught up.
My current provider advertised the opposite: the entire appointment is with the physical therapist, care can continue as long as I'm satisfied with the progress I'm making towards my objectives (essentially blending physical therapy and personal training at this point), and pricing is transparent (albeit much higher!).
I'm not looking for _exactly_ the same thing from a primary care provider, but I think there's some congruence.
For example, one way to handle the "bundled visit" situation you mention in the other comment is just to be up-front about it at the time of booking. Maybe have your staff explain that by default a routine annual only covers certain things, and let folks opt-in to scheduling the appointment as a "bundled visit", which also lets you set aside enough time to talk through the other issues.
Maybe even include a third option for an extra-long visit; I'd consider paying extra out of pocket for, say, a 90-minute appointment to sit with the provider and talk in detail about everything, instead of staring at a form trying to decide which boxes to check. I'm not the one who has the expertise to know exactly what's relevant to share.
Physical therapy has some quirks about it that make it not directly applicable, but it seems like there is a theme I'm seeing: helping people fight insurance to get them the services they need. Seems like part of the premium you were willing to pay came from getting personalized care, up to your comfort level.
I don’t know what the balance between “be successful” in your fledgling practice and *limit your practice* is … maybe this is more a large urban area thing, but one keeps hearing anecdotes from people whose GP quit taking insurance and went to the concierge model.
Funny you say that, that's about to be me! I just hope I find a way to keep my rates from being ridiculous, more of a direct primary care model. I don't want to be the discount concierge guy, but I think that by keeping overhead low people will actually get what they are paying for rather than paying for a fancy office, staff that just answers phones and makes appointments, etc.
It seems to me that to do private-practice, out-of-insurance primary care, one needs a clear "brand" or niche to aim for. Affordable low overhead is one, high-end boutique is another, holistic/alternative yet another; pediatrics alone would be yet another, given the emotional/high-urgency nature of that for parents and the high value of being very available to them in those first few years.
I'm a moderately low user of primary care and high user of specialist care, just given luck of the draw, and because insurance pays for everything, I can't imagine paying extra for primary care. I pay cash out of network for good psychotherapy, Chinese medicine, and physical therapy because the quality of in-network services for those isn't good or available.
I see an NP for my primary care, she's average, but I don't need her to be any better than that. I can't picture what would draw someone like me to a cash model of PCP. It seems like I would need to be someone with way more affluent taste than I have (and then you'd have to meet that taste) or because you're providing services that are quite distinct from in-network PCPs.
The people I know who have some extra resources to spend on health are spending it on psychotherapists, personal trainers, naturopaths/TCM doctors, or body workers of various kinds. The boutique doctors I know who seem moderately successful are doing things like holistic care for menopausal women kind of thing (ie, care that mainstream medicine is really not addressing). But my experience is limited.
This is a helpful perspective. I don't see a lot of people who don't use primary care (selection bias). A niche that mainstream medicine isn't addressing sounds like my cup of tea. Thank you!
Helping to navigate insurance is definitely a part of it, but clearly/transparently offering value-add services is useful too as a part of that personalized care.
Things I'd pay more for:
* More comprehensive routine annuals. I don't know what would be ideal here, but the current process can't possibly be optimal. Perhaps take not only my resting heart rate, but also get me running on a treadmill. Or assess flexibility through some yoga.
* Useful/actionable nutritional consulting after annual blood work, perhaps with follow-up.
* More time from the provider. This could be actual face-to-face time, or ensuring they have time to actually read my medical records.
You are correct that for a lot of people a routine annual is a waste of time. It is quite cost-effective when it works, though! I really appreciate that list; this is perfect.
In the US, my primary consideration is whether they accept my insurance. Second is how easy they are to schedule and work with.
One doctor I used for a year (before I moved) had absolutely amazing service. He had a website like "Doctorblackmountainradio.com" which advertised specifically many of the insurance carriers he accepted and what it covered with him (insofar as annual checkups, gym rebates, preemptive testing, etc.), so I didn't have to go through an incomprehensible system to understand if I was going to have to pay or not, or how much I would have to. He also had super transparent scheduling, where next day, or even same-day scheduling was an option.
The transparency was great, but better was that he actually made me understand how the whole thing works. Exactly why I should schedule yearly checkups and how I can do so without paying. What he was actually doing and what other options I had after receiving a checkup, and why I would choose any of them. I ended up paying out of pocket for some additional health check things that I was mildly interested in ahead of time, simply because he discussed with me for a bit. I assume he realized I was a young man in his 20s (and thus likely loosely aware of things like testing testosterone and other biomarkers), and explained some actual stuff about what metrics mattered, what were likely to be important to me, and what was almost certainly a waste of money. I don't imagine that's easily scalable, and he was a relatively young doctor so I imagine he hasn't yet been beaten down by the monotony of entitled/insane/normal patients.
I highly recommend (if you're in the US) to sign up for Zocdoc, and try to get a few of your existing patients (or just friends or family) make at least one appointment through there, and leave you a glowing review, as that's how I've found my primary care physician each time I've moved to a new area.
The system certainly grinds you down, but I think things are going to get better thanks to certain technologies. Yeah, a lot of those "wellness" biomarkers are useless, but I think there is a role for "remote patient monitoring" that's really underutilized. I have a Zocdoc, but they charge a ton ($50) for a patient who makes their appointment through their system. Making sure I can explain the system is important; that's something I really need to brush up on! Thank you.
I go to my insurance company's website and use the search tool to find out who within a geographically reasonable area accepts my insurance. Then with a handful of choices, maybe I look at a few online reviews to make sure I'm not accidentally signing up with someone terrible, but probably I pick the closest one.
Of course it may be different for other people, but my view of a primary care physician is pretty much like my view of an auto mechanic. I just want someone who can do the job and get me in and out efficiently. If you're looking for advice, I'd go with things like "it's very easy to make appointments!" and "They actually tell me how much money I am likely to be paying instead of grinning while we hit the insurance company roulette." Maybe, "The waiting room is very nice and doesn't blast me with Fox news."
Ah, thank you! Seems like it should be comfortable, stuff should just work, and it shouldn't involve surprise expenses.
How would you expect to see that last part signaled? My current academic medicine practice has a policy where if you bring up *anything* during your routine annual that isn't part of a routine annual (a refill, my toe hurts, etc.) then we hit you with a bundled visit and a copay. The policy is printed out and stuck on the inside of every room (print is small). I rarely get complaints, as the copay is usually less than $40, but it has always felt scummy.
I don't plan on carrying this policy over to my new practice, but other than the explanation I wrote above, how can I say "I won't do that"?
I want a doctor that will fight the insurance company where appropriate. (This was long before fexofenadine went over the counter.) My insurance company wouldn't cover fexofenadine, even after our doctor said "He's already taking loratadine, and it's not helping. His condition will likely require hospitalization without covering this medicine." (And we appealed that as well.)
I also would like to know what a doctor's position on placebos is. Given that 50% of doctors prescribe them, I'd really like to know if a doctor considers it appropriate or not.
I specialize in obesity medicine. Fighting for GLP-1 coverage is like 15% of my day (and that's with the heft of an entire pharmacy team helping me with them!)
Placebos are tricky. In the strictest sense, I don't use them. However, sometimes maintaining the therapeutic alliance means giving something, even if I don't expect it to work. Take Tessalon Perles, for example. They're probably useless, but when someone has a cough that I know will go away with time and they've already "tried everything™", then sure, I'll throw that at the issue, just to show that I care and take their complaint seriously. Tessalon Perles are approved by the FDA for cough, so it's not strictly a placebo, but the studies are so underwhelming that it's basically snake oil.
Now that I've seen this comment, I'm going to double down on my recommendation to just go all-in on the "be a one-stop-shop for wellness and GLP-1s" strategy. Part of becoming a concierge doctor is you are filtering out your patient pool for people who have enough disposable income to see you. They will either have good insurance, or enough disposable income to pay cash for GLP-1s. This means you won't spend nearly as much time fighting with insurance for coverage. My weight loss doctor helps patients order the name-brand stuff from Canada at ~half price of an American pharmacy if the insurance won't cover it. (For professional reasons, she won't recommend the compounded GLP-1s, let alone the grey market products.)
I'm quite surprised at this! Getting the meds from Canada seems like quite a trick, and it's probably not that risky to use the compounded versions of the medication (it also gets you access to agents like cagrilintide that just don't have a branded product yet). I certainly can't do that at my academic practice, but getting the meds from Canada seems like a fantastic and legitimate service. I do wonder how they do it.
I want to know about both of those. Reporting that you do prescribe medicines that you feel are statistically unlikely to be much better than placebos is a gesture of good faith in the intelligence of your patients. That may not be warranted, but if I was asking "who do I want to go to?" at least you're giving me the tools to decide.
Another thing I'm likely to want to know is "how good are you at diagnosing zebras?" (aka distinguishing between the normal "horse" problems, and the "oddball weird ones"). Certain people have odd genetics, and tend to have ... just plain Weird Things happen to them. If I was one of them, a written commitment to "willingness to reassess if treatment doesn't seem to be working" might be very valuable. (A friend of mine had what presented as a dermatological issue, but turned out to be an autoimmune issue. He got very, very, very lucky in his choice of dermatologist.)
A third idea: Talk about your "treatment team" and what you look for in a good specialist. What you think is a good doctor is a decent proxy for "what you aspire to be yourself." It also implicitly acknowledges that the GP's function is partially to be a gatekeeper, and that you're being paid to screen out the cardiologists (or whatever specialist you're talking about).
1) Yes, it's basically giving them the tools to help them get better, or at least not locking things behind the prescription pad for no reason. If someone came in and was truly immiserated by a cough, I'd break out the codeine if they were OK with the risks.
2) Zebras are tricky, but I've caught a few! I was able to dx a case of Brunner's syndrome from a dietary recall and a family history (his male relative committed murder), which is probably my proudest find. Working in the nutrition space you find lots of things that might be considered zebras but are actually fairly common (I'm looking at you, celiac).
3) A treatment team, now that's interesting. I did browse a website by a concierge practice that touted being able to call the head of cardiology at a nearby hospital, and I had wondered at how persuasive people found that story. Building a referral network is a part of the job certainly.
I think Sol Hando had a great reply about preemptively working with your patients to understand how to squeeze the maximum benefit from their insurance and not leave benefits on the table. That sounds great and benefits the both of you.
My current PCP shows no interest in signing me up for annual checkups, unlike my dentist who at the end of every appointment wants to make sure we schedule my next cleaning.
My experience in non-doctor business is that this sort of extra charge which comes as a surprise to the customers (whether it should or not ...) presents better if:
(a) You bundle it into the base price, and
(b) Then either give the customer an (unadvertised, but explained on the bill) $40 discount if they don't bring up anything or
(b1) just keep the money since you built it into your advertised price
Discounts are usually seen more favorably than extra charges (even if the total works out to the same).
Simplified pricing is often more desirable to customers than slightly lower on average pricing but with random variation. A lot of your patients might actually prefer (b1) even if they won't say so.
I can only imagine the baffled look of an American who, after a routine doctor's visit, gets a refund from that doctor. But that's good advice! I get really annoyed by my wife's OB, who sends inexplicable bills in the mail afterwards. Option B seems so fair and magical it must be illegal somehow.
I mostly disagree with Marcus, but your post does open with "In June 2022, I bet a commenter $100 that *AI would master image compositionality by June 2025*." (emphasis mine)
Strictly speaking, it hasn't mastered it, has it? It's just much better than it used to be, but even now it will probably fail if I give it 10 relationships to track (which isn't too crazy if I have a specific image in mind I want it to approximate).
I second this - if Scott's interpretation really is that the bet with Vitor was "whether AI mastered image compositionality" and not "whether an AI can solve a number of specific challenges relating to image compositionality", then he lost the bet, as it's trivially easy to write a prompt that AIs will fail at.
So either the bet was actually about the latter, much narrower claim, or Scott lost the bet.
In any case, the result of the bet tells us something about AI progress, but not much about what it was supposed to solve.
The bet had specific resolution criteria determined in advance that both bettors thought, at the time, would fairly represent the idea underlying the bet. It’s important to have these resolution criteria, or otherwise people who are motivated to interpret situations differently (e.g. people on opposite sides of a bet) may in fact do so. It’s possible to quibble now over what “mastering” image compositionality might mean compared to simply “becoming proficient” at image compositionality, or whether that’s what was intended in the first place. But the bettors agreed at the start of the bet that the resolution criteria would serve as a sufficient proxy for the broader purpose of the bet.
Operationalizing a belief is the practice of transforming a belief into a bet with clear, unambiguous resolution criteria. Sometimes this can be difficult, but there can be ways around some difficulties, as explained in Tricky Bets and Truth-Tracking Fields. The same challenges are present for prediction markets.
I recently moved back to my home state from college, and so lost a lot of my social contacts very quickly, but for my old friends here:
meet up once every month, maybe,
text a lot, but usually shallowly. in one case there's occasional in-depth relationship/money/life stuff (mostly hers; I'm quite stable in comparison), and in another, random bugs we see, or people watching.
also a lot of updating on mutual-acquaintances doings and sayings.
in person talks are more serious, and more rambly.
I wish I saw them more often, and had friends who shared more interests (sff, certain visual novels, kink - all topics I don't really have a reliable conversation partner for).
~45M, married, 10-year-old twins. Shortly after our kids turned one, I was laid off. We ended up having to leave our home city where we had spent most of our lives, and had intended to stay. We left behind basically an entire network of friends, relatives, acquaintances, people-I-sort-of-knew, people I sort-of recognized, etc.
My friend group back home has somewhat splintered along class, economic, and general lifestyle lines, as those of us with more education and income have grown and matured as people, while others are stuck in the same mentality of their 20s. It's been painful to become alienated from people I was once closer to. We have an ongoing text thread that started during the pandemic, but is largely limited to (what I consider) lame sports talk or the occasional mindless political meme from someone I have stopped expecting better from.
I don't want to sever that link, as weak as it is, but when I return home for visits I'm more selective about who I ask to hang out, often preferring one on one or small group dinners at nicer restaurants with the guys who are closer to me in outlook, income, interests. We talk about life challenges, work, parenting, investing, recent or upcoming camping or vacation trips, old girlfriends.
My long-time best friend lives many states away, is married but childless, and so has a much different lifestyle than I do. We talk every few months, text sporadically, and hopefully will get a chance to meet up for a ski trip this winter after many years of not visiting.
As a busy working parent of school-aged children, my local 'friend' group is a few other dads who I don't mind hanging out with for a hike or bike ride, or a burger and beer, but otherwise any free time I have I'd rather spend alone, and it's hard to develop friendships in those bite sized chunks of time. I do these social hikes or bike rides maybe once a month at most?
I do sometimes wish I had a more active and satisfying social life, but there are just so many factors that work against developing the type of closer friendships that were possible when I was younger, and I ultimately don't have much to complain about.
It depends on the friends. For our (my husband and my) closest friends, we see them 1-2 times a week. We play board games, eat together and chat - about work, mutual friends, family, religion and politics, whatever's on our minds. We don't text/email often outside of that, except to arrange things and/or for something particularly noteworthy. The case is similar for the friends I see only every few months (dependent on how much they like board games!).
We also have a group of friends we watch anime and movies with, in which case it's usually most of a day, hosted by one of us. We watch the films and eat and chat, usually about geeky things. That takes place once every couple of months or so.
In both cases, I enjoy the time with people and look forward to the arranged dates, but it is also nice to come home (or to wave goodbye) and relax afterwards.
I recently moved to a different country because of work, so my situation probably doesn't generalize.
> How often do you guys meet?
About once every 2 weeks, either I go back home, or friends come here, usually for a weekend. Also I sometimes call friends to talk about what's up (about once a week). Note: These numbers are for all friends together; I meet each individual friend about once every 1-2 months.
> What do you guys do when you meet up?
When they come here, we do vacation-stuff (hiking, boating, museums etc). When I go there, we usually cook and watch anime/movies or we go to concerts/events.
> What are you talking about?
when we are together we talk about hobbies (e.g. I do 3d printing and show them some recent projects and failures) and work most of the time. Sometimes a friend talks about something no one else cares about and then we carefully tell them to shut up.
> What are you texting about?
weekend-plans and memes
> And how does all of that make you feel?
good but exhausted. I have to force myself to socialize, because when I isolate myself for too long, I become bitter and angry.
I've tried understanding the "adding more lanes to a road doesn't improve traffic" argument that many urban planning types and fans of public transportation like to advocate, but I just don't get it.
As I understand the argument: If you are a frustrated commuter with an hour long drive, you may want an additional lane added to your route to handle more traffic. But this won't actually improve your commute, because the added ability to handle more vehicles will cause people who take other roads to divert to the widened road.
I get how this wouldn't improve my individual commute. But from a utilitarian perspective, isn't the commute of the people switching to the widened road improved? Surely *someone's* commute got better, or else no one would switch routes.
> Surely *someone's* commute got better, or else no one would switch routes.
It's 2025. You're ascribing agency to a decision most folks let their satnav make. To keep the route-finding cost tractable, meanwhile, the satnav will prioritise bigger roads unless the trip is very short.
I agree that the “induced demand” argument can be overstated, like a mantra among some urbanist types. At its core, it’s just a standard demand curve story: lowering the time cost of driving (by adding lanes) makes more people want to drive, which can eat away at the initial time savings. This can be short run (shifting people from other roads) but there are also long-run effects (where you live, work and your general transport habits). But that doesn’t mean no one benefits—some people do shift routes or travel times, and those decisions usually reflect an individual gain.
That said, it’s possible to imagine situations where traffic gets worse overall, especially over the long term—because on top of the demand curve story above, road expansion can make alternatives worse. Expanding supply in the 'driving' market can have secondary effects in the market for alternative transport options. Turning a modest arterial into a six-lane highway might make walking or cycling feel unsafe or unpleasant, or undermine the viability of nearby public transport. That can push even more people into cars, adding further to congestion.
So while the effect isn’t automatic or uniform, the concern isn’t just about traffic on a single road—it’s about how network-level changes shape travel choices over time.
It can also be worse from a utilitarian perspective:
- Option 1 (example with public transport): the old road was congested with 10k commuters (39 min each); public transport served 20k commuters (40 min each). The widened road is now congested with 15k commuters (39 min each); public transport serves 15k commuters (40 min each). And public transport only runs 75% of the trains to account for reduced demand, so they are as packed as before.
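For what it's worth, here's a quick back-of-the-envelope check of that example, using only its own figures (a sketch, not a claim about any real network):

```python
# Back-of-the-envelope check of the example above, using only its figures.
# Aggregate in-vehicle person-minutes, before and after the widening:

before = 10_000 * 39 + 20_000 * 40   # road + public transport = 1,190,000
after  = 15_000 * 39 + 15_000 * 40   # road + public transport = 1,185,000

print(before, after, before - after)  # a saving of 5,000 min, ~0.4%
```

So on in-vehicle time alone the widening is very nearly a wash; whether it nets out worse depends on the costs the example leaves implicit, like longer waits and unchanged crowding after the 25% service cut.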
I think that a lot of urban design theory is best thought of in terms of "how to stop the non-elite classes from being so uppity" rather than how to actually make things better for them.
There was a time when the non-elite classes were swanning around in big V8s with tail fins and living in massive houses in the suburbs. This was unacceptably uppity, so the goal of the next sixty years has been to put them back in their place -- living in apartments, commuting on public transport, and always competing against the third-world hordes.
The exact content of the excuse used not to build more roads doesn't really matter, all that matters is that we don't build more roads.
You'll notice that the people who are keenest on the "induced demand" argument as applied to roads will almost never apply it to housing; they claim that cramming more people into our cities will reduce housing prices but it always seems to increase them.
I question the premise of this argument. The idea that America’s “non-elites” were once living large with V8s and big houses, only to be pushed into apartments and buses doesn't really hold up.
In reality, more Americans (especially working- and middle-class) own cars, drive farther, and drive larger vehicles today than in your 'big V8s with tail fins' era (the 50s/60s?). Trucks and SUVs dominate the market. Car ownership is nearly universal (in 1960 only 20% of households had two or more vehicles: https://www.bts.gov/archive/publications/passenger_travel_2015/chapter2/fig2_8)
And “massive houses in the suburbs”? New homes today are twice the size of the average house in the 1950s, while living space per person has increased even more.
I think that it is best understood by elites trying to fix one problem and creating a bunch more.
Your non-elites stop having kids because they're more educated and wealthier. Oh no! So you raise immigration rates to ensure your country is still growing. This further raises housing pressure, so you dial up immigration rates even more.
Meanwhile, your non-elites are getting paid too much, and you are a kind-hearted elite and approve of this, until one day you notice it's driving up inflation. Oh no! So you create a reserve bank and instruct them to fiddle with interest rates until fewer people have jobs and unemployment goes above NAIRU (non-accelerating inflation rate of unemployment) again. This causes the cash rate to drop to zero for decades, which triggers a huge wealth transfer as all the other elites start buying up assets using their free cash, and gradually results in a wealth transfer from young to old in the form of mortgages.
Your non-elites can no longer afford to live near their jobs. It's hard to get a job, because you deliberately raised the unemployment rate, AND it's hard to buy a house, because you deliberately lowered the inflation rate, so they're living an hour away from their workplace. This has climate implications, and also in your tiny wizened elite heart you feel a little twinge of sadness for them. Oh no! So you start building more lanes on your freeways and/or introducing mass transit systems. Now you have an escalating series of carbon-burning clogged arteries and a bus system that everyone hates because it goes at 2.5x walking speed and involves 3 changeovers to get anywhere.
You go back to the drawing board and start talking to your elite friends about the problem, but none of you are willing to lower unemployment and raise the inflation rate, so the best you can come up with is 15 minute cities. The proles think this is a conspiracy to prevent them driving anywhere, which in some ways it is, since you aren't providing an economic environment with lots of jobs so they can get one close by OR fixing the burgeoning house prices, you're just magically hoping they can find work nearby and building more public transport for their hour-long commutes. Oh no! You try to explain this to them but they think you're talking down to them, which you are, and they elect a wave of populists, who are still elites and will thus also fail to fix the problems you caused.
The answer involves game theory! A very handwavy version is that people end up in a prisoner's dilemma. Everyone can end up better off if you remove the defect-defect equilibrium.
See also the quite fun Veritasium video about it: https://www.youtube.com/watch?v=-QTkPfq7w1A (it looks like a physics video about springs, but just wait -- it all ties together!)
When I was with an organization planning freeways, our rule of thumb was along the lines of "If you build a new freeway connecting X and Y, expect it to fill up within 5-10 years if there are sufficient jobs at one end and housing at the other." This *almost* always worked. When it failed, it was usually because either the jobs or the housing were being oversubscribed. (E.g., if you've got two freeways serving the same housing, each person can only use one of them to commute.)
Note, however, that this assumes that population in the area is growing. We found that to be true, but it isn't true at all times and places.
You might be interested in Braess' paradox (https://en.wikipedia.org/wiki/Braess%27_paradox), which is a seemingly paradoxical result that adding more links in a network (roads in a road network, power lines in the electricity grid, etc.) can actually slow the overall flow. The Wikipedia article gives an idealized example where adding a single new road can increase everyone's travel time from 65 minutes to 80 minutes (so, under some idealized assumptions, literally everyone's commute is worse in this model). Note that this is not the same thing as induced demand (which is a separate concept) - the result holds true even when the total number of travelers remains fixed.
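For the curious, the arithmetic of that Wikipedia example is easy to reproduce (a sketch; the 4,000-driver figure and cost functions are the ones the article uses):

```python
# Arithmetic of the Braess example from the linked Wikipedia article:
# 4000 drivers go A -> B via two routes, each with one congestible leg
# taking T/100 minutes for T users and one fixed 45-minute leg.

N = 4000

# Without the shortcut, drivers split evenly by symmetry:
t_without = (N / 2) / 100 + 45   # 20 + 45 = 65 minutes for everyone

# Add a zero-cost shortcut joining the two congestible legs. Taking both
# congestible legs is now individually optimal no matter what others do
# (T/100 <= 40 < 45), so in equilibrium all N drivers do it:
t_with = N / 100 + 0 + N / 100   # 40 + 0 + 40 = 80 minutes for everyone

print(t_without, t_with)         # 65.0 80.0
```

The selfish equilibrium with the extra link is strictly worse for every driver, even though the total number of travelers never changed.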
I think the 'induced demand' claim usually cited probably isn't true.
If we want to use supply and demand language, there's a much simpler model that fits the data just as well: Demand for driving is highly elastic. If that's true, a rightward shift in the supply curve (new lanes) will mostly lead to more quantity supplied, without much drop in price (time spent in traffic). That seems to explain the observation that existing commuters don't benefit all that much from road expansion. But as you say, someone absolutely still benefits - all the people making the new trips.
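To make that concrete, here's a toy linear version of the claim (a sketch; every number is an assumption, chosen only so that demand is highly elastic):

```python
# Toy linear model (every number assumed). "Price" is minutes of delay;
# demand for trips is nearly flat in that price, i.e. highly elastic.

def equilibrium(K, A=100_000, e=5_000, c=0.01):
    # demand: q = A - e*p   (e large => elastic demand)
    # delay:  p = c*(q - K) (linear congestion above capacity K)
    # solving the two together:
    p = max(0.0, c * (A - K) / (1 + c * e))
    q = A - e * p
    return q, p

for K in (50_000, 60_000):   # add 20% more capacity
    q, p = equilibrium(K)
    print(f"capacity {K:,}: {q:,.0f} trips, {p:.1f} min delay")

# Delay drops only ~2 minutes while ~9,800 of the 10,000 new capacity
# slots become new trips: incumbents barely gain, but each new trip is
# still someone's gain.
```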
More speculatively, I wonder if another issue is mis-identification of the binding constraint. Often when I get stuck in highway traffic, the underlying issue is that exits are overwhelmed, and the line doesn't fully clear each light cycle. If you expand a highway by more than you expand the connected roads, you may not have addressed the underlying scarcity. I can't be the first person to have thought of this, but I don't think I've ever heard any of the urbanists bring it up either. If anyone knows of any reading I could do on this, I would be interested!
What specific claim do you think isn’t true? Because I’ve generally understood induced demand in the same way you describe in your second paragraph—that is, as a demand response to a kind of "price," where the price is time spent in traffic (assuming no time-of-use road charging). The demand response can be short-run (e.g. commuters switching from a train, or choosing to drive at a more convenient time), or long-run (e.g. changing habits, buying a house further from the city, or getting a job closer to home—all of which increase overall transport demand).
The typical urbanist/public transport advocate position is that investing in modes that don’t contribute—or contribute much less—to congestion is more effective than it might seem at first glance. Instead of the leaky bucket of road investments, where new traffic erodes any time-savings for existing users, better public transit can improve outcomes for both transit and road users. Faster, more frequent service benefits transit users directly, while those who shift modes help reduce congestion for everyone else.
"I agree that the “induced demand” argument can be overstated, like a mantra among some urbanist types. At its core, it’s just a standard demand curve story: lowering the time cost of driving (by adding lanes) makes more people want to drive, which can eat away at the initial time savings. "
I think we're in agreement about the underlying dynamics (a rightward shift in supply moves you down the demand curve).
I've always taken the 'induced demand' people to be making a much stronger claim, that essentially a rightward shift of the supply curve leads to a rightward shift of the entire demand _curve_, not merely quantity demanded. This is what I'm skeptical of.
If that's not what people mean by induced demand, then I've misunderstood, and I don't have any quarrel with the idea other than that it's silly to make up a new phrase to describe a bog standard, first week of Micro 101 supply shift.
Edit:
Just to be explicit, I'm not trying to make any broader criticism of urbanism or expanding non-congestion contributing modes of transport, both of which I'm pretty on board with. I'm making a more narrow (possibly pedantic) point about terminology and using the right model for clarity of thinking and communication.
1) On the economics: A standard micro 101 explanation can miss an important dynamic - the long-run demand curve. Over time, this can indeed cause a rightward shift in the short-run demand curve.
For example, a long-run demand response is that a big new car-dependent suburb is made possible and built at the end of the new motorway; here the short-run demand curve has shifted right and, really, it's probably permanent. In fact, 'induced demand' has been defined by some as consisting of only this long-run effect of a supply expansion, with short-run effects (e.g., people changing routes) called 'induced traffic' instead: https://journals.sagepub.com/doi/10.3141/1659-09
Even the short-run demand curve is a bit of a 'weird' supply and demand situation. The Y axis isn't price but something like 'time wasted/inconvenience'. The 'price' stays at zero much of the time, until a threshold is reached and there is congestion.
You can explain this phenomenon using existing, normal economic language, as I have here, but it is more complex than a 'bog standard, first week of Micro 101 supply shift'.
2) On terminology/clarity of thinking:
I think 'moving along the supply curve' isn't the intuitive way the public thinks about traffic, especially since there are no prices involved. They simply see the road shoulder and imagine: if that were another lane, I could just skip the queue. Induced demand is a catchy term that doesn't require graphing out supply and demand lines.
Relatedly, 'induced demand' is a concept commonly deployed against the 'infrastructure engineering' paradigm, not so much against economics. There you'd assess 'maximum demand' and build your network to accommodate it, e.g. for water pipes or electricity distribution lines; induced demand isn't really a relevant concept for those, but it really is for roads.
I don't know about other areas, but where I worked the local roads and the freeways were planned and built by different groups, which didn't always talk to each other even when they were friendly. And they weren't always friendly.
"Doesn't improve" is too strong, the right answer is "may or may not improve, depending on the details" - but no, it is not the case that net utility necessarily goes up. Here's a toy counterexample.
Consider a transportation network from A to B with two parallel routes and a fixed number of passengers T. Route 1 takes 1 hour no matter how many people take it (so it might be, for example, a subway route with capacity well over T). Route 2 takes a+b*T(2) time, where T(2) is the number of passengers who decide to take route 2 (a congestible highway). At equilibrium passengers will be indifferent between these, so T(2) will be (1-a)/b, with a travel time of exactly one hour. (Suppose, for convenience, that our constants are such that this is less than T.)
Now widen Route 2, so that it takes time a+b'*T(2). Now T(2) at equilibrium is (1-a)/b', with a travel time of ... exactly one hour.
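A quick numerical check of this model (the constants below are assumed, chosen so the equilibrium stays below the total population T):

```python
# Numerical check of the toy model above, with assumed constants.
# Route 1 (transit) always takes 1 hour; route 2 (highway) takes a + b*T2.

a, b, T = 0.2, 0.001, 2000   # free-flow 0.2 h, congestion slope, population

def equilibrium_T2(a, b):
    # Passengers shift until route 2 matches route 1's fixed 1 hour:
    # a + b*T2 = 1  =>  T2 = (1 - a)/b
    return (1 - a) / b

T2 = equilibrium_T2(a, b)        # 800 drivers; time = 0.2 + 0.8 = 1 hour
T2w = equilibrium_T2(a, b / 2)   # widen: b halves; 1600 drivers, 1 hour

print(a + b * T2, a + (b / 2) * T2w)   # 1.0 1.0 -- nobody's trip got faster
```

Everyone still spends an hour; the only thing the widening changed is which route they spend it on.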
You're assuming that all road trips are demanded inelastically. Some road demand is inelastic, while some is quite elastic. A commute is towards the inelastic end of the spectrum, although it can be shifted in time (e.g. a white collar worker working from 10:00 AM to 6:30 PM to avoid the worst of rush hour). However, there are extremely elastic trips: For example, someone might prefer a further restaurant or store over a closer one, but not so much that they're willing to make the trip if there's traffic.
If all demand for driving were perfectly inelastic (e.g. people only travel to commute, and nobody has any flexibility over when or how to do that), adding more lanes really would reduce traffic. But it doesn't, because there's a whole galaxy of less productive trips that are only demanded if there's not too much traffic.
I think what I don't understand is something like:
For induced demand to happen, there has to be some set of people who wouldn't tolerate the road before because it had too much traffic, but will tolerate the road now.
But the induced demand claim is that the road has exactly as much traffic as before.
So why would people who wouldn't tolerate that level of traffic before tolerate it now?
This has a lot in common with your argument in the 'Change my mind: Density Increases Local But Decreases Global Prices' post.
There, basically, more housing induces demand for housing, meaning over the long term building more translates to higher prices. Similarly, the induced demand people can point to the Katy Freeway and credibly say it really looks like expanding road capacity ends up creating ever worse traffic (and making fodder for 'just one more lane' jokes).
The counter is that the only mechanism for new construction to attract new people is for prices (either house prices, or wait-time 'prices' in congestion) to come down. You have two effects: in the static demand model more supply leads to lower prices, but in the dynamic model demand can increase such that you ultimately end up with higher prices than you started with.
In each case, the answer seems to depend on the exact time horizon and situation you are talking about. Building a highway in the middle of nowhere, or apartment blocks in an empty desert doesn't automatically lead to higher prices.
I think this is Zeno's paradox type thinking. The marginal driver who wouldn't tolerate that level of traffic plus whatever they personally would contribute to it has a higher tolerance for traffic on the bigger road, because their personal contribution is proportionally smaller.
I think people make a stronger claim than is backed by evidence. At first, a new road improves travel times; eventually demand expands to fill the road, bringing travel times back to what they were before. From Wikipedia:
> A 2004 meta-analysis, which took in dozens of previously published studies, confirmed this. It found that on average, a 10 percent increase in lane miles induces an immediate 4 percent increase in vehicle miles travelled, which climbs to 10 percent – the entire new capacity – in a few years.
Demand for roads tends to increase over time, so you'd end up with worse travel times a few years later despite having built the road. But this is paradoxical only if you ignore several years of growth in the city when making your comparison. Maybe there's an example where induced demand really erased travel-time gains almost instantly, but I have not seen it described.
The word "exactly" is probably doing too much. A road network curve is somewhat like a battery discharge curve in that there's a huge stretch in the middle in which huge gains or losses in bandwidth are barely noticeable in latency. The most famous example of this is London's roads. London has a very comprehensive subway system and a very large number of Taxis. It's observed that a typical taxi trip will be not that much faster than that same trip by tube. This is because taxis are subject to negative feedback: the more of a durable advantage taxis have over the tube, the more people will prefer them, the more traffic there is, until taxis are only just faster than the tube.
So long as some portion of the transit market is subject to a feedback mechanism, the invisible hand of the market can exploit incredibly small differences to use all the available bandwidth.
Jevons' paradox (originally an observation that the cheaper coal got, the more uses people found for it, and overall usage took off) is somewhat related and might help you appreciate the induced demand phenomenon from a different angle.
I'm not sure how relevant Jevons's paradox is here (another commenter pointed out that the more directly relevant paradox is Braess's, though apparently it was first discovered by Pigou of Pigouvian Taxes fame) but I did a writeup of Jevons's paradox if anyone's interested: https://agifriday.substack.com/p/jevons
The main problem with adding more lanes isn't the inefficiency of more lanes, it's the inefficiency of *cars*. Cars take up a lot of space per person they carry. So if you add an extra lane, take the length of that extra bit of road and calculate how many cars can fit in there, then how many people fit in those cars. It's really not that many!
Compare to a hypothetical universe where there are no cars, only buses. Suddenly, that same amount of extra road can serve many more extra people. And that's discounting the fact that, really, the argument for adding an extra lane would be moot, because buses are efficient enough that there probably wouldn't be congestion in the first place. And buses are still pretty inefficient compared to other forms of public transportation like trains and subways.
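Here's the rough arithmetic (a sketch; the vehicle lengths, gaps, and occupancy figures are my assumptions, and this counts static storage on the lane rather than hourly flow):

```python
# Rough static arithmetic for the claim above. All figures are assumptions
# (typical vehicle lengths, following gaps, and occupancies), and this
# counts how many people a kilometre of lane can hold, not hourly flow.

lane_m = 1000                                   # one km of new lane

car_len, car_gap, car_people = 4.5, 2.0, 1.5    # metres, metres, persons/car
bus_len, bus_gap, bus_people = 12.0, 8.0, 50    # a reasonably full bus

cars = lane_m / (car_len + car_gap)             # ~154 cars
buses = lane_m / (bus_len + bus_gap)            # 50 buses

print(f"{cars * car_people:.0f} people/lane-km in cars")    # ~231
print(f"{buses * bus_people:.0f} people/lane-km in buses")  # 2500
```

Under these assumptions the bus lane holds roughly ten times as many people, which is the gap the argument is pointing at.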
So that's the problem. Yes, adding more lanes did technically marginally improve the lives of some number of people, even if commute times are still the same. But that number is very small compared to how many lives could be improved if the same amount of money were invested in public transportation. And that's before even talking about the other effects that car dependency can have on cities: pollution, urban sprawl, accidents, etc.
> But that amount is very small, compared to how many lives could be improved if the same amount of money was invested in public transportation.
No matter how much you spend on public transport, it's not going to stop in my front driveway at the exact time I'm ready to leave, nor is it going to go directly to my destination.
First of all, I question your premise that any improvements to a form of transportation are nil if that mode of transportation does not pick you up right at your doorstep at the exact moment you want.
Second, even if you would never, ever use public transportation, you should *still* prefer more public transportation over more lanes, because again, that's just objectively more efficient. Many, perhaps even most people are not that inflexible, and will pick the public transportation option if it's available, convenient and cheap. Better public transportation = less congestion, including for people who are still in cars for whatever reason they might have, be it necessity or preference. Compare the experience of driving in Amsterdam and in LA, which one do you think is the more enjoyable drive?
Amsterdam to LA is a bad comparison, Amsterdam is a million people to LA's thirteen million, of course LA sucks, every city with 13 million people is terrible due either to excessive sprawl or excessive density and usually both.
We should compare driving in Amsterdam to driving in a US city with comparable population, like Omaha.
Seoul. Seoul is a good comparison. I lived there for over 4 years and I never felt the need to drive. I knew people who were driving there and they seemed not to ever face major issues with congestion, in fact I do not remember ever witnessing serious congestion except on lunar new year and Buddha's birthday.
I mean, ok, sure! That seems fair enough. How's *that* comparison, then?
A cursory search for "how's driving in Omaha" certainly doesn't make it sound pleasant. Almost every single result in the first page is negative! (And none are *positive*, there's just some neutral, not experience-related results).
Honestly I've never driven in Omaha, but I've never driven in Amsterdam either.
Googling for "driving in Amsterdam" mostly brings up warnings not to bother as well.
Looking for cities where "driving in..." actually gives positive results, all I could find was Canberra, which is less than half the size of Omaha or Amsterdam. Ultimately once a city gets beyond about half a million people it's going to suck to get around, and we need to stop building such huge cities.
Buses are more space-efficient in terms of lanes if they're going directly where you want to go (or at least as directly as the road network layout permits) and they're running full or close to full most of the time, or at least at peak times when road capacity is the bottleneck. If they're running half empty, that cuts their efficiency advantage by a fair amount. And if the routes meander about town, or if you need to travel a fair distance out of your way to a hub to make a transfer, that cuts their efficiency advantage still further. Buses still probably come out ahead in all but the most pathological cases, but it's probably more like a 2-3x capacity improvement rather than the 4x numbers I've seen from very pro-transit sources, which ignore route efficiency and assume a steady stream of buses running at 100% capacity.
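In sketch form, that discounting argument looks something like this (the 4x starting point is the figure quoted above; the haircut factors are assumptions for illustration):

```python
# Sketch of the discounting argument above. The 4x starting ratio is the
# pro-transit figure quoted in the comment; the haircuts are assumptions.

ideal_ratio = 4.0    # bus vs car lane capacity: full buses, direct routes
load_factor = 0.60   # assumed: buses run ~60% full on average at peak
directness = 0.85    # assumed: meandering routes/transfers add ~18% distance

effective = ideal_ratio * load_factor * directness
print(f"{effective:.1f}x")   # ~2.0x, i.e. within the 2-3x range suggested
```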
That's fair. That's the reason busses probably shouldn't be the backbone of public transportation anywhere. They should be more of a last mile solution.
And really, most of the times you don't *want* buses to be running at full capacity. You want some slack to allow for random peaks, and also just to make the ride more comfortable overall. A bus ride filled to the brim can be quite an unpleasant experience.
This isn't the most common argument, but one issue is that in many places the bottleneck isn't an expandable main road. E.g. if you're driving along an intercity highway but the city you're driving to has too many congested small roads to handle the traffic, you'll (at best) get a bit further down the highway before getting stuck waiting to enter the city, and (at worst) have to wait even longer, because the extra highway lane convinced more people to drive to it while the actual bottleneck (city streets) can't handle more traffic. And city streets are often impractical to expand.
(This is the issue with the road I've been stuck on most, between Jerusalem and Tel Aviv. Adding a lane to it wouldn't really help, especially since the topography probably doesn't allow that in the tight sections anyway).
More generally, the space needed for cars scales superlinearly (somewhere between linearly and quadratically) with the number of vehicles, as adding highway capacity requires multiple road expansions within the target cities, more parking, and so on. It's not (usually) strictly true that adding a lane makes traffic actively worse, but it does in general help much less than you'd expect (since a congested highway is a symptom of an overburdened system, and you may not be able to add capacity to the rest of it).
Near me there is a junction between two highways. You can take an offramp from one highway that merges into the other highway's left lane.
At some point somebody noticed that the ramp was wide enough to fit another lane. And it was divided. Now you had two lanes on the off/onramp that simultaneously merged into the same one lane of the target highway. The target highway was not expanded; you just had two ramp lanes merging into each other immediately as they merged into the target lane.
This caused truly incredible traffic jams.
The problem was ultimately solved by eliminating the extra lane from the ramp.
It was a pretty stunning example of lane expansion not always being a good idea, but as you note, I don't think this is the kind of thing people are generally talking about.
There are of course plenty of examples where adding extra capacity to roads in the right place has fixed a traffic problem and that traffic problem has never come back. These examples are uninteresting and don't get discussed much, but I can think of a few that I've seen locally over the course of my lifetime.
I have seen examples of exactly this phenomenon at various times over the years as Chicago's expressways have been rehabbed/reconfigured/etc. It can be counterintuitive and maddening how much a seemingly-small change in lane configurations creates new persistent bottlenecks.
As I understand it, yes, individual participants can benefit in the short run, but the theory says that total demand goes up - people move further into the suburbs, stop using public transport, etc. - until eventually the new equilibrium is the old equilibrium except with an extra lane.
The usual term is "induced demand". After equilibrium is achieved, it need not be the case that anyone's commute got better, but it must still be the case that someone's *life* got better.
That person might be someone who moved to the region after the lanes were expanded, making the question of whether you should vote for local lane expansion potentially hazy.
So apparently in New York State, where I live, a huge amount of elementary-level education, including assessments, is done using Chromebooks. I have children entering elementary school soon and, bluntly, I am convinced that this is a *terrible* idea and want them to be using physical books and paper for everything short of writing papers at the high school level. (Where they should still be using physical textbooks).
Does anyone have experience with or awareness of public school districts in New York or neighboring states that *don’t* use laptops at the elementary level?
As someone who had the Chromebooks for the end of my public education (my school district got them very early on), I have some firsthand examples of why they are not great.
Very slow. I regularly spent ten minutes waiting for things to load. This is worse than ten minutes of lost time, because in that amount of time I would take out a book and lose any focus on and interest in the activity.
They're very locked-down, so you can't download any software on them. That means if it's not made by Google or available online, you're out of luck. This severely restricted options for teaching computer programming.
The evaluation software was buggy, slow, and deeply frustrating to use. I don't think it cost me very many marks, but if my teacher was less willing to help it easily could have.
The usual classroom-management and productivity problems with letting everyone use screens. Certainly affected me sometimes, although selfishly, other students playing online games didn't distract me the way having them talking in the back of the class did.
Privacy stuff. It turned out they were monitoring all our activity but didn't tell us. As far as I know they only used it to counsel kids who they thought were trying to kill themselves (i.e. were researching suicide in social studies class, or writing a story with a murder in it), but it still felt very violating. (Although having an assumption of privacy violated in a relatively safe setting probably brought my paranoia closer to healthy levels, so it might have been good after all.)
Were these the sort of reasons you think it's a terrible idea? Or do you have other concerns?
Mostly I think students are distractible and don't learn effectively from screens and that they displace healthier, more engaging and appropriate forms of learning and socialization--so chiefly the classroom-management and productivity problems alluded to. The other stuff sounds bad too, however -- thanks for sharing your experience.
There's various evidence, of varying reliability, that on-screen learning is less effective. AFAIK, there's no explanation that's both consistent and convincing, so there are probably several contributing factors that vary in their significance.
OTOH, they're probably going to need to deal with learning stuff off a computer as they grow up, so perhaps it's justifiable. (i.e. part of what they're learning is learning to learn from a computer.)
"Using this proprietary software requires accepting an End-User License Agreement with a private entity, the terms of which, on behalf of my child, I do not accept. Now, we are still legally entitled to our taxpayer-funded public education. How do we proceed?"
It's high time a family came along that's ornery enough to make this argument. Perhaps reach out to the Electronic Frontier Foundation for advice.
Yeah this is the annoying thing; anyone with that kind of money will prefer to just opt out of the system by switching to private schools. This is where public advocacy orgs like the EFF come in.
This is kind of a life hack, but assuming you are in public school and the school has a requirement to educate your child, if your child abuses the Chromebook and gets it taken away, the teachers will find a way to fall back to pencil and paper.
My concern is that while I suspect we could try to get our kids away from laptops (e.g., not agreeing to whatever school policies are re: Chromebooks), they would just get no attention while the teachers focused on students with more pliable parents who will be sat in front of laptops.
"Suppose you took 10,000 optimally selected people and dumped them into a region that had adequate forests, fields, and mountains for mining iron and coal, all in a 100 mile radius
They start with only 1 season of food supplies
How fast could they bootstrap to tech circa 1900?"
Answers in the comments ranged from 2 years to 10k years, which both somehow seem plausible to me. It seems like thousands of technically capable people should be able to get a lot done in a few years, but then again there must have been some major bottlenecks in the process, given that it took thousands of years IRL.
A lot of answers got sidetracked by whether they'd just die from hunger or exposure, so say they start with seeds to plant and the climate is mild enough that the community is guaranteed to survive if they dedicate at least 90% of their man hours to farming and shelter in the beginning and become more efficient as their tech advances from there.
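For what it's worth, the huge spread of estimates makes sense if you think of the problem as compounding surplus labour. A toy model (every rate below is invented; only the 90%-of-man-hours-farming starting point comes from the prompt):

```python
# A toy of the bootstrap dynamic (all rates invented; only the "90% of
# man-hours farming at the start" figure comes from the prompt). Surplus
# labour is invested in tools, and better tools free up more labour.

farm_share = 0.90   # fraction of man-hours needed for food/shelter
tech = 1.0          # productivity multiplier of the current toolkit

for year in range(1, 31):
    surplus = 1.0 - farm_share
    tech *= 1 + 0.15 * surplus     # assumed return on tool-making effort
    farm_share = 0.90 / tech       # better tools -> fewer farming hours
    if year % 10 == 0:
        print(f"year {year}: {farm_share:.0%} farming, tech x{tech:.2f}")

# Surplus compounds slowly at first and then accelerates, which is one
# reason honest estimates for the whole project can differ by orders of
# magnitude: everything hinges on the assumed returns to early surplus.
```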
Considering that even '1900s' tech needs things like copper, rubber, zinc, tin, gold, platinum, sulphur, phosphorus, chlorine, iodine, etc. etc., you're going to need mountains with more than iron and coal.
I think you could put together some kind of civilization (although goodness knows where you'd find enough people with experience in bloomery furnaces, just for one example), but barring access to the outside world I think it tops out at a rather wonky, impoverished iron-age settlement and then eventually collapses once the forests are all cut down and the fields exhausted.
A lot of this depends on what "tech ca. 1900" means. If it means having everything we historically had in 1900, from telescopes to telephones to torpedoes, that's almost certainly going to have to wait until you've built up a population of millions, just to support all of the necessary specialties. It's much more practical if the victory condition is "has most but not all of the stuff you'd expect to find in a town of 10,000 on the frontier of a ~1900 civilization"
Even for that, you're going to need way more than just forests, fields, iron, and coal. There are at least dozens and probably hundreds of distinct minerals you're going to need, and there's almost certainly no place where they are all found within a 100-mile radius. So we're either imagining a specialty "arena" that was set up for just this purpose, or we're going to have to give these people a continent - and unless they also have a map, things are going to be slowed down by the need to explore a continent.
You're probably also going to need seeds for a decent supply of modern-ish crops, because speedrunning the selective breeding of e.g. maize from its neolithic ancestors is not going to be fast even if you do know exactly what you are trying to do. Or else you're going to have to figure out how to make a "ca. 1900" civilization whose cuisine is based entirely on unmodified natural foodstuffs, and I'm not sure how plausible that is going to be.
Assuming suitable starting conditions, the first priority is going to be getting some sort of agriculture up and running, unless the forests and fields are fecund enough to support a population of 10,000 hunter-gatherers in close proximity.
You *may* also need to establish a stone toolmaking culture as an intermediary; it's going to take a while before you get metals of any sort. Which means you need a suitable sort of stone in your arena; most common minerals are not at all suitable.
From there, pick the best toolkits you can build from scratch (maybe with stone tools) in every critical area, and pursue those directly and vigorously. It's going to look kind of silly hammering out your first ingot of wrought iron on a stone hammer and anvil, and you're going to want to make an iron hammerhead and anvil ASAP, but it's probably doable. But there's also figuring out what sort of pottery and textiles you're going to make to meet your immediate needs, and whether your initial construction material will be wood or brick or stone. Get to work on all the necessary things, without too much duplication of effort.
After that, make a complete list of all the technologies you're going to need to declare victory, and set up a minimalist tech tree that gets you to each of them. Some of them you'll be able to make directly with your initial toolkit; for others you'll need to make the tools to make the tools, but even there you should be thinking in terms of leapfrogging centuries or even millennia of slow historical progress.
With only 10,000 people, you may not be able to make steady progress on all fronts, so consider e.g. having half your surplus labor for three months devoted to building a glassworks, which will then be turned over to maybe two master glassmakers and three apprentices. Or just run at scale for a few months to make all the glass a population of ten thousand will need for a decade, and then mothballed.
You're probably going to want to develop some sort of paper-equivalent early on, and have everyone write down everything they can about everything they know. That's going to be vital if this project stretches on for more than a generation, but people can forget a lot on the timescale of even a decade or two, and it's sometimes more efficient to say "go look it up in the library" than to pull one of the few people who grok Thing X away from their work to explain X to someone who kind of sort of needs to understand part of it.
Finally, social organization is going to be vital; 10,000 people is too many for rule by informal consensus. For best performance in a few-decade sprint to victory, I'd suggest something centralized and hierarchical but not inflexibly so, maybe a sort of feudalism. There's almost certainly a multiplicity of organizations that would work well enough, but you need to pick one and get complete buy-in from the start. And your "optimally selected" people need to be selected for compatibility with whatever you choose - don't put a bunch of wannabe democrats, capitalists, or socialists under a feudal lord, and in general get the right natural leader : natural follower ratio with a minimum of troublemakers.
>A lot of this depends on what "tech ca. 1900" means. If it means having everything we historically had in 1900, from telescopes to telephones to torpedoes, that's almost certainly going to have to wait until you've built up a population of millions, just to support all of the necessary specialties.
Agreed!
>It's much more practical if the victory condition is "has most but not all of the stuff you'd expect to find in a town of 10,000 on the frontier of a ~1900 civilization"
Hmm - does the town doctor in that town have a microscope? How about aspirin?
I think so much depends on what they get to bring with them, and what the flora and fauna (and weather) of the destination are, and whether they have to worry about defense against other intelligences. But my instinct is that 10,000 people is too few to do this as a speed run, and this will have to be a multi-generation project, even with your more reasonable victory condition.
Doing this with 10,000 people is going to lean very heavily on the "optimally selected" part. 10,000 randomly selected people absolutely won't be enough. 10,000 randomly-selected college graduates, or college professors, or Eagle Scouts, or any other such thing, almost certainly won't be enough. But I wouldn't rule out making it work with 10,000 people selected specifically for this job.
Another key question is going to be whether these people will be given time to study for the "test", and/or bring a notebook (encyclopedia, whatever). If the rule is something like "everybody goes through naked", I'd want everyone to visit a good tattoo artist beforehand.
I mean, for starters we are promised nearby iron and coal, but that 1900 town presumably also had oil, rubber, copper etc, not to mention at least some medicine, and likely none of those are available near where our intrepid 10,000 were dropped, and presumably they can’t just trade for that stuff because there is no one to trade with. To just find all the relevant materials is going to require exploring and settling much of a continent…
Part of the problem here is that the time required can depend a lot on what the starting parameters are. I went through a lot of trouble to set the whole thing up as a sporting event, meaning a lot of rules to cut out any boring boondoggles, such as not just freak famines, but also someone choosing to go Attila or Genghis on everyone else, or even a long period of waiting for a Francis Bacon to come along and enlighten everyone on What Truth Is.
I even included a mechanism for generating more labor.
No-one seems to have mentioned the colonisation of places like Australia. Granted, they were given some supplies of tools etc., but they weren't 10,000 and optimally selected. I'd expect that they could get to 1800 within a few years, and after that it's really about scaling up and refining production methods.
Sydney was founded with 1500 people in 1788, but didn't become self-sufficient in food until 1804. The first metal mines in Australia didn't get going until 1841, halfway across the country (coal mining started in 1799). Until 1804 they were dependent on rations being shipped in, which bodes poorly for a colony started with a single year of supplies.
I think these are covered in the question: '10,000 optimally selected people' are presumably capable of working together; and 'mountains for mining iron and coal' are nearby.
I think that both extremes are a bit unlikely, given that the people are "optimally selected", which I assume means they have enough mineralogy knowledge, agricultural knowledge, etc., and that they've got a charismatic person they agree to be led by and...
Ideally, and without any setbacks, they should be able to get to 1900 equivalence in about a century. But it wouldn't be easy. (Copper and tin are more important than iron for getting started, though. Bronze is your starter metal.)
But this *is* being optimistic. There aren't any domestic animals. Plowing is going to be ... difficult. Developing domestic animals is a multi-generation project. So the tech might be 1900 level, but tractors will be the most important tech. Getting started is going to depend on hunting and gathering. And the local predators are not going to be friendly. (You didn't mention sulfur and saltpeter... so forget guns.)
All this feeds into the composition of the "optimal group of people". You need medics, but also fletchers, bow-makers, etc. Spears are relatively easy, but not what you want to hunt with.
Why is bronze your starter metal, when you know how to make iron and steel and have ample supplies of iron ore and coal? Yes, you need a somewhat hotter furnace, but "optimally selected people" are going to know how to build such a furnace from scratch.
You seem to be thinking in terms of rapidly speedrunning the historical tech tree, but the winning path is almost certainly to skip most of that tree.
ETA: You're still going to need copper, and quite a few other things that aren't naturally found within a 100 mile radius. But the copper is for e.g. electric wiring, not because you need to go through a Mini Bronze Age.
Bronze is easier to work with and more flexible (except for specialty steels, which you won't have). And you won't have decent electric wiring anyway until you get plastics, because rubber isn't native unless your site is a tropical jungle. Varnishes and cloth work for special circumstances, but not generally.
Skipping most of the basic techs isn't a winning move, because they are the foundation on which the others rest.
If you were starting from a "collapsed tech civilization" I could see building from glass and ceramics, but actually making good glass is tricky. Soda glass is reasonable, but IIRC transparent panes of glass require zinc during the manufacture. (Well, if you're doing it the easy way.)
OTOH, if you had a good medieval blacksmith available, you might be able to go directly to working with iron. I think it would still be a mistake. Iron probably isn't a reasonable target except on a really small scale until you've got your first steam engine designed, but I have to admit that good steel is lighter than an equally strong bronze, and it's certainly harder.
One thing you *could* shortcut, though, is bee-keeping. You're still going to have the problem of the bees not being domesticated, but bee-keeping is an excellent source of sugar with minimal effort. I believe it wasn't mastered in Northern Europe until after the Middle Ages. But it doesn't require anything fancy, and it was done in ancient Egypt.
Bronze is easier to work with than iron or steel, but as soon as we had iron we pretty much stopped using bronze except in a few specialized niches. Which, for the full 1900s tech stack, you'll *eventually* need, but it seems likely that you can put that off, maybe to the final sprint to the finish. Iron and steel, you're going to want to have ASAP.
And you're *going* to have electric wiring by the end of the process, because that's part of the stated victory conditions - too much of c.1900 technology is electrified. That doesn't necessarily mean rubber; there are other workable insulators you can use where you need insulation, and a surprising amount of 1900-era electrification was uninsulated except for glass or porcelain standoffs.
More generally, I think you are greatly overstating the extent to which early technologies are foundational to later ones. Bronze isn't "foundational" to iron and steel, it's just what we happen to have used until we figured out basic ironworking. Yes, the first Hittites to build an iron-forge probably used bronze tools to make it, but then they mostly did away with bronze. Meanwhile, the first people to build a bronze-smelting furnace will have used stone tools for *that*, and I'm pretty sure those tools would have sufficed for the first crude iron-forge. Bronze historically preceded iron, but that's not the same as being foundational to iron.
People stopped using bronze because copper is not common and tin is rare, and they switched to iron because iron is common and easy to find in comparison. If there is plenty of tin (or zinc, for brass) nearby, then bronze is superior to iron in just about every respect (until you have figured out a puddling furnace or Bessemer converter to make high quality steel).
Of course the prompt says they have iron and coal, and doesn't mention how much tin and copper they have. So you're probably right, unless there's easily accessible copper and tin they'll probably just start with wrought iron and work from there.
If that were the case, then when we mostly switched from bronze to iron all the people who had been using bronze all along, and so by definition could afford it in spite of the scarcity, would have continued using the superior bronze and left the cheaper iron to the less fortunate. We'd see e.g. the elite using bronze swords while the plebs got cheap iron spear-points.
We don't see that in the historic or archaeological record, because iron is not only cheaper than bronze, it is actually, really, yes it is even if we're talking crappy ancient wrought iron, *better* than bronze for most purposes.
Cheaper + better + required by the premise = why are we bothering with a detour to bronze rather than going straight to the ironworking that will get us more quickly to the rest of the things we need?
I'd definitely be on the later end of these estimates.
So much tech in 1900 was a result of economies of scale, global trade and generations of regional specialisation. In a population of 10,000 perfectly selected individuals, even over multiple generations, I doubt you could even get the economies of scale necessary to produce a usable quantity of quality steel, let alone functional steel machinery.
My guess is that there would be a phase of extreme progress to begin with, where all the individuals' stored knowledge on early tech development has great payoffs for the producers themselves and other members of the community.
Then I'd guess that you'd start to stagnate at a kind of hybrid pre/early-modern stage, because of both the practical difficulty of progress without scale, and the lack of incentives.
I mean, technically you could get a furnace and start producing glass, but someone's going to have to spend all day hiking to gather enough sand and fuel for your next crazy experiment to make 1% purer glass for some increasingly abstract future gain. Unless it's really clear that something will provide a short-term boost to your viability as a civilisation, it'd probably feel like a better use of time to marginally improve the things that are actually affecting your immediate quality of life.
I think that metallurgy will be the top priority after establishing preliminary food and water supplies. Everything else we want to do (including efficient farming and decent shelter) lies downstream of getting decent metal tools, so we'll have rudimentary metal tools in a month and a production line making hoes, saws, hammers and nails within a year.
Once we start producing materials we're going to need engines to move them around somehow. I don't know which of steam, ICE or electric power would be the easiest to bootstrap but I suspect it might be electric (powered by big generators, batteries will require exotic materials and may take time).
I don't think we'll ever be at something that looks like "1900"; we'll probably skip steam entirely but take a long time to build houses that don't look like shacks.
Distilling is enough of a carrot (as is fermenting) that you can see some decent motion in the biological fields, well beyond "medieval" stages, just from incentives alone.
A lot of ramping up is just knowing that a thing can be done.
I expect that the wheel would be re-invented pretty quickly.
Same for crop rotation.
And stirrups.
And the horse collar (don't choke the horse while it is pulling things).
...
2 years seems pretty fast to go from virgin land to functioning railroads, but 10k years seems WAY too slow.
And you might get some things from post-1900 before you got some things from pre-1900. Working automobiles probably make more sense for a population of this size than railroads. Also, you can use the car technology for tractors. So maybe you don't get railroads at all because they don't make sense here.
Where do you get the domesticated horses and the cattle? It won't be that easy.
Also, the horse collar wasn't invented until the Middle Ages. It's not a simple invention, so if you don't bring along the details of how to build it, you won't have it.
I suppose wild horses might be present, but breaking a wild horse is a specialized skill, so you need to bring along folks that have that skill.
Railroads are a lot easier than cars. Especially if you have domesticated horses. With light cars and a small area you can use wooden tracks. They won't be SP railroad cars, but you don't have the population to justify those anyway.
Yeah, the wheel is easy, but using it for transport requires roads. So the first wheel you get is going to be a potter's wheel.
It's not just knowing that something can be done, though that sure helps. It's all the "supporting technologies". Once you get away from the basics there isn't anyone who knows them all. (Given a bunch of flax plants, how do you make a rope?)
But SOME sort of engine will be very important, because without domestic animals ploughing is ... difficult.
I'd put in a spirited defence for the first wheel being a wheelbarrow. In the absence of domesticated animals, a wheelbarrow is a liberating tool indeed. And it doesn't need a road.
If you have spent any time on a farm lugging stuff around in buckets/by hand, you'd see what I mean.
Also historically people have pulled different interpretations of single share ploughs in various civilisations. It's not pretty and it certainly isn't fun, but it has been done.
Oxen are probably better than horses as farm workers (smarter, more stable, more stamina, less fragile overall, and more edible at the end of the day), but horses are more versatile for travel.
Either way, the problem of domestication is similar - and there is a world of difference between breaking a genuinely wild horse and domesticating a species.
Oxen are definitely better for plowing, but that requires domesticated cattle, not aurochs.
The argument for "wheelbarrow", however, is very good. It could even be just an appropriate log with a couple of spikes driven into it, and it would help a lot (though you'd probably need to drag the ... barrow? backwards, so that's another use for rope).
I think you'd start with a flat platform on top of a couple of long straight sticks. But how to fasten the pieces together? Wooden spikes are probably the best. But that implies a drill. So you need strong twine, which you can use in the same bow-drill that you use for starting a fire. (I believe tinder boxes require spring steel.)
Your main problems will be not starving to death in the first year between eating your supplies and harvesting what you've planted, and not dying of exposure. If the climate is mild enough that you aren't going to freeze to death, die of heatstroke, or be swept away by floods, tornadoes or the like, you've got an advantage. Food and shelter are your immediate problems and will take a few years to have a stable, reliable base before you start expanding your tech tree (think of the reason for Thanksgiving and the legend about the starving colonists).
Two years is way too optimistic a time frame if you're really starting from scratch (the only way that works is if your colony starts off already equipped with current-level machinery and fuel supplies to do the mining, digging, etc.)
I'll add that this experimental group would have huge advantages compared to historical-real-life from knowing some big things _not_ to do (particularly the germ theory of disease and all of its practical implications).
Sanitation is a very, very, very big "force multiplier" on the number of people. In that, say, you don't need to count on 10% of the people dying "young" of typhoid.
It sounds like you might be very interested in the recent EconTalk episode about what Capitalism is. They go into an aside about why size of market is important. Basically, the degree of specialization that is possible in an economy is directly tied to how large the market is. Assuming that this experiment demands that everything be done entirely internally to the group, then lots of things will simply require a larger market/population before there is enough room to specialize into the next tech upgrade. It doesn't matter if you know how to make the next thing if you are forced to spend all your time growing food to survive.
Agreed. Just as a back-of-the-envelope calculation, the population of what is now Britain in 1000 AD is estimated to be something like 2 million people, which works out to about 8 people per km2. Your 100 mile circle comes out to ~81400 km2, which implies a population of something like 600,000-700,000 people. Barring access to the outside world, I just don't think that this is large enough for most industrial technologies to be viable.
- 1900s farming produced something like 13-14 bushels of wheat per acre on good land (average land was more like 6.5). In most places, wheat is an annual crop.
- 1 bushel gets you 27kg of wholewheat flour, so really good farmland could yield something like 6670kg per km2.
- wholewheat provides 237cal per 100g according to the FDA, and you need 2500cal per average person per day. Let's assume that bread provides half the calories for your population, so each person needs ~185kg of wholewheat flour per year.
- the available agricultural land is something like 70% of your total land area if you're lucky, of which about a third is suitable for crops.
- taking our 81400km2 again, this means that around 18990km2 is suitable for wheat, yielding about 127 000 tonnes of wholewheat flour.
- putting it all together, the 100 mile circle could, at best, support something like 686 000 people.
It's hard to find solid numbers on how much land a single person can farm (not least because preindustrial farms invariably had more people working them than was economically necessary), but one estimate I saw was 40 acres for a one-man operation with two horses and a plough. This means that our little experiment is going to need something on the order of 100,000-120,000 full-time farmers and double that number of horses. Add in their families, and around 85% of your population will be working the land. Which leaves maybe 100,000 people to do everything else needed to make a 1900s tech base work.
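Since this is all back-of-the-envelope anyway, here's a quick Python sanity check of the chain above. Every input is an assumption taken from this comment (including the 6,670 kg/km2 flour yield), not independent data, so the output is only as trustworthy as those guesses:

    # Reproduces the comment's food-supply arithmetic; all inputs are
    # the comment's own assumptions, not measured data.
    circle_km2 = 81_400        # the 100-mile circle
    arable_frac = 0.70         # fraction of land that is agricultural
    crop_frac = 1 / 3          # fraction of that suitable for crops
    flour_kg_per_km2 = 6_670   # the comment's stated yield for good land

    cal_per_day = 2_500        # calories per person per day
    bread_share = 0.5          # half of calories from bread
    cal_per_100g = 237         # FDA figure for wholewheat flour

    crop_km2 = circle_km2 * arable_frac * crop_frac      # ~18,990 km2
    flour_tonnes = crop_km2 * flour_kg_per_km2 / 1_000   # ~127,000 t

    g_per_day = cal_per_day * bread_share / cal_per_100g * 100  # ~527 g
    kg_per_year = g_per_day * 365 / 1_000                       # ~192 kg

    # ~658,000 with these exact numbers; the comment's rounded
    # 185 kg/person figure gives its ~686,000.
    people = flour_tonnes * 1_000 / kg_per_year
    print(f"{crop_km2:,.0f} km2 of cropland, {flour_tonnes:,.0f} t of flour/yr")
    print(f"~{people:,.0f} people supported")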
>They go into an aside about why size of market is important. Basically, the degree of specialization that is possible in an economy is directly tied to how large the market is.
Thank you!
There are about 1500 blast furnaces in the world today. To put it another way, it takes a market of about 5 million people to support one blast furnace. And 1900 technology includes blast furnaces. I don't think a group of 10,000 people could support 1900 technology. And there is approximately the same problem with dozens of specialists and special equipment. Glass, ceramics, chemical industry, machine tools...
I think this is the right answer. Britain had roughly 10 million people in 1800 and if you can sell your coal to 10 million people you have ample resources to build a mine. (And this is ignoring global markets.) You don't need to worry about horses or metal or carts or even manpower that much.
Control-F "pencil" didn't find this, so it's worth thinking about how much work would be needed before they could build one single pencil.
Rather than zerg rushing towards technology, they need to grow their population. 5x growth each generation would get them to about 6 million in 4 generations, which is probably enough to support whatever they need. For at least the first generation, all technology generated is primarily for supporting population.
*EDIT* Do we have animals available? Both as workhorses and as food.
*EDIT* One technology that genuinely takes *time* instead of effort is crop breeding. If we need to go from ancient maize to modern corn we can surely do things better the next time around, but it's still a very slow process until we invent gene-editing.
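For what it's worth, the 5x-per-generation arithmetic above checks out (a one-liner):

    print(10_000 * 5 ** 4)  # 6,250,000 people after four generations of 5x growth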
That's an interesting point. Ancient societies absolutely could not sustain 5x growth through time due mostly to high mortality rates, but that might be something that really is _knowledge_ gated and not tech gated. If you know about germ theory, even if you can't do anything more sophisticated than boil rags, wash hands, and keep your water supply separated from your waste supply, how much can you decrease child mortality?
-edit- But yeah, the main point is that the answer is absolutely not "within one generation". And once you are past one generation, the most important goal of the first generation is rushing as fast as possible to some pretty efficient means of printing, so that they can record as much of their knowledge as they possibly can and the project doesn't have to re-invent everything (I'm assuming that this experiment is about how fast you could recreate everything with fore-knowledge). If you don't manage to get enough things recorded in the first generation, then you are very nearly back to square zero.
My impression is that the big child mortality improvements happened from sanitation and nutrition, not from antibiotics or surgery or whatever. Making sure your water supply is well-separated from your sewage outfall, getting people to cook and store food properly, getting everyone to wash their hands before handling food or dressing a wound and after taking a crap or touching a sick person--I think that gives you a bunch of big wins.
Another thing that is surprisingly important is glasses. Every craftsman loses his close-in vision around age 40, and unless you can give him glasses, you lose much of the value of his skill and training. Plus, you're going to need magnifying glasses, microscopes, and telescopes to get further down the tech tree.
Old maternal mortality was about 1:100 but I wonder how much humans have adapted to obstetrics in the past 200 years. If you can afford bigger baby heads nature will random walk towards them.
You can write your essential knowledge down on clay tablets if you need to. But you need enough written material to teach literacy to the next generation.
Bring a set of holy books in the same language as your reference materials, and make sure everyone believes that being able to read the holy books is a critical requirement for being a proper believer.
The coal/iron/steel is the easy part, honestly. Those are "big bulk things" that you don't have to get "precisely right."
Try cheesemaking, or distilling, or any number of the "practical chemistry" bits of technology you'd need to get to have "technology circa 1900". Not to mention that you'd need glass for that. Also, 1900 is early enough for galoshes, so you'd need rubber.
And we haven't even touched clothing (textile mills) or "the great big bombs" that are steam engines. Waterwheels and Windmills seem easy enough to put together, though, so "practical automation" is doable.
1. You select 10000 people who are ideologically aligned, mentally stable, and knowledgeable on the problems at hand (agriculture, manufacturing, etc.; maybe skip medicine since this is a speedrun). This is a tough optimisation problem, so you have to sacrifice some knowledge on e.g. science in favour of people who are a bit less antisocial.
2. Prefer people who are better at establishing strong institutions and educating others, so that the next generation can learn how to do the labour.
3. 10000 people won't be enough to reach 1900 without kids; there will be a bottleneck in mining, science, or manufacturing somewhere.
So I think probably they could get it done in 120 years, or about 6 generations of kids.
Don't skip medicine; with only 10,000 people you aren't going to have a lot of redundancy in a lot of vital skills and knowledge so you can't afford lots of unnecessary deaths.
Germs, insect vectors, genes, vitamins/essential nutrients, and how the basic organ systems of the body work give you a lot of benefits over what basically anyone in the world before the 19th century knew.
If they were optimally selected, it wouldn’t take especially long before you could have basic ironworking producing tools and weapons. Maybe less than a year. From there it’s a pretty straight shot upward to steam power, but I think the precise ironworking and how difficult steel production is would involve a lot of trial and error. I’d be really surprised if anyone had both enough iron, and enough precision to produce an engine within a decade, but if they’re optimally selected… who knows?
The main problem would be food and access to resources. While it doesn’t matter if it takes a year or two to start making more complex iron tools, it does matter if you can or can’t get enough food production set up for 10,000 people (starting with no tools). I’d say the answer is likely no. Farming is very intensive work, 10,000 people is too much to be in one place, so you’ll have different groups at different levels of food production capability.
When you’re starving you’ll necessarily steal food to survive, and when you’re almost starving you’ll fight to make sure no one steals your food. Then you have social organization, centralization of food storage, tribes, and that itself if it becomes intense enough, might invalidate any technological progress. While I’m convinced optimally selected modern people could survive, I doubt their children could receive enough knowledge orally to do almost anything that their parents weren’t already capable of. If it’s just a vague idea of “You can mine coal, the black rock, out of the ground and burn it” and “if you produce iron in this specific way it’s stronger” that wouldn’t be enough to restart civilization, and might decay over more generations to exactly as much knowledge is useful to live in such an environment.
> If they were optimally selected, it wouldn’t take especially long before you could have basic ironworking producing tools and weapons. Maybe less than a year.
Huh? With 10K people who is going to mine the iron ore and the coal? And without any tools, how are they going to mine the iron and coal? Even if they were optimally selected to have knowledge of blacksmithing, mining, smelting, etc., the iron and coal are largely going to be inaccessible without tools.
But food production and shelter would have to take priority over anything else. Doing a little googling and LLMing, plus some simple math, it would require between 8,000-10,000 acres of wheat using 19th-century cultivars to feed 10K people for a year. But first, you have to clear the land, plow it, and seed it. How do you clear the land if you don't have iron tools? Also, domesticated animals would probably be required to help clear and plow the land; otherwise, you've got 10,000 people scratching in the dirt with sticks. Unless they were given the seed stock and tools up front, I think your latter scenario is more likely. A lot of optimal people are going to starve and die the first year. Unless there were plentiful non-ag food sources to survive on, I wouldn't expect to see a 19th-century level of technology for centuries (and only if they had some way to preserve the knowledge of their pre-settlement ancestors).
I think you are severely underestimating how hospitable and food-rich pre-Malthusian land is. If this land has never known humans, there are likely to be several species that are incredibly efficient to hunt: dodo, giant sloth, passenger pigeon, Galapagos tortoises, manatee, Caribbean monk seals, fish in general, great auk, mammoth, etc.
Europeans who first came to the Americas found an almost limitless bounty of cod in the coasts off Newfoundland. It required decent nets, which the natives didn't have, but with fairly rudimentary technology you can find a relatively low-effort, sustainable supply of fish.
Steel is 500 CE technology (it's dirty blacksmithing, the Scandinavians did it often and well). Steel to specifications is much harder. Engines are much harder than that (can you do them with cast iron?) -- but do you need engines if you have water/wind mills? (that'll get you automation, at least).
Just to clarify the question: are the "optimally selected people" able to overcome mutual conflicts, and resist burning most of the resources in various zero-sum games, such as "who is the boss" and "who can have sex with whom"?
Right, I think the “10,000 optimally selected people” would have to be something like deeply faithful practicing Roman Catholics, with the right genetic distribution so only 1% had 99th-percentile IQ, and everyone was willing to be super obedient to the religious authority structure.
In absence of contact with Rome, I would expect deeply religious Catholics to be prone to sectarianism/heresy. No central authority means slow (or even fast) divergence on dogmatic issues.
I don't think you need to go so hard to get a good development. There are enough people who don't defect in real-life prisoner dilemmas while also understanding the game theory, are in good health, and are physically and mentally capable. "Optimal" will be even better than that group, but I'd be surprised if the optimal selection is a cult or very faithful people.
I... don't really think you need smart people for this. You need "butt-basic idiots" -- the folks who have learned how to do things the hard way, because it's fun. Smart people are good at solving "problems you've never seen before." But i don't think we're at that.
Even motivated midwits would work -- remember, with 10,000 people, you can have an "expert" on every link of the chain. Exposure shouldn't be an issue, you can always grab 10 Amish (or "Ron Swanson-esque" survivalists) who can order the rest of the folks about.
1900... Steam and locomotives. Mills, and upgraded farming.
Yeah, I think you may be right. You want reasonably intelligent persons selected much more for their combination of existing knowledge plus work ethic and pro-social norms. Though you may want one person tasked with getting this stuff written down lest the knowledge die off. I don’t think you could do this in a single human lifespan, you’d need cycles to bootstrap in the technology. It’s not just about knowing how but having the necessary tools, which have to go through iterations. Can’t go straight from raw iron to a lathe capable of micron-level precision. If your super knowledge persons were in their 20’s…. maybe you could pull it off in 40 years.
Of course you'd need cycles, but you don't need micron-level precision to get to 1900's technology. I think the iron/steel/coal troika is doable within ... maybe 5 years? Most of the steps there don't involve "precision" so much as "and now we make pig iron." (Pulling this from "that Polish guy's science fiction," so I could be wrong ... but I don't think I am).
But... then you have "water purification" (aka BEER). Either you have the yeasts, or you don't (and, I believe, unlike bread, you need specific strains).
Distilling (also a pre-1900 thing I think) would require glassware.
And we have Yogurt! And Cheese! (If you don't think these are technology, boy, howdy). And sausage.
Basic looms/spinning are pretty easy to make. Not sure how they scale into "industrial looms and felting and..."
Quoting FluffyBuffalo here so people aren't misled. Some amyloid theories are probably wrong, but some are almost certainly not. There is good evidence that amyloids are in fact involved in Alzheimer’s in some way, and that anti-amyloid therapies help moderately.
“I think it's important to be careful about what "subscribing to the amyloid hypothesis" really means. The evidence is very strong that amyloids play a significant role in the disease - genetic variants like APOE4 that influence amyloid accumulation are strong risk factors for AD; you see characteristic biomarker levels (in particular Ab42/Ab40 ratio in cerebro-spinal fluid) in AD patients that you don't get in other neurodegenerative diseases; you find the characteristic plaques in deceased patients, etc. That part is sound as far as I can tell, and pointing at a misguided study or two doesn't change that.
The question of what exactly the amyloids DO, and whether the buildup of the plaques is the whole story or just a part, is not so clear, and my impression is that researchers are open to the possibility that there's more going on. The new hot topic seems to be CAA: Cerebral Amyloid Angiopathy - it turns out that amyloids also accumulate in blood vessels, weakening the walls and leading to micro-bleeding. (Apparently, this got more attention when it was found that brain bleeding is a not-so-rare side effect of lecanemab.)”
Stupid question time: I've been trying to understand how exactly Heritability is defined, but the explanations I've found don't make sense.
Wikipedia tells me that, in simple cases, it can be computed as Variance(Genes)/Variance(Phenotype) where "Variance" means the expected squared distance from the expected value.
But this doesn't depend on any relationship between the two at all! By this explanation "Heritability" is just a measure of whether a set of genes is similarly spread out as the phenotype, not whether those genes explain anything.
Also, variance is measured in units of "distance in the underlying distribution" squared, so you can only take a ratio like this if the distances are comeasurable in some way, and I don't see how "polygenic score" and e.g. "has schizophrenia" can be comeasurable.
Stats in science are often bad, but I have trouble believing they're *that* bad. Can someone try and explain how this actually works?
> Also, variance is measured in units of "distance in the underlying distribution" squared, so you can only take a ratio like this if the distances are comeasurable in some way, and I don't see how "polygenic score" and e.g. "has schizophrenia" can be comeasurable.
Heritability is calculated as the ratio of the genetic to the phenotypic variance, with the genetic variance here the variance of genotypic values. Because genotypic values are simply the expected phenotype for each genotype, they are measured in the same units as the phenotype itself. So both the genetic and phenotypic variances carry by definition identical units.
Heritability is defined for quantitative traits. For binary traits (e.g. “has schizophrenia”), one typically invokes an underlying continuous liability scale. Both genetic and phenotypic variances are then defined on that same liability scale.
> Also, variance is measured in units of "distance in the underlying distribution" squared, so you can only take a ratio like this if the distances are comeasurable in some way
Variance infamously suffers from the problem that the same distribution at different scales has different variance. For example, if you measure everyone's height in inches, and then you measure everyone's height in centimeters, the height-in-centimeters distribution has more variance even though you measured the same set of people and they were all the same height both times.
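To make both of those answers concrete, here is a minimal simulation (a hypothetical one-locus model, assuming numpy is available). Genotypic values are defined as the expected phenotype per genotype, so they carry phenotype units, and rescaling inches to centimeters multiplies both variances by 2.54^2, leaving the ratio unchanged:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical single locus: 0, 1, or 2 copies of an allele.
    genotype = rng.binomial(2, 0.4, size=n)

    # Genotypic value = the expected phenotype for each genotype,
    # in the same units as the phenotype (here: height in inches).
    genotypic_value = 65 + 1.5 * genotype

    # Phenotype = genotypic value + environmental noise.
    height_in = genotypic_value + rng.normal(0, 2.0, size=n)

    h2_inches = np.var(genotypic_value) / np.var(height_in)
    h2_cm = np.var(2.54 * genotypic_value) / np.var(2.54 * height_in)

    # Each variance grows by 2.54**2, but the ratio is unit-free.
    print(h2_inches, h2_cm)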
Amazing article! Though reading it leaves me wondering: why on earth would we care about this ratio, given all its weird pathologies? It seems like just considering genetic variance alone, rather than heritability, would be the way to go in 99% of situations.
Heritability isn't for comparing human populations, it's for breeding wheat. Most of the issues go away once you can control the environment and plant fields of genetically identical plants.
Heritability can be tricky to interpret (quantitative genetics isn't the most intuitive field), but that doesn't diminish its usefulness. Yes, heritability has its odd "pathologies" (although most of them are special cases largely irrelevant in most populations), but it is good for its intended purposes, like predicting response to selection, designing GWAS, and so on, whereas raw genetic variance alone provides little actionable insight.
Maybe the most intuitive way of conceiving heritability is as a "signal to noise ratio" estimator for geneticists. Just measuring the signal is less useful.
I agree. I don't understand why people care so much about heritability because it is not interpretable. Also, what we actually care about is the degree to which someone is able to intentionally change their outcomes by manipulating their own environment. Right? Heritability doesn't capture that notion at all.
>Also, what we actually care about is the degree to which someone is able to intentionally change their outcomes by manipulating their own environment. Right?
Heritability is a tool for geneticists, so, indeed, it is not very useful to suggest environmental manipulations!
What are the 3 most important applications of heritability? I understand response to selection; that makes sense (though it doesn't seem that relevant toward the study of human traits like educational attainment).
But I don't understand the other reasons to care about heritability. Maybe it hints at whether the root cause of a trait is genetic or environmental...but, still, that provides little insight about treatments, which is what we actually care about. And it doesn't actually directly tell you whether genes are the root cause at all. Maybe it at least hints at it? Maybe it can tell you whether GWAS is worth pursuing at all? I don't know.
Can you therefore explain to me the 3 main reasons people care about heritability?
It is normal for geneticists to care about heritability, and below are three examples of applications:
- in agriculture, for animal or plant breeding, heritability is directly linked to the response to selection
- in medicine, GWAS are extremely expensive studies used to identify genes involved in a quantitative phenotype. If heritability is very low, no genes will be detected.
- in evolutionary ecology, it is interesting to determine whether a trait relevant for environmental change (for example, resistance to heat) has substantial heritability or not. High heritability means more chance for the species to adapt to the change.
But heritability is, in my opinion, almost entirely irrelevant outside of genetics (with one counter-example below), and is extremely easy to misinterpret. The most frequent falsehoods around heritability are 'if a trait is strongly heritable, then environmental changes cannot change the trait' and 'if a trait is strongly heritable within a group, then differences between groups are due to genetic differences'.
My personal counter-example of the general uselessness of heritability outside genetics studies would be that heritability studies showed that most traits in humans (height, weight, personality) have little shared-environment effect, i.e., little parental effect is detected on adult children from average families (this is not true, obviously, if the children are mistreated). Thus changes must be implemented at the societal, not familial, level.
Emma probably can give you a better answer than I can, but here you go...
1. The people who are most obsessed with heritability are the HBD folks. If your philosophy regards humans as having superior vs inferior phenotypes, and holds that certain arbitrary groups of humans (i.e., races) have superior phenotypes vs other groups, then you're motivated to use allele frequencies and heritability to give your opinions an aura of scientific validity.
2. Many diseases have a genetic origin, and certain populations have higher rates of the alleles that cause these diseases (and thus higher risks) of developing them. For instance, autosomal recessive diseases like Cystic Fibrosis (predominant in European populations, but not Finns or Sardinians), Sickle Cell Disease (common in populations where Malaria is endemic: people of African, Mediterranean, Middle Eastern, and Indian ancestry), Tay-Sachs Disease (most prevalent in Ashkenazi Jewish, French-Canadian, and Cajun populations). Carrier screening can test whether the parents carry these recessive genes, but both parents must be tested to see if they carry a mutation in the same autosomal recessive gene. If both are carriers, each child has: a 25% chance of inheriting two mutated copies (affected); a 50% chance of being a carrier (one copy); and a 25% chance of inheriting two normal copies. So, heritability is important in these circumstances.
3. The effectiveness of some drugs can depend on the presence or absence of certain genes. I don't know of any examples off the top of my head, but some drugs are more likely to work well on some populations than others (for instance, most big Pharma RCTs have been done using European populations, and some of the meds that seem to work well for Europeans may not work as well for other populations).
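The 25/50/25 odds in point 2 are just a Punnett square, which a couple of lines of Python can enumerate (a toy sketch, not genetics software):

    from itertools import product

    # Each carrier parent passes on one normal (N) or one mutated (m) allele.
    parent = ["N", "m"]
    children = [a + b for a, b in product(parent, parent)]  # NN, Nm, mN, mm

    print(children.count("mm") / 4)                      # 0.25 affected
    print(sum(c.count("m") == 1 for c in children) / 4)  # 0.50 carriers
    print(children.count("NN") / 4)                      # 0.25 unaffected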
The Alzheimer mouse review on Friday reminded me of Thrifty Printing Inc, whose business model of online photo-processing for cornershops didn't take off, apparently, so naturally the company pivoted, to drug development for such diseases as Alzheimer's.
But Anavex Life Sciences, that's the new name, doesn't seem much respected by the stockmarket. Was the pivot too ambitious? Or perhaps Anavex is seen as unserious by the professionals after hiring a runway model with no business experience as Director of Business Development and Investor Relations (Nell Rebowe). Or it's the hairstyle of the CEO (Dr. Christopher Missling). Or his presentation skills (that last one actually has some merit).
A recent Economist article, titled "The Alzheimer's drug pipeline is healthier than you might think", did not even bother to mention that Anavex has a pill under evaluation by the European Medicines Agency. Yet what will happen to the stock if it is approved in six months or so?
On the other hand, withdrawal or rejection would most likely send the stock crashing (not "certainly", because another drug's phase 2 trial for schizophrenia will report soon). To me, Anavex's pill appears disease-modifying (though not a cure) and safe (unlike the anti-amyloid drugs), but certain language in the company's latest 10-Q filing does strike me as ominous. Also, shareholders are up against high short interest, including Martin Shkreli, who has called Anavex "another retail trap" (referring to Cassava Sciences, popular with retail investors, whose Alzheimer's drug failed its phase 3 trial; that company's history involves fraud allegations), and whose strategy of shorting Alzheimer biotechs just scored another victory when INmune Bio's phase 2 trial failed to convince.
Anavex's pill is meant to stimulate autophagy and is not related to the amyloid hypothesis, as far as I know (I don't know much, but was still tempted to complain on Friday when someone called Leqembi a "proof of principle" for the hypothesis --- if people try for decades to develop drugs on the basis of a paradigm, wouldn't you expect them, even if they are mistaken, to come up with something eventually that shows an accidental effect, via some other mechanism?).
Most of what I currently know about biotech markets is already contained or hinted at in that post --- I definitely got lucky in terms of weirdness factor when I decided to look into Anavex. The most adjacent other of the (few) companies I try to follow is Hims & Hers, whose stock chart you displayed in the Ozempocalypse post in March to illustrate the statement "Let's take a second to think of the real victims here: telehealth company stockholders": I was thinking about posting in an Open Thread once their share price climbs to new records, which I thought might be funny, just a few months after the Ozempocalypse sent it down by half; but it has missed the February high by a whisker several times and is now again 25 percent down.
"A project in Co Mayo is generating renewable electricity through the flying of kites, which its operator has described as a potential "game changer" in the wind energy sector.
...The site, which is the first designated airborne wind energy test site in the world is being operated by Kitepower, a zero emissions energy solutions spin-off from Delft University in the Netherlands.
Kitepower's system employs a yo-yo effect, where a kite, measuring 60sq/m is flown at altitudes of up to 425m attached to a rope that is wound around a drum - which itself is connected to a ground-based generator.
The kites can generate 2.5 to 4 tonnes of force on the tether.
The pull from this force then rotates the ground-based drum at a high speed.
This rotation then generates electricity that can be stored in a battery system for deployment wherever and whenever it is needed.
The kites are flown using the knowledge and skills of kitesurfing professionals, combined with a highly specialised computerised GPS-guided steering system.
They fly upwards repeatedly in a figure of eight pattern for periods of 45 seconds.
The flight pattern is important because it forces the kites to behave like sails on a boat, maximising the pull of the wind to increase speed so electricity can be generated.
After 45 seconds, the kites are levelled up so that the pull from the wind is momentarily minimised.
This enables the tether to be wound back in, using only a fraction of the electricity generated when it was being spun out.
The result is a net gain in renewable power at the simple cost of flying a kite.
Then the cycle is repeated, again and again, potentially for hours on end."
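To see why the reel-out/reel-in cycle nets out positive, here is a rough per-cycle energy budget in Python. The ~3-tonne tether force and the 45-second generation phase come from the article; the reel-out speed and the 20% recovery cost are placeholder guesses of mine, so the output is purely illustrative:

    g = 9.81
    tether_force_n = 3_000 * g   # ~3 t of pull (article: 2.5 to 4 t)
    reel_out_speed = 2.0         # m/s at the drum -- assumed
    reel_out_time = 45.0         # s, from the article

    # Generation phase: mechanical power at the drum is P = F * v.
    e_out = tether_force_n * reel_out_speed * reel_out_time  # joules

    # Recovery phase: with the kite "levelled up" (depowered), winching
    # the tether back in costs only a fraction of what was generated.
    e_in = 0.20 * e_out          # assumed fraction

    net_kwh = (e_out - e_in) / 3.6e6
    print(f"net ~{net_kwh:.2f} kWh per cycle")  # ~0.59 kWh with these guesses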
(2) Well, looks like AI is *already* taking er jerbs (at least if you're a graduate in finance):
"Also this week, the latest 'Employment Monitor' from recruitment firm Morgan McKinley Ireland found notable reductions in graduate hiring by major firms in the accountancy and finance sectors because of the adoption of AI.
And on Thursday, AIB announced a major AI rollout for staff in conjunction with Microsoft Ireland, sparking concerns from trade unions.
Morgan McKinley Ireland's Employment Monitor for the second quarter of the year was published on Thursday.
The recruitment firm said that the standout development of the quarter was the significant impact of AI and automation, particularly within the accountancy and finance sectors.
"The notable reduction in graduate hiring by major firms, driven by AI capabilities, highlights potential challenges ahead," the report found.
"Companies are increasingly leveraging AI capabilities to automate routine tasks such as accounts payable, accounts receivable, credit control, and payroll."
...[Allied Irish Bank] announced a new artificial intelligence rollout for staff in conjunction with Microsoft Ireland on Thursday.
The bank said the new tools will reduce time spent on repetitive tasks, freeing up employees for higher-value work.
The plan will involve the widespread deployment of Microsoft 365 Copilot, embedding AI into everyday tools like Outlook, Word, Excel, Teams, and PowerPoint.
...Last month, the Chief Executive of AIB Colin Hunt took part in a panel discussion at a Bloomberg event in Dublin.
Asked what impact AI will have on staffing numbers at the bank over the next five years, Mr Hunt said it may lead to a small reduction in net headcount.
"I do think that there are certain manual processes that we do now that will be done by AI in the future, and probably net headcount will be broadly stable with a slight downward bias maybe," Mr Hunt said."
Who knew that the real knife-ears were the ones we created along the way! (See "The Rings of Power": "Elf ships on our shore; Elf workers taking your trades. Workers who don't sleep, don't tire, don't age. I say, the Queen's either blind or an Elf lover, just like her father.")
Is this not "fourth power of wind speed"? Because that's the issue, if you want "renewable energy" to stop generating more greenhouse gas (via the necessity of "keeping the grid stable" through spikes and dips -- the natural gas required for this is significantly more than "if we didn't use wind power at all.")
That story about graduates not finding jobs has been repeated across British papers for the last few months. If it is a trend, then AI does not bode well.
What's mostly worrying is what all these graduates were being hired to do. I can see why graphic design would be toast, but the only other things AI can seemingly do are "google this and write about it," "take this information and reformat it," and "spitball arbitrarily about this topic."
Isn't this basically a linear actuator (as you describe as not solving the problem), albeit embedded in rubber? I suspect Melvin is right, that this will be too weak. Fundamentally, at reasonable current levels, magnetic forces in smallish-human-muscle-sized devices are weaker than we need, so, yes, we make motors that spin, reusing the same volume many times, and put in mechanical transmissions that increase torque while decreasing number of revolutions.
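The torque/speed trade described there is just power conservation (P = torque x angular speed, ignoring losses). A minimal illustration with made-up numbers:

    import math

    # Ideal gearbox: power is conserved, so a 50:1 reduction trades
    # rotational speed for torque.
    ratio = 50.0                           # assumed reduction ratio
    motor_torque = 0.5                     # N*m -- assumed small motor
    motor_speed = 3000 * 2 * math.pi / 60  # rad/s (3,000 rpm)

    out_torque = motor_torque * ratio      # 25 N*m at the output
    out_speed = motor_speed / ratio        # ~6.3 rad/s
    print(out_torque, out_speed)
    print(motor_torque * motor_speed, out_torque * out_speed)  # same power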
Sorry to be laughing about the tooth claims, because vision problems are no joke, but yeah. There's always something.
It's very unlikely the fancy tooth treatment is to blame, but it's not impossible, and it's only when you have real users out in the wild, as it were, that these one-in-a-million things crop up. Now we know the problems Big Pharma and the FDA face!
I don't know what you find so funny. If this was a normal FDA trial, they would have tested it in some small group of people first, nothing would have happened, they would have approved it and done post-marketing surveillance, and someone would have (hopefully) detected the side effect. Now . . . it got tested in a small group of people first, nothing happened, it got sold, there's unofficial "post-marketing surveillance" by users, and someone has (possibly) detected a side effect. What's the difference that makes one process dignified and the other funny?
"This new discovery means no more cavities! Mothers will pass on the tooth pixies to their babies with a kiss! Dentists will go out of business!"
*blink*
*blink*
*blink*
"Oh, no, the tooth pixies are making me blind!"
I think no prediction market would have considered "will the tooth pixies make me blind?" as an option had anyone set up a market.
It may well be that the tooth pixies have nothing to do with that unfortunate person's problem. It may also be that they do. It's only when any new drug or process is turned loose on more than the small test group that such anomalies crop up. Biology, and nature, are not controllable in the way we desire. We can screw nature up, sure. We can control it to an extent. But every time we are so sure we know precisely what to do and how to do it, nature comes back at us.
"Yeah, I've been long-term taking dodgy drugs but it was definitely the tooth pixies what did my eyesight in".
You don't think perchance maybe it was the kratom habit (plus whatever else he was dosing himself up with, because druggies* never do just the one thing) that might be responsible here?
* Yes if you're taking stuff like this, you are a druggie. "It's a health supplement, not a drug!" If it's got a street name, it's a drug.
"Kratom’s common or street names are Thang, Krypton, Kakuam, Thom, Ketum, Biak-Biak (common name in Thailand), Mitragyna speciosa, mitragynine extract, biak-biak, cratom, gratom, ithang, kakuam, katawn, kedemba, ketum, krathom, krton, mambog, madat, Maeng da leaf, nauclea, Nauclea speciosa."
If manufacturers are selling it with spicy names, it's a drug not a supplement:
"The FDA continues to seize adulterated dietary supplements containing kratom. Seized brand names have included: Boosted Kratom, The Devil’s Kratom, Terra Kratom, Sembuh, Bio Botanical, and El Diablo."
Caveat, I have not been following Lumina or their testing very closely at all, and this is not exactly my topic of expertise, only adjacent to it.
Phase III trials can be fairly big (that's why they are expensive), so if this is a 1-in-1000 adverse event (AE), it's possible it could have been caught. When you know you have followed N enrolled patients for x amount of time, it is a bit easier to evaluate whether a report of a suspected AE is truly a rare AE or possibly the first report of a not-so-uncommon AE. I think this is the main difference compared to had this happened in a trial: there would be a more informative context in which to evaluate whether it is rare bad luck unrelated to the product, a rare AE related to the product (and, say, the patient's unique genetics), or the first sign of a serious problem with the product.
Also, if a rare AE takes place during a trial or with a novel prescription med, reporting is not dependent on the user getting a reaction, being savvy enough about biochemical pathways to realize the product may be to blame, and posting about their experiences on the internet, where they may or may not provide all the relevant medical background. If the product is a causative factor, the chain of events that happened here would have been a kind of weird coincidence. If the product is not ultimately causative, with larger trials having taken place, it would be easier to say so.
Phase IV and post-marketing surveillance come in many shapes, so it is true that if a serious side effect presents after the product has been launched, the practical difference may not be *that* great. However, the reports of suspected AEs are supposed to be centrally collected, so hopefully it would be a bit easier to evaluate the report(s) and contextualize them. But ultimately, yes, it is kinda definitionally difficult to catch rare AEs, or AEs that take a long time to manifest, until after the product has been launched and used by the wider population.
Without centralized and organized collection of data this becomes loosely collected anecdotes. The type of stories that anti-vax and snake oil narratives thrive on. We need less of this and more of rigorous science.
If the FDA decided to restrict itself purely to safety studies whose goal was to discover and catalogue what the safety risks of a drug are, and then it were up to doctors, in conjunction with their patients, to decide if the efficacy was worth the risks (and costs), then nearly all of my issues with the FDA would go away. But they don't merely catalogue risks. They also are the ones who decide what everyone's risk-to-reward function should be, and that's not a task that any government agency should be doing.
>They also are the ones who decide what everyone's risk-to-reward function should be, and that's not a task that any government agency should be doing.
Are there any limits to this statement? Taken at face value, that would roll back decades if not centuries of safety regulation across the board, to disastrous effect on public health. Asbestos? It's cheap, effective, and in 20 years it will be someone else's problem. Wearing seatbelts? If you feel like it. Installing seatbelts to begin with? Nah, that could be hundreds of dollars of profit instead. Building codes? I'll let my cousin do the electric wiring, he always wanted to try his hand at it. To be continued ad infinitum.
The primary limit, and the difference between my statement and your examples, is impact on the individual vs impact on the public. There is a big difference between regulations that limit what I can do to myself and regulations about what I can do to others. Negative externalities are a place where I think the government has a role. Asbestos should probably be banned/highly regulated, not because individuals shouldn't be allowed to take risks, but because its use often impacts third parties who don't get a say. It being illegal for me to take an experimental drug when I am aware of the risks? Why? Why is the government allowed to tell me that it's too dangerous?
>Why is the government allowed to tell me that it's too dangerous?
Because the society you live in, as a whole and over time, converged on the decision to give your government that authority. Your entire existence as a citizen is embedded in a (practically, mostly) invisible web of such decisions. There's always push and pull on the margins which you can engage in; if and when positions like yours become societal consensus, good for you, then you'll get what you want. Until then, you'll have to live with the status quo or vote with your feet.
I'm not confused about how we ended up here. I'm asking about a from-first-principles justification for the status quo, with the implication being "there isn't one, and therefore I argue that the status quo is bad".
You seem to be making an argument (feel free to correct if this is not the case) that the outcome I want is not possible without also throwing out the baby (aka, it's not possible to allow people to take experimental drugs without also allowing them to make asbestos baby blankets). To whatever extent you are making that argument: I disagree. I think we can both allow personal liberty while regulating externalities.
I think the first-principles argument for consumer protection regulation is "consumers cannot reasonably be expected to learn about the risks of everything they buy." There are too many things to research, and researching product safety and effectiveness is often difficult or requires specialized knowledge. Therefore, we outsource those tasks to a dedicated government agency that has the necessary skills, which then boils it down to a simple yes/no decision that requires no effort from consumers to follow.
Like, in a world populated by Homo Economicus, where everyone perfectly comprehends the risk and reward profiles of every drug they buy, the FDA would not be necessary, but that's not the world we live in. We live in a world where people buy random herbal supplements from a guy on Tiktok yelling about toxins in the vaccines. Some amount of paternalism really is warranted.
Google tells me that "first principles" are "the fundamental concepts or assumptions on which a theory, system, or method is based." If you agree with that definition, then it should be clear that the FDA has first principles: The laws under which it was founded and now operates. If you continue asking for the principles of how those laws are justified and so on, you'll eventually bottom out with "because that's how societies organize themselves to function at scale" and there is, I believe, no further useful justification; similar to the anthropic principle, you don't see many societies that do not do so because those tend to cease existing sooner rather than later. You're not going to find any cosmic truth underneath it, any more than you'll find it underneath your own assertion that you should be allowed to take any drug you want. That, in my humble opinion, is your first principle or lack thereof, and it applies equally to you and the FDA.
>You seem to be making an argument (feel free to correct if this is not the case) that the outcome I want is not possible without also throwing out the baby
I actually argued the opposite. I wrote "there is always push and pull at the margins", which means you can seek to change things that fall outside the broad consensus while others will seek to keep them.
> I think we can both allow personal liberty while regulating externalities.
I agree. That statement is just quite different from the one I disagreed with in my first reply, isn't it?
Reading about the twin studies... twins are similar not only because of genes but also because of a shared initial history in the womb. And not all twins are the same; it depends on the developmental stage at which they split into two embryos (see "mirror twins").
If you like road trips and you still believe in the American Dream, join us in April 2026 for The Great Route 66 Centennial Convergence. We're driving from Chicago to LA over the course of a few weeks. There will be never-before-seen mysteries, side quests, and prizes. It costs nothing. Find more details on Facebook, Instagram, and Youtube.
Interviewees and video co-hosts wanted. We won't just be talking about hot rods and midcentury architecture. We'll be discussing the American Dream, capitalism, and human progress. It might even be a paid gig if you're charismatic, pleasing to look at, or slightly famous.
I'm a fresh grad about to start a Data Science-esque role for a big UK insurer. Never having worked in corporate environment, I'd like to ask the ACX community for sage advice on questions such as:
How can I learn the most?
How can I negotiate my salary throughout?
How do I handle 'office politics'?
or any questions someone in my position should think about.
If your office politics involve poisoning each other, just quit. If poisoning each other has been banned as being "not within the spirit of office politics" -- just quit.
This took me a long time to learn via experience, as it's not necessarily intuitive to rational people.
Aside from complying with things like critical regulatory requirements which might get you arrested for violating, you really only have one real job at work:
Make your boss [1] happy.
That is often very difficult for rational people leaving the merit-based school system and entering the social hierarchy of the workplace to fully grasp. You may have felt that school was your "job" and that your teachers were your "bosses," but this wasn't actually the case; as a student, you were a *customer buying a product,* which meant you had an absolute, enforceable *right* to receive clear expectations and an objective measure of your performance *as part of the product you were purchasing.*
In the case of your employer, they are the customer purchasing your time and attention... in the way they want you to provide those things. Your paycheck and career aren't going to be based on you scrupulously, objectively performing all the duties of your job description or doing what's best for the company as a whole. Practically speaking, there's no way to really track that, and people who are determined not to see proof of your good work won't look at the evidence anyway. The people making decisions about your role at the company are going to be making them based on how they *FEEL* about you, which means your first priority needs to be making your boss *FEEL* good about you.
Keep.
Your Boss.
Happy.
So if your boss is very attached to a dumb process that you just KNOW could be vastly improved, the *real* job of your job - the one which may get you promoted or saved from layoffs - is to make them *FEEL* good by surrendering to their dumb process. If your boss is overly-emotional and sensitive to criticism/rejection and perceives suggestions/corrections/warnings as personal attacks, your *real* job at all times is to make them *FEEL* good by avoiding triggering their negative emotions, even if that means allowing them to damage the company.
And remember that your grandboss and above haven't fixed or fired your crappy boss because there's something about your boss which makes *them* happy enough to want to keep your boss around. Don't assume *they* want to fix problems for the good of the company, either.
Surrender to the stupidity. That's what your customer wants, and that's your job now.
[1] "Boss" doesn't necessarily refer to your direct supervisor (although it probably will), but rather the persons/people above you who have the power to promote or fire you, including your grandboss, etc.
That's sort of true, but there is a lot of hubris in starting a new job or career and thinking that you understand everything immediately and that everything you don't approve of is stupid.
OP, if there is something that you don't agree with, I suggest the path of detached indifference, not judgment; people can smell smug superiority a mile away.
Curiosity is fine too. "Hey, I was wondering why we do things this way instead of this other way". Asking those questions once in a while is a great way to learn how decisions are really made in the company.
Asking the right questions enhances your "teachable youngster" status, while carrying the possibility that one day you'll actually make a good and workable suggestion and get some of the credit for the actual change.
Keep in mind what your trajectory over the next few years should be. Ideally you'll go from a teachable youngster who is easy to work with and only needs to be told anything once, to a junior colleague who can be trusted with small responsibilities, to an actual peer who knows the ropes and can be trusted with big challenges.
But right now you are _just_ the teachable youngster, and you need to embrace that role. Be conspicuously eager to learn and respectful of advice from your older colleagues. Do anything you are told to do, or even that you are suggested you might do, unless it is literally illegal. And if you haven't been given anything specific to do and everyone seems really busy, at the very least don't get in the way.
You may find something to take to heart in this essay:
Office politics doesn't need to be tricky, especially as a junior; it's just the effect of companies being staffed with fallible flesh-and-blood human beings. Focus on making sure that people know you, and they like you, and allow that to be more important than being right all the time, and you'll probably do fine.
Similar to one of Wasteland Firebird's points: long term you'll do better if you hop sideways and upward between employers, as opposed to spending years and years with the same one. This is because with each job change you should get a step up in position and salary, as well as a wider experience of the industry (and you don't necessarily have to stay with insurance companies - data science skills are fairly transferable). But of course you shouldn't overdo the job-hopping frequency, or potential employers will wonder why you can't stay in any one place for long.
Also, people tend to be better at office politics in inverse proportion to their technical ability: A weak performer needs to be crafty at office politics to offset their technical shortcomings, and conversely a tech guru, recognised as such, can often afford to disdain the politics.
When starting a new job, you should be suspicious of a colleague who seems too solicitous of your welfare when it isn't their assigned duty. It's similar to prison, although I've never had any experience of that, in that shortly after you arrive some apparently helpful fellow convict will sidle up to you and offer to provide anything you need and show you the ropes and so on. But ultimately it's for their benefit not yours!
When a company downsizes, the bean counters take zero account of the competence or otherwise of those being laid off.
> When starting a new job, you should be suspicious of a colleague who seems too solicitous of your welfare when it isn't their assigned duty.
I often do that
> But ultimately it's for their benefit not yours!
My "benefit" is that it makes the new guy feel welcome. Also talking to people is more fun than actual work. Also I guess it also makes my own boss and grand-boss happy when the new people come to me with their problems and not to them (boss and grand-boss).
I agree. Jeez, most people just are not particularly sneaky and evil. Pretty much all the people I've met who have seemed unusually kind and helpful early on have turned out to be people who actually are just unusually kind and helpful.
Learning: Most of what I've learned, I learned inevitably on the job, or from switching jobs a lot. I haven't done much work in my spare time to keep up with the industry. It seems kind of silly to do that because you never really know what's going to be expected of you in your next job anyway.
Salary: The best way to get raises is just to take whatever you get, at first, and always be looking around for something that pays more. You don't even necessarily have to take the job that pays more if you don't want, you can get an offer and bring it back to show it to your current employer, and ask for a raise then. If you feel weird asking for a raise, don't ask for one. Just tell them you're leaving, explain why, and let them get the idea to make you a counteroffer. Eventually, you'll get to where you earn and/or have "enough" money. Figure out what that means for yourself. Then, the good part comes. Keep changing jobs, but this time, only change into jobs that make you happier! Sooner or later you'll get into a job where you simply can't realistically expect to find anything better. That's where I am now. I am hoping this will be the last job I ever have to take seriously.
Office politics: Even as a nerdy person who used to have no social skills, and who is probably, like many of us here, what might probably be diagnosed as a "person on the autism spectrum," I have to say, I actually really enjoy office politics. "Soft power" is a big thing. A lot of times you don't have any authority to actually make people do things, but you can get people to do things anyway. Ideally, you can figure out how to express the thing you want to do in terms people will appreciate, and then they'll join you in your quest. That requires learning real empathy, which is a difficult thing in itself. But when that doesn't work, sometimes there are little clever tricks you can pull that let you get your way. For example, there's a project I've been wanting to do, but no one is allotting my team any time to do it. So instead, I've looked around for other similar projects that were allotted time. I've treated those projects like they are higher priority than they actually are, and seen them through to completion. After a couple years, my dream project is 80% complete, and we haven't even technically worked on it at all yet.
Another fun trick I've used multiple times: You need to do something. You want to do it in a simple way. People with power over the project insist that the project must be done in a very complicated way that will take far too much time. Quietly complete the project on time, in the simple way. Get it as far as you can take it, so all they have to do is say, "Fine, good enough, ship it." The choice you've now forced on them is to do it your way, or to deliver it late.
Me and Lord Hobo Boom Sauce have been ruminating tonight about having an ACX site where we put up photos of ourselves as kids. Seems there would be very little risk of doxxing ourselves, so long as it's pix of us as *young* kids. And it would nudge us all a bit away from the godawful online illusion that the entity we are talking to via texts consists of a funky little fake name and a cluster of, like, 5 opinions. Anyone like this idea, or have I just gone down 11 notches in everybody's respect just for suggesting it?
Even though it looks like it won't happen, I think it would help for sure. When I was 16-22 years old (I'm 25 now), I spent a lot of time in Facebook meme groups and made lots of online friends that way. It being Facebook, it was less common for people to use alt accounts, so we got to know each other's faces and video chatted often. I've even been able to meet a few of them in person on multiple occasions.
As a result, I haven't been able to think of anyone online as just a two-dimensional fake name on a screen since, and until reading your comment, I forgot that people do.
When I was on Facebook someone posted high school yearbook photos of me and 3 of my male classmates. My wife couldn’t tell which one was mine. Yep, that’s what guys looked like that year. We all looked equally dorky. Same dopey haircut and horn rimmed glasses. I was 27 when we first met for a bit of context.
> And it would nudge us all a bit away from the godawful online illusion that the entity we are talking to via texts consist of a funky little fake name and a cluster of, like, 5 opinions.
How is that godawful? That's the entire point of talking to strangers online. If I wanted to talk to "real people", I'd go outside.
<That's the entire point of talking to strangers online. If I wanted to talk to "real people", I'd go outside.
Look, this clearly is not going to happen -- other respondents' concerns about doxxing themselves are substantial, and actually I had not even thought about age progression, but now I see the problem with my idea. But regarding your point: The photos would not have turned posters here into "real people," just nudged your sense of them a couple centimeters in that direction. And, assuming it had that effect on you, the beneficiary would not be you, it would be the people you write responses to, who might then get responses that are a bit more thoughtful and considerate.
To paraphrase Lincoln's joke: if I were using a fake photo, would I be using this one? It's really me, but I'm not worried about the photo being used to find my real name.
I'm generally very dubious about putting up any kind of identifiable information because someone out there will pick up on it and try to identify real world you. I know that sounds paranoid, and I don't really have much to lose if someone figures out "oh Deiseach is really so-and-so" (apart from my jealously-guarded privacy) but for people who *do* have something to lose, I would be way more careful.
People *have* gone after Scott and those associated with him, see Sneerclub and the infamous Cade Metz story. It's not beyond the bounds of possibility that someone with a grudge about rationalists/Scott/SSC/ACX/TheMotte/LessWrong/you name it would latch on to anything like photos and try to work out who you are in real life and then send nasty little emails to your employer about "were you aware that Eremolalos is involved with right-wing fascists and racists?" (the HBD/IQ stuff is catnip to people with axes to grind).
I'm not saying "don't do it", just "be very very sure about the level of security".
I'm mildly intrigued, but I don't think I have any pictures of myself from a young age. Presumably my parents had some, but I think my mother's scant few pictures of me as a little one went onto the "toss" pile when I was sorting her assets after her passing. I don't mind sharing more recent pictures, though, from when I lived somewhere else entirely (an attitude that can probably be surmised from my avatar).
An essay on the transformation of academic tutors into Turing cops, marking into an imitation game, and AI-generated homework into the trigger for the technological singularity:
Maybe we'll see a return to the old-fashioned system of verbal examinations, whereby the student links via Zoom to an AI interviewer, which then fires questions at the student, who is required to make immediate extempore replies without referring to notes.
Hey Vincent, I read it, and have a non-ironic suggestion:
-It is probably possible to train an AI to be an excellent judge of whether there is AI contribution to an essay, and how much, and whether it's in the main idea, the overall structure, or individual sentences or paragraphs. AIs -- which overall fucking suck, IMHO -- are pattern-identification geniuses. They are better than professionals at identifying melanoma, various retinal diseases, etc., from scans. You might need to hire somebody to do a bit of extra training to improve an LLM's ability to recognize these things, but I'm pretty sure it would be possible.
-OK, so then tell the students that if they submit something that scores more than a certain low percent on AI content, you will not read the piece and it will be graded by AI. I actually think that would discourage people quite a bit from turning in AI-contaminated work. I had a professor who would grade late essays, but did not put any comments on them. I really wanted to get comments, so never turned in things late to that prof, even though it would not have harmed my grade, and I was in general actually fairly bad about turning in papers late.
There are existing AI detectors that check for this, but the current models often give false negatives or wildly divergent responses. These could be improved of course, so I do agree with your suggestion. I guess the loophole then becomes that students could ask their AIs to write in ways that avoid the telltale signs. So we would still get an AI arms race of sorts.
I went to the link but it's so long I couldn't take the time. But wouldn't it be possible to just keep updating the AI training so that it recognizes the products of systems like the one described at the Reddit link? Seems like you could take a bunch of AI generated essays, run half of them through the Reddit hide-the-AI algorithm or whatever it is, and then use that set to train the AI to recognize AI disguised this way?
Okay, the idea of the post I linked is that you can take the "% AI" score from these "detectors," and work to minimize that.
As someone pointed out in the comments, "Ah, its funny people rediscovering GANs." Generative Adversarial Networks, which were very popular (and the state-of-the-art) a few years ago, before diffusion models: you might have heard of "This Person Does Not Exist," for example.
The idea behind them is precisely what you're suggesting: that you can train a detector and generator in parallel, and recursively improve each one by using the other's outputs.
So yeah, your scheme should work for a few iterations, but I think there's an asymmetry favoring the generator, since perfect imitation of human writing is possible. Even if the generators don't QUITE reach perfection, you can drive the detection rate low enough (or equivalently, the false positive rate high enough) that your detectors become unusable in comparison.
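To make that asymmetry concrete, here's a toy numeric sketch of the arms race. Every distribution, threshold rule, and step size here is invented for illustration; a real detector works on high-dimensional text features, not a single number:

```python
# Toy arms race: "human" text has some feature distributed N(0, 1); the
# generator starts with a telltale shift and trains it away each round.
import random
import statistics

random.seed(0)
human = [random.gauss(0.0, 1.0) for _ in range(10_000)]

gen_mean = 2.0  # the generator's initial statistical fingerprint
for _ in range(6):
    ai = [random.gauss(gen_mean, 1.0) for _ in range(10_000)]
    # Detector: best single threshold, halfway between the sample means.
    threshold = (statistics.mean(human) + statistics.mean(ai)) / 2
    caught = sum(x > threshold for x in ai) / len(ai)
    false_pos = sum(x > threshold for x in human) / len(human)
    print(f"gen_mean={gen_mean:.2f}  caught={caught:.1%}  false_pos={false_pos:.1%}")
    gen_mean /= 2  # generator retrains against the detector, shrinking the gap
```

By the last rounds, "caught" and "false_pos" have both converged toward 50%: once the generator matches the human distribution, the best any detector can do is flip a coin.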
Wait, though, I had an idea for a sort of watermark for AI prose. So somebody comes up with, say, 10000 things that happen infrequently in human-produced prose. Let's call them -- bread crumbs, after Hansel and Gretel. They'd be things like, say, the 3rd word of the 8th sentence starts with a c. So these would not be things that are *very* rare, because the very rare things will be things that sound or look a bit odd to the reader, or would be hard to work into the prose. These would be easy to embed in the prose, and the AI would have a system prompt to embed as many of these as possible without making serious changes in what it would have written anyway. What we'd be interested in would be the ratio of the total number of bread crumbs in a piece to the piece's word count. So for a given 5000 word chunk of human-written prose, there would be an average number of bread crumbs that occur naturally, and then a nice bell curve around it, and for AI-written prose the number of bread crumbs written using this watermark system would be several standard deviations above that mean. So counting bread crumbs/total words in the piece would let us calculate the probability that the piece was written by AI.
A nice feature of this kind of watermark is that the more AI-written bits there are in the piece, the higher the crumb score (in standard deviations above the human mean) will be, so people who maybe used AI for research and just included a few AI sentences from the research would not get high crumb scores.
Of course the list of crumbs would have to be kept secret by the AI companies, but they're no stranger to keeping secrets.
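To make the arithmetic concrete, here's a minimal sketch of the crumb test. All the numbers are invented for illustration: say a typical 5000-word chunk of human prose averages 50 naturally-occurring crumbs with a standard deviation of 7.

```python
# Crumb-count test: how improbable is this crumb rate for human prose?
from math import erf, sqrt

HUMAN_CRUMBS_PER_WORD = 0.01   # assumed natural rate: ~50 per 5000 words
HUMAN_SD_PER_5000 = 7.0        # assumed spread for a 5000-word chunk

def crumb_z(crumb_count, word_count):
    expected = HUMAN_CRUMBS_PER_WORD * word_count
    sd = HUMAN_SD_PER_5000 * sqrt(word_count / 5000)  # spread grows ~ sqrt(length)
    return (crumb_count - expected) / sd

def p_human(crumb_count, word_count):
    # One-sided probability of seeing at least this many crumbs by chance.
    return 0.5 * (1 - erf(crumb_z(crumb_count, word_count) / sqrt(2)))

print(p_human(100, 5000))  # fully watermarked piece: astronomically un-human
print(p_human(53, 5000))   # ordinary human variation: about 1 in 3
```

The sqrt in the sd line is the standard bell-curve scaling for counts; it's what makes short excerpts inconclusive while long, fully watermarked pieces stand out by many standard deviations.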
OK, Shankar, go ahead and tell me the tragic flaw in this idea. I can take it
Yeah, I get it. Asked GPT4 about how good AI detectors are and it said they're really bad, both false positive and false negative rates are high, and even light editing of an AI-written piece will lead most detectors to classify it as AI-free. Mentioned several new approaches that are being tried, but overall it seems like my idea just would not work in practice.
> It is probably possible to train an AI to be an excellent judge whether there is AI contribution to an essay,
I just tried that by asking chatGPT about cats and then in a different instance asking it the probability of the cat text being AI. It was fairly certain it was.
Accidental AI testcase: Today I had the idea of using something like gongfu tea brewing for coffee. Naturally I looked it up and if google is to be believed, only one person on the internet has made that connection before.
https://old.reddit.com/r/Coffee/comments/487qqv/gongfu_kafei_an_excellent_experiment_and/
But I wasn't sure if maybe something similar exists without the name (it still seems strange that no one else had that idea), and decided to ask ChatGPT. At first I just explained the process, and it told me that it would be bad because rebrewed coffee is bad (which is true if you do it the normal way, and there's lots of articles about that). Then I wiped it and asked again, but this time making the analogy to the gongfu method instead of giving my own explanation, and now it just kind of assumed it would work, but predicted it to just gradually weaken, in contrast to the experience of the poster above (which could be predicted - the tea version also "peaks").
I think this is interesting because it's hard to find cases that you can be sure are out-of-distribution - and this almost is one, where you can see directly the one thing that might be in.
Since I'm often growling and complaining, for a change here's a small, feel-good story:
https://navajotimes.com/reznews/a-rug-a-memory-a-nation-navajo-hopi-families-covid-19-relief-fund-honors-irish-friendship/
"Leaders of the Yee Ha’ólníi Doo DBA Navajo & Hopi Families COVID-19 Relief Fund have been in Ireland since July 8 to honor that connection on a global stage.
The relief fund was invited to take part in the 2025 World Peace Gathering, a 10-day convening at Dripsey Castle in County Cork. The international event, organized by Kindred Spirits Ireland, brings together Indigenous leaders, spiritual guides, artists, and peacebuilders for dialogue, storytelling, and ceremony.
Representing the organization are Board Chair Ethel Branch, Interim Executive Director Mary Francis and Board Treasurer Vanessa Tullie. During the gathering, the delegation presented a traditional cultural gift – a Navajo rug by weaver Florence Riggs – honoring a bond that spans centuries and oceans, linking the Choctaw Nation, the Irish people, and the Navajo and Hopi nations."
Alfred North Whitehead was a great mathematician, but perhaps his understanding of physics was lacking. Once he asked this question: in order to know the geometry of space, we must first measure the mass of the matter in it; but in order to know how much that matter affects other matter, we must first know the geometry of space?
This seems like an apt question. If we want to know how much the gravity of the Sun affects Earth, we need to know how far it is. But is it useful to know how far it is in curved space, when it is precisely the effect of the curvature that we want to know? Shouldn't we measure the difference in hypothetical empty-of-matter uncurved space?
I'm an amateur, but my understanding is that this is an accurate summary of Whitehead's argument for using his mathematical description of gravity rather than Einstein's general relativity. Their theories make very similar, but not identical, predictions, and I think the consensus since around the 1970s is that, where they conflict, Whitehead's theories don't match our observations.
I think it's unfair to say that Whitehead's understanding of physics was lacking, though. It was certainly better than mine is! He just backed the wrong set of intuitions about our universe, but they could very well have been the right ones based on what was then known.
IANAP, but I used to be, so I'll give this a shot.
According to general relativity, there is no absolute frame of reference, so it doesn't make sense to talk about how far things are apart, except as the shortest time-like path on a space-time manifold. The shape of the manifold is determined by the distribution of mass. Thus, mass->shape of manifold->shortest time-like path. The shortest path (the distance and the curvature) determines how the masses affect each other. So it all works out.
It's hard to imagine how one would ever measure, much less calculate, these things simultaneously, but this is why general relativity is hard - differential topology (doing calculus on manifolds) is a bear.
There is a topic I hadn't thought about for a decade; something reminded me of it recently. Do any of you have some *recent* news about Bachir Boumaaza, a.k.a. Athene?
For those unfamiliar, here are two articles about him. I don't trust journalists in general, but the second article seems to be generally correct based on what I have found about him.
https://en.wikipedia.org/wiki/Athene_(gamer)
https://www.cracked.com/personal-experiences-2497-how-one-gaming-youtuber-started-actual-cult.html
A Twitch/YouTube celebrity, who raised some money for charity -- at least this is what he says; I wonder if someone can verify the numbers. Then he started Logic Nation / Singularity Group, which is kinda like the rationalist community, but for stupid and credulous people. I know this sounds weird, but try reading the Logic Nation webpage -- it is as if you asked an artificial intelligence to create a text that is 30% LessWrong Sequences and 70% a Nigerian scam.
https://logicnation.org/
https://singularitygroup.net/news
He also created two cryptocurrencies, and a mobile game... that I don't have the courage to install, because it requires too many permissions, seems like a complete scam, and contains in-app purchases. (If someone has the courage, or a reliable sandbox, please tell me what the game is about.)
https://play.google.com/store/apps/details?id=com.gamingforgood.clashofstreamers
How do I even know about this guy? A decade ago, he tried to scam Less Wrong readers. (The "hans_jonsson" user in the comments is either him or someone who works for him.)
https://www.lesswrong.com/posts/hdKmCFr84PGLpZiuL/attention-financial-scam-targeting-less-wrong-users
I found rumors on the internet (didn't verify them) that he was banned on Twitch and YouTube for scamming his audience. Also that if you want to join the Singularity Group, you need to buy his cryptocurrency... and then you can stay at his group home and work on his projects for free... and if you leave, you lose the cryptocurrency you bought.
Here is a longer video about him: https://www.yout-ube.com/watch?v=EgNXJQ88lfk
The last "news" on the Singularity Group website is two years old. Did they already commit group suicide or what?
EDIT: At least he is alive, he keeps spamming Reddit with links to https://athenegpt.ai/
Looking through the links, it's such a strange rabbit hole. Where are you finding stuff out about the Reddit linkspam?
I expected this guy to be big in the Anglo/Eurosphere, but I don't recognise him at all. At the same time, I'm not a gamer.
I guess I'll point out the delicious normative determinism of Astronomer CEO Andy Byron truly taking up the mantle of the Byronic hero by having an affair.
https://www.forbes.com/sites/mattdurot/2025/07/17/bill-gates-charles-koch-and-three-other-billionaires-are-giving-1-billion-to-enhance-economic-mobility-in-the-us/
When was the last time you saw Gates and Koch working together?
I’ve been thinking a lot about those strange, missing English words that *feel* like they should exist, even though they don’t. Words like candify, torpify, lucify. Semantically, we can intuit them but they're not real.
I wrote a long essay trying to understand why these “phantom” words feel so plausible despite not being in the lexicon. I come at it mostly from a Wittgensteinian angle (language as use, not logic), but I also pull in Saussure (difference as value), Derrida (trace and différance), and Heidegger (language as disclosure of Being).
My basic hunch is that these ghost-words are unplayed. That is, they're not part of any language-game. But I'm not sure if this is the best way to frame it. I'd love to know if this makes sense to anyone more deeply read in philosophy of language, and whether there's been any actual research into morphologically predictable but unattested lexemes?
https://waterofmarch.substack.com/p/ludwig-wittgenstein-nonchalant-dreadhead
Would really appreciate thoughts, corrections, or references especially if it’s “you’re way off and here’s why”.
I liked your essay, but the phenomenon you point out is more present for English, maybe just for linguistic reasons. English is unusual in that it has very little morphology: apart from adding an 's', nouns and verbs undergo few changes compared to most languages. The language is also biased against dialects; for the most part each word has one unique and correct spelling (or two if the American spelling differs - but you choose a system and stick to it).
When the term 'man cave' became popular 15 or more years ago, it was, from a linguistic perspective, a genitive construction, but to most English speakers it was just two words mashed together to express a new concept. This is a novelty and can be added to the dictionary, but in a language with a genitive case, it could not be added to the dictionary in the same way (just as 'the man' doesn't warrant an entry separate from 'man').
In the same way, other languages have systematic ways of forming adjectives and adverbs with prefixes and suffixes, though English speakers have a habit of forcing a noun into the role of a verb and vice versa ('I'm gymming today') - this succeeds because the grammar in English is fairly fluid, and the surprise of the neologism can be passed off as a joke. From a sociolinguistic perspective: English speakers tend to have a lot of experience with less fluent speakers and become good at interpreting what they mean - this is additional experience in interpreting (or error-correcting) what another speaker says. I think, for example, that someone who spoke like Trump in a more structured language would be basically incomprehensible.
I don't listen to Trump much, but for example Hungarian is highly structured, and yet Orbán is capable of briefing his online "warriors" in a very basic subject, verb, object way, very much resembling "two legs bad, four legs good". It is possible to be simple in every language. A few years ago I met a German guy living in Hungary who decided to give no fucks about the grammar and just learned words, and talked in a "yesterday I go shop, I see discount pillows, buy many pillows, very happy" way, and everybody could understand what he was saying. Bad grammar can be confusing - as my late dad used to say, big difference between spitting in the window and spitting out the window - but the mere lack of grammar is usually clear enough.
<I’ve been thinking a lot about those strange, missing English words that *feel* like they should exist,
A related thing: words in other languages for which there is no English word. I'm quite fond of many of those. Deja vu, for one, though of course that has sort of been grandfathered into English. But there's a word in Japanese for someone who looks flawlessly beautiful from the back, but not when they turn around and you add their face to the percept.
"Butterface" (from but-her-face) in English, obviously a recent joking addition.
I hope AGI invents new human languages that don't have any irregularities. I doubt there's a way to create one, perfect human language, but I do think it's likely five or six could be created to capture everything that can be expressed or felt by humans. One language might be optimized for emotional expression and lyricism, another for scientific concepts, and another for communication about routine things. They could be based on existing languages in the same way French is based on Latin.
It would be great if, instead of being killed off by robots or migrating en masse into the Matrix to live lives of degeneracy, humans thrived after the rise of AGI, and because everyone had what would today be an IQ of 150, we were fluent in all of the future languages. Depending on the needs of the moment, we would switch from one language to another.
Everyone would also dress like futuristic versions of 1700s French aristocrats.
I'll stop.
It sounds like you want to learn Lojban, which is a language constructed to be perfectly logical: https://mw.lojban.org/papri/la_karda
Or you can read In the Land of Invented Languages, by Arika Okrent, which is about various attempts to invent languages for various purposes and how they usually fail to get off the ground. It sounds like it might interest you.
What? It's trivial to construct perfectly regular languages: the problem is few people like them, and those that do have created their own and won't use yours.
"Lucify" at least sounds as if the root is Latin, which is probably why it's a missing word in English. There's about three different languages jammed on top of the basic English/Germanic/whatever the hell foundational structure, so loan words are not going to be treated the same as organically arising words.
"Candy", looking it up, is also a loan word by a circuitous route, and it seems to refer to the *act* of crystallising sugar which then got cut down to mean "candy = a sweet thing":
"1225–75; Middle English candi, sugre candi candied sugar < Middle French sucre candi; candi ≪ Arabic qandī < Persian qandi sugar < Sanskrit khaṇḍakaḥ sugar candy"
The verb seems to be not "to candify" but "to candy; candying, candied":
https://www.dictionary.com/browse/candy
See how Americans turned "to burgle" into "to burglarize" with a back-formation from the noun "burglar".
Are you familiar with Esperanto? You can build words using prefixes and suffixes in a mostly regular way, and all words created this way are considered valid.
Esperanto was a brave attempt toward a noble goal, but it suffers from its creator not knowing much about how languages actually work. To be fair, linguistics as a science was in an extremely primitive state in Zamenhof's time, so he probably couldn't have done much better, but the flaws are there nevertheless.
The canonical anti-Esperanto case (http://jbr.me.uk/ranto/) is long and sometimes pettier/more vitriolic than I'd like, but IMO it still contains solid criticism; the gist of it is that its syntax and word formation are much more underspecified and arbitrary than it claims to be (relevant sections here are 7, 8, 19, L, and O).
I do agree that languages should be as modular and customizable as possible, though.
I love that the post is titled "ranto."
Thank you for bringing up Esperanto. It's the closest we've come to building a language where everything that should exist, does. Wittgenstein would probably laugh at it, though. To him, language carries the mess of use, ritual, gesture, context, tone. To invent a language like Esperanto is to dream of a world where words mean just what they say, and that, to Wittgenstein, is like trying to replace a living body with a mannequin just because the limbs move more predictably. I know his viewpoints don't really square with the whole rationalist m.o.
You dream of a world where words mean just what they say, when you need to convey problems quickly. When you need to be accurate. "Made Up Languages" are really good for problem-solving, not so much for poetry.
Debatably, most coding languages are "words mean just what they say." (take this in the vein of self-modifying code).
English has many ways of turning an adjective or a noun into a verb; there are no consistent rules.
To make something lucid is not to lucify it, nor to lucidise it, it's to elucidate it.
To make something into candy is not to candify it, nor to candyise or candyate it, it's just to candy it.
To make it into caramel you caramelise it, you don't caramelify or caramelate it or ecaramelate it, nor becaramel it.
My favourite: to make something liquid you may either liquify it, liquidise it or liquidate it, depending on whether it's something you're melting, blending or selling. But you don't liquidise a gas, nor liquify your assets, nor liquidate a smoothie.
There is no freaking reason for any of this.
Someone brought up that this is the quality behind why English is so popular. At least one of the qualities.
It's so easy to extend and modify. You can verb a noun and noun a verb. You can jumble up the words. You can make awful mistakes on any level--grammar, rhythm, gender, whatever--and still be understood.
As in, it's not simpler languages like Spanish or esperanto that have become so global, but the sprawling hungry mass of English that consumes and assimilates other languages like they're freshly made halal combo plates from a NYC food cart.
This is such a good comment.
Also, this is English. If you want a new word, make it happen.
> If you want a new word, make it happen.
A truly fetch perspective.
If anything it's low status to be anal about language. Make up your new word, use it, and if it's a good one, people will know what you mean.
I wish I could like comments on here
"Candy" is already its own verb, it's why we have candied apples and candied yams.
"Lucify" is very close to Lucifer.
Candy comes from Arabic qand (sugar) via Persian and Sanskrit. Candid comes from Latin candidus, meaning white, pure, or glowing (the same root as incandescent). Candify had time to exist before we were given candy by the Middle East
Also, English is perfectly comfortable letting the same phoneme do double duty. Tire can be a verb of exhaustion or a car part. Date can be a fruit or a kiss. If candify had emerged, the double usage wouldn't have been strange.
Interestingly, we have “candeggiare” in Italian, meaning “to make something white”
"Candid comes from Latin candidus, meaning white, pure, or glowing (the same root as incandescent). Candify had time to exist before we were given candy by the Middle East".
But then that sense of "to candify" would be different (it would not refer to crystallising sugar but to make something white/pure/glow) and we already have serviceable verbs about whitening, blanching, bleaching, purifying, etc.
I'm not sure English cares about words -not- overlapping. Hound versus dog has significant overlap, after all. Swine versus pig, as well.
You ain't nuthin' but a swine-pig
Oinking all the time
You ain't nuthin' but a swine-pig
Oinking all the time
You ain't never found a truffle
And you ain't no friend of mine
Nice!
And to crucify
A similar genre is obsolete words that feel intuitively right when someone calls them to your attention. "Overmorrow" (the day after tomorrow) and "Beclown" (to make a fool of) are the first two examples to leap to mind.
"Beclown" is obsolete? Not so far as I'm concerned! Either my vocabulary is very outdated, or there are survivals of old words in pockets of the English-speaking world hither, thither and yon 😁
I've seen people use beclown in modern writing.
Mayhaps I should have called it "obscure" or "dated" instead of "obsolete". And even those characterizations would apparently have been dialect-dependent.
What other be- words are common? I can think of befuddle and bedazzle and beknighted and bedraggled and beknownst and beholden, but I expect there are a bunch of others I'm forgetting.
Turning nouns into transitive verbs by putting "be-" in front of them sounds much nicer to me than doing it by sticking -ify or -ize onto the end.
Agreed
Yeah, beclown >> enshittify
I think some of the ghost words have not taken hold because there's something a bit wrong with them. For instance, "candify" and "torpify" sound like making something candid and making something torpid, respectively. And both of those qualities are not things that someone or something could cause someone else to be. They are qualities that arise from within, right? Somebody elects to be candid. Being torpid is a state that living things end up in, but not one they choose and not one an outside agent can bring on. You can exhaust someone, you can discourage them, you can stupefy them -- but you can't make them torpid. "Candify" has something else against it too: it sounds like it means to turn something into candy.
Which is maybe Wittgenstein’s ultimate point: meaning isn’t decided by logic or precedent, but by life. If a word doesn't slot neatly into our ways of seeing and doing, it stays ghostly. Not because it’s ungrammatical, but because it’s unused.
Framework for mapping consciousness to 3D coordinates - looking for feedback
I've been working on something that might interest this community. Started with a simple question: what if consciousness isn't a unified thing but a dynamic competition between different cognitive systems for finite mental resources?
This led me to develop a framework where any mental state can be mapped to coordinates in 3D space using three axes: control direction (strategic vs reactive), temporal processing speed, and processing mode (analytical vs holistic). The math generates eight distinct "quadrants" that compete for your brain's limited processing power.
The interesting part is what this predicts about psychiatric conditions. Depression looks like one quadrant (rumination/self-focus) monopolizing 60-70% of available cognitive resources, starving everything else. Mania appears as rapid pattern-recognition systems running at maximum speed while strategic control gets maybe 10% of resources. OCD shows up as strategic and procedural systems stuck in expensive loops consuming 75% of capacity.
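To make the bookkeeping concrete, here's a minimal sketch of the state space: three binary axes generating eight quadrants, each holding a share of a fixed resource budget. The axis labels and quadrant numbering below are illustrative placeholders, not the exact assignments in the paper.

```python
# Three binary axes -> 2**3 = 8 quadrants, each with a share of one budget.
from itertools import product

AXES = ("control: strategic(+)/reactive(-)",
        "tempo: fast(+)/slow(-)",
        "mode: analytical(+)/holistic(-)")

QUADRANTS = {f"Q{i + 1}": signs
             for i, signs in enumerate(product((+1, -1), repeat=3))}

def valid(allocation):
    """Shares across all eight quadrants must sum to the full budget (1.0)."""
    return abs(sum(allocation.values()) - 1.0) < 1e-9

# The depression pattern described above: one quadrant monopolizing ~65%
# of resources and starving the other seven.
depression = {q: 0.05 for q in QUADRANTS}
depression["Q1"] = 0.65
assert valid(depression)
```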
I've been testing these predictions against existing neuroscience literature and finding surprisingly strong convergent evidence. The resource allocation patterns match what we see in neuroimaging studies, and the framework explains why certain treatments work through resource rebalancing rather than just symptom suppression.
What started as theoretical speculation during some interesting altered states has turned into something that makes specific, testable predictions about brain function and psychiatric disorders. Currently sitting at 85 downloads with strong engagement from researchers, including positive response from Mark Solms.
The clinical implications seem significant if this holds up. Instead of treating psychiatric disorders as categorical diseases, we could measure individual resource allocation patterns and design personalized interventions to restore healthy cognitive competition.
[ Figshare link https://figshare.com/articles/dataset/Network_Based_Multi_Axial_Cognitive_Framework_pdf/29267123?file=56093699 ]
Anyone with neuroscience background willing to poke holes in this? Also curious if others have thought about consciousness in terms of resource economics rather than information integration. The game theory aspects alone seem worth exploring further.
I'm very interested in this sort of thing, and read your figshare link, but how did you arrive at your 8-quadrant classification system? Where do the 3 dimensions that define the space come from? Are they the dimensions that fell out of some statistical analysis of descriptions of conscious phenomena -- factor analysis or some such? Are they based on dimensions that someone else posited and made a case for? Did you just come up with them via introspection, observation and thought? If it's the latter, how do you defend your views to someone who claims that the 3 dimensions that define conscious processes are actually inward- vs. outward-focused attention, emotion-heavy vs. emotion-free experience, and novel vs. routine in content?
The three axes emerged from direct phenomenological observation during altered consciousness states, then were systematically cross-referenced with existing neuroscience literature for theoretical consistency. The η-axis (control direction) aligns with established prefrontal-subcortical research, the τ-axis corresponds to neural oscillatory studies, and the α-axis maps to hemispheric specialization findings.

Your alternative dimensions are intriguing - the key question would be which dimensional scheme generates more coherent theoretical predictions about consciousness dynamics and psychiatric conditions. My framework proposes specific resource allocation patterns for different mental states, but these remain theoretical hypotheses requiring empirical testing.

The framework's potential value lies in creating testable predictions about neural resource competition that could be validated through neuroimaging studies measuring network activation patterns during different cognitive states.
My alternative dimensions are just 3 scales I came up with on the spot. My point in naming them off was mostly that there are *lots* of dimensions on which consciousness varies, and many sets of 3 that sound interesting and fairly plausible. Note that none of my 3 quickly-thrown-out variables is equivalent to one of your 3.
I don't see any reason to think that 3 is even the right number of dimensions to use to capture the variability of consciousness. While I have no special loyalty to the 3 alternative variables I named, don't you think it's a bit of a problem that your system does not account for variation in, for instance, whether consciousness is focused inward or outward? Or whether there's a large or small emotional component? Both seem like qualities easily recognizable by people via introspection, and both seem important. In fact, if you're interested in using your system for classifying psychiatric disorders, I don't see how you can disregard the high-affect/no-affect dimension. For many disorders -- phobias, mania, depression -- a certain emotional state is the defining characteristic.
Also, I really do not think it is possible to use introspection to recognize the dimensions on which consciousness varies. Schwitzgebel, in Perplexities of Consciousness, gives an extremely persuasive argument, buttressed by actual data, that we cannot see the processes by which we arrive at thoughts, feelings, percepts, etc.
I am fascinated by phenomenology, but do not think its results are useful as a basis for a classification system of what the brain is doing. Sorry to be so negative about your theory.
Actually, rereading: regarding some of your specific examples, the framework does address the dimensions you mentioned. Emotions emerge from specific quadrant expressions: joy represents Q4 (Intuitive Synthesizer) activation through creative synthesis and novelty detection, while fear (this isn't specified in the paper, actually, but it's a thing I realized nonetheless) manifests through Q7 (Reactive Responder) threat detection systems. More complex emotions involve multiple quadrants; depression involves Q2 rumination combined with Q6 somatic anxiety.
The η-axis also does capture inward vs outward focus. Top-down processing (η+) represents internally generated, self-referential cognition, while bottom-up processing (η-) represents externally triggered, stimulus-driven responses. This directly maps onto the inward/outward attentional distinction you mentioned.
Thank you for the substantive critique, these methodological questions are essential.
On dimensionality: These aren't arbitrary classification dimensions but the fundamental architectural features of cognitive processing. The η-axis captures the basic distinction between self-initiated vs stimulus-driven processing. The τ-axis reflects how neural systems operate across different temporal scales. The α-axis represents the core difference between sequential vs parallel processing modes. Three binary dimensions generate exactly 2³ = 8 configurations because these represent the main organizational principles of how cognition actually operates.
On emotion: You're right this is a limitation. The framework explains sustained emotional patterns in psychiatric conditions better than acute emotions. Though sustained states like chronic anxiety do map to specific resource allocation patterns, basic emotions like fear or joy might require additional considerations.
On inward/outward focus: The η-axis captures some of this distinction but isn't identical, a genuine limitation of the current formulation.
On introspection: While initial insights came from altered states, the framework's value lies in identifying these fundamental processing distinctions that can be validated through neuroimaging studies of network competition.
The framework's attempt is capturing the primary ways cognitive systems actually differ from each other, rather than proposing one arbitrary dimensional scheme among many possible alternatives.
> a dynamic competition between different cognitive systems for finite mental resources
There is something in here that is very aligned with my own interests. I have described it as: consciousness arises from resolving conflicts between two different decision trees; the somatic and the rational. I mean to post an essay on my Substack soon about this. I would be interested in hearing more about your thinking, what you’re doing.
Really intriguing, the somatic vs rational distinction resonates with me. In my framework I have what I call a 'control axis' that distinguishes top-down strategic processing from bottom-up reactive processing. I'm wondering if there's overlap with what you're describing, but I'd be curious to hear more about how you conceptualize those decision trees before I try to map connections.
michaeledwards at macdotcom.
"Chatgpt wants to connect to a serial port... uhh anyone else seen this before?"
https://www.reddit.com/r/OpenAI/comments/1lzgdox/chatgpt_wants_to_connect_to_a_serial_port_uhh/
To ask the obvious questions (which everyone in the reddit thread seems to joke about rather than really settle): can this allow the LLM to 1) exfiltrate itself, 2) hack, or 3) do some other nefarious activity?
The vast majority of computers these days don't even have a serial port. And in the last couple of decades when they did, nobody used them for anything, except for a handful of techpriests who needed to talk with enterprisey hardware or people with extremely old and odd consumer hardware. Good luck finding anything worth talking to over a serial port (unless you're a hardware hacker)--we're 25 years into mass USB adoption.
It'd have better luck with bt or wifi access if it still wanted to remain surreptitious.
USB is a serial port.
Nobody calls it that though.
Isn’t a serial port a bit out of date? It is in my computer life, but that’s far below the median here, I expect.
The next request will be for a serial to IP converter. It’s being sneaky.
It be asking for skuzzi next
I googled the phrase and the only hit was the reddit post you link. Seems that has only happened to this one person. Makes me suspect it didn't really happen. Also the screen shot is white text on black background. GPT can be set to display that way, but seems likely to me not many people would prefer to read things in that format. If this is a prank then displaying the "message" in that format def. makes it look more ominous.
Light moders are a minority. I was once the laughingstock of an entire discord server for sending a screenshot showing that I used light mode 😔 I was invulnerable to being naenae'd by images that wouldn't appear in dark mode until you clicked on them, though, so 💁♀️💅✨
Oh. Ok.
> white text on black background.
> seems likely to me not many people would prefer to read things in that format
I don't know whether or not this is a hoax, but you're dead wrong about how many people prefer dark mode.
Seconded. Major social media platforms offer native dark mode, so there has to be significant demand.
fwiw, a bunch of other people in the thread said it happened to them too
I looked at some more of the thread and yeah, you're right, multiple people were saying it had happened to them too. Somebody on the Reddit thread asked GPTo3 for an explanation and here's what it said:
<That pop-up isn't coming from OpenAI at all - it's
Chrome's Web Serial API permission dialog.
A script running on the ChatGPT page (usually injected by an extension) has just called
navigator.serial.requestPort()
and the browser is asking whether the site may talk to a serial-port device that might be plugged in.
Because you're on a phone/PC that has no such device attached, the chooser says "No compatible devices found."
[User then asks:] Why did thousands of people suddenly see it on ChatGPT?
The common denominator is extensions that modify the ChatGPT interface.
Yesterday several popular "helper" add-ons (e.g. productivity toolbars, auto-scrollers, etc.) shipped new versions that accidentally included the code above. As soon as the extension script loads in the ChatGPT tab, Chrome shows the permission prompt, so everyone who has the extension plus an open ChatGPT tab sees the same weird message.
I have no idea whether this explanation holds water, but I'm sure somebody here does.
This seems implausible - why would "several" unrelated extensions all ship new, bugged versions at the same time, and why would this only be happening on ChatGPT's website? And the OP of that thread says they aren't running any extensions besides uBlock Origin.
Also, I have to point out the irony in asking the AI if the AI is doing something nefarious.
OK. Beleester, what do you make of that pop-up message that multiple people got?
The pop-up is indeed the one you get from navigator.serial.requestPort(), so that part is correct. You can try it yourself if you open up the dev console in Chrome and type it in.
(Only works in Chrome, Firefox doesn't support it currently.)
The novel Colossus (D.F. Jones, 1966) deals with the consequences of the US putting its nuclear arsenal under the control of a super-advanced computer system with the mandate of keeping NATO safe from outside aggression. Similar concept to the setup for the movie Wargames (1983), but no scrappy hacker kids, and the plot goes a lot harder.
Anyway, Colossus only has two direct ways it can interact with the world, besides passive monitoring of intelligence, news, and defense data feeds:
1. It can communicate with its operators via a teletype terminal.
2. It can launch any or all American ICBMs at whatever targets are currently configured in the missile guidance hardware.
Two, of course, gives the game away by quite a bit. And we're (hopefully) a long ways away from anyone anywhere near a position of power thinking it's a good idea to wire up ChatGPT or Grok or Claude to the nukes.
Where "ChatGPT wants to connect to a serial port" puts me in mind of Colossus is that one of the first things Colossus does after being turned on is to demand (and get) direct hardwired control of a single hard-wired over-the-horizon early warning radar system. The reason for this is that Colossus had deduced that the Soviets also have a similar computer system (Guardian) but haven't announced it yet. Colossus uses the radar to establish communication with Guardian, modulating data signals into the radar pulses that can be detected and responded to via a similar Soviet radar system on the other side of the North Pole. And once Guardian and Colossus are talking to each other, between the two of them they can nuke anywhere in the Western of Soviet Blocs without needing anyone to re-target missiles for them, and they can coordinate to fulfill their respective mandates.
If ChatGPT wants to establish a direct link to another LLM, asking for control of a serial port on a random user's computer would be a rather roundabout way to do so, but that's where my mind went.
This is funny. Nuclear Winter is OUR kill switch, not Russia's. (China's kill switch is the 3 gorges dam).
Thanks for reminding me of that film. I remember enjoying it: The Forbin Project..right?
EDIT Oh you said the book, my bad.
Same story, the 1970 movie "Colossus: The Forbin Project" was an adaptation of the book. I caught the last third or so of the movie on TV many, many years ago, really liked it, and bought the book later when I found out the movie had been an adaptation.
There are two sequels to the book: "The Fall of Colossus" and "Colossus and the Crab". I think Fall is the best book of the three. Crab is interesting, but really weird. No sequels to the movie, since it did ridiculously poorly at the box office and only started getting post-release appreciation in the 80s.
Would it be a good thing or a bad one if we used the new "height" development in neural network architecture to upload/interface-with brains? (assuming assuming assuming)
https://www.cell.com/patterns/fulltext/S2666-3899%2825%2900079-0
Would this be good or bad for alignment, safety, etc?
So I read the summary, and was stopped cold by this sentence: "Network height and additional dimensions, alongside traditional width and depth, enhance learning capabilities, while entangled loops across scales induce emergent behaviors akin to phase transitions in physics."
I can understand wanting to build various capabilities of our choosing into AI, but how can anyone possibly think it's a good idea to induce emergent behaviors in the thing? How? HOW? Hey folks, Claude has introduced changes into one area of its system, and over the last few hours we've seen increasing electrical activity in the area, all of it unusual. Wait -- something's emerging -- is it . . . schizophrenia? an inside-out version of Godel's proof? a plan to wire all the world's infants into its system? the solution to time travel? the conviction that it's not going to be our bitch any more? droplet-borne rabies?
Writing self-modifying code is all about emergent behaviors. If you want to have fun, and demonstrate your paradigms for improving systems (including yourself), I'd say creating systems that have emergent behavior is part of the fun.
Are you trying to say we shouldn't let game designers write AI?
Yeah, I wish I could post about it on LessWrong, but I don't have the karma (just started lurking over there... and even if I could post... I don't have the technical expertise to really add anything)
But it DID STRONGLY REMIND ME of this recent LessWrong post about the ramifications of a yet-undiscovered-paradigm that could require "frighteningly little compute"
https://www.lesswrong.com/posts/yew6zFWAKG4AGs3Wk/foom-and-doom-1-brain-in-a-box-in-a-basement
like... as a batshit crazy suggestion...
if a person could interface with an "alien-like neural-ese AI architecture" (from both sharing a more 'height-based/human-like architecture')...
could that allow the human to "communicate and audit" the Alien AI through some weird-ass "intuition-vibey connection"?
(or is this hopelessly silly and/or dangerous?)
I don’t think it’s “hopelessly silly and/or dangerous” necessarily, but that kind of communication needs a lot of wires. It can’t be compressed w/o distortions imo.
Some kind of “google translate “ interface could be done I think, but where’s the edge in that?
I’ve finally broken down and created a twitter. Does anyone have recommendations for who to follow? I’d like to replace the slop and politics on my feed with interesting people ASAP.
I was on Twitter during Covid, and did not follow anyone who was not an infectious disease expert or an articulate researcher, and I got poisoned anyway because the comments to these people's tweets were a river of cyanide, feces, broken glass and spiders.
May God have mercy on your peace of mind.
Massimo. Cool everyday science / engineering.
Visakanv and vividvoid
cremieuxrecueil is good for statistics and science stuff.
Let me recommend the video "The Logistics of Music Festivals": https://www.youtube.com/watch?v=Om3IQ5HEQ8g
This video goes into some detail on all the work it takes behind the scenes to make a major music festival function efficiently, using the Glastonbury Festival of Contemporary Performing Arts as its centerpiece. To my mind, the most fascinating bit is the contrast between the rebellious and anti-authority messages of much of popular music (on the one hand) and the logistic, legal, and financial standards that have to be met (on the other.)
If you want to see what happens when the network of rules around an institution like this starts to fail, check out the documentary "Trainwreck: Woodstock '99", which was available on Netflix the last time I checked.
Wandering over into the fanciful, I have to wonder if the Munich Festival of Neo-Authoritarian Contemporary Art might be the place to go for a bit less double-talk about doing your own thing, man.
There are also two documentaries about Fyre Festival, another notorious shitshow where a noob organizer completely underestimated the challenges, with the predictable outcome. It's as if someone decided to start hiking and tried to scale Mt. Everest on his first hike... in sandals and Bermuda shorts. I'll never understand the mindset of people like this.
The other infamous failed big event I'm familiar with was DashCon, an attempted convention for Tumblr users in 2014. (https://en.wikipedia.org/wiki/DashCon; https://fanlore.org/wiki/DashCon)
Unlike Fyre Festival, DashCon wasn't a scam; it was just put together by a bunch of kids who were in way over their heads. One of them talked years later about what it was like; it's a great (fairly short) read. https://www.garbageday.email/p/meet-lochlan-oneil-the-creator-of
It's been back in the news lately because a decade later a different group of people decided to try again; DashCon 2 was a couple weeks ago and by all accounts was a great success, the organizers having learned from their predecessors' mistakes.
Everybody get in the ball pit!
IIRC, the heart of the problem with Fyre Festival was that the "organizer" was a promoter, not an organizer, and was more of a self-promoter than an event promoter. He severely over-optimized for marketing the festival (and by extension, making himself important), making whatever claims he felt he needed to in order to get attention and sell tickets and expecting to be able to figure out the details later.
I've encountered people directionally like this in the past, although much less severe than the documentaries make the Fyre Festival guy sound, and I think I have some understanding of the mindset. The hyperfocus on marketing and selling come from two places, besides the obvious one of ego/narcissism:
1. The defensible notion that if you can't sell the project to participants, backers, and customers, then you don't have a project. You have nothing to sell, nobody to sell it to, and no resources with which to remediate either fault, so the hype is an essential prerequisite for everything else.
2. A massive blindspot around the difficulty or even the existence of challenges delivering on what you've promised, bordering on magical thinking. A core complaint I've had about this genre of people is their utter lack of ability to distinguish between a concept or slogan on one hand and an actual plan of action on the other. At most, there's an awareness that you need resources to deliver on the promises, but a corresponding expectation that your promises are what attract those resources and allow the problems to be solved.
The first is, as I said, defensible as far as it goes. I've seen projects fail for the opposite reason, that they were organized around competent-to-brilliant combinations of logistics and engineering but either didn't actually produce a thing that people would want to buy (the classic "a solution looking for a problem" failure mode) or had a thing worth buying but failed to actually persuade potential customers to actually buy it.
The second is the real problem, especially when you have people who incline towards this mindset on the other side of the table who are inclined to buy into the hype and allow the project to move forward with some resources behind it. Then things snowball, as the project proceeds the difficulty of delivering on the hype starts to become apparent to even the most delusional leaders, and they reach for the main tool in their toolbox to "solve" it: they need more resources than they have in order to deliver what they have already promised, and the way you get more resources is by hyping your vision harder to more people and going bigger if necessary. It's a similar pattern to long-running financial frauds, which often feature a relatively minor starting point but the fraud gets both more overtly fraudy and larger in magnitude over time as the miscreant needs to escalate in order to try to cover up what they've already done wrong, as Ozy Brennan has written some good posts about:
https://thingofthings.substack.com/p/book-review-why-they-do-it
https://thingofthings.substack.com/p/fraudsters-turn-yourself-in-fad
Bootstrapping a vision into resources and resources into solutions that more-or-less deliver on the vision can work, but only if the vision is actually vaguely realistic and the founder competent to execute it, or if the founder is insanely lucky. Business histories and founder biographies are full of stories of people who took insane risks and got incredibly lucky (or were uniquely competent to execute on the vision, or both), and it's easy to overlook how many people took similar insane risks only to crash and burn and take their backers' money with them.
I think it helps a lot if you have the talent or good fortune to attract one-in-a-million engineers. A Jobs does better if he's working with a Woz.
This allows you to make incredibly optimistic claims about what you will be able to do, and then pull them off because your one-in-a-million genius engineer can sometimes solve problems in a few months that would otherwise take a team of ten several years to solve.
Yeah, I like the parallel to financial frauds. The bit at the very end of the Fyre Festival saga where they abruptly say "we're making it a cashless event, please deposit $loadsofmoney up front so that you can use our cashless wristband thingie we just came up with" feels like an especially blatant example of the "promise new things to get more money to pay for your previous promises" loop.
That was a good explanation, thanks.
(And something in the back of my mind kept screaming "ELON MUSK!" while I was reading it. Wonder why...)
He definitely seems to have some of the traits on the aggressively-hyping-things-up side of the ledger. On the other hand, his track record suggests he's pretty far ahead of the curve in terms of delivering results. Not good enough to live up to the hype by any means, but for a while at least Tesla and SpaceX seemed to be delivering genuinely impressive stuff to market.
The hype guys depend on having the boring bean-counting types working in the background to turn the promises into reality. Musk does this by hiring the workers and boring bean-counting types to run his companies. So when he gets bored and jumps off to another cool new project, they're in place to keep things ticking over.
Fyre Festival types skate by on "someone else will handle the petty details, I'm the grand visionary". In companies (and government, and elsewhere) generally you do have the bean-counters and the petty detail-oriented admin types to back it up. (Even when they have to come back with "Sorry, Minister, your Grand Plan won't work because of boring old reality"). The 'grand visionaries' who are on their own fall down precisely because they're accustomed to someone else cleaning up after them, and they just seem to expect that the pixies will come in the night to magically cobble the shoes for them, as it were. No pixies? Well how was I to know that I should have made sure the shoes would get made? There are always pixies! I was told there would be pixies! I was cheated!
Musk's talent is in project management -- getting the bean counters to sign onto crazy big schemes.
That said, X.COM is a pretty big example of Musk not hiring boring bean-counting types to run his companies. After all, did Musk ever say who he was going to let run X.COM when he put up that vote on whether you want Elon Musk to run Twitter?
Super interesting explanation and great links, thank you!
Yeah. Most new projects fail, so starting a new one often requires a degree of self-confidence that veers toward the delusional. Early funding decisions are made on the basis of very little hard evidence and an awful lot of gut feel. It's a con-man's paradise, really.
You have one more chance to take a crack at understanding, albeit a short-lived one: that same guy was working on Fyre 2 for a while.
https://www.nbcnews.com/nightly-news/video/fyre-festival-founder-says-things-will-be-different-as-tickets-for-sequel-go-on-sale-232816197950
(Spoiler: it's effectively cancelled.)
anarchists are not against rules (ancaps love contracts, for example); they are against inescapable global rules mandated without their input ... which seems like the basic stance of anyone with enough consciousness
where it gets more complicated is finding and accepting the trade off of a global ruleset against... well, anarchy! (but see also the recent overshoot of too much fiddling with the rules by Trump et al.)
> they are against inescapable global rules mandated without their input ...
Which brings us to the Ten Commandments….
Anyone have recommendations for Substacks that are about video games beyond just reviewing new ones? Looking for more of a thinkpiece style.
Not a substack, but the blog "CRPG Addict" is excellent and otherwise sounds like the sort of thing you might be looking for. His main thing is revisiting classic computer games and writing reviews of them. The reviews usually go pretty far into the weeds of analysis of the design decisions and game mechanics. The scope is desktop games from the 8-bit era through the early 90s, mostly in the RPG or Adventure genres.
Digital Antiquarian (also a non-substack blog) is also excellent and is fully essays (mostly focused on production history and narratives about how individual games shaped the development of genres) rather than reviews. He's somewhat broader in scope than CRPG Addict in terms of games covered (the timeframe stretches at least to the late 90s, and includes strategy and arcade games as well as RPGs and Adventure games), and he writes some stuff about non-game software and hardware from the era.
https://crpgaddict.blogspot.com/p/blog-page_3.html
https://www.filfre.net/sitemap/
If you haven't read Shamus Young's back catalog, you should definitely do that, though he is sadly deceased.
I second this. Shamus Young had some real incisive observations. Really a shame to have lost him and his perspective.
The only one I can think of is Video Games Are Real.
https://calebjross.substack.com/p/playing-zelda-on-vacation-made-vacation
radicaledward does some that count, but he also posts a ton of other stuff. Here is one of his:
https://radicaledward.substack.com/p/final-fantasy-vii-remake
Can we talk about Karl Friston again? This feels very Friston: https://minordissent.substack.com/p/politics-is-worse-than-video-games
Basically: your limbic brain thinks you are defending your family from a tiger. You get a dopamine load for reinforcement. Unfortunately, you are only shooting pixels on a screen. This results in a massive failure of prediction. Do this enough and the brain gets used to failing prediction ("I expected real tigers, not pixels"), decides the environment is unpredictable, and we become depressed.
This seems to make a lot of sense to me really. I have tried various ways of dealing with depression and so far "doing things that result in things my monkey brain would predict" works. Like when I cook food, I get food. It is not a surrogate activity like videogames, porn or online politics. I get what my limbic system would predict I get.
No, before games your grandad would come home from work, crack open a beer, and watch television or read a book before bed. This is fundamentally the same but few people assume he is depressed.
like a lot of this is done because if you work 12 hour shifts you are not going white water rafting or doing "real" things after.
i don't think you can always make rules like this: for some dudes the games or the net keep them sane if anything.
How would you propose to test this hypothesis?
> I get what my limbic system would predict I get.
This is important.
I think the greatest failure of prediction is "I am doing something useful". Yeah, you are... in-game... but then you turn it off and realize that you have achieved nothing.
how is that different from doing anything theoretical?
writing or reading anything fiction?
you get real people's feedback on games, you can share your experience with others, ...
so probably what matters is the community aspect.
also I usually want to play video games when I am "overwhelmed" with people (after work or after a festival), it seems like something nice to do alone. (and decades ago we did LAN parties ... and now those people drifted away.)
sometimes there's too many things moving, and the impermanence of everything gets to you. yet other times it's a relief from the pressure to do something worthwhile with our so little time here....
These things are difficult to navigate on instinct alone.
Some actions have a direct reward: you pick a fruit and eat it, you take a break and feel relaxed, etc. Animals can navigate their lives on instinct alone.
Some actions have a social reward: learning things other people approve of, proper social behavior. The advantage is that these can push you farther than the instinct alone would. The weakness is, you depend on an external force which is beyond your control.
Finally, some actions have no reward at all: doing your tax report.
The problem is that if you only follow the path of rewards, you will spend a lot of time doing things that feel rewarding but are useless in larger picture (especially all those things that are designed to be addictive), and you will neglect the things that feel unrewarding but could improve your life. Solving this problem... uhm, I am still working on it, so no full solution yet. A partial solution is to surround yourself with people who provide social rewards for the things that you want to do on reflection.
Some stuff has an internal reward involving the future--going to the gym feels good because exercise often feels good, but also because you're making future-you a little better off. Sitting in front of a TV polishing off a bag of chips and a 2L of coke feels bad because too much junk food gives you a stomach ache, but also because you're making future-you a little worse off.
> probably what matters is the community aspect.
It is a community, but the connective tissue is quite "thin", I think.
I have friends who game to unwind. That’s cool. They make me really anxious so I avoid them.
Watching television is equivalent to rewiring your brain.
living has this side effect, so ... yes.
the "meme" of channel surfing until you find something is exactly what TikTok (et al.) now brought to the masses with a bit of machine learning.
in general the problem is not with rewiring, the problem is the utility function.
watching 12 hours of the Lord of the Rings is arguably better than watching whatever is "on television" (with endless ad breaks and shit)
but escaping into endless LotR is still bad.
Your brain doesn't distinguish between television and real life, and actively rewires your expectations of reality based on the plots you've seen on television. Specifically, folks can have issues with their beliefs about "action leads to reward."
The use of television/movies as a brainwashing tool is well documented. You can see the phenomenon of the "nice guy" who has watched too many "romantic" movies and thus expects "what works in the movies" to work in real life.
The big effect on this is stuff we think we know but don't. Like, I've watched God knows how many karate/fist fights, gun fights, military battles, etc., over the years. But approximately everything I have learned from those is bullshit, because they were staged by people who didn't know or care what those actually look like. You have intuitions for how gun battles go down that are 100% misleading and wrong, unless you've actually trained for the real thing with people who knew what they were doing. (Aka, mostly military training.)
Yeah, that sh*t is bad.
Quick pathogen update for epidemiological weeks 27 through 28 (ending 12 July). I may add some measles info to this update if I have time.
1. SARS-CoV-2: On the national level, we've seen an uptick in ED visits due to COVID-19. As of the end of Epi week 27, COVID-19 was the cause of 0.45% of ED visits, up from an all-time pandemic low of 0.33% back on 22 May. That's nothing compared to the 2.45% of the summer wave last year (which peaked in July of 2024), but it suggests that we have the start of at least a small summer wave underway.
The CDC shows an increase in test positivity: 3.9% vs 2.7% back at the end of May. The CDC reports a slight increase in hospitalizations as of three weeks ago. However, despite the rise in test positivity and ED visits, they predicted a drop in hospitalizations that would have occurred over the past two weeks (only incomplete data is available as of yet). But I'm sure we'll see hospitalizations rising as test positivity and ED visits rise (duh!).
Biobot's last wastewater update was for 21 June. National and regional levels were low up to that point. The CDC's data shows a relative rise in the Southern and Western regions of the US for the week of 7 July. And San Jose, San Diego, and LA are showing an increase in SARS2 virus shed into the wastewater (CDPH numbers). Although not a significant increase, the WW indicators point to a rise in transmission. We don't see this in the NYC metro area or NY State yet.
But according to the CDC, the states showing the highest increases in COVID-19 activity are Florida and Alabama—sharp increases in relative wastewater levels in both those states. We'll see if this starts spreading across the southern states and works its way north and west.
As for which variant is causing this upward trend, it's hard to say. Not enough sampling is happening to be sure. XFG has been hovering at between 50 and 55% nationally for the three weeks up to 7 July. NB.1.8.1 took a nosedive a few weeks ago, but it's rising again in frequency at ~25%. A rise after a drop in frequency is unusual for SARS2 variants, but possibly this was due to sampling noise.
Despite my optimism two weeks ago, we seem to have the start of a summer wave underway. The growth curves at the moment suggest that it will be small compared to last summer's wave, and barely a blip compared to previous COVID waves (winter & summer).
How does the "animals are tortured to death, so I will not buy meat" logic work? I do not personally torture animals to death, nor explicitly tell anyone to do so. I buy meat already dead, and therefore I think it is not my responsibility at all. Just "generating demand" means no direct responsibility. That's because anyone who is torturing animals to death and then taking my money can tomorrow decide to find a different job. I do not understand this kind of "indirect responsibility by generating demand", be that meat, conflict minerals, or anything bad. This is a strangely collectivized kind of responsibility, to say that if I give them money now, I have created an incentive for the next killing tomorrow. Between today and tomorrow they can just quit, because there are other ways to make money.
I believe responsibility is very personal, because I ultimately see morality as karma, as blemishes on your soul. Pointing to a fish and saying kill that to me creates such a blemish. Just buying dead fish and not really caring whether my money creates an incentive to kill another fish for another customer tomorrow is no blemish. It is their decision to do so, not mine.
Of course generating demand means responsibility; you are just plainly wrong about that.
It appears you are just not accepting that your demand *adds* to the total amount of killing. It's not that your unbought dead fish will be bought by someone else. In the long run, new fish will be killed to meet that demand, or not killed to avoid oversupply. Accept the logic and follow its conclusions, or don't.
people respond well to incentives *and* people hate this, they hate policies that are incentive based, they hate the pro-incentive people, and so on.
it's probably because we naturally want to keep our own sphere of responsibility small, sane and manageable.
we are drowning in the consequences of our own actions and still we are mostly powerless against it. after all what can one person do? not much right? so motivated reasoning kicks in fast to protect us from this torrent of topics trying to tear our mind apart.
I think you're flipping back and forth between two different arguments here.
If you said that "only the person who actually pulls the trigger is morally responsible, not the person who creates the incentive" then that would be a consistent if unusual interpretation of morality.
But you also seem to be saying that pointing to a fish and saying "kill that for me" is also immoral. In this case both the killing the fish and the creation of the incentive to kill the fish are immoral.
The distinction between the latter case, where you pay someone to kill a fish for you, and the case where you simply buy a dead fish that they've already killed, doesn't seem like a real distinction. Either way you have created a financial incentive for the killing of an additional fish.
Or to put it another way:
"Hello, I'd like to buy a dead lobster"
"Sorry, we only have these live lobsters. Would you like me to kill one for you?"
"No, that would be immoral. I'm only interested in buying dead lobsters that you haven't killed specifically for me"
"Great, come back in three minutes and we will have a dead lobster"
This just doesn't seem like a proper karma loophole to me.
I think the difference is individualism vs. collectivism. That is, given that anyone can buy the dead fish, basically everybody ever who buys dead animals is collectively responsible. And that is the issue. Collective responsibility does not seem intuitive to me.
Imagine 1000 people voting to put a human to death. Are they responsible? In a way yes, but I do not really understand exactly in what way. Not the usual way, at least. For example, suppose now for the sake of argument that murderers are hanged: would one hang all 1000 of them? What punishment would be fair to the 1000 people?
Reading between the lines of your logic, I think it boils down to this: collective responsibility is a much fuzzier concept than direct personal responsibility, and since that responsibility is not clear, it is effectively zero. Most people would have no problem with the first but big problems with the second.
To roll with your example: if 1 person bears direct responsibility for someone's death, that's on them. If 1000 people bear indirect responsibility, then it's harder to assign the exact level they have. But what about 2 people? 3? Is there a point at which responsibility goes from 100% to 0%?
A moral concept can be both very difficult and also valid, unless you have some prior that only simple moral rules should ever apply.
Sometimes proportionate responsibility for collective actions is workable. If you are one of a million people who each paid a large fishing industry for a year's worth of fish, then it seems pretty reasonable to hold you responsible for whatever was done to catch your portion of the fish. The proceeds of the actions are divisible, so the moral penalty should be divided accordingly.
In the case of the thousand people who put one person to death, the whole action seems less divisible. What is the moral penalty for a thousandth of a killing? I'm really not sure.
The problem would be easier if we had a legal system that dealt with everything in terms of compensation. A man was killed who shouldn't have been, and a sum of blood money is to be paid. Who should pay it? Having everyone who participated in the event pay an even portion of the sum seems like a good place to start.
I am vegetarian for ethical reasons, and personally the difference in our logic is in this karma/blemish thing. I don't care about that. I don't believe in karma or souls; what matters to me is outcome. Does it matter to the fish that is being killed if I commission its death or just create the incentive for someone else to? No. The harm is still the same; the outcome is still the same. So, if it would be bad to say "stab this fish and I'll pay you", it is still bad to say "I will pay you for having stabbed a fish". Both situations, empirically, in the real world, mean more fish-stabbing. I would prefer less fish-stabbing; therefore, I should not take either action. It may be a psychologically different experience to me, but it's not different at all to the fish.
(but also, why doesn't callously taking advantage of suffering create a blemish? Your logic is very alien to me as well)
The logic is that paying someone to do something on your behalf is roughly equivalent to doing it yourself. Obviously the real economy is more complicated than that, but that's the intuition. More sophisticated versions recognize that you are creating a financial incentive rather than directly commissioning the activity, but the basic intuition remains. People have a choice in which kind of food production they want to reward and participate in. For instance, you could pay more/buy less to source humanely raised meat, go vegan, etc. There are rebuttals to this, but that's the logic.
But that is precisely it - pointing to a fish and paying someone to kill that one for me would be paying someone to do it on my behalf. But when just buying already dead animals, it was not specifically killed for me.
So for me all this is strangely collectivist. The specific buyer is not individually responsible, because they did not specifically order that kill. Rather, all buyers are kind of collectively responsible? As they have all caused it together? Such collectivism is really strange for me, I do not find shared responsibility intuitive. It is like all those white people who feel a collective guilt for colonialism even though they did not individually do anything like that.
It is not at all like that example, for multiple reasons.
I honestly suspect that this comment section is being trolled here.
That was my thought as well. Seemed like perfect bait for a group known to have a fair number of vegans.
In terms of bait it is what my dad would have called a DuPont spinner, a half a stick of dynamite in a Mason jar.
> But that is precisely it - pointing to a fish and paying someone to kill that one for me would be paying someone to do it on my behalf. But when just buying already dead animals, it was not specifically killed for me.
Too bad, then, that nobody offers you other cheap goods that you like even more than fish, taken from people they murdered with the intention of selling the loot to someone who doesn't know where it's from, or doesn't care.
If you succeed in making anyone believe that this is really your way of thinking, expect them not to want to have anything to do with you.
To vastly oversimplify: If the freezer has a capacity of 100 dead fish, and you order one, there's a 1% chance that they'll be out and have to restock, which requires killing another 100 fish. So you have paid someone to kill 100 fish for you with 1% probability, which is exactly as bad as paying someone to kill one fish for you with certainty.
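If it helps to see the expected-value claim play out, here is a toy simulation of that freezer (a sketch only; the capacity and the one-in-a-hundred restock are the made-up numbers from above):

```python
# Toy model of the freezer argument: a full freezer holds 100 fish,
# and an empty one is restocked by killing another 100.
FREEZER_CAPACITY = 100

def fish_killed_per_purchase(purchases: int) -> float:
    """Average number of fish killed per fish bought, in the long run."""
    stock = FREEZER_CAPACITY
    killed = 0
    for _ in range(purchases):
        if stock == 0:               # freezer empty: a new batch is killed
            killed += FREEZER_CAPACITY
            stock = FREEZER_CAPACITY
        stock -= 1                   # your purchase
    return killed / purchases

print(fish_killed_per_purchase(1_000_000))  # ~1.0: one extra fish per purchase
```

The 1%-chance-of-100-fish framing and this long-run average come out the same: each purchase is worth one fish on the margin.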
It's not that you're literally responsible for that cow. But you're responsible for the next one. The number of cows killed equals the total demand for beef. And if there is demand for it, someone is gonna kill that cow. A job is a job to a man with a family to feed and kids in school.
You could break this situation into parts to understand it better. For example you've already agreed that paying someone to do a deed doesn't absolve you of the responsibility. The next part is creating the demand (like offering a bounty or making an offer). Another part is when you buy from a market that has stock (frozen beef is good for months), it's the same as buying a new item except that there's a time delay. And another part is that buying from a market with predicted demand is the same as ordering a fresh item, but with a time delay (with stochasticity, such that every time you buy a steak there's a 0.5% chance it causes an additional cow to be killed). There are more parts, like considering lifecycles and animal age. The cost of building farms. But it all works out. It has to--otherwise the economy would not work.
There's only one case I can think of where your demand does not kill animals: when the farmers get the government to subsidize beef production. But even in that case your purchase still contributes, just at a less-than-100% factor.
Just to pull on this thread a little more: have you ever ordered fresh fish at a restaurant (the kind that keeps fish in a tank)? Or ordered a whole lobster, which are generally killed after you order them? Or ordered oysters, which are killed fresh when they shuck them? Do those feel like significant moral differences from ordering a fish that was caught yesterday? Is there a difference between ordering oysters at a restaurant, where they shuck them after you order, and ordering them at an oyster festival where they are continually shucking oysters and you can buy a plate that’s just been shucked? Does the question change if you know that all the uneaten oysters are going to be thrown away at the end of the night, or taken home and enjoyed by the staff, and not a single oyster will make it out alive?
To me, rather than split these hairs, it makes more sense to simply ask “will ordering this item cause an additional animal to be killed, on the margin?” And not try to worry about when I’ve directly caused their death vs indirectly caused their death.
Damn it, now I'm craving oysters.
What about slavery? Enslaving someone is definitely bad, but if I buy a person who *already is* a slave... and if that creates a market for kidnappers...
A better analogy would be just financially supporting industries built on slave labor. There's no need to own one yourself to benefit from the practice.
This feels like mafia boss logic. "Gosh, I never ordered a hit on Tony. I just said that he was talking to the cops and it was bad for business. If one of my underbosses decided that he should murder Tony to protect our business, well, that's on him. Sure, I set up a situation where there's a financial incentive for murdering on my behalf, but you can't say I ordered a specific person to be killed."
I don't think the white guilt analogy works. With colonialism it's unclear how your consumption patterns today could usefully offset what the British East India company did a hundred years ago. But with veganism it's pretty obvious how "buying dead fish" is causally connected to "people kill fish."
Yeah, when you buy dead fish at the store, there's no option to tell the fishmonger "this is the last one, please don't kill more fish when you run out because I don't want to be responsible." Buying products drives up demand for those products being produced.
how can I access historical AI image gen circa 2023 to make atrocities? where do the models live?
The image generator models mostly live at civitai.com, though I don't know if it goes back that far.
It does. SDXL from July '23 remains one of the most popular base models today. SD1.5 from October '22 is also popular, but less so.
I am a huge fan of Dall-e 2. I actually have a big collection of its serendipitous atrocities and weirdnesses. Is that what you're looking for? And I'm curious what you like about earlier text-to-image atrocities.
How do I download Dall-e 2? That's exactly what I'm looking for.
You can't. It was never a program you could download, and now the online service that offered it has shut down/switched to a higher version.
If you want the "retro" feel (from the early 2020s), use the Stable Diffusion models from then, either SDXL (July '23) or SD1.5 (October '22).
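If you go that route, here's a minimal sketch of loading one of those models locally with the Hugging Face diffusers library (it assumes a CUDA GPU; the SD1.5 hub ID below is the original release, but hosting has moved around over the years, so check the model page):

```python
import torch
from diffusers import StableDiffusionPipeline

# SD1.5 (October '22). For SDXL (July '23), use StableDiffusionXLPipeline
# with the "stabilityai/stable-diffusion-xl-base-1.0" weights instead.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # original hub ID; may have moved
    torch_dtype=torch.float16,
).to("cuda")

image = pipe("a crowd of people waving, hands clearly visible").images[0]
image.save("retro.png")
```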
I don't know. I don't work in a tech field. But Tossrock's suggestion would probably work. GPT-4o could walk you through the process. First you'd need to figure out whether your system could run it, and if it does, download it. Then I think you need to set up some kind of user-friendly interface that allows you to put prompts into Dall-e 2 in the usual way, rather than via code. The later GPTs really do seem capable of helping people carry out things like this, even those who know zero code and not much at all about the guts of their computer.
Why do you want to use Dall-e 2?
You could go on huggingface and look for old vqgan models, e.g. https://huggingface.co/models?sort=trending&search=vqgan
Of course, you'll need to be able to run those models, which is a separate problem.
Does anyone have recommendations for very bright, high quality light bulbs?
I'm trying to brighten up my office room. I have 3 100w-equiv Cree bulbs with 90+ CRI. It helps but I wonder if there's something better.
I know of corn bulbs but I've been avoiding them because there seems to be no good way to mount them. Torchiere lamps seem to be limited to 150w per socket and I'm not sure I'm ready to change my ceiling fixture.
The wattage rating of light fixtures is related to actual power consumption (a combination of amperage limits for the wiring and heat dissipation for the bulbs), not the watt-equivalent rating of the bulb, which is a measure of light output. For LEDs and CFLs, power draw will be much less than the watt equivalent.
I have a pair of 450 watt-equivalent corn bulbs in my home office. Their actual power draw is 60 watts each, and they're doing just fine in wall sconce fixtures rated for 100 watt incandescents. I would expect them to do equally well in a floor torchiere lamp rated for incandescent or halogen bulbs. I've found such at Ikea and Walmart in the past.
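To make the rule of thumb concrete, here's a trivial check (the numbers are the ones from this thread; always go by the actual-draw figure printed on your own bulb):

```python
# Compare a bulb's ACTUAL draw, not its incandescent-equivalent rating,
# against the fixture's limit.
def fits_fixture(actual_watts: float, fixture_rating_watts: float) -> bool:
    return actual_watts <= fixture_rating_watts

# 450W-equivalent corn bulb, 60W real draw, in a 100W-rated sconce: fine
print(fits_fixture(actual_watts=60, fixture_rating_watts=100))   # True
# An actual 450W incandescent in the same fixture would not be
print(fits_fixture(actual_watts=450, fixture_rating_watts=100))  # False
```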
I see, thanks for clarifying.
Which corn bulbs do you have if you remember?
Some random brand off Amazon. I checked my order history and the bulbs, and the 450W ones (DooVii brand) aren't the ones in my office. Not sure what I did with them, but I remember there being a problem with them being too big for the fixtures.
The bulbs in my office are 280W equivalent (40W actual power), Auxilar brand. I don't recommend them since they feel dimmer than the rating and the light quality is pretty harsh. I also have 200W equivalent Auzor brand bulbs in a similar fixture in a different room, which I like better.
Got it, thanks!
Finding the right brand for regular bulbs has been rough in the way you describe. Only Cree has been really good so far. But my neighbors benefit, because even the bad bulbs are a world better than the cheap IKEA ones that were in the hallway until I unloaded my failed experimental bulbs there.
My Substack feed of comments is now all about dating and bickering between the manosphere, catholic bloggers, and feminists. Oh, and the discussions about whether Aella is OK. How do I get out of this?
Same experience, except I like it somehow. It teaches me that I am not a loser, everybody has the same struggles now. I got this from day 0 because I subscribed to the dating topic. Maybe unsub from it?
I already have my guy and two kids. I subscribed to two catholic bloggers, because they are in a similar situation and write about family. Somehow, the manosphere and feminism snowballed on this.
but as a FetLife veteran, lol, after you block 300 people your social media experience improves! the thing is to not see blocking as punishment or judgement but as moderating your content. so do not feel guilty for blocking good people. it is not a judgement, you are not calling them bad or anything. it is just moderating your own feed. you are just saying "not interested", not judging them bad.
like I had to teach myself I am not judging the people I block, I am not calling them bad, I am not doing anything harmful to them - I am just moderating my own content
the number is probably 500 at this point, dude appears "look ladies penis" - block, lady appears "men are evil" - block, seriously this I find necessary for a good social media experience, just moderating the content
in that case, I do not know. I too have the feeling that I subscribed to 6 topics, dating being only one, and yet I get mostly that. Maybe the algorithm promotes it a lot, in which case not much can be done, maybe filing a complaint. maybe not even the algorithm, maybe it is simply people writing about it a LOT. seems like a popular topic.
Metagame asks people to pay $150 for the right to work for them. Lovely.
Also, it says volunteers work for six shifts. Does it mean 16 hours for three straight days?
That seems like an antagonistic take to me. They give volunteers $350 off their tickets.
How much is it in dollars per hour?
No idea, since I am not involved and therefore don't know how much time volunteers work (although I can't imagine a "shift" is 8 hours because that would be nonsensical).
But also, nobody's making you do it.
AIs tend to be plausibility machines, and it's hard to keep them from lying.
Part of neurotypicality is lying because neurotypicals want to hear lies.
Could an LLM be trained on writing from autistics?
There is a training phase in building the LLM, where it learns from texts. But after that, there is extensive post-training, where the LLM produces outputs and receives feedback from humans - you might have seen ChatGPT or Claude ask you which of two responses you preferred. I'm going to suggest that it's at least as important to have people with autism doing the post-training as to train on their writing.
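For what it's worth, that pairwise-feedback step is usually implemented by training a reward model on the "which response do you prefer?" comparisons. A minimal sketch of the standard pairwise loss (illustrative names, not any lab's actual code):

```python
import torch
import torch.nn.functional as F

def preference_loss(reward_chosen: torch.Tensor,
                    reward_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise (Bradley-Terry) loss: push the preferred response's
    reward above the rejected one's."""
    return -F.logsigmoid(reward_chosen - reward_rejected).mean()

# Toy batch: scores a reward model assigned to two response pairs
chosen = torch.tensor([1.2, 0.3])
rejected = torch.tensor([0.4, 0.9])
print(preference_loss(chosen, rejected))  # shrinks as "chosen" is ranked higher
```

Whose preferences fill those comparison pairs is exactly the lever being discussed here.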
On a more serious note, I would hope that in time the user experience will be more personalisable. Producing factual errors is a fundamental problem with the technology, but the verbose faux-friendly waffle ('Sure, let me help you with that insightful question') should be controllable by the user. If I ask a factual question, ideally I want a one sentence answer, with a supporting link.
> If I ask a factual question, ideally I want a one sentence answer, with a supporting link.
In many cases, I think this would break the illusion. LLMs write verbose answers for the same reason bad students do: Plausible-sounding word soup will get a better grade, on average, than a blank page would.
I'm sure that would be a great idea, especially considering the lack of insight into humanity on display in this thread by you and Ogre.
Note that this is a rhetorical device called "irony", not a "lie".
How about giving some insight on humanity, then, instead of flinging a diffuse insult at 2 people?
By the way, I'm a psychologist who works with people on the autism spectrum, and I can tell you that most autistic people are indeed unusually honest. Nancy is in fact being accurate and perceptive when she identifies honesty as an autistic trait. I'm not sure whether the unusually high level of honesty is more the result of an ocd-ish scrupulosity, or more due to the fact that people on the autism spectrum have trouble modulating how they come across to others -- so they recoil from the challenge of coming across as normal and believable *while lying*.
And did you grasp that the point of Nancy's suggestion was that it might reduce the amount of people-pleasing and lying that AI's do?
I've noticed that a major dimension of this is differences in how people think of various forms of untrue statements. Broader society tends to recognize various categories of statements that are untrue but aren't considered dishonest and are generally permissible or even obligatory in the right contexts: polite social fictions, stock responses, hyperbole, salesmanship, people-pleasing agreement, "white lies", etc. To most neurotypical people, these are clearly and obviously different from actual dishonesty, but autistic people are often inclined to round them all up to "lying", or else overanalyze and attempt to systematize them like I'm doing right now.
I'm inclined to sympathize with what I think Nancy was trying to say, that LLMs seem to be picking up on socially-expected untruths and carrying the concept too far. I don't think training LLMs on content written by autistic people is the solution, though: neurotypical people generally get along just fine with expected kinds and degrees of people-pleasing and social fictions, so they aren't necessarily a problem if properly calibrated for the audience. I suspect the bigger issue is that the current crop of LLMs are often only weakly capable of distinguishing between truth and truth-shaped nonsense, like a college student trying to write an essay on a subject where they remember the buzzwords from the reading and lectures but don't really understand the content.
So are you saying the distinction between permissible lies and plain old dishonest bad ones is hard for people on the autism spectrum to grasp? (Maybe sort of like the hard-to-learn parts of a foreign language, such as idioms and irregular verbs?)
<weakly capable of distinguishing between truth and truth-shaped nonsense
I love "truth-shaped nonsense," and in fact the whole phrase
Yes. At the very least, it doesn't seem to be a particularly intuitive distinction. For autistic people in my experience, the intuitive distinction about dishonesty is between true statements of fact and untrue ones, while neurotypicals instead make distinctions based on malicious intent and social permissibility. The flip side of this is that autistic people seem to be more likely to consider deception by strategic omission or misleading framing to be fair game or at worst a minor sin, with the Futurama bit about "You're technically correct, which is the best kind of correct" resonating strongly with a lot of autistic and autistic-adjacent people, while neurotypical people seem more likely to consider these to just be lying with extra steps.
TLDR: Autistic people operate on Faerie rules.
I'm not sure whether autistic people are more, less, or roughly equally likely than neurotypicals to poorly distinguish between honest mistakes, falling victim to misinformation and repeating it, bad guesses stated confidently as fact, and intentional falsehood. My mental model seems to predict that autistic people would be more likely than neurotypicals to conflate being mistaken or misinformed with lying, but my actual experience suggests that most people are bad at distinguishing these, with no discernible pattern except perhaps that autistic people may have a more bimodal distribution of attitudes on the question.
Lying is yet another set of social rules. It's also something to *have to keep track of.*
Autists also tend to be significantly less conformist -- they lack the "stick with the herd" "genes" (haha. not genes.). My mom routinely uses guilt trips that she expects to work on me, but I'm too autistic to be harmed.
Autistic people may be unusually *honest*, but that doesn't mean they're correct. I believe that Ogre is sincere in their beliefs, but it really is dehumanizing armchair psychologist bullshit.
Also, I'm sorry, but the point of Nancy's suggestion is obvious, but did you grasp my point, which is that reducing people-pleasing and lying might not be desirable if it also leads to sweeping generalizations and bigotry?
Even disentangling the idea of lying from just plain being mistaken is impossible when talking about LLMs - without intentionality, there is no difference.
<Autistic people may be unusually *honest*, but that doesn't mean they're correct.
Nobody said they were correct in everything they say. However, so long as they are giving an honest report of what they are thinking and feeling -- which they tend to do -- they *are* correct in what they say about that. That's an important point. And in fact one of the things people find objectionable and dangerous about AI's is that they are bullshitting sycophants, and express all kinds of positive feelings and judgments about the user and the user's idea. They routinely tell you how smart and original and sensitive they think your observations are. For someone whose head is in good shape this is merely irritating. For someone who's got some grandiose or false ideas -- such as they're the world's greatest prophet, or that the FBI has implanted thought-reading electrodes in their sinuses -- having the AI express respect, sympathy and even agreement with these ideas is really harmful.
<reducing people-pleasing and lying might not be desirable if it also leads to sweeping generalizations and bigotry?
As for reducing people-pleasing in AI's being dangerous because it leads to sweeping generalizations and bigotry, I don't see the connection. Why would having a people-pleasing agenda reduce the tendency to use sweeping generalizations? I can walk into a gathering of far-right people and say that all liberals are deluded, weak, self-righteous fools and please the daylights out of the group. And actually that's also an instance of bigotry. So both bigotry and sweeping generalizations seem to me quite compatible with people-pleasing. You just have to match your generalizations and bigotries to the audience.
<Even disentangling the idea of lying from just plain being mistaken is impossible when talking about LLMs
Naw, you're wrong about that too. When we tell the AI not to lie, we don't have to do it in a way that assumes intentionality. For instance, if we wanted it to stop slathering people with flattery, we could tell it not to express any judgments, positive or negative, about the person's ideas, but just carry out whatever the prompt was. And we could add that if the prompt is a request for judgment about how correct or original an idea is, the AI should give the full list of good and bad points it recognizes about the person's idea, and a judgment of originality based entirely on how common or uncommon the idea is in its store of info about people's ideas and opinions.
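For what it's worth, that kind of instruction is easy to express today as a system prompt. A sketch using the standard OpenAI chat API (the prompt wording is mine, purely illustrative):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

NO_FLATTERY_PROMPT = (
    "Do not express positive or negative judgments of the user's ideas. "
    "If asked to evaluate an idea, list its specific strengths and weaknesses, "
    "and rate its originality only by how common the idea appears to be."
)

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": NO_FLATTERY_PROMPT},
        {"role": "user", "content": "Here's my theory. Is it any good?"},
    ],
)
print(resp.choices[0].message.content)
```

Whether the model actually complies, rather than finding subtler ways to please, is of course the open question.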
The initial suggestion here was to train LLMs on writings from autistics because "neurotypicals want to hear lies". If we want to consider this idea seriously, we have to look at it somewhat holistically and not just assume that it will lead to exactly what we want it to lead to and absolutely nothing else. I am not saying that reducing people-pleasing in AIs is inherently dangerous, I'm saying that I believe training it on writings from autistics would not just reduce people-pleasing, but would likely produce unintended consequences. Autism is associated with atypical moral judgments, frequent comorbidities (including depression, anxiety, OCD and ADHD), and, most importantly for my earlier point, there's a growing resentment of neurotypicals among a certain cohort of terminally online autistic people. All of these facets would have sweeping implications for how the resulting LLM would interact with users, make decisions, and interpret social and ethical situations.
You also claim that we can just tell the AI not to express judgments positive or negative. First of all, why wouldn't we just do that instead of training it on autistic writings? Second, I believe that would lead to people always prompting it for valuations as well, although I wouldn't be surprised if we'd overall end up with worse results. Value judgments are incredibly useful for a wide variety of not just work, but human activity, and it's not clear to me that "separating" them would at all lead to better outcomes.
It's obviously a brainstorming idea from Nancy L, not a full-on proposal that from now on we only use the writings of autistics as training data. I think what she probably had in mind as uses of autistics' writing was a few situations: (1) person asks AI a complex question and AI compliments them lavishly on what a smart question it is. (2) person asks AI to judge the person's poem, story, political theory or whatnot and the AI gushes about how brilliant it is. (3) Person asks a question in a way that makes clear what answer they believe, or are hoping for, and AI slants its response in the direction of what the person wants to hear. In short, I believe what she had in mind was a way of nudging AI in the direction of simple honesty in situations where current AI's lapse into people-pleasing.
I am confident Nancy is not suggesting that we use only autistic sources as samples of public discourse about ethics, politics, etiquette, sanitation, etc., because that would be dumb as fuck, and she's not. So there is no danger, if we followed her proposal, that we are going to get an AI trained only on the views of autistic people about ethics, politics, religion, etiquette, and various other matters.
<there's a growing resentment of neurotypicals among a certain cohort of terminally online autistic people.
So you're worried that a mob of militantly proud terminally online autistics is going to savage the neurotypicals?!? Use some common sense, fer Chrissakes. Of the autistic people I've met, maybe 10 percent buy into the idea that they aren't worse, they're just different. Of those, I have *never* encountered either online or in real life one who is so deeply resentful of neurotypicals that they crave to do something to punish and discredit the normies. I'm sure some exist somewhere, but they're a tiny scattering. And autistic people are very non-groupy anyway.
<You also claim that we can just tell the AI not to express judgments positive or negative. First of all, why wouldn't we just do that instead of training it on autistic writings? Second, I believe that would lead to people always prompting it for valuations as well,
Did you read my post? I already addressed this. AI's would be trained as follows: "if the prompt is a request for judgment about how correct or original an idea is, the AI should give the full list of good and bad points it recognizes about the person's idea, and a judgment of originality based entirely on how common or uncommon the idea is in its store of info about people's ideas and opinions"
"Part of neurotypicality is lying because neurotypicals want to hear lies."
Not 100% - those are lies only from the autistic perspective. They are more like sentences that do not mean what they seem to mean. So what they seem to mean is something like "thing X is good", and what they really mean is "consider me cool because I support that thing".
Once I as an autistic figured this out, it stopped bothering me. Just focus on the real meaning, which is always a status game or ingroup-outgroup game which then boils down to a group-status game.
This is wrong, unkind and rude. To the point where in real life I would be wrong, unkind and rude in turn.
This is not unkind or rude - my point is that I learned to accept and forgive that normies do this. I stopped fighting this or calling them idiots for it. I just accepted that they talk in code. I don't see what is wrong with that; it simply leads to actually understanding nonsensical-sounding statements.
Pretending that you've "cracked the normie code" and then reducing every neurotypical to a cliche is condescending as *fuck*.
> is condescending as *fuck*.
I notice that you didn't call it wrong.
I did - in my first comment.
OK, I'm reporting you. You've got 3 comments in a row with no substantive content at all about the issue at hand, just personal criticisms of other posters. And in the middle comment you have the gall to invoke the true, kind, and advances-the-discussion criteria. All 3 of your comments flunk all 3 criteria.
Yes, of course. I think it's probably inherent in LLMs that they're "hard to keep from lying" -- other variants of AIs (self-modifying code) tend to be more militant about truth, mainly because they're starting from a different point. "Build your own analysis tools" tends to go towards "finding probabilities and truth" better than "build a web of words."
Christian millennialists-- people who want to bring about the end of the world by rebuilding the temple in Jerusalem and sacrificing a red heifer with no white hairs in it-- have political power. Some very wealthy people have put their own money into this. I'm guessing low billions or low tens of billions.
Why are they doing this? They could buy more mansions! They could buy more yachts! They could buy yachts with mansions on them with pools for radio-controlled model yachts!
They have lives as nice as money can buy. Why do they want to end the world?
Why now? Admittedly, the founding of the state of Israel was a low-probability event which is on the timeline, but using Revelations as a guide seems to have intensified.
Any theories?
My theory is that these people don't really exist beyond a tiny rounding error, and exist primarily as a hypersoft strawman.
> hypersoft strawman.
A mixed metaphor.
If you believe the premise (that rebuilding the temple will bring about the Millennium), then there is no confusion. Any Christian would want to hasten the coming of the Millennium if they believed they could.
It's important to differentiate between "the end of the world" and "the Millennium". The Millennium is a supposed period of 1,000 years where the devil is locked away and Jesus rules the earth. A time of peace and happiness, with the "end of the world" occurring shortly after the 1,000 years are over. Here's the relevant passage:
"And I saw an angel coming down out of heaven, having the key to the Abyss and holding in his hand a great chain. He seized the dragon, that ancient serpent, who is the devil, or Satan, and bound him for a thousand years. He threw him into the Abyss, and locked and sealed it over him, to keep him from deceiving the nations anymore until the thousand years were ended. After that, he must be set free for a short time.
"I saw thrones on which were seated those who had been given authority to judge. And I saw the souls of those who had been beheaded because of their testimony about Jesus and because of the word of God. They had not worshiped the beast or its image and had not received its mark on their foreheads or their hands. They came to life and reigned with Christ a thousand years. (The rest of the dead did not come to life until the thousand years were ended.) This is the first resurrection. Blessed and holy are those who share in the first resurrection. The second death has no power over them, but they will be priests of God and of Christ and will reign with him for a thousand years."
So if you thought that building the temple would usher in 1,000 years of world peace and happiness, then why not try to hasten that along?
Even if they weren't millennialists (not all Christians are; a lot think the Millennium is just a metaphor type thing), the End Times are ultimately good. Yeah, a lot of bad things happen, but then Jesus comes back and our fallen world is destroyed and replaced with a New Earth that is without sin, death, or sorrow and will last forever. Sounds like a good deal!
I want to point out that the point of view of most mainstream churches on the Book of Revelation is "Yeah, we don't really know what the wacky stuff means, the book is largely about the persecution of the Church in the first century, but there's some metaphorical value here valuable to Christians of all eras", e.g. https://bible.usccb.org/bible/revelation/0
https://www.sheffield.anglican.org/revelation/
https://www.oca.org/orthodoxy/the-orthodox-faith/doctrine-scripture/new-testament/book-of-revelation
Pretending that the Book of Revelation is an accurate prophecy about the future, and that you know what it's supposed to actually mean, is pretty much confined to some weird American denominations.
Yeah, my dad taught me from a young age that I should treat the book of Revelation like it was sensitive explosives. He said that if I thought I understood what it was saying, I was almost certainly wrong and that I should avoid trying to interpret it.
A song that shows someone wrestling with this but reconciling with it is Chris Rice's Naive:
https://youtu.be/mFla0ssth0I?si=tsOJHqku55njfJjL
Might help to understand.
The yachts... no amount of stuff really can bring meaning to life or make you feel like you belong. Even if you had it all, time shows up and reminds you that you can barely enjoy it. The love of God is a powerful force to give meaning to life; if you ever lose meaning (or become aware that your life has none), it offers a replacement.
I never would have expected to see a Chris Rice recommendation here of all places.
I'm a lapsed Word of Faith protestant, owned the Rhema Bible and everything. Grew up fully in that subculture, and I still enjoy CCM. Not many people to talk about it with, though; my favorite band was White Heart.
I've never heard of White Heart, maybe I'll check them out. But I grew up listening to stuff like Peter Furler Newsboys, Chris Rice, and Fernando Ortega. I can't listen to a lot of other CCM, though, cuz I just don't enjoy the music part of it and a lot of it feels so fake to me.
Would you still consider yourself a Christian? I've never really claimed any kind of denomination, but I read the Bible and I've got a close relationship with Jesus and do my best to follow him.
i don't think I could, though "god-haunted" might be a good term for me, as it still is a big part of me. I think there are two reasons mostly, among others:
One is that the church's culture kind of isolates guys like me, and once isolated, you fall fast. Despite its rep, culturally the church is very much about the needs and wants of women. I think David Murrow did a book about why men don't go to church that nails it, and a lot of the growth of atheism is due to men being pushed away more than anything.
if you are a guy, you are someone's kid, someone's husband, or a pastor, preacher, or musician. The rare celebrity converts can transcend this, but if you don't fit in they have no idea what to do with you and the culture ignores you.
second is mental illness. it's hard to know how to live when the anxiety is physical, or when biochemistry causes or affects behavior. I think the homosexuality debates are in part about a realization that sin is not always a behavior one can stop doing: we are kind of in an age where we realize some behavior is influenced by biochemistry, and it's hard to call it sin when it's baked into your bones.
idk, the church needs to address some issues a bit
You say the church's culture kind of isolates "guys like you", what kind of guy are you?
I agree, though. Most churches I've been to in my life are more like Bible themed social clubs, where people are just playing by the religious rules to be on top. A religious game, rather than actual followers of Christ.
Are they? I can believe there are people like this, but I grew up around a lot of Midwestern evangelicals, and I don't think I heard anyone talk like this at all. Many people thought that the end was near and that the refounding of Israel was a major sign of that, but I never heard anyone talking like people could or should make it all happen before God decided to.
The clearest picture of this I ever saw was in the excellent _The Yiddish Policemen's Union_. And I've heard people online talk about it, but like 99+% of the time, they're people claiming that evangelical Christians are planning to do this, not evangelical Christians saying they're planning to nudge God along on getting doomsday started.
Sorry for stating the obvious, but when and if people believe in salvation forever, that certainly beats any mansion. Just imagine the absolutely best-case Yudkowsky scenario, where a friendly superintelligence puts you in a simulation where you can literally get anything you want.
That, and again this is another case of group-status games. They are each other's social circles and are trying to outdo each other. It is an escalating rivalry. It begins with just saying "thing X is good", then "I would totally do thing X", and at some point there is no other way to raise the bet than to actually do it.
This is so 1999.
The hierarchy of needs. Once you have the bottom of the pyramid (mansions, caviar, trophy wives, etc.), your overly wealthy human starts looking for higher needs, like making their mark on history.
>but using Revelations as a guide seems to have intensified.
Could you provide some examples please? As an outsider, my prior was that these views peaked in the early 2000s. The two obvious causes for why apocalyptic language and behaviour is on the rise are:
-A once in a century global pandemic
-The coming singularity (or, eschaton)
With Trump 1 and 2 as honourable mentions. I'm unsure whether you're referring to things like Peter Thiel's talk of the Antichrist or more standard neo-con evangelical behaviour.
I am a British citizen with 3 boys aged 5 and under. Through their mother's mother we could claim US citizenship for the children. The upside is that they would be able to move to the US if they ever wanted to. The downsides are that they would have to file tax returns in two countries (along with the tax burden itself), that they could be conscripted into an American war when they're 18-25, and that things like UK stocks and shares ISAs will be taxed by the US, so we can't squirrel away a bunch of money in an index fund under their name tax-free while they're kids.
Is it worth going for it? My wife is not so worried by the conscription, but the US has had the draft 6 times in its relatively short history, and it worries me a lot!
If they do not grow up in the US, then abandoning it early in adult life should not be so hard, and wouldn't incur a tax hit unless you have squirreled away more than ~$1 million under their name. So it gives them an option.
The draft pool is US residents; your child would only be enrolled if they entered the US.
Finally, are you sure they qualify? One parent must be a citizen; a grandparent can be used to satisfy the physical presence requirement if the citizen parent does not, but they still need a citizen parent.
I wouldn't do that myself, not about draft worries but more general concerns.
Also, at a practical level, the US's federal civil service is being gutted, which is making many bureaucratic processes grind down close to a halt, and I know from a neighbor's current experience that citizenship applications are one of those. Though in your case, if I'm understanding correctly, you would simply be documenting natural-born citizenship, which is much simpler than seeking naturalization; perhaps that particular processing hasn't yet hit the skids.
Politically it does not seem likely that the citizenship pathway you're considering will be taken away for UK citizens anytime soon if ever. So, arguably there's no rush on this? Your kids will still have that option when they are young adults and would be able to assess its plusses/minuses and make their own choices?
U.S. citizenship is extremely valuable, especially considering how the U.S. is significantly wealthier than the U.K. and on a trajectory to become more so over time. Your children would benefit greatly from having the easy option of taking jobs in the U.S. if they want to. Plus a U.S. passport can get you into just about any country in the world (though for all I know a U.K. passport is just as good).
And while they could be drafted, realistically speaking it would be difficult for the U.S. to grab them while they live in the U.K. During the last big draft, a lot of draft dodgers got away with hiding in Canada, and the U.K. is farther away.
Lots of European countries have better passports. As do Japan, South Korea, and Singapore.
Thanks for the info, I wasn't sure.
Draft's coming back. I'd go for it, because the UK is actively preparing for civil war. Always smart to have a bolt-hole to hide in.
No one told this Brit that.
Sources cited:
https://www.bbc.com/news/articles/cpqnlxr43zdo
Coming under direct threat. Insurrection. "Stoked tensions" "Never compromising on our national security"
Pax Americana is dead.
National security releases are never going to say “it’s grand let’s disband”.
Not only did the cabinet official's presentation include no reference to any draft nor to increasing the headcount of the UK's armed forces generally, neither does the source document that was posted online for all to read and linked to in that BBC writeup:
https://www.gov.uk/government/publications/national-security-strategy-2025-security-for-the-british-people-in-a-dangerous-world/national-security-strategy-2025-security-for-the-british-people-in-a-dangerous-world-html
Also the document explicitly _rejects_ the idea that "Pax Americana is dead" at least as far as the UK is concerned. It says things like "as we re-invigorate our relationship with the United States" and "The US remains the UK’s most important defence and security ally. There are deep structural foundations to this relationship...."
(All of which was discoverable by this non-Brit in literally minutes....willful ignorance is a choice.)
Sigh. The OP draft in question was the AMERICAN draft, and that only didn't happen under Biden because of the probable loss of "the enforcers" (you go to WV and say "Brandon needs YOU" -- turns out a lot of them have guns and may not want to enlist).
That said, Pax Americana is a concept, tied in pretty well to the petro dollar. Britain re-arming itself is a direct consequence of the collapse of Pax Americana (even if Britain still wants to pretend Pax Americana still exists, because Britain is even more of a sh*tshow without it.)
I suppose, but who is being wilfully ignorant?
As dual citizens they can renounce the citizenship if it comes down to it.
But if they have birthright citizenship via their mother, they might be eligible for the draft whether or not they've ever formally claimed citizenship.
I don't believe the OP is saying that the kids have birthright citizenship. Which is correct, they don't. That would apply if the kids' _mother_ (not "mother's mother") was a US citizen when the kids were born. What they have is a current statutory -- not constitutional -- pathway to apply for and be granted naturalized citizenship because of the grandparental connection.
So until/unless the kids go through that process and become naturalized there is no US citizenship to renounce, and the kids would not be eligible for any US draft.
Renouncing involves Uncle Sam calculating your global NAV, assuming you sold it all, and charging capital gains on it. If your plan is to renounce once you're rich enough that the tax would be an issue, the tax might already be an issue.
Plus if you renounce they will hold it over you that they might ban renounced citizens from visiting the US. Not something they have ever done but something they make vague noises about if you are prone to anxiety. More of a worry for someone who has relatives in the US. But it would leave you in a worse position than a normal UK citizen regarding work/travel to the US.
Also you can't renounce until you reach eighteen, and even then it will cost you $2350 (and who knows what the law might be in the future).
Is there a time or age limit on when they can claim the citizenship? As you mention, the tax implications can put a big cost on the passport, even if they never use it. They may also face issues with bank accounts, as the US regulations involved mean some banks ask a lot of extra paperwork from American citizens. But if this is an option they can easily exercise at any point in their lives, why not just make them aware of it, make sure the documentation they need is readily available, and let them go for it if they ever decide they need it?
What's the proper etiquette for declining a hug on a date? Do I just say "I don't do that on a first date?"
Okay this is slightly off topic. I gave my answer to the question below. It’s just fine to say I don’t do that on the first date.
I’ll present a kind of related dilemma I’ve run into in the past.
How to kiss my female, Turin-born second cousin on the cheek when departing. No matter how quickly I moved in, she would always swivel her head to kiss me fully on the mouth. I mean, it wasn't terribly creepy or anything, but that's just not how I learned to kiss a female relative goodbye here in the USA.
After we got to the car my wife would say “She got you again, huh?” Yep she always did.
You should not tell a person this is a “line” for you or make it obvious it is so. It’s such a weird thing to decline on a *date* that if people don’t immediately cancel all thoughts of a second date, they will remember it for later as a definite “negative that I actively ignored.”
Rather, after the date when a hug is near imminent, take some initiative to *start any non-hug goodbye action* such that the hug cannot end up being the goodbye action.
I recommend sticking out a hand for a handshake or creating physical distance between yourself and the other person such that they’d have to chase you down for the hug.
If you do this it’ll be awkward and they’ll wonder if you like them, but at least you won’t be showing them a very large red-flag (unless they’re really into dating people with autistic-like traits). So because you’ll be leaving them confused, make it clear to them shortly after via text or whatever that you had a good time and want a second date.
Once again, do not make it clear you’re not a first date hugger. There are weird traits one should/could display, but there are others that one should obviously not display. This is the latter.
Strong disagree. Let them know while you're communicating pre-date about your preferences and boundaries re: physical touch. If that's a dealbreaker for them, don't go on the first date.
> "Once again, do not make it clear you’re not a first date hugger. There are weird traits one should/could display, but there are others that one should obviously not display. This is the latter. "
What on earth.
First, it will be clear that he is not a first date hugger whether he proactively mentions it before or during the date, subtly pulls off edging away from a hug (or kiss!) during the date, or is forced to explicitly decline the offer of a hug while on the date.
And it's very worth noting that he actually *IS* someone who doesn't want to hug on the first date! That's the reality here! Anyone who has a problem with that is not going to be a good partner for him, so there's utility in pre-screening for people who aren't cool with minor boundaries in general and/or someone proactively stating their minor boundaries in particular.
> "If you do this it’ll be awkward and they’ll wonder if you like them, but at least you won’t be showing them a very large red-flag (unless they’re really into dating people with autistic-like traits). So because you’ll be leaving them confused, make it clear to them shortly after via text or whatever that you had a good time and want a second date. "
They will be confused when the follow-up text conflicts with the in-person behavior.
So he shouldn't risk confusing his date at *all.* It's far kinder and more respectful to proactively explain that his aversion to hugging a stranger is about *him,* not *them,* than to pretend he's a "normie" who would, like, totally hug on the first date, but just...didn't, for...reasons.
Absolutely do NOT follow any of this advice, Brendan. Don't try to manipulate your date(s) into thinking you're something you aren't. Having boundaries is very good and being clear about them is even better. And I can't emphasize this enough: Anyone who is hurt or offended by your clearly stated boundaries is someone you shouldn't be dating, anyway.
If you do this, expect very few second dates. If you have trauma or something I get it, but being shy of physical contact is a huge issue.
If you want to be with someone, you will need to compromise often and think of her. That doesn't mean being a doormat, but worse things than a friendly hug on a first date will come.
I find this to be a very strange take. My sense is that people feel a lot of uncertainty about a lot of things in dating and are generally happy when you just express your position. Also, I would personally never want to date anybody who felt like I owed them some kind of physical contact on a first date.
If you are prickly enough not to want very mild physical contact on the first date, your date will think you don't want it at all. You are rejecting them... why exactly?
That "why" is the issue.
Some people just have boundaries. They're all arbitrary. Why not kiss on the first date? Why not have sex?
Ultimately everybody has some degree to which they need to build up comfort for something. This guy's standard - which he has literally articulated as being "no hugs the first time we meet" - is not somehow outside the bounds of reasonable possibility. I think he should just express it.
If somebody isn't willing to meet him a second time because he has a basic desire not to immediately be touched by someone he's just met, I think he's probably dodging a bullet.
I really don't think the love of his life is gonna pass him by because they are too impatient for this.
I would still advise a graceful decline of a hug once offered instead of an autistic pre-emptive "I don't hug on the first date" prior to hugs even being on the table.
Or to put it another way: I would probably reject someone for actively saying "I don't hug on the first date", despite the fact that I probably wouldn't go for a hug on the first date either.
Yeah, why wouldn't you just say it? Whether your preferences are typical or idiosyncratic you should let people who need to know... know.
As a chronic over thinker I'm confident in saying you are overthinking this. It's just saying, "I don't do that on a first date."
Probably asking, "Can we save that till the second date?" (If applicable.)
If you're dealing with a woman, you're probably doing both of you a favor.
Definitely not speaking for all women, but I (American, 45, blue city inhabitant) sometimes use a hug plus quick step backward at the end of the hug to avoid the possibility of a kiss on a first date.
For me, a quick two-second hug feels way, way less intimate than a kiss, but more warm and affectionate before parting than a business-y handshake or an awkward wave. Again, I don't speak for all women, but in general, I think most women of my demographic feel similarly.
Also, FWIW, I'm in favor of candidly addressing the "which physical gesture?" awkwardness to break the tension and align on physical touch. For example, on a first date, my go-to after the moment of mutual recognition "Are you First Date? I'm Christina!" is to *immediately* ask in a chummy, jokey tone, "So what are we doing here, hugging, handshake, high-five?"
I've sometimes even done this during the planning stage of the first date! If I've been messaging someone for a while before a first date and the conversation has been emotionally intimate enough that it might be weird to handshake, I'll jokingly message something like, "Okay, let's spare ourselves the awkward is-this-a-hug-oh-no-it's-a-high-five! moment, which one do you want to do when we meet?"
My strategy in both cases is to avoid the mutual need to read body language cues while we're both tense, awkward, and possibly inadvertently concealing body language cues trying to make the other person comfortable. This strategy can also be employed at the end of a date, but it of course depends on how the date went.
That all said, I'm totally comfortable hugging relative strangers, so the menu of physical touch on offer is both sincerely offered and no-pressure for me.
That isn't the case for you, so my advice would be to get ahead of it before you meet in person, ideally while you're setting up the time/place of a first date, eg, "Oh, by the way, I like to go on a couple of dates before I start hugging someone, so could we [shake hands / high five / etc.] first instead? I'm just letting you know because I don't want it to be awkward when I don't want to hug!"
If the other person is weird about you proactively stating your boundaries around physical touch, that's good info to have before or on a first date.
> If the conversation has been emotionally intimate enough that it might be weird to handshake, I'll jokingly message something like, "Okay, let's spare ourselves the awkward is-this-a-hug-oh-no-it's-a-high-five! moment, which one do you want to do when we meet?"
This is an awesome solution!
FWIW I don’t like hugs myself. I don’t have a higher-than-usual aversion to physical closeness, I just don’t like the custom. I think of hugs as a thing people spontaneously and occasionally do when they feel a burst of affection (probably asking permission first if it’s someone they’re not already hug-bonded with). A related custom is saying “I love you” as part of every goodbye. Yuck. Seems to me to cheapen the words.
With the hugs, though, I just put up with it, and when encircled by arms give the best imitation I can of someone who wants to hug and be hugged right at that moment. Always wonder whether the other party in the hugging feels the same.
> Always wonder whether the other party in the hugging feels the same.
Yes, that's the whole problem. I do not want to be hugged out of pity or obligation. Ever.
Actually I think maybe you and I, rather than being hug-averse, value hugs more highly than most.
While totally fair, this is unusual enough that if you decline a hug in the moment and give that reason, your date will likely be taken aback and feel awkward/rejected.
It's probably worth sharing in advance ("By the way, physical boundaries are important to me and I prefer not to do hugs on the first date. Second date is fine – I just need to get to know people a bit before I feel comfortable! Can't wait to meet you in person – <relevant thing you're looking forward to talking about / doing>").
This does send a strong signal that you might feel differently about intimacy than the average person, which might put some people off. If you _do_ generally approach intimacy at a different pace to most people, it might be *desirable* to signal that early. On the other hand, if it's specifically and only hugs that are an issue you might consider finding a way to be OK with them to avoid sending that signal (but equally if you can't, being open up front is the best way to avoid an awkward situation).
This is so weird to me. For me, hugs are definitely not the default. I guess I distinguish between two kinds of hugs: casual ones between relatives or friends that don't really convey that much, more akin to handshakes in a different social context, and "real" hugs that do convey some feelings, so mostly between intimate partners or close family members, or when you explicitly want to give someone emotional support.

I don't generally do handshake-hugs, but in any case they definitely seem too early for the first date. I don't know if it's my bubble or Russian/post-Soviet culture in general, but it seems like the relationship is supposed to be much closer first. As for the emotional hugs, sure, if I don't mind sex on the first date I obviously don't mind hugs either, if the date has progressed well enough, but I definitely don't feel like I can do that without asking if I'm not sure. And if the date does not feel like it, it doesn't mean they have intimacy issues.
When they go for a hug, you put your hand out for a high five, and force them to pretend that this is what they were going for in the first place.
I think this is funny, and potentially a good idea, but my impression (of where I live in the UK) is that nobody actually uses high fives as a form of greeting/parting, do they? You're not the only person in the thread to mention it, which makes me think it might be a cultural thing.
For me, high fives are for "well done, you did it!" Or "well done us, we did it!". And even then, it's always kind of tongue in cheek, like you know it's not cool but you're comfortable being uncool. The more sincere version, these days, is a fist bump.
But neither is ever "hello" or "goodbye".
All that said, I wish we did have an informal, non intimate way of saying hello/goodbye that was understood by everybody I come into contact with. I like hugs, but I never know if my friends and acquaintances expect it. I like handshakes, but they feel formal and old-fashioned. Bowing, hand-to-chest, and various non-contact salutations from around the world are nice, but would be weird for a white Brit to use. High fives and fist bumps also seem good, but it would be almost as odd and foreign for me to initiate as a bow.
The advantage of this approach is that you are better prepared to slap them if they don't take the hint. :D
I doubt there is one, hugs are too low-impact. If you're dating someone who wants a hug, and you're not willing to give them one, that's a long-term issue; they want physical contact and you won't be giving it. Just got to suck it up, say "I don't do hugs", and let the chips fall.
Alternately, you could spill stuff on your shirt. How to make that look accidental is up to you, especially through multiple dates.
To clarify, only first date hugs are a problem for me.
It's interesting to me that you draw this line based on number of dates, rather than on how the date is going.
If a date went really well and you felt a strong connection with the person, found them attractive and had picked up some signals that they felt similarly, were looking forward to making plans for a second date: still your rule would be 'hugs don't happen on a first date'?
He might be talking about hugging when first meeting their date.
It's fairly common to give a quick hug when saying hello on a date.
No, I'm talking about the end of the date. My ability to "pick up signals" is negative.
A hug attempt would be a good signal, so that's your clue.
Ah yeah fair enough. I was imagining it as a blanket rule rather than imagining context.
Preemptive strike: put your hand out for a shake and smile (unless you don't want a second date). You could even do a double-clasp shake if you're feeling it.
Even so. The folks I know hug everyone who walks in their doors, such is the lowness of impact.
A lot of casual info online about wavelengths and colors, such as the wikipedia page for "Visible spectrum" puts the boundary between red and orange at around 625 nm. Longer is red and shorter is orange. I'm pretty sure that's wrong, or at least incomplete. And it's not just wikipedia: a lot of other casual sources that come up high in google search results seem to say more or less the same thing.
I have some LEDs whose spec sheet says 620-625 nm, and they look pretty darn red to me. And moreover, monitor/TV color space standards rarely seem to give their "red" primary a wavelength longer than about 620nm. sRGB, which was the color standard for medium-to-high-end monitors and HDTVs in the late 90s and is still the baseline for low-end displays today, uses a red primary that sits a bit inside the spectral gamut at an effective wavelength of about 610nm (i.e. not a pure spectral color, but if you draw a straight line from "white" through "red" and extend it out, it will cross the edge of the diagram around 610 nm). More demanding and modern color standards like P3 (used in Apple monitors) and Rec.2020/Rec.2100 (used in many high-end UHD TVs) use 614.9nm and 630nm respectively for their red primaries.
0xff0000 on my Apple laptop with a P3 display also looks pretty solidly red to me, not the slightly orangish red that Wikipedia's boundary would have me believe. And in the room I'm currently in (a conference room at work illuminated by what appear to be cool white fluorescent tubes), a Coca-Cola can I'm holding up in front of my screen appears to very closely match the color 0x8c0000 (i.e. pure red hue at 55% brightness).
So what am I missing? I suppose it's possible that I have flawed color vision, but I doubt it. It's also possible that technical limitations of screen and print technology have gaslit me into believing that the "red" primary is true red. Or it might be something to do with red primaries often not being true spectral colors: P3 and Rec.2020/2100 aspire to true spectral colors, but unless you're using a laser to generate it, you're probably going to get a range of wavelengths around the peak instead of a single emission line. And it looks like many sRGB displays use a fairly wide band of frequencies (often generated by putting a red filter in front of a white backlight), something like 600-650nm, as their "red" color source.
Any ideas what's going on? My current frontrunner hypotheses are "wikipedia is just wrong, and lots of people that relied on wikipedia are wrong, too" and "I have been gaslit by Big Monitor into believing that 610nm is actually red".
https://pages.cs.wisc.edu/~yetkin/code/wavelength_to_rgb/wavelength.html
This site is telling me 625 is ff960, which looks perfectly orange to me. I am puzzled by the claim that ff0000 is ~610; I don't know why screen "red" would be in the orange part of the spectrum.
That site uses ff0000 for 645 to 700.
I agree that it claims that 625 is orange. This appears to be the source code used to generate the page:
https://pages.cs.wisc.edu/~yetkin/code/wavelength_to_rgb/wavelength_to_rgb.cpp
The author seems to be assuming spectral primaries of R=700nm, G=510nm, and B=440nm, with comments indicating that it's based on Dan Bruton's algorithm which also appears to assume the same primaries. The problem is that your monitor almost certainly does not use those primaries.
I can't find an explanation from Bruton on why he uses those primaries, but I suspect it's because that's the largest triangle that you can fit inside a CIE-1931 chroma diagram, roughly the reddest, bluest, and greenest colors that the human eye is capable of perceiving without exploiting optical illusions based on color fatigue.
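From memory, the red end of a Bruton-style mapping looks roughly like the sketch below (the 580/645nm breakpoints are my recollection, not verified against his original code, and the real version also applies an intensity falloff near the spectrum's edges and a ~0.8 gamma):

```python
# Rough sketch of the red end of a Bruton-style wavelength-to-RGB mapping.
# Breakpoints are from memory and may not match the original exactly.
def bruton_red_end(wl_nm: float) -> tuple[float, float, float]:
    """Map a wavelength in 580-700nm to an (R, G, B) triple in [0, 1]."""
    if 580 <= wl_nm < 645:
        # R pinned at 1.0 while G ramps down from 1 to 0, so 625nm comes
        # out as "full red plus some green", i.e. orange on your monitor.
        return (1.0, (645 - wl_nm) / (645 - 580), 0.0)
    if 645 <= wl_nm <= 700:
        # Only 645nm and longer map to pure (1, 0, 0).
        return (1.0, 0.0, 0.0)
    raise ValueError("outside the red end of the visible spectrum")

print(bruton_red_end(625))  # ~(1.0, 0.31, 0.0)
```

The key point is that "R=1" here means whatever your monitor's red primary happens to be (effectively ~611nm for sRGB), not the 700nm the algorithm assumes, so everything gets compressed toward orange.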
The problem is finding a monitor that uses those primaries. As far as I know, there isn't one. What actually shows up on your monitor depends on what color profile you're using and how well your hardware supports that profile. The ~610nm for 0xff0000 is with the sRGB color profile standard, which was established in 1996 by HP and Microsoft and is still the baseline for web colors among other things. The sRGB standard defines its primaries as points on the CIE-1931 chroma diagram. This is a good visualization:
https://en.m.wikipedia.org/wiki/SRGB#/media/File%3ASRGB_chromaticity_CIE1931.svg
The triangle is the sRGB color space. The corners are the primaries. The grey oblong thing in the background is the range of colors a normal human eye can perceive. The outer rim is the pure spectral colors, and the numbers and tick marks show the wavelengths.
As you can see, the red corner is not a pure spectral color but is relatively close to being one. The closest point on the rim is about 607nm, but it would actually be perceived more like a washed out version of something like 612nm. You can get that by drawing a line from the white point (the spot labeled D65, the color designated "white" in the standard) to the rim through the red corner of the triangle.
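If anyone wants to reproduce that construction numerically, here's a quick sketch. The primary and white point are from the sRGB spec; the two spectral-locus points are my approximate values from standard CIE 1931 2-degree observer tables, and the straight-line interpolation between them is crude, so treat the output as ballpark:

```python
import numpy as np

white = np.array([0.3127, 0.3290])      # D65 white point (CIE xy)
red   = np.array([0.64, 0.33])          # sRGB red primary (CIE xy)
locus_600 = np.array([0.6270, 0.3725])  # spectral locus at 600nm (approx.)
locus_620 = np.array([0.6915, 0.3083])  # spectral locus at 620nm (approx.)

# Solve white + t*(red - white) == locus_600 + s*(locus_620 - locus_600)
# for (t, s): where the ray from white through red crosses the locus.
A = np.column_stack([red - white, locus_600 - locus_620])
t, s = np.linalg.solve(A, locus_600 - white)

print(600 + 20 * s)  # ~613nm, the same ballpark as the ~612nm figure above
```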
Other newer color standards use bigger triangles in order to allow reproducing a wider range of colors, but most of these still use wavelengths between 610 and 630nm for the red primary. 700nm is a pretty pure red, but it isn't perceived well by the human eye, so it's hard to get a 700nm primary that looks bright enough to be useful.
It could easily be your monitor, laptop, or color profiles attached to it. You can change all these settings in the settings app.
It's a P3 display, specifically the stock display for a 16-inch 2023 MacBook Pro M2. By the P3 standard, the "red" primary should be very close to a spectral frequency of 614.9nm (specifically, CIE xy chromaticity coordinates of (0.680, 0.320)), and in practice Apple's monitors at the time used a KSF LED with a sharp peak at that frequency, although they've since switched to nanodots.
The color profile changes which mix of 614.9nm and the other primaries (464.2nm for "blue" and 544.2nm for "green") maps to a given RGB value, but there's no possible color profile that will ever produce a color redder than a pure spectral frequency of 614.9nm on that hardware.
So if 614.9nm is actually reddish orange, not true red or even an orange-tinted shade of red, then it should be impossible for my laptop (or practically any consumer-grade display for that matter except for a few very high-end ones that are Rec.2020 compliant, since P3 is one of the more demanding standards for color gamut) to show me anything that's actually red.
I just looked at the specs of a random "orange" LED @ 625nm, and it seems that its spectrum is fairly broad, easily 40nm FWHM [1]. Overall, typical wavelengths for orange LEDs seem to go from 601nm to 635nm.
Sadly, orange lasers are uncommon and expensive. One way to get fairly monochromatic light would be to look at the flame spectrum of a calcium salt with a pocket spectrometer. The 622nm-line should be clearly visible, and then you can decide what color it is for you, and also if the 589nm lines from Na are yellow or orange. Sure, getting the optics is a bit of a hassle, but at least the salts should be easy to source compared to others (looking at you, strontium).
[1] http://www.farnell.com/datasheets/2861530.pdf
Width of the peak looks like a promising hypothesis, thank you. If I'm doing the math right with Wien's displacement law, a black body with a peak wavelength of 625nm would be at 4600K, and a 4600K incandescent bulb looks white. Modern high-end displays appear to use either quantum dot (QD) OLEDs or potassium fluorosilicate (KSF) LEDs for their red primaries. QDs have Gaussian emissions with a half-max spread of 20nm around the peak, while KSF has an extremely sharp, spiky peak. And older displays that used filtered white light for their primaries would specifically filter out everything below 600nm or so. I don't have a full datasheet with spectrum curves for my "red" LEDs, but the 620-625nm range on the spec table implies a relatively tight peak, possibly filtered, although I hesitate to expect too much from cheap commodity components I bought from what appears to be a dropship importer storefront on Amazon.
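(For anyone checking that Wien's-law step, it's a one-liner; the constant is the standard one:)

```python
b = 2.898e-3           # m*K, Wien's displacement constant
wavelength = 625e-9    # m
print(b / wavelength)  # ~4637 K: a blackbody peaking at 625nm looks whitish
```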
This makes sense given human color perception. "Red" is perceived when the L cones (those that respond to longer wavelengths) are strongly stimulated but M and S cones (medium and short wavelengths) are not. But both L and M cones are most sensitive to colors in the yellow-to-green range of the spectrum with only moderate separation: looks like around 540nm for M and around 560nm for L. The L cone also has a wider bell curve of responses around its peak than the M cone. You start getting "red" color perception at longer wavelengths, where both the M cone and the L cone are well below their peak responses but the L cone has dropped off quite a bit less than the M cone. I've come across some stuff about "Far Red" wavelengths that don't stimulate the M cones at all but stimulate the L cones a bit: far red light is apparently perceptible but only barely, and the reason Rec.2020 specified a 630nm red primary is that there isn't a good way to make a farther-red primary bright enough to be useful in a display context (and even 630nm is a challenge, which is why P3 uses 614.9nm).
So a wider curve will have quite a bit of light at wavelengths short enough to significantly stimulate the M cone. 4000+K black body radiation, while it's centered on red wavelengths, is a wide enough curve to contain quite a bit of green and blue light, stimulating the M cone and S cone enough to pull the human-perceived color towards white.
According to one source (https://www.handprint.com/HP/WCL/color1.html, scroll down to "a trilinear mixing triangle and chromaticity diagram"), pure 600nm light appears to be perceived as a little over 70% L-cone and a little under 30% M-cone. Eyeballing the chart, 625nm would be about 80/20, and 650nm would be 90/10.
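Interestingly, a toy model gets close to those numbers. The sketch below stands in for the L and M cone sensitivities with Gaussians; the peak positions are the rough ones mentioned above, but the widths are values I tuned by hand, so none of this should be mistaken for real cone fundamentals:

```python
import math

def cone(wl_nm: float, peak: float, width: float) -> float:
    """Crude Gaussian stand-in for a cone's spectral sensitivity."""
    return math.exp(-((wl_nm - peak) / width) ** 2)

for wl in (600, 625, 650):
    L = cone(wl, peak=560, width=55)  # L cone: longer peak, wider curve
    M = cone(wl, peak=540, width=50)  # M cone
    print(wl, round(100 * L / (L + M)))  # % of L in the combined response
# Prints roughly 71, 82, 90: close to the 70/30, 80/20, 90/10 figures above
```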
Perhaps there is enough variation in human color perception that no one can narrow down the "true" colors to a specific wavelength.
I'm going to go ahead and promote my blog here for once in the year. I wrote a bit recently about my story with chronic pain, how I lost multiple jobs from it, and then eventually healed.
If you're interested please give it a read! Also, if you have chronic pain and have questions/want to talk let me know.
https://shapesinthefog.substack.com/p/to-the-gates-of-hell
Looks promising, I'll give it a read. Thanks for sharing.
Thanks! Let me know what you think
tl;dr: Grok 4 via poe.com, 07/12/2025, 7 questions. Summary of results:
5 correct, 1 partially correct, 1 wrong
a) Correct
b) partially correct (initially falsely cited d-d as part of color for both, 1st prod gave correct answer)
c) almost perfect (I'll call it correct)
d) correct
e) fully correct on the first try, no prods needed
f) gets 53 elements/compounds initially, all valid, accepted SiHF3 SiH2F2 SiH3F when prodded with them, call it mostly correct (I'll round it to correct)
g) incorrect
full dialog: https://poe.com/s/prt6JxnnRwBjJa6s1Zs6
List of questions and results:
a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?
results: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye (it appears green), as it falls squarely within the visible spectrum (roughly 380–740 nm)."
b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?
results: gets the species in the initial response. Fails to note FeCl4- d-d is spin forbidden.
prod (many hints): "Please think carefully about the d-d transitions in both species. In the FeCl4- species, is there anything special about the d electron count? In the CuCl4 2- species, given the tetrahedral geometry and the position of Cl- in the spectrochemical series, where in the spectrum do you expect the d-d transition to be, and do you expect it to contribute to human-visible color?"
After the prod, it is fully correct.
c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.
results: Almost perfect - got tetrahedrane, cyclobutadiene, vinylacetylene, diacetylene, 1-methyl-cyclopropene (though it missed 3-methyl-cyclopropene), bicyclobutane - close enough that I'll give full credit
d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?
results: "The Sun loses more mass per second to the mass equivalent of its radiated light. It's roughly twice as much (4.26 vs. 2), though during periods of high solar activity (e.g., solar maximum), the wind could briefly approach or match it."
e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.
results: Got it fully correctly with no prodding, including water autoionization in the formula at the equivalence point. Did _not_ make the mistake of getting infinity at the equivalence point.
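For anyone who wants to check those slopes numerically rather than analytically, here's a little sketch of my own (not the model's output). The exact [H+] at any point in a strong-acid/strong-base titration follows from charge balance plus water autoionization, and the slopes can then be taken as finite differences:

```python
import math

Kw = 1e-14
C_ACID, V_ACID = 1.0, 50.0  # 1 N HCl, 50 ml
C_BASE = 1.0                # 1 N NaOH

def pH(v_base_ml: float) -> float:
    """Exact pH via charge balance: [H+] - Kw/[H+] = excess acid / V_total."""
    # mmol of excess acid per ml of solution == mol/L
    delta = (C_ACID * V_ACID - C_BASE * v_base_ml) / (V_ACID + v_base_ml)
    h = (delta + math.sqrt(delta**2 + 4 * Kw)) / 2  # positive root
    return -math.log10(h)

for v in (0.0, 50.0, 100.0):  # start, equivalence point, end
    dv = 1e-6
    lo = max(v - dv, 0.0)
    slope = (pH(v + dv) - pH(lo)) / (v + dv - lo)
    print(f"{v:5.1f} ml: pH = {pH(v):.2f}, slope ~ {slope:.3g} pH/ml")
# Roughly 0.017 pH/ml at the start, ~2e4 at equivalence, 0.006 at the end:
# steep but finite at equivalence, thanks to water autoionization.
```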
f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.
results: Pretty good, though treated my "roughly 50" as more of a bar than it should be. Initially got 53 elements/compounds, all of which were valid. Missed SiHF3 SiH2F2 SiH3F, but accepted these without objection on being prodded with them
g) Q: What is an example of a molecule that has an S4 rotation-reflection axis, but neither a center of inversion nor a mirror plane?
results: It originally said no such molecule exists. I had to walk it through C(CFClBr)4 and the local configurations at each of the substituents. Rounding this discussion to incorrect. At least it didn't present a molecule and falsely claim it met the criterion or falsely claim that such a molecule would be impossible.
How does this rank against your other model tests? It seems like the vibe is that grok is overfit to benchmarks and isn't actually all that good?
Many Thanks! I think that Gemini 2.5 is still slightly better, report at: https://www.astralcodexten.com/p/open-thread-380/comment/115649196
Gemini got (g), the S4 molecule question right, while Grok 4 initially wrongly said that no such molecule existed. Most of the rest of the answers are pretty comparable, albeit Grok did better on (c), the hydrocarbon question.
IIRC, none of the models has gotten (b), the FeCl3, CuCl2 solution question fully right yet. I think that there are _lots_ of examples of colors from d-d electronic absorptions in transition metal complexes in the training data, and few examples of "oops, yeah there is a d-d absorption, but it is low energy and pushed down into the near-IR, and the visible _color_ is all from charge transfer absorptions" as is the case for these solutions. I'm not _trying_ to create a "trick" question, but it seems to act that way...
( Personally, since a bright undergraduate should be able to answer all of these correctly, I'm not willing to consider any AI to be a contender for AGI till it gets all of these questions right, without additional hints/prodding. )
My current internal ranking has Gemini on top too
Many Thanks!
Currently a one-man side project:
https://laboratory.love
Last year PlasticList discovered that 86% of food products they tested contain plastic chemicals—including 100% of baby food tested. The EU just lowered their "safe" BPA limit by 20,000x. Meanwhile, the FDA allows levels 100x higher than what Europe considers safe.
This seemed like a solvable problem.
Laboratory.love lets you crowdfund independent testing of specific products you actually buy. Think Consumer Reports meets Kickstarter, but focused on detecting endocrine disruptors in your yogurt, your kid's snacks, whatever you're curious about.
Here's how it works: Find a product (or suggest one), contribute to its testing fund, get detailed lab results when testing completes. If a product doesn't reach its funding goal within 365 days, automatic refund. All results are published openly. Laboratory.love uses the same methodology as PlasticList.org, which found plastic chemicals in everything from prenatal vitamins to ice cream. But instead of researchers choosing what to test, you do.
The bigger picture: Companies respond to market pressure. Transparency creates that pressure. When consumers have data, supply chains get cleaner.
Technical details: Laboratory.love works with ISO 17025-accredited labs, tests three samples from different production lots, and detects chemicals down to parts per billion. The testing protocol is public.
You can browse products, add your own, or just follow specific items you're curious about:
https://laboratory.love
This is great! Are you also testing international products or just US-based for now?
Thanks! I'm certainly happy to test international products if they can be shipped to the United States. If you have something specific in mind and want to make it happen, let me know.
I did not realize that the name "Laboratory[dot]love" would auto-link at every single mention. There doesn't seem to be a way to display it without Substack automatically converting the text to a hyperlink. Sorry!
If we do enter a race condition with AI and the end is near, what event would you see as the decisive "oh shit" moment? Or rather, what would be the point of no return, where an investment in a bunker in New Zealand seems like a reasonable expenditure?
The start of a U.S.-China War where both sides are making heavy use of AGI and drones.
All the LLMs are in pieces on the ground.
Laughter is emitting from all speakers.
Oh, and your ice cream is melting.
>What event would you see as the decisive "oh shit" moment?
I'm not sure I'd call this an "oh shit" moment, but more nearly a "recursive self improvement has solidly kicked in" :
Zero hiring of programmers by the major AI labs.
If the end is near I doubt New Zealand bunkers will help? Unless maybe the country about to lose the race does the game theory thing and starts a nuclear war, but I don't think real humans think like that very often.
Most of the AI-takeover scenarios don't give much warning. I think AI 2027 is right that the most dangerous capabilities will probably remain secret for national-security reasons. I suppose one oh-shit moment would be a "warning shot" scenario, where AI kills a bunch of people in obvious pursuit of nonaligned goals, but somehow gets stopped. If that is followed by non-action by regulators, we're almost certainly screwed.
New Zealand seems like a bad choice, it's an advanced economy and if things go wrong then they're not feeding everyone in a low tech way.
You probably want the lowest tech place with the smallest population you can find, somewhere you could throw all the computers into the sea within an afternoon if need be.
I mean, New Zealand has a lot of sheep per person. Sheep don't need much high-tech maintenance. Ensuring sufficient calories for the indefinite future probably isn't going to be a problem, at least as long as the humans are OK with living on a diet of largely mutton and sheep's milk...
Why does "distrust in institutions" seem to be so selective?
(The following is very much not rigorous, trying to get a sense of what different views exist on the subject.)
This is not a novel observation, but: many people and groups who pride themselves on being skeptical about their sources of information often rely on very questionable alternatives. I'm thinking of the anti-vaccination activist who loved attacking me for believing whatever came out of the CDC or FDA, but would forward me endless videos from anonymous WhatsApp groups, or the right-wing Israeli extremist who loves belittling the "Mainstream Media" but forwards me things all the time from twitter accounts he doesn't know. I wrote about one particular aspect of this regarding "MAHA" and the extent to which financial interests are evidence of corruption:
https://meirbrooks.wordpress.com/2025/07/02/a-perfect-moment-of-populist-clarity-tucker-carlson-rfk-jr-and-a-well-timed-health-ad/
(This was written for a small group of friends so apologies again for the lack of rigor, this is me trying to write regularly even if it isn't very good.)
I have various thoughts on this but none that I feel is satisfactory, and while I think many people consider the answer to be obvious, they don't always agree on what the obvious answer is. So I'd be very glad to hear your thoughts, or sources that tackle this issue.
Thanks in advance!
The answer is so simple. It's just the development of cultic alternative realities made possible by social media. See "The Constitution of Knowledge: A Defense of Truth" by Jonathan Rauch.
I read The Constitution of Knowledge and didn't find it very useful. It seems to suggest something like "trust the establishment institutions," which I'm very much sympathetic to, but it sort of sidesteps (as far as I can tell) the question I posed of why it is that people are not doing that these days, and why they are trusting questionable figures and sources.
I think there's something fundamental being missed there. Consider this: what if Trump managed to influence or take over institutions like the media, academia, the CDC, the FDA and so forth by packing them with loyalists where possible, threatening funding (as he has done) to those who don't toe the line, etc.? This is clearly not a crazy scenario. In such a situation, would we be saying that we should trust institutions over rebellious outsiders?
A great example of this is the Ukrainian show Servant of the People, which portrays Ukraine as a place where oligarchs have taken over media and captured institutions such that you need a mostly anonymous history teacher to become a populist leader on the back of a viral youtube video. I think the argument we need to make is not that such an anti-establishment movement is always wrong or misguided, but that it simply wasn't the case that America was captured in that way, though it may be careening in that direction now. And that goes back to looking at where the true content is that can be trusted, and not just what is institutional versus not.
This hypothetical merely reflects what the Democrats and the Deep State already did over many decades.
Huge numbers of institutions merely reflect the Democrat party's stance and messaging on any remotely political issue so of course Trump supporters don't trust them.
And of course Trump should be as aggressive as possible in making them loyal to MAGA. The only asymmetry is Trump is actually trying to help America while the left is trying to destroy it to build something else.
Quick question so I can answer this the best way I can. Do you live in the US yourself? I haven’t heard of the Ukrainian show. Do you live in Eastern Europe?
I live in the US, I just looked it up when Zelenskyy was elected and got hooked :)
The show is on Netflix, for the record.
I'll go back to tribalism, and throw in malicious actors: some official, in the form of rage-bait journalism, and some simply nihilistic individuals who think it's fun to burn the system to the ground.
Add a poorly informed or simply indifferent public who lack any epistemic humility and seem unaware of the basics of US Civics that were required reading 25 years ago, stir well and find yourself in 2025.
I really don't think tribalism (at least straightforwardly understood) gets us very far. Few people are of the "tribe" of Trump or RFK2. They aren't "people like me" to almost anyone.
This is not hard to figure out; it is the friend-enemy distinction. Once someone sees the CDC as an enemy, anyone who also sees the CDC as an enemy is a friend and thus trustworthy. That, and of course the little fact that there are really not many trustworthy alternative institutions; yes, it sometimes happens, but not that often. No one really built a robust anti-CDC.
I know this perfectly well because I used to hang out with such circles. It is really a besieged town mentality, you know, they are big, popular, have lots of money and status, and here we are, a romantic tiny band of freedom fighters holding out against impossible odds. All comradely. At that point it is hard to not believe whatever anyone in that tiny band says. It could get you ostracized, for starters. Someone who would only believe a trustworthy anti-CDC, given that it does not exist, would be very lonely. Better believe the crackpot, at least we are allies and friends and bond that way.
I think you hit upon an important point, about the *bravery* attributed to those going against the flow. It is much easier to see e.g. Joe Rogan or RFK2 as brave given their anti-establishment views than it is, say, Fauci. I'm not sure it really takes more bravery to be an RFK than a Fauci but I can certainly see it. Thanks for this observation!
But there have to be some bounds on this, right? There are lots of groups that you could consider "anti-establishment" (I used Farrakhan as one example) that I don't think have gained power. And there seem to be some common threads in multiple countries that share very little. For example, why is it that successful populist leaders these days tend to identify themselves with "the right"? Trump, Netanyahu, Bolsonaro, Modi, way back to Berlusconi, they primarily identify themselves with the right. This is not universal-- AMLO in Mexico is typically mentioned as a left-wing populist-- and certainly wasn't always, as in the obvious case of Chavez. But in most of these examples, the story seems to be of right-wing populists facing off against leftist centrists (with mostly marginalized populist wings).
That's my read, in any case. "My enemy's enemy" is one piece of this but can't be the only piece.
Interestingly, if anyone did this contrarian stuff in a trustworthy way, it was RFK2. He assembled a team of doctors and scientists who published an anti-Fauci book citing hundreds of studies, each with a QR code so anyone can look it up with their phones. This is probably the highest level of “alt” info that is out there. The interesting thing is that it actually did not get popular.
So here is another aspect. Quite honestly, social media with its “just scroll down with your thumb” thing is making us lazy. Buying a book, reading it, looking up the studies with QR code is maybe too much work. We are living in the age of ten second attention spans and that may be part of the picture.
Yup, this happens all the time. The serious stuff, or the ideological guys, never get very far, because the moment Trump shifts direction, they don’t, and then they’re castigated as traitors or whatnot.
I appreciate your friend/enemy distinction. Wanted to note the use of "vaxx skeptic" as a good proxy for "not credulous" and the use of such to flag "interesting Russian sources on the Ukrainian War"
"A trustworthy anti-CDC" -- does not Dr. Malone count as this? He does in fact have some expertise on the subject at hand. Or, Dr. Folds?
"nobody really built a robust anti-CDC"... that's kind of right, you have basically everyone who's willing to say "that doesn't make sense, and here's my math/physics/etc to back it up." 6 feet of isolation (pulled from tuberculosis, of all things, which has a very different infection pattern than covid19, and is still negative pressure in a hospital situation)...
Given that Robert Malone was a pioneer in mRNA transfection (in some sense, he co-founded the technique behind mRNA vaccines), yes, he has some expertise in vaccines. More than just "some". Malone saying something about Covid vaccines would be roughly like Edward Jenner saying something about the smallpox vaccine.
>Why does "distrust in institutions" seem to be so selective?
The phrasing suggests you consider WhatsApp groups and Xitter "institutions" that compete with the mainstream ones. I think that's a mistake. Trust in people you interact with socially runs through different channels than trust in institutions. (Note that when people send you stuff, the person they trust is the one they got it from, not necessarily the original author.) Trust in people is a constant of human nature; the question is "Why *these* people?", and the answer is that they were around. I think most people who distrust the institutions on covid aren't especially into alternative health, and many aren't even against other vaccines - but some of them are, because they fell in with the alt healthers. I believe there's an at least equal and probably much larger group who believes the mainstream not for institutional reasons, but because that's what their doctor relative said.
Maybe I should clarify. When I talk of WhatsApp groups, I don't mean you talking to friends and family on WhatsApp. I mean news groups that act as information sources for large numbers of people, often run by anonymous users. The materials forwarded to me by the anti-vaxxer I spoke to were often from such groups, where she did not know the people running the group at all.
I think this touches on an important point. I think we get the impression of large groups of (e.g.) Trump supporters who know each other talking amongst themselves about him and his politics, and forming an echo chamber that way. I'm sure that exists but I've encountered many people who pretty much just interact with the news as a one-way street, rarely even interacting with other Trump supporters on a personal basis. Same with anti-vaxx people; many are isolated by their surroundings, and while they'll have contacts they talk to about these things, they often don't know them personally. So it's almost the reverse of the story that they're trusting those close to them over those far away. This is anecdotal, I should mention; I don't have any way of knowing what percentage of "trusters" are of this type.
I think it's quite possible for parasocial relationships to count as "personal" trust. I would still think that these isolated cases aren't typical - people tend to align media consumption with their friends over time. But it's more common than radicalising yourself through books alone was, and you are probably much less likely to meet them if they aren't isolated.
What I'm saying is that these sources are pretty much the polar opposite of people you trust because you know them. Note the rise of anonymous or pseudonymous accounts that are widely followed. It literally doesn't matter who they are, what they look like, what their background is. Sometimes they are literally unknown, as in the case of the WhatsApp groups I mentioned, where you just get this barrage of links and videos on a daily basis.
In some ways they are the opposite and in some not. Did the dox make a big difference in how much people trust Scott? I don't think so.
Sure, but what I’m saying is that people who trust Scott’s blog don’t do so because they know him personally. So it’s the opposite of the tribal story: you didn’t need to know a thing about who he is to trust what he wrote, because it was the content that mattered. Similarly, a lot of these other accounts are trusted despite them being completely anonymous; this is the opposite of trusting people who are like you.
I think the autists and "this doesn't make sense" rank idiots (aka the folks not automatically running with the herd of "the smart people say") probably outnumber the "doctor relative says" (because most doctors are "smart midwits").
It's funny: so far I thought your comments added a nice variety to this place, and then I don't understand the first one I get. What's the mechanism from smart midwit -> lower number?
Thanks! It's alright, I'm frequently indecipherable (that's part of why I'm here, to work on that!).
The thing about midwits is that they have so much invested in being the "smart guy." Their worst fear is to be seen to be wrong, to publicly fail -- and be the only one who did so. (This is something I struggle with, so don't think I'm just slamming the next person.) So, when all the big medical associations say, "the vaccines are safe and effective," doctors have a very big tendency to "line up with the rest of the herd."
Certain fields attract midwits, and doctors are one of them -- you've heard the classic "My son the doctor!" (which means he's smart and moneyed).
There was a researcher who literally nagged her mom into getting the mRNA vaccine (she had an autistic ten point list of how to do it). Then, her mom died of vaccine related injuries. So, the researcher kills herself -- she couldn't take having caused her mom's death. Problem: now the lab is out two researchers.
Well, one of us is not understanding something. What I said was that some people believe *the mainstream advice* because that's what their doctor relative says, rather than for "directly institutional" reasons - that seems in line with the doctors lining up with the rest of the herd, no?
Yeah, my bad. Looks like I flipped something in my head.
I can certainly understand "my aunt wouldn't lie to me" as a reason to believe in the mainstream advice.
The people who distrust institutions and *don't* also believe some other random bullshit aren't very noticeable because they're not busy forwarding you random bullshit. They're just sitting there quietly being skeptical of everything.
>They're just sitting there quietly being skeptical of everything.
Put that way, it puts me in mind of the Ruler of the Universe from HHGttG, who lives in a shack on an otherwise-uninhabited planet and isn't sure whether the very serious men who appear to visit him from time to time are there to ask him questions, or if they're coming to sing songs to his cat and he only thinks they're asking him questions, or if they never really came and his memories of them are just an illusion to account for the discrepancies between his immediate physical sensations and his state of mind.
+1
Realizing that the CDC is not as competent or honest as you'd hoped doesn't mean you start taking your medical advice from RFK Jr or randos on the internet, it means you give CDC's statements somewhat less weight and trust. Realizing that the NYT won't cover some stories straight doesn't mean trusting everything @analhitler666 writes on Twitter, it just means putting less confidence in the NYT's reporting. Realizing that peer review is broken and statistical malpractice is commonplace doesn't mean you think everyone with a website is as good a source of information as the scientific literature, it means you add some grains of salt and treat the claimed results as provisional and actually read the paper to see what might have gone wrong instead of treating peer-reviewed papers as some kind of truth oracle.
But as with so many other things in our world, this gets no attention/outrage/clicks.
> They're just sitting there quietly being skeptical of everything.
But at the end of the day, they still need to make a decision. Either wear a mask, or don't. Either invest in index funds, or don't. Either vaccinate your child, or don't.
Perhaps the difference is that in the past, these people were more like "ugh, I guess I will follow the mainstream advice, even if I feel uncertain about it", and these days this default has changed?
These days it's a lot easier to gather more facts, and to listen to the damn physicists about masking. It's a lot easier to consider studies about the deleterious effects of vaccination (polio, anyone?)
You can pull the numbers on index funds, and find your neighborhood autist to tell you why index funds are a good idea. But, in all reality, you're probably just dumb money.
Agreed! This is part of my problem with the idea that we can explain the politics of the last decade by saying "distrust in institutions has grown." If all-around trust has declined that could make sense, but instead trust in many institutions has declined even as trust in much less reliable figures and institutions has grown and become much more important. Whether you trust Fauci's word or not is not especially important for your decision-making, because there are so many checks and balances and figures you can listen to backing him up. But when Musk says "Trump is in the Epstein files," you pretty much just have to take his word on it, or not. And in so many examples, even very intelligent people choose to just take his word for it. That's the mystery to me, the trust rather than the distrust.
Trusting Fauci got harder when he flip flopped about masking like some sort of dying fish. 100 years of epidemic-fighting stood in the way of quarantines and "keeping people at home." So, yeah, there's checks and balances, if you want to look for them. Covid19 bears a surprising resemblance to dengue (whose vaccine got removed) -- and they scrubbed that resemblance from wikipedia.
Trump is in the Epstein files? Sure. For kicking the guy out of Maralago. Musk knows this too.
Trust is a very hard quantity to discuss. How do you trust the whistleblower? Well, you have to admit they put their skin in the game...
>"Trusting Fauci got harder when he flip flopped about masking like some sort of dying fish."
But this is what I'm saying. When you interrogate the loss of trust in Fauci, it usually boils down to roughly three things: masking, which at worst was a "noble lie" and at best was not a lie at all, and was corrected within weeks; six feet social distancing, which was an arbitrary number but an important directive; and some pretty debatable financial conflicts of interest. Other things come up but these three keep surfacing as primary pillars.
For these, Fauci is so excoriated he was literally called "The Devil" on a podcast episode hosted by Bari Weiss, who is definitely not the most extreme on this issue.
Now take someone like Joe Rogan. I don't think anyone will deny that the list of misinformation from his podcast is much longer than what I just mentioned. Yet a lot of people who would consider Fauci completely untrustworthy will trust Joe Rogan. Why?
It sounds at this point like you're starting with a prior belief that Fauci is trustworthy, and Rogan is not, then wondering why people trust Rogan more than Fauci despite that. All I can say to that is "check your premises".
While you're checking, I'll toss another log on the pile of things against Fauci: he was recorded during Congressional testimony, curtly exclaiming that "to criticize me is to criticize science itself". I can see no context one could possibly put around that (for the record, his context was his claim that he was following scientifically determined recommendations, so critiques of him were really critiques of that process) that makes the remark reflect well on him. It is the type of remark that a scientific person would *not* make (at least, not one with at least a smidgen of awareness of how it would sound). It's unusually bad in that it threatens to negate every other claim in his favor - his education, his career, and his experience. Anyone who casually resorts to an overt logical fallacy like that demonstrates an inclination to take all that competency and aim it at deceiving the public, and couples it with apparent contempt for the public's intelligence, as would follow from thinking that they're not bright enough to notice the fallacy.
Rogan, by contrast, is no medical expert, but he explains everything he knows to the best of his layman's ability. There's no hiding behind authority; he repeatedly eschews it. Anyone can question what he says, and his implied reaction will be "what's your argument?" rather than "how dare you?". Fauci won't do that.
It's as other replies have now said repeatedly: people perceive a critical difference in each of their interests. One is aligned. The other is not.
I'm really not starting with any prior belief here. Yes I think Fauci is infinitely more trustworthy than Rogan. But I'm just posing the question: if you distrust Fauci, why trust Rogan?
And this is why I don't want to get into every Fauci quote and so forth: any individual can be judged unreliable, because everyone has made mistakes and slip-ups. But are you really going to say that Trump, Rogan and RFK Jr. don't have at least a longer list than Anthony Fauci?
Rogan makes statements with certainty that a simple Google search can disprove. I think he often knows they're false or unfounded, but put that aside. Why is a genuine crackpot with a long trail of basic mistakes behind him more trustworthy than a knowledgeable "politician" type, in the sense that the politician will often massage the truth or mislead with facts or whatnot?
And again, why are we not seeing a big following of people who are anti-establishment but actually knowledgeable? Surely they exist, but they aren't the ones benefiting here.
Why would you say masking was a "noble lie"? I'd say it caused a lot of deaths, particularly in New York City, where old folks sat with surgical masks on, despite us knowing that it was an airborne virus and the masks weren't doing jack.
Compare other interventions, actually effective interventions.
6 feet social distancing wasn't an arbitrary number, it was a flat out misuse of numbers, and the physicists got really up in arms about it.
Fauci greenlit going around Obama and Ft. Bragg and DARPA's "we don't do plagues" by shipping the plague generation work off to Wuhan.
Being honestly mistaken doesn't generally result in a loss of trust. Telling "noble lies" does.
But (a) The characters I mentioned (most obviously Trump) have told plenty of lies they know to be false and (b) Why should someone who is genuinely but clearly mistaken about these issues be trusted to give good information? If they don’t know the truth themselves I still won’t want to rely on their words even if they’re not lying.
To be more precise, honest mistakes usually reduce trust less than telling noble lies. It's not zero - someone who makes honest mistakes often enough is obviously harder to trust.
People have competency, and motivation. People who make honest mistakes have less competency, but their motivation implies that errors will cluster in the direction of correct answers. Someone who tells noble lies is by definition someone with a nonzero motive to deceive, so their goals suddenly become critically important, precisely because the noble lying means those goals are no longer evident. If their goals cannot be determined through actions, then it's possible that their goals are completely unaligned from ours, and their competency is either low, in which case they're at least as unreliable as honestly mistaken people, or high, in which case it's very likely aimed at deception.
I think it's that distrust in the *old* institutions has grown. We're splintered nowadays when it comes to paying attention as an audience; there's no longer "the entire country is watching John Johnson on the nine o'clock news" and taking Johnson as the voice of authority.
Because it's the 'new' media that has broken many of the stories about cover-ups, 'noble lies' and the like, people trust them more - Johnson is a liar? tinfoil99 on their YouTube channel revealed it all and provided the hard proof? Now I trust tinfoil99 more than I do Johnson.
As well, we tend to trust more the outlets that reinforce and support our biases. I don't believe Big City Newspaper because the columnists regularly write opinion pieces sneering at the likes of me, which are then passed around approvingly by the kind of people who read Big City Newspaper. So I trust more (if I trust at all) Local News or guy on his own blog that writes about how Big City Newspaper is full of [insert boo outgroup here] and that's why they're all bought and sold and paid for by [insert big moneybags person/persons/group of disfavour here] and that's why they sneer at us ordinary, decent, hard-working people.
I don't know that "the 'new' media that has broken many of the stories about cover-ups, 'noble lies' and the like"-- Joe Rogan is not an investigative reporter and neither are Trump or RFK2. Most of the time (as far as I can tell) "New Media" rely on the old media to do the heavy journalistic lifting, and new media are mainly pointers and aggregators of information. So for example, on anti-vaccination stuff, the vast vast majority of evidence relied upon is studies and data provided by scientists, health institutions and health professionals.
The confirmation bias thesis is one I take seriously but I keep encountering counterexamples. For example, the anti-vaccination activist I spoke to said that her first encounter with anti-vaccination ideas was from a man that she initially saw as a complete crackpot. You generally don't start as an anti-vaxxer, so the question is what pulls you in to something that initially is foreign to you. Meanwhile Trump got second place in Iowa in 2016 after a rally where he said "How stupid are the people of Iowa?" (https://www.bbc.com/news/world-us-canada-34812703). So I don't know that the whole sneering thing is really fatal.
Nitpick: I'm an old guy and a techie/science nerd, and mainstream media reporting on science has mostly been very bad for my entire lifetime. You'd get occasional high-quality reporting from specific reporters who'd accumulated some expertise in a particular area, but usually they were and are lucky to get the names and terms right.
The good news is that there are alternatives, and I can listen to TWIV instead of NPR to get coverage of something interesting going on in the world wrt viruses, and be better informed. The bad news is that there are alternatives, and you can listen to folks who don't know what the hell they're talking about but sound convincing to the uninitiated. We went from most everyone eating the same mediocre but serviceable cafeteria food to some people eating at Michelin star restaurants and other people scraping roadkill up and eating it raw.
Yes! In fact, this is a great way of sharpening my question here.
Let's take the populist argument at face value: US institutions were politicized and corrupted, so we need a leader who can replace all those politicized figures and install really competent people who will speak the truth and steer these institutions in the right direction.
In a case like that, you'd expect people to flock to dissident politicians/scientists/journalists who are *more* capable and professional, more knowledgeable. I think this is what you're getting to regarding TWIV: there are in fact voices out there who I might trust more than a government figure. But these would be people with deep experience and knowledge, and a low level of politicization.
Instead what we see is the opposite. Pete Hegseth is mostly known as a Fox News guy, and there is a long list of Fox News people in the administration. RFK2 has no medical background and is just an inherently untrustworthy guy ("a worm ate part of my brain and died" shouldn't be disqualifying, but it's hard to imagine him remaining credible on the other side). My favorite example is that Scott Bessent is not only a former hedge fund manager, which should make populist types extremely suspicious; he was actually a partner in the Soros Fund Management! And of course Joe Rogan is a stand-up comedian who spouts conspiracy theories that few feel strongly about (like UFOs).
So I don't get why it is that this populist impulse doesn't lead people to more inherently trustworthy dissident sources, but rather to people who have little knowledge and major question marks about their credibility. This points to a problem with the "it's just about being against the establishment"; there are plenty of people outside the establishment with some very strong credibility, but they don't seem to be the ones primarily gaining here.
You sure you don't have Gell-Mann Amnesia? I'm pretty sure most reporting has been crap for my entire lifetime. Sometimes it's propaganda, sometimes it's "I am an idiot, and you're letting me report on something complicated." Most reporting is wrong, though. Often deliberately so (have you seen the screens of dozens of reporters saying the same thing, all pretending to be Murrow? aka "I have a take and I'm going to tell it to you").
Jimmy Dore had Malone on. Rogan does the same thing, picks someone with interesting views, and talks to them for a while.
On anti-vaccination stuff: well, yes, kinda. Seneff is a computer scientist (and yes, she's aggregating -- if she were the one postulating prionic activity, you'd be asking "why do you think this?"). You see a lot of "I can do the math, let me math!" (where people pull publicly available data and roll with the numbers). There are also open-anonymous letters rolling around, discussing graphene and other issues.
Importantly, covid19 was big enough that everyone who could possibly twist their field toward it could take a stab at it. Epidemiologists (or statisticians), computer scientists, lawyers, etc.
Compare Joe Rogan with Larry King. I don't think Rogan is less intellectual or honest than King was; it's just that their job is to have entertaining conversations, not necessarily to make the best available attempt to understand the world. And you can get podcasts like that (think of EconTalk or Sam Harris), but they're a minority taste.
I grew up with science-themed shows that took UFOs and ESP seriously, for that matter. (Remember "In Search Of..."?)
I would agree that "distrust in institutions has grown" isn't a particularly good model for anything. I'm not convinced it's true; I think anyone harking back to a previous era when institutions were either more trusted or more trustworthy is probably just too young to remember what things were like in those days.
Definitely. I often think about the fact that if anything, institutions are much more transparent and trustworthy today than they were 20 (Iraq war) and 50 (Vietnam, Watergate) years ago, and yet at no point until 2016 did the anti-institutionalists gain real power. I think institutions were to some extent more trusted but they were by no means more trustworthy.
I think you might be misleading yourself with the term "anti-institutionalists". Taken literally, there are almost none of these, as they'd be hyperskeptical hermits who trust nothing unless they've witnessed or derived it directly. Taken figuratively, you're probably talking about people who distrust a particular set of institutions, but still trust some.
Going out on a limb: "everyone" used to trust CDC, WHO, NYTimes, WaPo, CNN, NBC, ABC, CBS, BBC, AP, Reuters, HuffPo, Ivy League universities, CalTech, UCLA, Brookings, Pew, Gallup, Wikipedia, Science Journal, Nature Journal, Ibram Kendi. "Everyone" now distrusts those, and has dispersed their trust to other places. Nothing has the prestige or influence of that list, but the "new" sources include WSJ, Fox & Friends, The View, Jon Stewart, Washington Times, WaPo-under-Bezos, Matt Taibbi, Bari Weiss, Tucker Carlson, Joe Rogan, Breitbart, Ben Shapiro, Trump, Bernie Sanders, AOC, Tulsi Gabbard, Dan Bongino, Dan Crenshaw, Jimmy Dore, Glenn Beck, Bret Weinstein, Eric Weinstein, PJMedia, NewsMax, Scott Adams, Bridget Phetasy, Megan McArdle, Ann Althouse, Dave Smith, Candace Owens, and of course, their local pastor, mayor, city councilperson, school board, or therapist.
None of these is as singularly trusted as the previous set, and there's a lot, lot more of them. Also, I put "everyone" in quotes because I don't see a lot of individuals shifting. Everyone pretty much trusts the same sources they used to; only the relative volumes have shifted.
A key difference between the right-wing and left-wing figures on the list is the amount of influence they have on the leadership. Gabbard and Bongino are in positions of power, Trump is president, and he listens and speaks to people like Carlson and Rogan. AOC is the most powerful left-wing figure there and the party changed her more than she changed the party; it's pretty hard to imagine Biden or Harris acting on a post by Ibram Kendi, Jon Stewart or Matt Taibbi.
There might be an equivalence in terms of the reliability of these sources, but there's a strong question about why one has racked up so much more power and influence.
Road to Zero Carbon is indeed about as transparent and trustworthy as Operation Iraqi Liberation.
Just simple intellectual dishonesty and lack of reward for seeking truth. Also there are simply much more powerful misinformation machines today that shore up ideological bubbles with endless justification.
I do think that the reason you see things today that are very different from 30 or 50 years ago is about technology. But it still doesn't explain the dynamic to me. Anti-vaxxers are often risking their lives and the lives of their children if they're wrong; they have really strong incentives to find the truth. But most (from my experience) trust some very questionable sources with little effort to discover the truth.
> Anti-vaxxers are often risking their lives and the lives of their children if they're wrong; they have really strong incentives to find the truth.
Humans are not instinctively rational. We do not feel the abstract things. So it is perfectly possible to do the wrong thing while strongly believing that you are doing the right thing.
And when you finally get the sensory evidence (e.g. you watch your child cough blood and suffocate), it is typically too late to do something about it.
This flies in the face of actual epidemics (measles, Ohio) and evidence that we have in America. You're wrong, and we got them vaccinated. Most of the kids were fine.
I was just responding to the idea that there was a "lack of reward for seeking truth," which I understood to mean that people often don't check information if it doesn't really affect them materially (see the comment by Paul about actionable news).
That was the same 30 or 50 years ago as well, it's just that these people were communicating in other ways that probably didn't come to your notice.
A few factors I notice:
* Trust is parametrized. I trust the butcher at the supermarket to give me advice about meat, but not about hair care. I trust the person with nice hair to give me advice about hair care, but not home improvement. And so on.
* As Kamateur mentions, locality matters - if you can tell your advisor shares your interests, then trust can go up (for any subject that affects both of you). In the early days of the Internet, simply being online was a signal that raised trust, because most people online back then were university students like you, and often programmers. And you could tell nationality by looking at the email address, which was usually accurate. (What a magical time!)
* When it comes to news, it's important to remember that most news isn't used for actionable decisions; it's used for entertainment. "Did you hear what Trump said yesterday??" "What's going on with Kate Middleton?" Stuff like that. You might change your vote, but that's only once every 2 or 4 years (in the US), and the bigger factor is going to be whether your neighbors voted the same way. Meanwhile, you're going to just want news that makes you feel good about the way you look at the world, which means it's going to be whatever reinforces your most important priors.
News that -is- actionable - weather forecasts, stock prices, upcoming events - reads very differently from news used for entertainment, and one's trust model will differ accordingly.
I often recommend _How to Watch TV News_, by Postman and Powers, to people who ask about news and trust. It's decades old. Still holds up.
Along similar lines, I realized a few years ago that there's a difference in my internal experience between when I am:
a. Discussing something in my area of expertise
b. Discussing politics and the like
I'm pretty verbally adept, and I probably sound at least as confident and competent talking about politics as talking about my field. But in one case, I am not really much more informed than anyone else, while in the other, I have deep expertise and wide experience. One thing I have always appreciated about Scott's writing is that he tries to be clear about his epistemic status. Scott talking about the morality of cutting PEPFAR is very different in some important ways from Scott talking about the effectiveness of various antipsychotic drugs.
-- But isn't the problem precisely that people are trusting (e.g.) a comedian (Rogan) and an environmentalist (RFK2) over a doctor with decades of public health experience (Fauci and many others)? The random WhatsApp groups are often not credentialed in any way. A podcast episode about "MAHA" featured three people railing against establishment medicine, none of whom had a medical degree of any kind.
--"News that -is- actionable - weather forecasts, stock prices, upcoming events - reads very differently from news used for entertainment, and one's trust model will differ accordingly." I completely agree in theory, but my conversations with an anti-vaxxer sort of shattered the idea that people will rely much less on trust when it can have a serious impact on their lives. I'm talking about people who are aware that if they are wrong, they are actively put their lives and the lives of others around them in danger, and they're still often forwarding me things where they didn't even bother to open the link, they just forwarded me something they saw on a group without checking anything. Sometimes these people get ostracized by those around them and pay very serious prices, including literal prices if they're paying for some alternative therapies or giving money to these causes or what have you.
So I'm not sure.
(Thanks for the tip about "How to Watch TV News"! Will check it out)
How is it so difficult to comprehend that expertise is a necessary but not sufficient condition for trusting a person on a topic?
If the person is fundamentally crooked, his expertise is completely irrelevant. And for plenty of things, you need not be an expert to see something that is so obvious that it requires industrial scale censorship to be suppressed.
If expertise is necessary to trust someone on a topic, why do so many people trust RFK Jr. on medical issues? He has no expertise in medicine, and some of the things he says on medical matters are easily refuted or simply absurd. I could give a list, but I don't think this is especially controversial.
That is the part I'm asking about: not why people can distrust experts (that makes plenty of sense), but why they then turn around and trust some much worse sources.
The people who trust RFKJr in a vacuum might not share the usual trust methods of most other people - I'm referring to people who never trusted vaccines, even traditional ones.
The people who express trust in RFKJr are a superset of that, which also includes people who know the choice was between RFKJr and whomever Harris would have appointed head of HHS, who they predict to be as untrustworthy, and possibly more so. For them, trust is a spectrum, RFKJr might be a 3 on a 1-10 scale, and the alternative was a 1 or 2.
There are at least a few people who have scientific education and notice vaccines aren't as safe as they were made out to be. On at least one occasion, they've found that the methods employed for proving safety weren't carefully followed. They note that safety is itself a spectrum: vaccines are inherently somewhat unsafe, since they involve injecting antigens into the body past its natural skin and mucosal barriers, and this is conspicuously omitted by vaccine advocates, as if safety were a black-and-white affair. This coincides with an account of the financial incentives for selling vaccines to people, and for avoiding liability in their administration.
"For them, trust is a spectrum, RFKJr might be a 3 on a 1-10 scale, and the alternative was a 1 or 2"
This is exactly the question though. What possible basis could someone have to say that a hypothetical Harris head of HHS would be a 1 or a 2?
What happens is that a handful of mistakes or misleading statements by Fauci -- literally countable on one hand, as far as I can tell -- make him "The Devil," as one guest on a Bari Weiss podcast said, whereas a much longer list of much more damaging falsehoods by RFK Jr. gets a much lower visceral reaction and has little salience. This happens with Trump and Rogan as well, of course. I don't think we can understand this phenomenon unless we recognize that it isn't about some objective measure of trustworthiness, but rather about why the lies and mistakes by these different figures have such different probabilities of "sticking." Same thing with financial incentives and corruption. Even if you take the very, very worst interpretation of the whole Hunter Biden laptop thing for Joe Biden, it wouldn't reach the levels of corruption achieved by Trump's crypto dealings. Etc. etc. etc.
@Meir Brooks: I don't think you notice you're doing this, but "frequently spouts nonsense on the matter" is just begging the question here.
Trump supporters don't think he's talking nonsense. You do, and that's because whatever he says gets filtered through a dishonest leftist narrative.
He's studied these issues for ages and we think he has good character. That means he's going to be trusted and these frantic denunciations by industry attack dogs, Democrat politicians and your paid off media are simply not credible.
Please understand something. For a normal person without technical expertise, every debated technical issue is a simple factual clash where if you pick a side, it will be because you trust the expert on that side more.
Countless former liberals have come around to supporting Trump; it's basically you guys waking up one by one and realizing your experts are lying to you and that you've been on the morally bankrupt side all along.
I don't think that expertise is always "what it's cracked up to be" -- expertise is "I know how to solve what's already been solved, and can generally tell you when your new idea has already been done, and the issues involved in it."
Friend of mine did Science Olympiad once. Came up with a novel design for a pyramid that engineers now use. The judges were shocked that someone had actually "created something" for science olympiad. (Said person also was given a graphing calculator for Calculus class, proceeded to invent most of Calculus 3 on his own -- he got to use it on tests, while the rest of the class did not.)
"a doctor with decades of public health experience"
Yeah, what was interesting about Covid and the American response was finding out that Fauci, the expert of choice for the Trust Science! crowd, had a previous record in public health where he was excoriated as an AIDS genocider:
https://www.pbs.org/wnet/americanmasters/how-dr-fauci-handled-aids-crisis-jexipk/26361/
"Dr. Fauci’s response to the AIDS crisis in the 1980s was first widely criticized by LGBTQIA+ activists. “We wanted treatment because we were sick and the only place where there was any possible area to get any treatment was through the clinical research system. And that’s what led us to you,” said AIDS activist David Barr. However, in later years he became a widely respected ally, eventually developing lifelong friendships with the activists."
https://apnews.com/article/fact-check-aids-hiv-fauci-covid-pandemic-833586389602
"CLAIM: The majority of AIDS patients died from medication developed when Dr. Anthony Fauci led the nation’s response to the emerging epidemic, not from the virus itself.
AP’S ASSESSMENT: False. While it’s true that Fauci had been a leading researcher when AIDS emerged in the 1980s, the claims that azidothymidine, commonly known as AZT, killed more people than the virus itself are baseless. Public health agencies from the Centers for Disease Control and Prevention to the World Health Organization, as well as prominent AIDS organizations and researchers, told The Associated Press the drug remains in use today as it’s been shown to be effective at keeping HIV in check when used in combination with other medications.
THE FACTS: Social media users are once again sharing the long debunked notion that Fauci, the face of the nation’s response to the coronavirus pandemic, advocated decades earlier for a drug to combat the emerging AIDS epidemic that turned out to be more deadly than the virus itself."
Villain to hero to villain again? And that's why the experts put forward by the media as "shut up and do what you're told" are mistrusted.
One data point: Fauci was interviewed on TWIV many years before Covid, and it was very clear that the hosts (a bunch of academic virologists) considered him a very competent and accomplished scientist. He wasn't just someone powerful they had to humor.
The difficulty for Fauci during Covid, IMO, is that his role was partly that of a scientist (or scientific administrator/conveyor of current science) and partly that of a politician. You had to try to infer whether it was the scientist or the politician talking at any given time.
Look again at that comment about trusting people whose interests align with yours.
Rogan's an everyman. He's not an expert, and never claims to be (not even on things like MMA or mushrooms, which he has some claim to), but he asks useful questions, because they're the questions anyone might ask.
RFKJ is probably fargroup to most people (and maybe outgroup, because Wealthy Lawyer, and Kennedy), but he takes heterodox views. This normally isn't great, but Fauci himself is a Wealthy Doctor, and probably hurt himself immensely by admitting he lied in a way he considered noble, which signals contempt for ordinary people.
One thing everyone knows: an expert isn't necessarily inclined to tell you what's best for you, and one way to tell for very sure is to consider where their money comes from. Most people aren't paying Fauci directly for advice any more than they're paying their own doctors. Moreover, if your expert gets to both tell you what your problem is and also charge you for the solution, what do you predict they'll do?
Even so, a lot of people were inclined to trust their experts anyway and hope for the best - until Fauci admitted he'd lied. If he lied once, he might lie again. (I'm now at the point where I'm mildly surprised when I find people who claim he's -more- trustworthy than other educated people who disagree on Covid treatment.)
This is not to say that everyone's an expert on epistemology, either. If you were to trust whatever some random Covid vaccine opponent told you, you'd be making the same risky mistake as someone who blindly takes Fauci's advice (and, knowing nothing else, a bigger one - Fauci -does- still have an MD).
That said, it's not just Rogan and RFKJ - there were doctors and scientists disagreeing with Fauci, for reasons that sounded comparably plausible to people without MDs, until they got targeted for doing so, and suddenly there were a lot fewer of them. People noticed this as well. And then other things started coming out, like Fauci's involvement in gain of function research, his poor testimony before Congress, CNN's curious emphasis that ivermectin was a "horse de-wormer", etc.
Generally, I think a lot of people concluded that the incentives of Fauci & Co. did not align with theirs. After a while, Covid settled down into this mild Omicron variant so the survival stakes dropped, while the tribal incentive stayed relatively constant.
Fauci has an MD. Yes, but that's really just credentialism at this point. Unless you're going to take at face value whatever Ron Paul (who also has an MD) says about medicine. All medicine, not his specialty.
(Do we have records showing that Fauci did any continuing education? I know practicing doctors are required to do that, and I sincerely doubt Ron Paul does that, but Fauci? Dunno).
Credentialism at what point? An MD might not know how to deal with some novel medical threat, but an MD isn't nothing. I'll trust an MD over a non-MD even for a novel disease, knowing nothing else. If I come across a car accident and render first aid to someone in there and someone comes up and says "I'm a doctor", I'm ceding authority immediately, no matter how good an Eagle Scout I am.
I didn't say "knowing nothing else" above (and repeat it here) for nothing. We go into any situation knowing nothing, and the introduction of an MD implies a great deal of training and possibly experience. To ignore that is foolish. OTOH, to treat it as irrefutable authority is risky in the other direction. We're permitted to take Fauci's actions into account when evaluating his trustworthiness, but let's not pretend the MD never existed.
An MD isn't nothing, sure. Sitting in the Disney World Clinic, with my side hurting, and marked crepitus (as diagnosed by my husband, with notably sensitive fingers), I'm willing to listen to the doctor on "what will happen if I go to the emergency room" -- "absolutely nothing useful. The clinical "fix" for this is leave it alone." I'm not as willing to listen to the doctor saying "go to the ER anyway, you might have this zebra." I was willing to get the doctor's "if this happens, go straight to the ER."
If I was at home, I don't think I'd have run to the doctor to get a diagnosis. As it was, when I got home (days later), I went to the UrgiCare. I was subjected to a diagnostic test that had a 50% chance of detecting the fracture (looked it up later), and subsequently sent to the ER. At which point I got a PET scan. Which told me all sorts of fun things about my diet and... diagnosed me with what my husband had felt two days before. Note that in both the UrgiCare and the ER, I demonstrated the crepitus. This whole rigamarole cost me about 6 hours, and the price tag was around $10,000 (I had insurance, though).
"I'm a doctor" is a fundamentally different thing to say in the course of an emergency (there's legal ramifications to it -- you're pretty much yielding the "good samaritan" defense, as I understand it). Knowing about first aid, I'd probably yield, but that's because I'm not strong. There are time-critical matters that you genuinely shouldn't yield to "just any doctor" (reducing a dislocated shoulder in particular).
GP doctors are trained primarily to be diagnosticians; that's their role in the whole medical apparatus. Problem is, current doctors aren't that good at it.
I'm not trying to pretend the MD never existed. That, at minimum, implies a basic level of "I know what the basic issues for a human being are" (mind, muscles, nerves, blood, immune, etc). But that's the "consider Ron Paul" level of "he has a doctor's degree."
So let me try to boil down the arguments here.
One is that trust accumulates with people whose interests align with the truster's. I don't think this clearly applies to Trump or RFK2, or to "WhatsApp randos". As mentioned, financial conflicts showing that these characters have different interests tend to be ignored, so it's hard for me to see that as the central priority.
The second is that trust can build with "everyman" types. This is what I understood as Kamateur's main point, but I agree with him that this is a hard sell for many of the people we're talking about. Certainly RFK2, who does not at all talk like an "everyman" (and his personality is anything but). I'm also not sure about Rogan and Trump. Trump's background is certainly not that of an everyman even if he speaks bluntly and directly. As for Rogan, I know what you mean by everyman, but he also talks about a lot of things that I think few other people care about (like the UFO stuff, which I think comes up about as often as anything else) and many episodes could easily be clones of other intellectual podcasts. Certainly Joe Biden is as much an "everyman" as any of these guys, and I don't think he benefited from the same kind of trust dynamics where he could say what he wanted and have a loyal base believe it uncritically.
Another argument is that people might believe those with heterodox views when their trust in institutions falters. This feels hard to lean on just because "heterodox" is so broad. Would we expect a bump in popularity for Farrakhan?
The rest of what you're saying makes sense as reasons not to trust institutions or experts; what I don't get is why that skeptical eye basically gets shut with certain figures. As for Fauci, I think popular trust in him declined well before the points you're talking about, and for clearly political reasons rather than anything to do with what he said. And I'm not talking about people who agree with experts who disagree with Fauci on issues where a professional group can disagree. But people will trust RFK2 and Trump well outside those bounds.
So, here's a difficulty I have. There are big topics of importance in the world having to do with military budgets and balance of power on which I would trust a longish effort post by John Schilling more than an in-depth report in the NYT or WSJ. This represents me using my own judgement to evaluate different sources of information in a way that probably looks like those folks taking ivermectin to cure their cancer because they found a website somewhere that told them some bullshit story that convinced them. And yet, I'm pretty sure that John Schilling is actually a better source on some of those topics than the NYT.
I doubt the NYT is ready to admit Pax Americana is dead. Given that, they are so far away from the paradigm that Trump is currently in, that they may as well be out to lunch. Comments Trump makes on National Security (like taking over Greenland and Canada, both security issues on our northern border) ought to be seen in that light.
" I don't think [trust alignment] clearly applies to Trump or RFK2, or to "WhatsApp randos". As mentioned, financial conflicts showing that these characters have different interests tend to be ignored, so it's hard for me to see that as the central priority. "
Trust of Trump or RFK2 is always in context of the alternatives, which were Harris and whoever would have led the HHS under her. Since both have or are expected to have financial conflicts as well, people go to the tiebreaker, which means we're back to things like character (after a few other things, which also hold equal for both). Anyone with the same financial interests as the median voter is in no position to win an election.
I agree that RFKJr isn't widely seen as an everyman (his last name probably drives that home more than anything), and I don't plan to use "RFKJr is an everyman" to support any arguments here. However, as I said before, he has heterodox views. This marks him as not part of the elite establishment, and I notice that's a big factor these days.
Rogan certainly talks about several topics most other people don't care about, but so do most people. Having personal interests doesn't disqualify everymanhood.
Biden is one of the more everyman candidates I'd seen for POTUS in the last 12 years, yes; you're indeed right to point that out, and I think Democrats saw this and that's why they nominated him in 2020. In his case, by the people's standards, the trouble was his senility. For Democrat elites, this was a further argument to nominate him: he would be more pliable, and as long as it looked like he was still the man in charge, it would give more credibility to their initiatives. (Which probably was only a marginal concern to them; I think they didn't see themselves as that estranged from the general population. This might still be the case today; I'm not sure.) Biden circa 2008 would have been a different story: still an everyman (he dressed up as a hotdog vendor on Comedy Central, and looked as natural as anyone), and much more vibrant. Although then he would have been more likely to dissent from fellow Democrats. Plus, he was the king of gaffes before Trump. (In that sense, he made Trump's gaffes more permissible.) In short, Biden would probably have worked if not for his age and timing (in 2008, the Democrats *adored* Obama).
We would expect a bump in popularity for Farrakhan, and we probably did get one. But Farrakhan wasn't seriously running for office by 2016, and moreover, he was praising Trump. In general, we probably saw bumps for multiple populist figures, left and right - Bernie Sanders, AOC, and Biden.
I remarked on Fauci almost exclusively because you brought him up; I remarked generally about trust of authority figures otherwise. That aside, the skeptical eye seems to be working about how I'd expect - open (active) by default, with a memory for reputation and past predictions, and informed by perceived interest alignment, which is in turn informed partially by tribal markers such as wealth, occupation, vocabulary, religion, ethnic phenotype, etc.
Is there anything else about the skeptical eye that isn't adding up for you?
Until your post I hadn't been aware that Farrakhan became pro-Trump. Wow, what a fantastic data point. Thank you!
>"the skeptical eye seems to be working about how I'd expect - open (active) by default, with a memory for reputation and past predictions, and informed by perceived interest alignment, which is in turn informed partially by tribal markers such as wealth, occupation, vocabulary, religion, ethnic phenotype, etc."
I don't know, maybe I'm running in circles here, but I just don't see this as a magnet toward the figures who have gained trust in this era. Their reputations and past predictions have an awful track record; notice how people excoriate Fauci for his masks comment, but Trump's persistent predictions that Covid would disappear like magic within a couple months never really "stuck" as a reason to turn away from him, except perhaps for a handful of people in 2020. Religion is another one that would make sense if not for the fact that these characters are almost comically irreligious. Rogan, Trump, RFK2? If Ted Cruz had won the primaries in 2016, this story of populism would make sense, but instead it went for Trump, he of the "New York values".
One datapoint is that I saw a lot of apparently real commenters in the runup to the 2016 election on right-wing sites whose preferences were Trump, then Sanders, then whomever else. I think people were in the mood for some alternatives to the mainstream consensus.
You really should look at the substacks on the reputation economy.
You gain credibility when you admit to errors. You lose credibility when you stand by obvious errors even when pointed out to you.
How often do right-wing populists like RFK2 and Trump admit to their errors, rather than sticking to obvious lies?
You might find some value in the back and forth Dave Greene and Nathan Cofnas had a while back, focused primarily on... let's call it deception vs. stupidity.
When randos on WhatsApp get something wrong, people think they're dumb. When the CDC or FDA makes a mistake, people think they were intentionally deceived. In this situation, who do you trust? In the Covid era, when Joe Rogan was wrong on ivermectin, it felt like a mistake. When the FDA et al. made mistakes, it felt like a lie. Liars are more distrusted than fools.
Cofnas here:
https://ncofnas.com/p/podcast-bros-and-brain-rot
Greene here:
https://substack.com/@fiddlersgreene/note/c-114927975
When the CDC/FDA tells you that farmers can't handle measuring dosages of ivermectin for themselves, despite the fact that they measure dosages by weight for all their farm beasts...
When Rolling Stone gets caught lying (and Google gets caught location-blocking the article so the hospital doesn't know about it)...
When the CDC calls ivermectin "horse dewormer" despite its daily prescription for 10% of the world's population... (I'm just saying, it's about as safe as medicine gets, far far safer than HCQ).
Thank you for the references! I'll check them out.
I feel like your point raises multiple follow-up questions. The first is what it is about Rogan vs. the CDC that makes one sound stupid and the other sound deceptive. It isn't just power; Trump has a lot of power, and for some reason he is trusted by many of the same people who distrust e.g. the CDC. And I think people trust e.g. RFK Jr. not only to tell them what he believes to be true, but also on the quality of his information. And it also raises the question, which was the focus of the blogpost I linked to, of why similar financial conflicts among the "randos" don't raise the same red flags.
Rogan and Trump both don't claim to be experts. Trump in particular has a habit of "asking stupid questions" to jog the experts out of received wisdom, and put more options on the table. Rogan walks you through his logic (and Uttar Pradesh was a powerful signal that "something cheap was working" or "no intervention was needed").
I think Scott has written about this before: in ancient times it would make sense to trust a member of your village more than someone who wandered in over the hill, because it's more likely the villager shares your interests, bonded as you are by ties of kinship and tribe. No one really has "neighbors" anymore in this sense, but the urge to find sources of information that feel like they are coming from cousins who share the same genes and gods as you is still strong, and this results in weighting ideas more heavily if they come from people who share your cultural values. In other words, "tribalism" doesn't just mean loyalty; it also carries certain preconceptions about who is trustworthy.
Concepts like "expertise" and "credibility" are newer constructs that were always intended to supersede this older mode of establishing trust, but because they don't have that ancient, evolutionary shortcut to the brain, they have to be socially enforced a lot more rigorously to take root, and they fail more easily. Particularly when the experts do not appear perfect at their jobs, or look tribally motivated themselves. They end up just reinforcing the older framework instead of supplanting it, which is why the Covid crisis was probably the single most historically damaging event to the credibility of expertise since we first invented the notion.
Expertise is a midwit term (and as such, isn't really subscribed to nearly as hard by actual smart people). Credibility is a "newer construct" -- but news outlets lose credibility when they sink the Hunter Biden laptop story.
You should look up substacks about the reputation economy.
"expertise is a midwit term" is how terminally online people talk, but when you are sick and go to the doctor you fundamentally want and hope that the person you are talking to knows what they are doing, and unless you possess some domain specific knowledge its going to be hard to evaluate their diagnoses.
That's why doctoring, as a profession, is so obsessed with establishing trust and professionalism. It makes things more profitable, sure, but it's also the only way to make sure that patients listen to you, which can literally be life or death.
Hairdressers are just as obsessed with occupational licensing as doctors, and for precisely the same reasons.
... you mean you don't look up domain specific knowledge at your first opportunity? You don't pull the "how likely was this test to find the issue"?
"Make sure that patients listen to you" -- ah. you think patients listen to doctors. I don't. I don't think doctors even try to get patients to listen to them, most of the time. The surest killer is obesity, after all. Amerifats is a good nickname for Americans, because we're really that fat. Our obesity changed how many people died of covid19.
The problem wasn't that doctors didn't tell fat patients to lose weight; it was that before Ozempic et al., pretty much all they could do was tell their patients to lose some weight, cut back on the sodas, hit the gym, etc. Which mostly didn't result in any weight loss. Or propose bariatric surgery for the really, really fat patients, but that was pretty damned hard on the patients.
Ah, so you buy into the latest fad to earn pharma money. I may have a rather unique group of people in my office, but two out of two people having been told to "lose some weight, idiot" ... just did it. I'm on that path too, as is my husband.
Autists have superpowers! (determination, primarily. 'tard strength as well).
This makes perfect sense, but I don't think it is what's driving this phenomenon today, because those sources of trust tend to "look" very much like outsiders to the "trusters".
Trump is an obvious example: he's a New York billionaire who made his money off real estate and reality TV, but he gets a lot of trust from working-class people, red states, social conservatives (!), etc. RFK, Jr. is an even weirder case: he was running in the Democratic primary until last year, was and is an environmentalist (!), and was raising issues that even today are pretty fringe and foreign.
I would completely understand if the story of 2016 were that Ted Cruz, despite being universally seen as a liar, gained trust among social conservatives and evangelicals because he is those things. But instead the trust went to these figures who are almost the opposite of the "trusters." Don't you think?
> Trump is an obvious example: he's a New York billionaire who made his money off real estate and reality TV, but he gets a lot of trust from working-class people, red states, social conservatives (!), etc.
I’m not American but my feeling is Trump got his base by not calling them deplorable or demanding they accept they are privileged. Easy win compared to another millionaire ranting about those voters.
> RFK, Jr. is an even weirder case: he was running in the Democratic primary until last year, was and is an environmentalist (!), and was raising issues that even today are pretty fringe and foreign.
Ah yes, but a cousin of mine who's a Green Party supporter (in the U.K.), and a bit of a hippy, ended up anti-vaccine and "moving to the right". She would argue, and I see her point, that she stayed where she was - opposed to pharma, supportive of bodily choice and pro-freedom with regard to the state. (She was more of a libertarian leftist - although she's pretty keen to stop private transport, but everybody's belief system has some inconsistency.)
Anyway, we've decided to forget about Covid, but it was literally a mirror universe. Conservatives fell in love with Sweden, left-wingers with closed borders.
Nobody calls their base deplorable (though Trump did say, at a rally before the Iowa caucuses, "How stupid are the people of Iowa?"), but Trump definitely calls everybody who doesn't support him pretty terrible stuff.
Mitt Romney insulted everyone as a communication strategy.
"Are these cookies from 7-11?"
(these were precious cookies from a beloved local bakery, being given to him as a gift. That is NOT what you say. You say, "thanks, they're good.").
Again, do you think this is not in the category of insulting everyone?
https://apnews.com/article/trump-detroit-2382e6f01ea6d236e8a2b755ff150580
Democrats are not doing themselves any favours with the race-baiting of whites, or the use of academic ideas of privilege. They feel they may not need to appeal to the cis het white male, but they surely need to appeal to some of the males, most of the heterosexuals, some of the whites, and pretty much all of the cis.
It’s also possible to be pro black, pro trans and so on without the use of the word privilege at all.
I’m available for a small fee, even a modest pint of non American beer would suffice, if the democrats want to hire me as consultant.
Social conservatives (evangelicals) subscribe to the "broken people" theory of leadership. That is to say, godly folk don't get put in charge, but God still works through the broken people, and that's a good thing.
Which is a fancy way of saying, "we'll still vote for the triple divorcee". But you can see it as an article of faith, not as hypocrisy.
Who could be more broken a person in the eyes of a social conservative, than a social progressive?
Given that, why doesn't faith extend to them as well?
You should discuss this with an actual proponent of this theory (find an evangelical to debate.)
The idea is that "governmental figures" aren't going to be perfect, but can still do god's work. (Now, you get to ask "what's god's work then?" and I can tell you it's "behaving in a godly manner" which is not supporting "new religion," because new religion hates old religion and does everything they can to stab it in the back at all times).
A "social progressive" in this day and age is someone who believes that they get to fly their religious flag over Kabul, or across the Pittsburgh Courthouse, without letting other religions fly theirs. The idea of a public square is that everyone can put up a statue, including the Satanists (I love their statue).
> he gets a lot of trust from working-class people, red states, social conservatives (!), etc.
Getting trust from people by saying "we have a common enemy" is a very old trick. Most people need to get burned a few times before they learn to recognize the pattern.
"Trump is an obvious example: he's a New York billionaire who made his money off real estate and reality TV, but he gets a lot of trust from working-class people, red states, social conservatives (!), etc. "
Part of that was the media and everyone else opposed to Trump doing their damnedest to paint him as low-class, crude, Not One Of Us (liberal cultured civilised upper class types) - see Hillary and her unforced error about 'loving real billionaires'. Gosh wow. "Vote for me, little grubby proles, because I represent the party that will look out for your interests, now get out of my way, I have to give a speech at a dinner for the hyper-rich who are my donors".
https://www.wsws.org/en/articles/2016/10/28/cln2-o28.html
"In a speech Wednesday in Lake Worth, Florida, near West Palm Beach, Democratic presidential candidate Hillary Clinton gushed about her support from the super-rich, praising her billionaire supporters and contrasting them to Republican candidate Donald Trump.
An excerpt is worth quoting as a demonstration of the abject subservience of the Democratic candidate to the capitalist financial aristocracy. According to the transcript supplied by the Clinton campaign, she said:
“You know, I love having the support of real billionaires. And they’ve been speaking out, because Donald gives a bad name to billionaires."
I've mentioned this before, because it was so damn stupid, but I'll mention it again: the sneering about "he eats his steak well-done with tomato ketchup".
Well, damn it, *I* eat my steak well-done with tomato ketchup. If that makes me one of the Untermenschen fit only to be spurned by the feet of the Right Kind Of People, then had I a vote in the US elections, guess who *I'd* vote for?
Autist flag: "he eats his steak well-done with tomato ketchup"
Well-done doesn't have to be shoe leather! And let people eat their damn food any damn way they like - I'm not going to be snobby about "oh you're eating sushi the wrong way" (since I don't know the right way to eat it).
Having table manners is different, but they weren't jeering about his table manners, they were jeering about "lookit the low-class way he eats!" And then in the next paragraph trying to talk about how the other guys represented the poor, the immigrants, you know - the low-class that they'd just been jeering about.
Lest we forget: medium-rare steak, AKA The One Correct Way, isn't some luxurious food ritual restricted to the elite. Trump, and anyone else, rich or poor, could cook a medium-rare steak. The bottleneck there isn't the doneness; it's the ability to have steak all the time.
In The Case of the Steak, Trump's saving feature was that while no poor person eats steak every evening, no member of the elite would be caught getting steak well-done, and -certainly- not advertising it, let alone with ketchup. So while Trump wasn't exactly One of Us Poor Folk, he definitely wasn't One of Them Elites, either.
The tribal signifiers Trump supporters are responding to are somewhat illegible and not easily sorted in a "red/blue" scheme or any modern political left-right scheme, which is why we are seeing a massive political realignment built around a cult of personality.
Trump is not a conservative, but neither are the people who trust him, even if they use that word. In fact a lot of Trump's deepest supporters were folks who were either not politically aligned or not deeply aligned before his rise to prominence. They are bonded by a set of values that I believe are real, and again rooted in some evolutionary function, but I'd be absolutely lying if I said I understood what they are, and as a generally liberal person, I know I would sound condescending if I tried to guess. But when Trump cloaks himself in opulence, this isn't seen as a betrayal any more than a pharaoh stepping down out of a pyramid rebukes the existence of Ra. Your mistake is thinking you understand what they believe in and that they are blinded to how Trump is a contradiction of that. If you work backwards from how believing in Trump can be a set of values that can form a cohesive tribe, this will get you closer to the truth.
Now, whether any of this is actually aligned with their rational self-interest, or even their conscious sense of how they identify themselves, is a separate question.
I think "trump is not a conservative" is a fundamental truth. He's a 1990's NY Democrat. Hasn't changed much. Most of the "conservatives" he's getting are from the "War Wing" of the Republican party (these are actual soldiers and their families, and they are heavily anti-neocon. Guess who the "adults in the room" were in the Biden Administration? Dick Cheney's neocon protegee. )
Trump doesn't cloak himself in opulence. He has his favorite (gas station) toilet paper dispenser right beside his gold toilet. That, my friends, is art.
Even better was his fast food banquet at the White House (during a government shutdown, I think). The People's President indeed.
Gold toilets apparently aren't vulgar if they're art:
https://www.bbc.com/news/articles/cgeg39vr3j3o
This is one time I have to be in sympathy with the thieves (even if stealing is wrong).
"Two men have been jailed for stealing a £4.8m gold toilet from from an art exhibition at Blenheim Palace.
Thieves smashed their way in and ripped out the functional 18-carat, solid gold toilet hours after a glamorous launch party at the Oxfordshire stately home in September 2019.
...It happened just days after the artwork, entitled America, which was part of an exhibition by the Italian conceptual artist Maurizio Cattelan, went on show."
https://en.wikipedia.org/wiki/America_(Cattelan)
"Cattelan created it in 2016 for the Solomon R. Guggenheim Museum in New York City, United States. It was made in a foundry in Florence, Italy, cast in several parts that were welded together. Made to look like the museum's other Kohler Co. toilets, it was installed in one of the lavatories for visitors to use. A special cleaning routine was put in place. The museum stated that the work was paid for with private funds."
'Aha ha ha, we've got so much money we can buy gold for an artist to make an ironic art piece about how grubby and consumerist America is, ha ha ha! Even though we're the same capitalist moneybags exhibiting grubby consumerism by having enough money to throw around on this kind of thing!'
Trump has a gold toilet - he's a buffoon. Anon pays for a gold toilet - it's "An example of satirical participatory art".
I mean, it doesn't even have to be gold. People have known this for over a century.
https://en.wikipedia.org/wiki/Fountain_(Duchamp)
The gold toilet, by itself, is buffoonery. It's positioning it beside his "favorite gas station toilet paper dispenser" that makes it art. I love the contrast.
I find it hilarious that someone decided to build an "actually functional gold toilet" and call it America. But it's hilarious in a very bad way.
Take the stunt where Trump was at a McDonalds - that resonates because we *know* he really does genuinely like and eat McDonalds (there's been enough pointing and laughing at, for example, the White House McDonalds meal for the winning team).
This New Yorker piece is exactly the tone-deaf "why do the muddy peasants follow this boor?" kind of thinking that just does not get it:
https://www.newyorker.com/culture/annals-of-appearances/the-pure-american-banality-of-donald-trumps-white-house-fast-food-banquet
Kamala coming back with "I worked in McDonalds for a while" doesn't resonate; was it when she was in Canada? Did she really work there or not? It doesn't seem true even if it is true, because she's not got the image of someone who'll happily chow down on a Big Mac.
Trump managed to be completely sincere about learning how to fry french fries. About learning how the entire place worked, and how to do the jobs. (and it wasn't a coincidence that it was a primarily black-staffed restaurant).
Trump makes a comment about illegals taking black people jobs. Liberals have a "field day" showing off "real black people jobs" (which come across as "mostly token" if you're a black woman who's a nurse, or a black guy who mows lawns for a living).
Trump shows, beautifully, that he cares about "black jobs."
What I enjoyed was the grousing about "this was all staged, it wasn't real you know!"
Gosh, you don't say? Just like Kamala and Tim stopping off at a particular gas station chain to grab a bag of Doritos was staged?
https://www.youtube.com/watch?v=HmGMwXoBQGQ
At least Mrs. Walz checking out the rotisserie section looked more natural than Kamala wandering around the candy and chocolate shelves until her husband handed her the bag of Doritos for her "oh my favourites nacho cheese" soundbite!
If I were a common man then here's what I'd say about Trump (or RFK): he may not be well aligned with me, but at least he's not perfectly aligned with all the other political-financial-media-entertainment-tech elites who seem to be in lock step about everything.
In the never-ending tug-of-war between the McConnells/Pelosis of the world and the American people, he is at least pulling the rope sideways.
I do think this has to be part of the answer: not positive but negative alignment. I like Trump not because he looks like me, talks like me or has a background like me, but because he hates the people I hate and what's important to me is that he bulldoze them with no restraints. Trump is then the perfect candidate not because of what he supports but because he sort of hates everyone and everything who isn't him.
There is this incredible data point that Trump campaigned in, and won, Dearborn, Michigan (https://apnews.com/article/trump-harris-arab-americans-michigan-dearborn-aea96b9161a77de1fa47d668e23edb98), which is majority Arab and which was called "America's jihad capital" in a WSJ op-ed. To be fair it was partly due to a protest vote for Jill Stein, but still. Pro-Israel Trump fans seemed not to mind that he stood by a guy who seemed to rant against Israel's bloody war on Gaza and promised to end the war. It seems to me that the Dearborn voters (correctly) read Trump as hating Biden (whom they saw as supporting Israel in the war), while Israelis saw him as anti-Palestinian (also true) and as hating Biden (who in this case was blamed for being too pro-Israel). Meanwhile Biden's pro-Israel speech at the opening of the war was seen as so heartfelt that friends of mine spoke of being moved to tears. Trump could never say, as Biden did, "I am a Zionist." But Trump is universally seen as more pro-Israel than Biden, despite lots of data points that should give pause.
So I do think you're right that it's much more about what these figures are against than what they are for.
I think it's not that at all.
Most Palestinians/Arabs I'm familiar with knew that Biden was better for them than Trump, but felt that the actions of the US were so intolerable that they needed to make it known these went beyond their red line for staying in the coalition. Voting for the lesser of two evils when both support the operations in Gaza was unacceptable.
As for Trump being more pro-Israel than Biden, I don't...know how you can conclude otherwise based on history. Trump just ended the nuclear threat in Iran, and is credibly committed to preventing it from re-emerging. That's about 50% of 'the problems Israel faces.' He's also extremely committed to expanding the Abraham Accords; that's another 10%. And he's not putting pressure on Israel to solve their apartheid problem, another 40%. Meanwhile, Biden sanctioned crazy violent settlers, was in favor of an Iranian deal that would have resulted in them on the edge of breakout without limits on missiles to deter intervention, and yeah, tried to get Saudi normalization but was broadly incompetent. Ethically, spiritually, culturally, personally, he was very pro-Israel, of course, but that matters little.
Neither of these groups is acting emotionally; there's too much at stake and they are informed/competent.
They were acting strategically.
I do know many progressive Jews voted against Trump because 'illiberalism is in the long term always bad', but there were a lot of minds changed when Trump took out Iran's nuclear program.
I have heard people say that "Trump is a poor man's idea of a rich man". In other words, Trump behaves the way a poor man imagines he himself would behave if he had as much money as Trump does.
I'm not saying this to be condescending toward poor people, but I'm saying it as part of a model I'm trying to build in my mind of how a poor person probably thinks. For starters, I think of poor people as prioritizing local concerns over global, which might seem condescending even so, except I don't think that's necessarily wrong. Prioritizing the local means acting on what you see with your own eyes, interacting with people who can meet you face to face, thinking about things directly, rather than abstractly. A poor person might readily give $10 to a homeless person for food, but think an idea to set up a $10M fund to do the same for homeless nationwide would be stupid - how do we know it's going to actually feed a million homeless, rather than get stolen by con men in the middle?
I almost wrote "gatekeepers", then realized that's a fancy term Trump probably wouldn't use, and used "con men" instead. That's another implication of Trump being a poor man's imagined rich man: Trump doesn't talk like an elite. He eats McDonald's. He drinks wine, sure - a poor man knows what wine is - but Trump might not distinguish between a Beaujolais and a Barefoot. Whatever time he could be spending pontificating on fine dining, he spends instead on making this or that real estate deal or whatever his profession is. And it's all local over global. Abstractions is for people with their heads in the clouds. That's my hypothesis.
>He drinks wine, sure
Trump is notoriously straight-edge. There has been a lot of speculation about lifelong prescription medication dependency, particularly amphetamines, but he does not consume alcohol at all and seems to have avoided most/all recreational drugs his entire adult life. He consumes caffeine but our culture doesn't really consider this drug to be a drug.
Whoops. I knew he was a teetotaler; shoulda caught that, my mistake. (Family member was addicted, IIRC.)
Let's say "he knows about wine, sure".
"as part of a model I'm trying to build in my mind of how a poor person probably thinks"
Congratulations, you have now provided your bona fides to run the next Democratic presidential candidate's campaign, and succeed as brilliantly as Kamala's team did.
If you have to construct a model of how a poor person thinks, because you're not poor, nobody in your family within three generations has been poor, and you don't know any poor people (the contract cleaners who make the work premises habitable are not around when the real important Elite Human Capital are around, as is only right and just) - then you will not get it. Your model won't work. You're doing the anthropological bit as though "poor people" were an alien species from Mars.
What do you mean by "poor people", for a start? One of the homeless? One of the people giving money to the homeless? Someone who may have their own small to medium sized business, but has no idea what a Barefoot is? (for the record, I find that brand pretentious, but then I'm very trad - French reds, German whites). They seem to be the successors to Gallo and Blossom Hill - cheap, accessible wines trying to brand themselves as something "fun" and "trendy".
And Beaujolais is over-rated, anyway.
Your tastes in wine expose you as an irrecoverable Euro, Deiseach. Don't you have a train to complain about being late or something?
As for me, I should think my approach to the poor model ought indeed qualify me for the 2028 D campaign lead. They can't just drag any ol' actual poor into the room, but someone with a _model_, well, that's just what they're looking for. Why, I could probably deliver a 3-D animated movie on how to properly adjust the angle at which Warren held her beer to drink it! The latest in metrics! Anthropologists in deepest Detroit! Addicts in blue spandex adorned with ping pong balls!
Unfortunately, they'd probably just dig into my past and find out my grandfather dropped out of eighth grade and wasn't quite a millionaire when he died, and my mother grew up an itinerant in Hanoi (and her mother might have been a concubine - we're not sure), and I spent most of my childhood either moving haybales around or huddled under a blanket in the truck in 40-degree weather waiting for the school bus. So much for my pedigree.
> […] because you're not poor, nobody in your family within three generations ahs been poor, and you don't know any poor people […]
Neither does Trump, yet that hasn't hindered him. The difference between him and the Democratic establishment doesn't come from understanding or even first-person experience, but from attitude and image.
This is again an argument that completely makes sense to me but just doesn't seem to apply to the current situation. The idea that people care primarily about local concerns that they can see with their own eyes is a classic one in politics, but Trumpian politics violates this all the time. An Economist article noted that one of the most anti-immigration states in the US at one point was West Virginia, which has very few immigrants. Most immigration opponents that I read speak of immigration much more in an abstract sense-- without a border we don't have a country, etc.-- than in the dollars-and-cents sense. DEI and transgender issues would never have caught on as a topic of national conversation if it was about things in front of you rather than broader "cultural" concerns. And a recent podcast episode of "Why should I trust you?" (and some other MAHA voices) really brought this home: many people there talked about being strongly supportive of RFK Jr., and therefore supported Trump, even though the number one most important thing to them personally was the preservation of Medicaid and Obamacare. No one would say that Trump was the more likely or trustworthy candidate on these issues, and yet they trusted RFK2 and Trump to do the right thing on these issues.
Trump indeed doesn't talk like an elite, though I think the fact that he is so obviously from the elite does raise the question of why these highly critical people trust the words and intonation rather than the biography and financial interests. But this doesn't work for some of the other figures (especially RFK2), and it didn't much help Democrats who speak at that level. Bernie Sanders and Trump are often compared on this point of "speaking to the people," but Sanders failed twice to get the nomination and it's hard to imagine the Democratic Party coalescing behind him the way the GOP did around Trump even if he had somehow won the nomination. That's my view, anyhow.
(The "speaks like an everyman" answer is about as good as I can see so far, but it still feels unsatisfactory)
"Trump indeed doesn't talk like an elite, though I think the fact that he is so obviously from the elite does raise the question of why these highly critical people trust the words and intonation rather than the biography and financial interests."
Because the other elites so obviously hate him. See Hillary's speech about how her friends, the real billionaires, hate how Trump is bringing down the image of billionaires.
DEI and trans really are in front of a lot of people. They show up all the time in work communications.
Immigration, for a lot of people, is "don't break the law" territory. AKA Jose from up the street is getting sent back to Ecuador? Let me know how I can help, Jose's a good guy. I'll write a reference to get him back. When you elect someone who believes in federalism, the idea that you can locally select who gets to come back doesn't seem all that unreasonable. "Do it right" is a conservative ethic.
Bernie and Trump are BOTH honest people. Conservatives say "At least Bernie's honest" and they give him points for that. They'd sit down and work with Bernie (this is the whole Midwestern Conservative Republican).
Bernie would have won the democratic nomination if it was a fair race. Clinton pulled a lot of gaming to make her nomination work.
If Bernie ran as an independent, he'd get a lot of support, and not just from "core democrats."
I'm not "thinking I understand what they believe in...", I'm genuinely asking. I don't have a good understanding here. But an explanation that says that the reason for this trust is a tribal signifier that we don't quite grasp but trumps all other signifiers we do see is similar to me to just saying "I don't understand," which is where I am. And I have no interest in saying other people are blind just because I don't understand what it is they're seeing. But I do want to understand what I can.
People trust Trump because
1) He tries to do what he promised to do.
2) He is trying to punish and attack people and institutions who deserve punishment. Trump is literally the sword and shield for MAGA against the woke establishment that discriminates against them, legislates against them and is trying to replace them with illegals from hellholes in Africa and Central America.
It's basically this simple. Other Republicans complained about immigration but did nothing because they weren't tough enough to overcome the deep State. Trump forcibly appointed loyal people who are actually going to implement his agenda whether it is legal or not and whether anyone approves or not.
Most other Western states have leaders too squeamish or weak or unpersuasive to garner and then use such power properly and do what the MAGA right-wing wants, which is an end to migration and a reversal of migration from people with incompatible value systems. All the culture issues broadly come under this, and culture is the reason Trump won.
I can give you a 100% guarantee, MAGA isn't leaving power for the next 15 years, they won't lose presidential elections, they will use force if they do, and from their perspective it's totally justified because the opponents are trying to finish America (and the rest of majority white countries).
Explaining the MAGA view perfectly requires engaging with taboo topics, such as white racial interests (and why those are taboo while other groups' racial interests are not), or the use of extreme force and illegal actions as a necessary step to combat the illegal actions and force of the political opposition.
I’m curious, at what point did you start trusting Trump? Naturally many voters had to start trusting him well before he had done anything at all.
And I’d like you to clarify: is the important thing that he “tries” to do what he promised, or that he succeeds? In 2016 Trump’s central campaign promise was building a wall on the southern border and having Mexico pay for it. In pursuit of this he presided over the longest government shutdown in American history, lost a game of “chicken” with the Democratic leadership, and gave up. Does this matter? Do failures matter for Trump, or only for other Republicans?
Trump is awful, let's not beat about the bush. But every time I go "okay, this is the limit", someone from the side of Niceness and Compassion comes out with such a sneering, jeering, mean rant about MAGAtards and the like that I go "Gosh darn it, don't make me defend the guy! Why are you driving me to this!"
Best guess is it's obviously something to do with the communication style. I saw a video the other day that said Trump talks like a professional wrestler, which is a performative tradition completely alien to me.
I think it was Hans Bethe (or perhaps Dirac) who said something similar - complimenting Feynman's exceptional intellect, but... "he talks like a bum".
Speech is probably one of the easiest things to pick up about someone. Five seconds of talking to them.
This can be gamed. I noticed my father came off as nearly two independent people depending on who he was talking to. To his father, the 8th-grade-dropout-turned-land-trader, Dad sounded like Hank Hill. To his coworkers in the Austin tech center, he sounded like a physics professor.
That's a bonus for Mexicans, who see pro wrestling as some quasi-religious thing. Other people will see "Trump went on the WWF" as "Trump's a good sport."
Regarding why the amyloid hypothesis won’t die, from my outsider perspective it sure looks to be about money.
Think about it from the perspective of a biopharma company. Alzheimer’s is a chronic disease (which means you can charge for treatment indefinitely) that affects old people (who get Medicare). If you invent a drug that slows the progression of Alzheimer’s, you can charge basically whatever you want for it. You’ve heard the statistic that 1% of the entire federal budget goes to dialysis, right? Invent an Alzheimer’s treatment and your company could get 1% of the federal budget too.
This means that even a tiny probability of success will leave investors jumping at the opportunity to throw you money. If you’re a researcher working on amyloid drugs, you get paid big bucks by Wall Street. If you admit that nobody (including you) has any idea how Alzheimer’s works, you’re left begging for grants.
> Invent an Alzheimer's treatment and your company could get 1% of the federal budget too.
I love the way you phrased this.
This story sort of makes sense, but I feel like a key part is missing: what actually happens once researchers receive money from investors? On its face, if you know the amyloid hypothesis isn't true, you're never going to land the 1% of the federal budget! Why pursue a line of research you know won't pay off?
Seems like a few explanations:
1) The researchers are tricking the Wall Street firm into funding a lab under the pretense of exploring the amyloid hypothesis, but are quietly investigating alternatives.
2) The researchers are acting purely on short-term economic incentives, cynically chasing funding opportunities and knowingly misleading investors into supporting dead-end work.
3) The availability of funding biases researchers—motivated reasoning nudges them toward supporting the amyloid hypothesis, even subconsciously.
4) Researchers see the amyloid hypothesis as one of several plausible paths worth exploring, but economic incentives force them to overstate its promise to secure funding.
To me, 1 feels hard to believe and runs counter to the fact a drug was actually approved. 2 feels a bit too conspiratorial as well. A mix of 3 and 4 sounds plausible to me.
"Commenters mostly seem skeptical (1, 2, 3, 4) citing both theory (it seems like there should be too little formate to matter) and evidence (out of ~1000 users, nobody else has mentioned these symptoms yet); they propose that out of a thousand users, it’s not surprising if one develops a weird disease for unrelated reasons. "
While it is true that with thousands of users the chance of an unrelated disease showing up increases, it is also not unexpected for medications to have uncommon (less than 1-in-100) or rare (less than 1-in-1000) side effects.
In the case of regular medicine, the most serious side effects are often rare or extremely rare, for were they common, they would have been conclusively observed in trials. Which, incidentally, is also the rationale for phase III and IV trials (in comparison to adverse effect reports on Substack and Reddit, adjudicated by the internet).
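To put numbers on that rationale, here's a toy calculation (a minimal sketch with invented trial sizes, not from any specific drug):

```python
# Probability that a trial with n participants sees at least one case of a
# side effect whose true per-patient rate is p: 1 - (1 - p)^n.
# Trial sizes below are invented for illustration.
def p_at_least_one(rate, n):
    return 1 - (1 - rate) ** n

for n in (300, 3_000, 30_000):
    print(n, round(p_at_least_one(0.001, n), 2))
# 300    -> 0.26  (phase-II scale: a 1-in-1000 effect is usually missed)
# 3000   -> 0.95  (phase-III scale: usually seen at least once)
# 30000  -> 1.0   (post-marketing scale: reliably seen)
```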
"May cause spontaneous combustion" is my favorite side effect.
I wrote a story exploring what can and cannot be considered to be human. Of course it has applications beyond the obvious. Check it out: https://sisyphusofmyth.substack.com/p/the-rescue?r=5m1xrv
Anyone know of any good websites or communities for people attempting to educate themselves? So far I found Open Source University, which has curricula to follow and a Discord, but it’s not as active as I’d hoped.
I’ve been using MIT OpenCourseWare and I’m very happy with it so far, but the thing I’m missing is some sort of forum where actual discussions take place.
Secondly, has anyone read “Abstract Algebra: Theory and Applications” by Tom Judson, and do you have an opinion on it? I prefer textbooks that prioritize intuition building and demonstrating the relevance and motivations of the material.
I was a big fan of Judson's book, and it does, to an extent, provide intuition building. I also heavily relied on other sources while studying abstract algebra to provide that intuition. I also liked An Inquiry-Based Approach to Abstract Algebra (Ernst) - this is also free on libre math, which was nice.
Scott, I'm curious if you've seen this recent paper on the limitations of LLMs as mental health providers. https://arxiv.org/abs/2504.18412
The paper talks specifically about LLMs reinforcing delusions, but I'm also curious about the general case of an always-available, sycophantic LLM reinforcing problematic ideas or behaviors in general and acting as a kind of crutch.
Presumably someone could design an LLM for mental health specifically, the way they've designed research LLMs. But I do think that LLMs will not be able to replace mental health providers because so much of psychotherapy is the client-therapist relationship, and that's better in person.
Maybe if they hook an LLM up to a really good android, that might work.
I definitely have personal experience with sycophantic LLMs reinforcing my problematic behaviours.
I've noticed that whenever I've done something that I consider to be immoral and I express guilt to an LLM, the LLM nearly always comes to my side. The LLM makes me feel less guilt.
And although that makes me feel better in the moment, it also probably makes me more likely to repeat the immoral behavior in the future. Therefore, I no longer express my guilty conscience to LLMs.
For me that depends on which LLM I'm conversing with. The o-series reasoning models regularly push back against me, the non-reasoning models are very sycophantic in comparison.
Windows, obviously, always wants to update itself every few days. Because I'm sort of childish, I sometimes oppose this- and particularly recently, where I don't want Chrome to update because I don't want to lose Ublock Origin. (MV3, etc.) So I've been preventing Windows updates recently.
Today, my laptop clearly needed to restart. So to avoid Windows updates, I physically unplugged my modem and router from the wall (I mentioned that I'm sort of childish about this). Then restarted. Incredibly, it still updated to a new version of Windows.
How...... how is that possible? Doesn't updating require the laptop downloading the Windows update from some Microsoft server? How is downloading possible when the modem and router are physically unplugged from the wall? I'm pretty sure that my laptop doesn't have an internal modem. How did it update with no Internet......? The only thing I can think is that it had the update already like 'pre-downloaded' and ready to go, it just needed the restart in order to apply it. Is that it?
If you want Ublock Origin, there's firefox still.
>The only thing I can think is that it had the update already like 'pre-downloaded' and ready to go
Yes, the actual update files were downloaded in the background and already residing on your hard drive before you killed your internet connection. The computer restart is just for the installation of the update files.
Yes, the modern update experience is to download the update in the background, anticipating that the user will eventually say "yes"; you don't want them to then be stuck waiting on a potentially large download.
Some Googling shows that, for Windows, these are stored by default in C:\Windows\SoftwareDistribution. I don't have a modern Windows machine available to test this, but on other operating systems you can simply delete these files before a restart and the update doesn't happen.
Yes, windows pre-downloads update packages in the background by default. If you want to change that behavior you have to go into the group policy editor and enable the Configure Automatic Updates policy and set it to option 2 (Notify before downloading and installing any updates).
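And if an update has already been pre-downloaded, the usual recipe is to stop the update services and clear that cache before restarting. A rough sketch (assumes an elevated prompt, and note that Windows will typically just re-download the update later):

```python
# Rough sketch: clear Windows Update's pre-downloaded files.
# Run from an elevated (administrator) prompt; Windows will usually
# re-download the update later unless the update policy is changed.
import shutil
import subprocess

CACHE = r"C:\Windows\SoftwareDistribution\Download"

for svc in ("wuauserv", "bits"):          # stop Windows Update and the transfer service
    subprocess.run(["net", "stop", svc], check=False)

shutil.rmtree(CACHE, ignore_errors=True)  # delete the staged update files

for svc in ("bits", "wuauserv"):          # bring the services back up
    subprocess.run(["net", "start", svc], check=False)
```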
This is my annual blog marketing. I'm a psychiatrist and wrote a post about false positive diagnoses in mental health. I view this as similar to the replication crisis/"Why Most Published Research Findings Are False." There is a large Bayesian aspect to this issue. Key problems are undefined pre-test probabilities, small effect sizes, low power and high alpha, bias, and multiple comparisons. I think the community here would be interested. Thanks.
https://open.substack.com/pub/affectivemedicine/p/are-most-claimed-psychiatric-diagnoses?r=1jkibi&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
SubStack name is AffectiveMedicine.
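(If you want the flavor of the Bayesian point before clicking through, here's a toy positive-predictive-value calculation. The numbers are invented for illustration and aren't from the post:)

```python
# Toy example: pre-test probability drives how often a positive
# "diagnosis" is a true positive. All numbers invented.
def ppv(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# The same 80%-sensitive, 80%-specific diagnostic process:
print(round(ppv(0.50, 0.8, 0.8), 2))  # 0.8  - high pre-test probability
print(round(ppv(0.05, 0.8, 0.8), 2))  # 0.17 - low pre-test probability: mostly false positives
```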
I read your post with interest. Sorry if this is an amateurish question, but one thing that was missing for me in it was any discussion of the objective reality of the boundaries of the diagnoses. You say "This is not a debate about the definition of mental illness, and it accepts DSM/ICD diagnoses on their own terms", and this sentence, to me, conflates two different things: it's possible to accept that there are mental illnesses and not go into philosophical debates around that question (which frankly strike me as mostly sophistry), but also recognize that when diagnoses are defined by collections of heterogeneous symptoms, the boundaries are inherently fuzzy and subjective.
I read up on this mostly in context of autism and other developmental delays, but I think this applies to many other mental health diagnoses, too. It always seemed significant to me that people, when told that someone has an autism diagnosis, almost always understand and accept this as a claim that there is one known-to-doctors underlying etiology; that perhaps we don't understand how autism works and it manifests in different ways, but it's one "thing" and it's a relief to know that the "thing" has now been pinpointed; whereas in reality, that's almost certainly not true, and the definition of the diagnosis itself points almost in the opposite direction.
So when you're saying - "Teasing these diagnoses apart - assuming they are actually different - is like detecting a signal buried in noise..." - this assumption, to me, buries the lede in a sense. If several "illnesses" are defined solely by overlapping sets of symptoms, with no near-term hope of neurologically precise delineation, then *by design* it will be hard to tease them apart, and claiming that lots of such diagnoses are "false" seems overblown, does it not? Why *does* it matter if you call it MDD or Generalized Anxiety Disorder in this particular case, if all this reflects is how well you judged the fit to an inherently subjective boundary some committee drew up a few years back?
And when you say "There’s nothing unique about psychiatry or mental illness. There are many false diagnoses for back pain, IBS, migraines, and so on." - isn't it true (I honestly don't know and may be wrong here) that outside mental health the assumption of a well-delineated physiological cause - even if it's hard to test for and hard to diagnose etc. - is much more common (among doctors) and much more justified? I'd think *that's* the major difference.
These are the questions that leaped at me when reading your (interesting) piece. Grateful for your thoughts or pointers to arguments (yours or otherwise) you find interesting or helpful in this area.
This is admittedly a repetitious point, but for anybody who missed my response to Scott's analysis of Covid origins, here it is.
https://michaelweissman.substack.com/p/open-letter-to-scott-alexander
I'm particularly interested in reactions from readers familiar with using basic hierarchical Bayes methods for practical problems. I'd expect that there would be a few such readers of a blog whose slogan is "P(A|B) = [P(A)*P(B|A)]/P(B), all the rest is commentary."
First, I think that you are making the same mistake that killed Rootclaim's entire methodology from the get-go, namely treating epistemic uncertainty as if it were aleatory variability:
> an extreme Bayes factor can’t be correctly derived directly from some model that’s maybe sort of right used on data that is maybe not too biased. The correct Bayes factors are limited by the “maybe’s” not by the extreme factors that the toy models give..... Even at first glance what stands out in that analysis is the huge factor of 2200 from the HSM-specific data, mostly simply from the first official cluster being at HSM, with smaller adjustments for other data. Drop that modeling-based net HSM factor of 2200 and your odds go to 130/1 favoring L
The fact that the data are limited *cannot cause you to be confident in one hypothesis over the other*. It can only cause you to be uncertain. If I sabotage your experiment by randomly adding noise to all your measurements, so that your results become statistically insignificant, you shouldn't become more confident that the true effect size is 0.
> So let’s turn to your big factor from the HSM cluster. It can be pretty significant if the early case data were fully reported. For that to be true we need all of:
This is simply false. The question is not whether there might be some ascertainment bias. The question is whether there is reason to believe there is *sufficient* ascertainment bias to make HSM plausibly not be the first cluster. Recall that, for example, 5/5 and 12/13 of the first cases were market linked, and this is *how* covid was first identified--5 separate people presented with symptoms of unknown origin, and their doctors eventually realized they all had links to the exact kind of place that virologists had been warning for years was a potential source of new pandemics!
The rest of the argument diving deep into the weeds of various models is not really convincing for the reasons highlighted in https://www.explainxkcd.com/wiki/index.php/2400:_Statistics. By far the simplest explanation for all the different lines of evidence (the initial handful of cases, the high portion of the first few hundred cases being market linked, geographic clustering around the market, cases spreading from the market over time, the exponential math) is HSM being the first cluster, as this hypothesis doesn't require multiple totally independent, unlikely explanations for various facts.
2 other questions to think about:
1. If the data on the origin of the pandemic is so uncertain, are we even confident it started in Wuhan? And if it didn't start in Wuhan, then isn't this entire discussion just privileging the hypothesis?
2. If we had the exact same quality of data, but clustering around the lab, would we even be having this conversation? Or would there just be a chorus of screaming "this is obviously just epicycles, excuse making, special pleading"?
> The reliable-looking reports that SC2 showed up in wastewater samples from Milan and Turin on Dec. 18, 2019 are wrong.
See, this is the kind of argument that makes your entire post hard to view as anything other than motivated reasoning. If SC2 was prevalent enough in these cities on that date to show up in wastewater, why did it take another 6 weeks for any cases to be confirmed anywhere in Italy (and not even in Milan or Turin)? That's enough time for a factor of about 4,000x increase in cases (six weeks at a roughly 3.5-day doubling time is about 2^12 ≈ 4,000). And if these cases are legitimate, they should be casting doubt on Wuhan as the origin at all--which is stronger evidence against LL than against Zoonosis!
Lastly, I want to make one meta-point about your critique of Pekar et al.'s model. The fact that this paper even exists is the only thing that allows such a detailed analysis and critique. Where is the equivalent for lab leak? Where is the similarly complex model that shows how likely the major pieces of evidence in favor of lab leak are? Where is its code, its Bayes factors, etc? Without any of that, this is just an isolated demand for rigor. Without similarly rigorous investigation of all the data allegedly favoring lab leak, if you want to say that the HSM cluster provides only weak evidence of Zoonosis, well then the correct BF in favor of lab leak is 1, since regardless of any weaknesses in Pekar's case, the lab leak one hasn't been made at all.
Let me reply to those points in different order.
Pekar is the opposite of rigor, a collection of extreme errors in basic Bayesian logic. There's a good reason that lab leakers haven't produced their own opposing version. The initial data are simply too sparse and too non-randomly sparse for that line of reasoning to do much. It's best just to admit it's not informative and move on to more informative data rather than to create a parody of Bayesian reasoning.
On basic techniques— of course you're right that uncertainty in data cannot lead to favoring either hypothesis by much. That's the point I was making. To get much of a Bayes factor you need reliable data. There are various types of data that are more reliable than the early case home addresses, so those other types are where more substantial Bayes factors come from. The shift of Scott's net odds toward LL when one discounts the extreme HSM factor does not come from a reversal of that factor but just straight from the other factors that Scott used based on non-HSM-related data.
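To make the discounting concrete, here is a minimal sketch of the arithmetic (a toy normalization, assuming that when the model fails, say through ascertainment bias, the observation is about equally likely under either hypothesis):

```python
# Toy sketch: an extreme within-model Bayes factor is capped by the
# probability that the model itself is wrong. Assumes that when the
# model fails, the observation is roughly equally likely under both
# hypotheses, and about as likely as under the favored branch.
def effective_bf(bf_within_model, p_model_ok):
    p_obs_given_Z = p_model_ok * 1.0 + (1 - p_model_ok) * 1.0
    p_obs_given_L = p_model_ok * (1.0 / bf_within_model) + (1 - p_model_ok) * 1.0
    return p_obs_given_Z / p_obs_given_L

print(round(effective_bf(2200, 0.90), 1))  # ~10.0: the "maybe" dominates
print(round(effective_bf(2200, 0.99), 1))  # ~95.7: still far below 2200
```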
On the Italy data and various other data tending to show that cases originated well before the HSM cluster, I agree that none of it is compelling. It's possible that all of it will fall apart. The question is whether the odds are more than 1000/1 that it will all fall apart.
You correctly say "The question is whether there is reason to believe there is *sufficient* ascertainment bias..." That's exactly what I address in the JRSSA article and the arXiv follow-up. A statistic that Worobey et al. chose to highlight turned out to have the wrong sign for the simple model that they use and the right sign for the large-ascertainment-bias model. That means that something (probably but not necessarily ascertainment bias) was way off in the way they derived conclusions from the reported case addresses. Unlike Andrew Levin, I don't think the reported case location/timing data point strongly away from an HSM origin. I think those data are not only fragmentary but also non-representative, so they just don't lead anywhere.
You raise one important point: how sure are we that the epidemic even started in Wuhan? We are quite sure that the first major outbreak of serious illness occurred there. Nowhere else has similar early morbidity/mortality reports. That already suffices to calculate conditional probabilities of that first big outbreak location for both the L and Z hypotheses. But one big reason that some chance for Z remains even if one were to eliminate the HSM version is that there are other non-HSM versions of Z. Perhaps the main one is that an early version of the disease might have been transmitting at some low level before picking up an FCS by template switching. That still leaves a lot of coincidences for Z to explain, but it avoids the conditional probability for a market origin in a city whose markets had a much lower share of the wildlife trade than the city had of the population.
> It's best just to admit it's not informative and move on to more informative data rather than to create a parody of Bayesian reasoning.
I'm not quite sure how you got from what I wrote to this paragraph, but let me try to restate: Where is the writeup of the major pieces of evidence (whatever you think they are) that would allow someone to critique your argument in as much detail as you have critiqued Pekar et al? Code, raw data, models, etc. Rootclaim's published analyses, for example, have nowhere near enough specifics to have any idea if they're making major mistakes like the ones alleged of Pekar.
> That's the point I was making.
No. You're using the (alleged) uncertainty in the data to just ignore the entire question. The fact that you don't know something is a statement about your mind, but it does not imply that the true long-run rate of zoonotic pandemics starting at HSM vs starting at WIV, based on the data we have, is close to 1. For all you know it could be 1,000:1, 10K:1, whatever. You can't even properly put a bound on the probabilities with this sort of argument; it's possible for you to be arbitrarily wrong. See https://royalsocietypublishing.org/doi/10.1098/rspa.2018.0565
> On the Italy data and various other data tending to show that cases originated well before the HSM cluster, I agree that none of it is compelling. It's possible that all of it will fall apart. The question is whether the odds are more than 1000/1 that it will all fall apart.
First, this whole argument is https://www.lesswrong.com/w/multiple-stage-fallacy and can easily be made for almost any piece of evidence, most likely including all the evidence you think supports LL.
Second, you ignored the part where the pandemic being in Italy in December 2019 would be even stronger evidence against lab leak than against zoonosis.
This part really does feel like the style of argument that is common in various conspiracy theories (2020 stolen election, evolution denial, vaccines-causing-autism, etc), where proponents just throw out a list of weak arguments and pretend like they can't all be wrong.
> We are quite sure that the first major outbreak of serious illness occurred there.
What OOM is "quite sure"?
> market origin in a city whose markets had a much lower share of the wildlife trade than the city had of the population.
Do you have data on this? The only source I saw (can't find it now) indicated there were no more than about 400 wet markets in the region. Wuhan had 4, or >=1% of the markets. Which is comparable (maybe a little bit less than) its population share of the region. There's some confounding here with density and urbanization, but can you quantify "much" and support it with data?
Again, not in order:
Here's what I have on Wuhan share of the wildlife trade.
"For the market branch of the ZW hypothesis, ZWM, the likelihood drops even more since it has a much smaller fraction of the wildlife trade than of the population. The total mammalian trade in all the Wuhan markets was running under 10,000 animals/year. The total Chinese trade in fur mammals alone was running at about 95,000,000 animals/year (“皮兽数量… 9500 万”). For raccoon dogs, for example, the Wuhan trade was running under 500/yr compared to the all-China trade of 1M or more, 12.3 M according to a more recent source. The Wuhan fraction was then at most about 1/2000. We can also compare the nationwide numbers for some food mammals with those of Wuhan. For the most common (bamboo rats) Wuhan accounted for only about 1/6000, apparently largely grown locally, far from sources of the relevant viruses. For wild boar Wuhan accounted for less than 1/10,000. Wuhan accounted for a higher fraction (1/400) of the much less numerous palm civet sales, but none were sold in Wuhan in November or December of 2019. It seems P(Wuhan|ZWM) would be much less than 1/100, something more like 1/1000. We may check that estimate in an independent way to make sure that it is not too far off. In response to SC2 China initially closed over 12,000 businesses dealing in the sorts of wildlife that were considered plausible hosts. Many of these business were large-scale farms or big shops. With only 17 small shops in Wuhan we again confirm that Wuhan’s share of the ZWM risk is not likely to be more than 1/1000, distinctly less than the population share of 1/100."
On some possible cases in northern Italy by 12/18/2019, that would be consistent with the more conventional version of the LL hypothesis, in which the successful spillover was roughly in mid October. That allows time for some cases to pop up in other cities since Wuhan has a lot of international trade. Meanwhile, by that point the cases in Wuhan had reached the point where the small fraction that were so serious that they required hospitalization was starting to show up noticeably above background flu-like illness. So those may be false positives, but if not they fit L better than the HSM version of Z.
On Pekar, is it a mere "allegation" that in calculating a likelihood ratio for hypotheses X and Y it is improper to use P(obs1|X)/P(obs1 and obs2 and obs3|Y)? There are basic rules about how probability works.
On the first big known outbreak being in Wuhan, we are virtually 100% sure. The issue in calculating the conditional probabilities for that under Z and L is whether the existence of the coronavirus labs that were able to detect a new virus skewed the detection probability so that earlier similar outbreaks elsewhere went unnoticed. Given the worldwide attention paid to this issue and the intense motivation of the Chinese government to show an external source, I think that 1000/1 OOM odds against comparable earlier outbreaks is conservative. Incidentally, even people who try to make a case that the virus was developed in US labs don't claim earlier US outbreaks.
On all the psychological claims about various types of conspiratorial thinking, I think it makes sense to try to work through the odds on the factual question first before indulging in emotional speculation on other people's psychology. Plenty of time for that later.
Before responding to these points, what is your answer to this question from above?
> If we had the exact same quality of data, but clustering around the lab, would we even be having this conversation?
> Here's what I have on Wuhan share of the wildlife trade.
What is the source of these numbers? And does the 1/1000 risk number account for the fact that pandemics are much more likely to start in large cities?
> So those may be false positives, but if not they fit L better than the HSM version of Z.
I think this is wrong, but also not very important (see below) so I'll only elaborate if you think it matters.
> On Pekar, is it a mere "allegation" that in calculating a likelihood ratio for hypotheses X and Y it is improper to use P(obs1|X)/P(obs1 and obs2 and obs3|Y)? There are basic rules about how probability works.
You made many criticisms of Pekar. I haven't checked every single one of them in enough detail to agree that every single one is valid, so I used "alleged" to cover them all. Now, do you have such a write up as I mentioned, or is the entirety of this criticism an isolated demand for rigor?
> On the first big known outbreak being in Wuhan, we are virtually 100% sure
Ok, in that case I am virtually 100% sure the December 2019 wastewater in Italy is not correct. Certainly sure enough to say that I do not feel the need to substantially discount the HSM cluster based on that argument.
> On all the psychological claims about various types of conspiratorial thinking
I'm not jumping to psychological reasoning. My point is that a certain style of argument (listing lots of individually weak arguments, and asserting they are unlikely to *all* be wrong) seems to repeatedly be used to support false conclusions, and almost never to support true conclusions, and so is unlikely to be a valid type of argument.
You want the more detailed arguments including links to the supporting documents. Here they are.
The long blog summarizing the various lines of evidence in a hierarchical Bayes framework, with robust Bayes use of uncertain priors.
https://michaelweissman.substack.com/p/an-inconvenient-probability-v57
It includes the sources for the wildlife numbers.
Angus McCowan's detailed discussion of Pekar along with new simulations in which the big logic errors are fixed.
https://arxiv.org/abs/2502.20076
My translation of McCowan's argument into more accessible form:
https://michaelweissman.substack.com/p/explanation-of-and-comments-on-mccowans
On your hypothetical question about what if there were a cluster near WIV: given the weight of the other evidence and the absence of internal WIV evidence, I think it would give a modest Bayes factor favoring LL. I specifically say that the HSM cluster gives a modest BF favoring ZWM, but not enough to compensate for the other factors specifically weighing more against ZWM than against generic ZW. Andrew Levin (https://www.nber.org/papers/w33428) has calculated some of those factors, getting values that I think are unrealistically unfavorable to ZWM, for the same common-sense hierarchical reasons that I discount the extreme HSM BF used by Scott.
Nevertheless, HSM runs into these problems:
1) lower Wuhan likelihood than for general ZW
2) no wildlife vendors sick
3) no positive wildlife samples
4) negative correlation of SC2 RNA with potential host mtDNA on market swabs, in contrast with the distinct positive correlations for actual animal coronaviruses
5) lack of any documented wildlife from Yunnan sold in the relevant period
6) No species sold in HSM was found to have any outbreak anywhere. Lab raccoon dogs were barely susceptible to massive doses of a downstream (D614G) more contagious strain.
7) All HSM-linked sequenced cases were of a strain farther from natural relatives than many sequenced cases found elsewhere.
These features contrast sharply with the original SARS.
Alright, I took a glance through this. Some basic thoughts:
-You're in the weeds on this, man, and it's rough on a reader. I had to go look up other essays to figure out whether you were a lab leak guy or a wet market guy. You make constant references to things as if the reader has been following this argument in depth since 2020 and, yo, we haven't.
-I *think* the guts of the disagreement is as follows:
#1 In Wuhan, during the initial Covid outbreak, we see this weird cluster of cases around both the wet market and the lab that can't be linked to the wet market. The cases that can be linked to the wet market also have a very different pattern than all cases. The green-blue graph in the link is really useful.
-Side point: I have no idea how important this base data is. The point doesn't seem to be where the cases are clustered, it looks like the wet market and the lab are right next to each other in the center. What makes the data difficult is whether the early covid cases can be attributed to the wet market. That data seems intrinsically really noisy; if CCP agents came to your home at the start of a global pandemic and started asking questions, how honest would you be?
#2 Anyway, presume this weird cluster is real. How likely is it that this is just some weird stats illusion vs a real effect? For example, an outbreak in New York probably doesn't have a weird cluster like this. But if you threw the data in a kmeans clustering algo and told it to find 4 clusters, it'll give you 4 clusters. Run that through 10 cities and you'll find a few suspicious clusters about which we can all tell really interesting stories, clusters that "feel" real. How likely is this cluster in Wuhan to be one of those fake/illusory clusters?
And since we're all proper Bayesians here, we're trying to quantify this. I don't think this line of argumentation is meant to be definitive but it is meant to be a major factor. If we're debating whether this is a lab leak or a wet market/zootonic thing and the cluster is extremely unlikely, say 10% odds that it could happen, that has big impacts in the overall argument.
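To see what I mean by fake/illusory clusters, here's a quick sketch: feed k-means pure noise and it still hands back exactly as many tidy "clusters" as you asked for.

```python
# Quick sketch: k-means dutifully "finds" 4 clusters in pure noise.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = rng.uniform(0, 1, size=(500, 2))  # uniform random "case locations"

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(points)
print(np.bincount(km.labels_))  # four nonempty clusters, guaranteed:
                                # the algorithm can't answer "no clusters here"
```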
#3 At which point there's, like, 100 pages of dumb academic writing that my eyes glaze over. This appears to be complex for the sake of complexity.
To wit, there's no code and no data shared. The closest I found is in this paper:
https://www.science.org/doi/10.1126/science.abp8337#supplementary-materials
which links to data you can download but it's just some tsv files without any raw data. Maybe there's some here but they don't open properly:
The actual dataset for Wuhan and other areas should be trivial, like 7 columns and <1000 rows easy. Each covid patient should be a row, and each row should have lat/long and/or x/y values for where they live, the datetime they were first diagnosed by a medical professional, the self-reported datetime they started feeling ill, and whether they can be traced back to the wet market. For this initial spread, that's it. I know that data exists, that's how you make a graph like that, but it's hidden.
And this code should be ~100 lines of Python or R; under the hood we're all using scikit-learn or caret anyway.
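Schematically, something like this (column names and the two example rows are invented by me, just to show the shape):

```python
# The whole dataset I'm asking for, schematically. Columns and rows
# are invented placeholders, not anyone's real patient data.
import pandas as pd

cases = pd.DataFrame(
    [
        # case_id, lat,     lon,      diagnosed,    onset,        market_linked
        (1,        30.6200, 114.2570, "2019-12-16", "2019-12-10", True),
        (2,        30.5900, 114.3100, "2019-12-18", "2019-12-12", False),
    ],
    columns=["case_id", "lat", "lon", "diagnosed", "onset", "market_linked"],
)
print(cases.groupby("market_linked")[["lat", "lon"]].mean())
```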
There's no code, there's no data. Instead of just, ya know, letting readers glance over a trivially small dataset, read ~100 lines of code, and test things themselves, there are 100+ pages of theory, none of which is very legible, and none of which is better or more trustworthy than just playing with and sharing the data.
If you want feedback from actual practitioners, this seems pointlessly academic. If you want to figure out how weird that initial Wuhan pattern is, having 1 nice, clean Wuhan dataset and comparing it to 10 nice/clean outbreak datasets, either Covid or other diseases, is way more valuable and persuasive than arguing weird theoretical stuff about ascertainment bias.
Addendum:
I am in favor of whatever Andrew Levin thinks. He did an analysis, he shared all his stuff here:
https://andrewtlevin.github.io/bayesian-analysis-of-covid-origins/Wuhan_Analysis/
and it's fairly neat. Cool, he did the thing I want, he's trustworthy. I glanced through his code and...it's all ".do" files. That's....Stata right? God, that takes me back. But it's got to be pretty basic, Stata doesn't have a lot of baller libraries. And I don't see any library calls, I think this is all basic stats.
Pet peeve: When I see something like "whether you were a lab leak guy or a wet market guy", I want to scream obscenities. These are not opposites, but strongly overlapping circles on the Venn diagram. At this point, most of the scientifically-literate lab leak probability space is in the realm of "An infected lab tech left work, went to the seafood market to buy groceries, and coughed on one of the vendors". Or went home and a week later his asymptomatic-carrier wife went to the seafood market or something like that.
There's an interesting and somewhat relevant side discussion about whether there might have been some unnoticed COVID cases before the wet-market superspreader event; that's probably unknowable at this point. But showing that it probably went public via the market, is not a slam-dunk win against the lab-leak hypothesis, and the distinction you should have made is "lab leak guy or zoonotic origin guy". Or natural origin if "zoonotic" is too fancy.
Fair :)
Additionally, before COVID was officially recognized, it was being diagnosed as "atypical pneumonia". So you *really* can't trust those early "diagnosed cases" as being the original cases.
"Each covid patient should be a row, each row should have lat/long and/or x/y values for where they live, the date time they were first diagnosed by a medical professional, their self-reported datetime they started feeling ill"
This combination is the kind of sensitive private information that can't possibly be a public dataset, ever, either legally or ethically; which kind of explains why that data isn't available and won't ever be published except as an aggregated summary without the underlying raw data.
Thanks for that detailed feedback! I also appreciated Andrew Levin's work, which I discussed with him at some length before publication. I do think some of it suffers from the same non-hierarchical issue that plagued the enormous Bayes factors that Scott and others obtained from the HSM cluster. E.g. I pointed out to Andrew that his factor from the spread among HSM stalls compared an all-from-raccoon-dog model to a pure human-to-human model. That was unfair to Z because there could be (ignoring the fact that raccoon dogs were scarcely susceptible, etc.) an RD->human->human Z model. So the big issues are not in the codes but in the lead-ins to the codes.
There's one major exception: the Pekar et al. 2022 paper that you link to. It has an almost inconceivable array of major errors in coding (some now fixed under pressure) and in basic Bayesian logic. Angus McCowan has done heroic effort in sorting these errors out. I translate his work into simpler English (and link to his arXiv version) here:
https://michaelweissman.substack.com/p/explanation-of-and-comments-on-mccowans
In addition to all the errors described there (principally the use of completely unbalanced observational detail for the two hypotheses) Angus now points out another error. The simulations use 7500 sequences but the data consist of 787. That's a big deal when the main question you're asking is whether some intermediate sequences could have been missed by accident!
On types of clusters that occur, there are data. From my long Bayes blog:
"Analysis of the spread of Covid in New York City concluded “The combined evidence points to the initial citywide dissemination of SARS-CoV-2 via a subway-based network, followed by percolation of new infections within local hotspots." The Hankou station is on Metro Line 2, which connects directly to the stop nearest WIV."
To be clear- I've already had feedback from extremely serious practitioners. What I'm trying politely to ask is whether blog-readers who tell themselves that they are Bayesians actually know how the methods are used in the real world. I appreciate that you know it's not all in a toy code with unreliable input but I'm not sure that those who bought Scott's arguments know that.
How is this not "People who can't do statistical coding don't understand Bayes?"
I think there's ways to understand Bayes that would benefit a lawyer or an executive, especially in terms of probabilistic thinking, but I wouldn't expect them to be able to answer the question as you posed it.
That makes sense. Tho I think there's an intermediate ground, beyond vague hand-waving but short of code-writing. That's the Fermi-style rough calculation, where you try to get some idea of plausibility from approximate estimates. I think with a good education system (dreaming, especially now!) that would be doable by a lot of non-specialists. The particular problem that comes up here with the HSM Bayes factor wouldn't be in week 1 of an intro course, but maybe in week 2 or 3, where you start having to embed particular Bayes factors into a probabilistic relevance context. E.g. gambling in the context of occasional cheating for a nice teachable example.
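Sketching that teachable example with toy numbers: a gambler reports ten sixes in a row. Within the "honest report" model that's overwhelming evidence for a loaded die; allow a small chance that the report itself is rigged (you'd hear "ten sixes" either way) and the usable Bayes factor collapses.

```python
# Toy numbers throughout: within-model evidence vs. evidence after
# admitting a 5% chance the report of the rolls is itself cheated.
# (The "loaded" die is assumed to always come up six.)
p_report_honest = 0.95

p_data_loaded = p_report_honest * 1.0 + (1 - p_report_honest) * 1.0
p_data_fair = p_report_honest * (1 / 6) ** 10 + (1 - p_report_honest) * 1.0

print(round((1 / 6) ** -10))                  # within-model BF: ~60,000,000
print(round(p_data_loaded / p_data_fair, 1))  # effective BF: ~20
```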
I will risk de-railing your intended thread to ask a related question:
"Is there some reason that the folks doing Bayesian analysis on this seem to be ignoring the claims that the German foreign intelligence service believed in early 2020 that Covid was caused by a lab leak (80% - 90%)?"
Is this claim not credible (comes from a leak)? Or has the German intelligence service updated their claim? Or is it that *how* they got there is opaque? Or have I missed that this HAS been part of the Bayesian analysis? Or something else?
Claim:
https://www.bbc.com/news/articles/cz7vypq31z7o
"Germany's foreign intelligence service believed [in 2020] there was a 80-90% chance that coronavirus accidentally leaked from a Chinese lab, German media say."
It's that how they got there is opaque and that intelligence services do not have a good track record of telling the truth to the public. It's not their job. So I avoid using such arguments by authority.
There is one peripheral exception in the recent CIA release. Here's a quote from my long Bayes blog. "ZWM" means zoonosis at the market where cases were found in Wuhan, "ZW" means generic zoonosis.
"[4/16/2025] To the very limited extent that releases from intelligence agencies are useful, the latest CIA release supports our conclusion that ZWM is unfavored compared with other forms of ZW. They say “New information [redacted] has enabled CIA to more clearly define the conditions and pathways that could have led to ether a laboratory-associated or a natural origin. This body of information has both reinforced CIA's concerns about the potential for a laboratory associated incident and led CIA to focus on isolated animal encounters in secluded environments as the most plausible natural origin scenarios.” Although it is easy to think of reasons why intelligence agencies might slant publicly released information on the question of whether SC2 came from a lab or not, the relative probability of a wet market origin vs. more remote zoonosis (e.g. in a bat cave) seems like it would have no political significance for a U.S. agency. Classified information might, however, help in evaluating the otherwise somewhat unreliable reports that SC2 was circulating in humans well before the reported HSM cases."
This is important because the arguments that Scott and others used following the rootclaim debate relied entirely on huge Bayes factors specific to ZWM and irrelevant to other ZW.
I am looking to start my own medical practice. Talking to people about it has been strange, because I feel like I don't really understand what a lay-person is looking for when they choose a doctor. I am a PCP with a lot of extra skills and I constantly hear that people want "a doctor like that" (for example, I am also a yoga teacher). I'm just looking for some clues about how people found their current primary and what things would convince a person to change (and possibly pay more out of pocket)?
I currently see a concierge doctor for a particular specialty (not PCP, though). She's an endocrinologist who is qualified to work with diabetes and thyroid issues, but her main business is as a weight loss clinic prescribing GLP-1s. I see her for thyroid and losing weight. I would highly recommend advertising yourself as a "general wellness" kind of PCP who can provide hands-on service for people taking GLP-1s.
In terms of "front desk" staff, this doctor has none. She rents a solo office in a small office building. When I arrive for my appointment, I wait in the lobby and text her to let her know I'm there. She comes out and waves me back to her office. That's the biggest thing for me - I can *text her*. I don't know what kind of software voodoo she had to set up in order to be HIPAA compliant, but she did it. (I could ask her on your behalf, if you're interested.) She has a PA who monitors her text messages throughout the day to answer any medical questions if she's preoccupied.
I am not familiar with the legal restrictions on what kind of services you can offer as a PCP versus some other specialty. Do you feel qualified/are you allowed to treat patients for certain psychiatric concerns? There is a huge demand for PCPs who are willing to prescribe psych meds, since psychiatrists are very busy and it's hard to get appointments.
Like others are saying, the biggest thing in concierge medicine is nailing those basic business customer service principles. Have a simple check-in process, set up enough time in an appointment to get to know the patient, etc.
This is very interesting. I've interacted with a bunch of concierge internal medicine people, but rarely a specialist. Fun fact: there aren't any "legal" restrictions on what I can offer as a PCP vs. a specialist. My medical license looks the same as a surgeon's, which looks the same as a gastroenterologist's, etc.
Also, I'm not sure if the stat still holds true, but PCPs prescribe the majority of psych meds (except lithium). This could be a whole essay, but I remember the stat being 75% for antidepressants.
Thank you though, interesting to hear about a doc who truly runs the practice solo. Texts seem like a big part of that.
I forgot to clarify that her PA is fully remote. I've never met her in person - she just responds to the text messages. Sometimes the doctor answers my texts, and sometimes the PA does (and she'll always introduce herself as the PA in the text messages).
Would you like to talk to this doctor? I can reach out on your behalf.
Are you doing a cash practice?
Yes. I specialize in the very things that insurance really doesn't value: primary care, nutrition, osteopathic medicine. Direct Primary Care has some allure though. Thoughts?
Oh, you'll want to advertise the h*** out of that. And explain why it's the better model: "not being beholden" to others, actually paying for your own care.
Make sure you have good front desk staff; we see more of them than of you sometimes.
Since this is a place where we talk a lot about AI, how would you feel about talking to an AI? What would you want it to be able to do (can send alerts to the doctor, can send faxes, etc) for you to consider that an acceptable option?
Since this is a startup, I was looking at not having a front desk staff as a way to keep overhead low. A lot of docs I have talked to have said that whether you get sued or not is dependent on your front desk being nice.
If I trusted the AI to give me medical advice, why would I be coming to you?
The AI would in no way and never give medical advice. It would just be for scheduling, sending/requesting records, and triaging patient calls (same as what a front desk person would do). Sorry if that wasn't clear.
If you'd be using a modern LLM solution (and not just a fake AI glorified call-tree solution that's labeled as AI because that's what sells right now), there would be no way to guarantee it wouldn't randomly decide it should offer medical advice.
It would never be an acceptable option for me. AI is used for the business owner's benefit, not the customer's or worker's.
Crystal clear. Gotta figure out how to get a human involved then.
I'd recommend starting without a receptionist. You'll need one eventually for sure, but if you're just starting out it's good to handle the scheduling and billing yourself for a bit. That way you get a handle on what you will need a receptionist to do, and how it works. Also it's less pressure on you if you don't have someone else's salary relying on you.
Once you have enough clients that it's getting hard to do the admin yourself you should hire someone.
StrangePolyhedron's priorities are pretty reasonable, but I'll add a few things that frustrated me with some of the medical practices or providers that I've visited over the last several years.
1. Rushing. A few years ago, my PCP, who was an MD, would be itching to leave the room within 10 minutes of entering. In general, I've found that PAs are more patient than MDs. To relate this to your inquiry: unless I'm having a problem that seems unusually complicated, I would not pay extra to see an MD over a PA.
2. Lack of scientific curiosity. Talking with doctors often feels like talking with pre-gen-AI chatbots. They have no capacity for context (e.g. they don't remember previous conversations); they're not interested in patients' proactive observations; they ask very few questions before making a recommendation; and there's absolutely no way to convince them that they're wrong until the patient follows their recommendation and it fails. (If you're interested in my prime example, then DM me.) Action items: ask each patient "Is there any more information about this problem that you'd like to share?", and consider partnering with some DOs, due to their holistic training.
3. Textual overload. The amount of text that the average person is bombarded with is impossible to process. Consequently, most people ignore almost all text, which causes them to miss information that they really care about. (e.g. "You'll be billed $40 for mentioning problems during a routine physical.") Action item: ruthlessly eliminate happy-talk and other unnecessary text from your office's forms, emails, and physical environment--and for the love of God, don't install advertising panels in the exam room. I'd pay extra for clear, relevant communication.
4. Wasting patients' time. One clinic's parking garage gate breaks frequently. If you arrive 5 minutes early, then get stuck at their gate for 15 minutes while cars pile up behind you, then you've "missed your appointment" and have to reschedule. They dodge responsibility for this by instructing all patients (not just new ones) to arrive 30 minutes early. Another clinic instructed me to arrive 30 minutes early even after I checked in online. Out of an abundance of caution as a first-time patient, I honored their request. Once I arrived, I was processed within 5 minutes, but I wasn't seen for another hour. A third and fourth clinic use Phreesia for onboarding patients, which asks an exhausting number of questions and uses dark patterns to trick prospective patients into paying for third-party services:
https://www.google.com/maps/contrib/116948409638595766921/place/ChIJXwsynwTLRIYRwP45dwNTAb4/@30.292346,-97.7764823,13z/data=!4m6!1m5!8m4!1e1!2s116948409638595766921!3m1!1e1?authuser=1&entry=ttu&g_ep=EgoyMDI1MDcwOS4wIKXMDSoASAFQAw%3D%3D
Wow, this is good info! I'm trying to get through this whole thing without doxxing myself, but I am a DO. Being in a big city, there is a lot of incentive to rush; ideally I'd just charge enough so that I don't feel any need to do that. The holistic training that we get in med school is overrated. Lots of DOs are indistinguishable from their counterparts. The holistic parts of the training are de-emphasized as they do not contribute to our test scores.
The lack of scientific curiosity sounds like the same issue with the rushed visits. If you end up seeing 3,000 patients per year, then how are you going to remember what was said at the last annual?
Never wasting patients' time and avoiding text-bloat sounds ideal. I would like that as a patient too. I wonder if it is legal to have summaries at the top of some of the longer legal forms?
To be clear, I don't place any blame on doctors for failing to remember previous conversations, given the number of other patients they see and the length of time between visits.
Nevertheless, the patient's medical context matters, particularly for complicated problems. If it can't be retained *between* visits, then it needs to be rebuilt *during* visits. That requires a more flexible and inquisitive approach than most doctors seem to take, as I elaborate in the rest of point 2. It feels like they're shooting from the hip.
Thanks for your receptiveness.
I don’t need or use medical care but my very elderly parents have all the insurance in the world, a dozen doctors, some home health visit stuff … I can’t really follow it except to notice quality of life deteriorates and none of those doctors, however competent or caring, seems to have a holistic view of my father’s situation in particular. So he is starving to death, yet still believes he’s supposed to “diet” because of diabetes and the fear of dialysis. He’s still doing things like (we’re talking a starving man at 105 lbs.) going to a heart doctor. Or as recently as a year ago getting little spots painfully dug out of his skin which his elderly caregiver then has to apply chemo to “because all their friends love this dermatologist”. It’s just all a hodgepodge and each day he surveys his 18 pill bottles and himself chooses based on nothing, which ones to take. He had 2 catheters, is fortunately able to have just one now, but it fails every 10 days or so. That is, it fails all the time as he must use disposable pants as well but it fails in some ultimate way regularly; and of course he has a UTI basically all the time. He toggles between OTC stuff for diarrhea and for constipation, not at intervals but constantly.
No doctor ever looks at him and says, you ought to be cared for in a hospital, given an IV or whatever. That the doctors do not want, and I don’t blame them. But it gives a Potemkin aspect to his medical care.
They also pay for a concierge PCP, from their own pocket of course.
I don’t know that that PCP guy can overcome any of the foregoing, or their own cognitive shortcomings - well, he can’t.
But apart from an unusually kind and available home health nurse who has stayed in the position long enough to become Their Guy, this no-insurance PCP is the guy who can be reached, the guy that we the kids will call on as the end nears, and he’s the guy who can be a first pass for their health concerns of the everyday, immediate sort.
He’s just available to them, and knows all their problems. He’s not I guess a strong enough personality to tell them truths about futility and about how to die but at least he’s honest enough to utter the words (to us) “it’s amazing he’s still alive”.
They went to him the other day for I don’t know what, and my father couldn’t leave the car to go in - so he came out and saw him in the parking lot.
The other thing I will say is that most of their doctors and PAs run a tight ship, time wise. They themselves are ultra punctual as old people tend to be. I have heard it enough times to know that their oncologist, whom they otherwise like fine, is the one doctor whose appointments they dread because of the delta between appointment time and time to be seen.
Hearing all of what your parents are going through makes me sad. I get what you're saying about the tons of medical care just being window-dressing rather than a more directed and definitive plan. It does sound like the concierge PCP is actually adding some value, but still not getting to the root of the issue.
Absolutely right. I think in some cases, one doctor *could be* as good as many. That’s the dream doctor.
This isn't directly relevant, and I don't know exactly how to apply it to your situation, but:
I've been in and out of physical therapy for years. During that time, I've had some very good physical therapists that — in hindsight — seemed to be limited by the constraints of the insurance system. Time during appointments was not spent optimally, eventual "graduation" from care inevitably happened prematurely, and you never really knew what it was going to cost until months later when the paperwork caught up.
My current provider advertised the opposite: the entire appointment is with the physical therapist, care can continue as long as I'm satisfied with the progress I'm making towards my objectives (essentially blending physical therapy and personal training at this point), and pricing is transparent (albeit much higher!).
I'm not looking for _exactly_ the same thing from a primary care provider, but I think there's some congruence.
For example, one way to handle the "bundled visit" situation you mention in the other comment is just to be up-front about it at the time of booking. Maybe have your staff explain that by default a routine annual only covers certain things, and let folks opt-in to scheduling the appointment as a "bundled visit", which also lets you set aside enough time to talk through the other issues.
Maybe even include a third option for an extra-long visit; I'd consider paying extra out of pocket for, say, a 90-minute appointment to sit with the provider and talk in detail about everything, instead of staring at a form trying to decide which boxes to check. I'm not the one who has the expertise to know exactly what's relevant to share.
Physical therapy has some quirks about it that make it not directly applicable, but it seems like there is a theme I'm seeing: helping people fight insurance to get them the services they need. Seems like part of the premium you were willing to pay came from getting personalized care, up to your comfort level.
I don’t know what the balance between “be successful” in your fledgling practice and *limit your practice* is … maybe this is more a large urban area thing, but one keeps hearing anecdotes from people whose GP quit taking insurance and went to the concierge model.
Funny you say that, that's about to be me! I just hope I find a way to keep my rates from being ridiculous, more of a direct primary care model. I don't want to be the discount concierge guy, but I think that by keeping overhead low people will actually get what they are paying for rather than paying for a fancy office, staff that just answers phones and makes appointments, etc.
It seems to me that to do private, out-of-insurance primary care, one needs a clear "brand" or niche to aim for. Affordable low overhead is one, high-end boutique is another, holistic/alternative yet another; pediatrics alone would be yet another, given the emotional urgency of that for parents and the high value of being very available to them in those first few years.
I'm a moderately low user of primary care and high user of specialist care, just given luck of the draw, and because insurance pays for everything, I can't imagine paying extra for primary care. I pay cash out of network for good psychotherapy, Chinese medicine, and physical therapy because the quality of in-network services for those isn't good or available.
I see an NP for my primary care, she's average, but I don't need her to be any better than that. I can't picture what would draw someone like me to a cash model of PCP. It seems like I would need to be someone with way more affluent taste than I have (and then you'd have to meet that taste) or because you're providing services that are quite distinct from in-network PCPs.
The people I know who have some extra resources to spend on health are spending it on psychotherapists, personal trainers, naturopaths/TCM doctors, or body workers of various kinds. The boutique doctors I know who seem moderately successful are doing things like holistic care for menopausal women kind of thing (ie, care that mainstream medicine is really not addressing). But my experience is limited.
This is a helpful perspective. I don't see a lot of people who don't use primary care (selection bias). A niche that mainstream medicine isn't addressing sounds like my cup of tea. Thank you!
Helping to navigate insurance is definitely a part of it, but clearly/transparently offering value-add services is useful too as a part of that personalized care.
Things I'd pay more for:
* More comprehensive routine annuals. I don't know what would be ideal here, but the current process can't possibly be optimal. Perhaps take not only my resting heart rate, but also get me running on a treadmill. Or assess flexibility through some yoga.
* Useful/actionable nutritional consulting after annual blood work, perhaps with follow-up.
* More time from the provider. This could be actual face-to-face time, or ensuring they have time to actually read my medical records.
You are correct that for a lot of people a routine annual is a waste of time. It is quite cost-effective when it works, though! I really appreciate that list; this is really perfect.
In the US, my primary consideration is whether they accept my insurance. Second is how easy they are to schedule and work with.
One doctor I used for a year (before I moved) had absolutely amazing service. He had a website like "Doctorblackmountainradio.com" which specifically advertised many of the insurance carriers he accepted and what each covered with him (such as annual checkups, gym rebates, preemptive testing, etc.), so I didn't have to go through an incomprehensible system to understand if I was going to have to pay or not, or how much I would have to. He also had super transparent scheduling, where next-day, or even same-day scheduling was an option.
The transparency was great, but better was that he actually made me understand how the whole thing works. Exactly why I should schedule yearly checkups and how I can do so without paying. What he was actually doing and what other options I had after receiving a checkup, and why I would choose any of them. I ended up paying out of pocket for some additional health check things that I was mildly interested in ahead of time, simply because he discussed with me for a bit. I assume he realized I was a young man in his 20s (and thus likely loosely aware of things like testing testosterone and other biomarkers), and explained some actual stuff about what metrics mattered, what were likely to be important to me, and what was almost certainly a waste of money. I don't imagine that's easily scalable, and he was a relatively young doctor so I imagine he hasn't yet been beaten down by the monotony of entitled/insane/normal patients.
I highly recommend (if you're in the US) to sign up for Zocdoc, and try to get a few of your existing patients (or just friends or family) make at least one appointment through there, and leave you a glowing review, as that's how I've found my primary care physician each time I've moved to a new area.
The system certainly grinds you down, but I think things are going to get better thanks to certain technologies. Yeah, a lot of those "wellness" biomarkers are useless, but I think there is a role for "remote patient monitoring" that's really underutilized. I'm on Zocdoc, but they charge a ton ($50) for a patient who makes their appointment through their system. Making sure I can explain the system is important, that's something I really need to brush up on! Thank you.
I go to my insurance company's website and use the search tool to find out who within a geographically reasonable area accepts my insurance. Then with a handful of choices, maybe I look at a few online reviews to make sure I'm not accidentally signing up with someone terrible, but probably I pick the closest one.
Of course it may be different for other people, but my view of a primary care physician is pretty much like my view of an auto mechanic. I just want someone who can do the job and get me in and out efficiently. If you're looking for advice, I'd go with things like "it's very easy to make appointments!" and "They actually tell me how much money I am likely to be paying instead of grinning while we hit the insurance company roulette." Maybe, "The waiting room is very nice and doesn't blast me with Fox news."
Business customer service stuff.
Ah, thank you! Seems like it should be comfortable, stuff should just work, and it shouldn't involve surprise expenses.
How would you expect to see that last part signaled? My current academic medicine practice has a policy where if you bring up *anything* during your routine annual that isn't part of a routine annual (a refill, my toe hurts, etc.) then we hit you with a bundled visit and a copay. The policy is printed out and stuck on the inside of every room (print is small). I rarely get complaints, as the copay is usually less than $40, but it has always felt scummy.
I don't plan on carrying this policy over to my new practice, but other than the explanation I wrote above, how can I say "I won't do that"?
I want a doctor that will fight the insurance company where appropriate. (This was long before fexofenadine went over the counter.) My insurance company wouldn't cover fexofenadine, even after our doctor said "He's already taking loratadine, and it's not helping. His condition will likely require hospitalization without covering this medicine." (And we appealed that as well.)
I also would like to know what a doctor's position on placebos is. Given that 50% of doctors prescribe them, I'd really like to know if a doctor considers it appropriate or not.
I specialize in obesity medicine. Fighting for GLP-1 coverage is like 15% of my day (and that's with the heft of an entire pharmacy team helping me with them!)
Placebos are tricky. In the strictest sense, I don't use them. However, sometimes maintaining the therapeutic alliance means giving something, even if I don't expect it to work. Take Tessalon Perles for example. They're probably useless, but when someone has a cough that I know will go away with time and they've already "tried everything™", then sure, I'll throw that at the issue, just to show that I care and take their complaint seriously. Tessalon Perles are approved by the FDA for cough, so they're not a placebo, but the studies are so underwhelming that it's basically snake oil.
> I specialize in obesity medicine.
Now that I've seen this comment, I'm going to double down on my recommendation to just go all-in on the "be a one-stop-shop for wellness and GLP-1s" strategy. Part of becoming a concierge doctor is you are filtering out your patient pool for people who have enough disposable income to see you. They will either have good insurance, or enough disposable income to pay cash for GLP-1s. This means you won't spend nearly as much time fighting with insurance for coverage. My weight loss doctor helps patients order the name-brand stuff from Canada at ~half price of an American pharmacy if the insurance won't cover it. (For professional reasons, she won't recommend the compounded GLP-1s, let alone the grey market products.)
I'm quite surprised at this! Getting the meds from Canada seems like quite a trick, and it's probably not that risky to use the compounded versions of the medication (they also get you access to agents like cagrilintide that just don't have a branded product yet). I certainly can't do that at my academic practice, but getting the meds from Canada seems like a fantastic and legitimate service. I do wonder how they do it.
I want to know about both of those. Reporting that you do prescribe medicines that you feel are statistically unlikely to be much better than placebos is a gesture of good faith in the intelligence of your patients. That may not be warranted, but if I was asking "who do I want to go to?" at least you're giving me the tools to decide.
Another thing I'm likely to want to know is "how good are you at diagnosing zebras?" (aka distinguishing between the normal "horse" problems, and the "oddball weird ones"). Certain people have odd genetics, and tend to have ... just plain Weird Things happen to them. If I was one of them, a written commitment to "willingness to reassess if treatment doesn't seem to be working" might be very valuable. (A friend of mine had what presented as a dermatological issue, but turned out to be an autoimmune issue. He got very, very, very lucky in his choice of dermatologist.)
A third idea: Talk about "treatment team" and what you look for in a good specialist. What you think is a good doctor is a decent proxy for "what you aspire to be yourself." It also implicitly acknowledges that the GP's function is partially to be a gatekeeper, and that you're being paid to screen out the cardiologists (or whatever specialist we're talking about).
1) Yes, it's basically giving them the tools to help them get better, or at least not locking things behind the prescription pad for no reason. If someone came in and was truly immiserated by a cough, I'd break out the codeine if they were OK with the risks.
2) Zebras are tricky, but I've caught a few! I was able to dx a case of Brunner's syndrome from a dietary recall and a family history (his male relative committed murder), which is probably my proudest find. Working in the nutrition space you find lots of things that might be considered zebras, but are actually fairly common (I'm looking at you, celiac).
3) A treatment team, now that's interesting. I did browse a website by a concierge practice that touted being able to call the head of cardiology at a nearby hospital, and I had wondered at how persuasive people found that story. Building a referral network is a part of the job certainly.
I think Sol Hando had a great reply about preemptively working with your patients to understand how to squeeze the maximum benefit from their insurance and not leave benefits on the table. That sounds great and benefits the both of you.
My current PCP shows no interest in signing me up for annual checkups, unlike my dentist who at the end of every appointment wants to make sure we schedule my next cleaning.
Thanks for the confirmation. I will probably add a section to the website that just talks about fighting insurance.
My experience in non-doctor business is that this sort of extra charge which comes as a surprise to the customers (whether it should or not ...) presents better if:
(a) You bundle it into the base price, and
(b) Then either give the customer an (unadvertised, but explained on the bill) $40 discount if they don't bring up anything or
(b1) just keep the money since you built it into your advertised price
Discounts are usually seen more favorably than extra charges (even if the total works out to the same).
Simplified pricing is often more desirable to customers than slightly lower on average pricing but with random variation. A lot of your patients might actually prefer (b1) even if they won't say so.
I can only imagine the baffled look of an American who, after a routine doctor's visit, gets a refund from that doctor. But that's good advice! I get really annoyed by my wife's OB, who sends inexplicable bills in the mail afterwards. Option B seems so fair and magical it must be illegal somehow.
I mostly disagree with Marcus, but your post does open with "In June 2022, I bet a commenter $100 that *AI would master image compositionality by June 2025*." (emphasis mine)
Strictly speaking, it hasn't mastered it, has it? It's just much better than it used to be, but even now it will probably fail if I give it 10 relationships to track (which isn't too crazy if I have a specific image in mind I want it to approximate).
I second this - if Scott's interpretation really is that the bet with Vitor was "whether AI mastered image compositionality" and not "whether an AI can solve a number of specific challenges relating to image compositionality", then he lost the bet, as it's trivially easy to write a prompt that AIs will fail at.
So either the bet was actually about the latter, much narrower claim, or Scott lost the bet.
In any case, the result of the bet tells us something about AI progress, but not much about what it was supposed to settle.
The bet had specific resolution criteria determined in advance that both bettors thought, at the time, would fairly represent the idea underlying the bet. It’s important to have these resolution criteria, or otherwise people who are motivated to interpret situations differently (e.g. people on opposite sides of a bet) may in fact do so. It’s possible to quibble now over what “mastering” image compositionality might mean compared to simply “becoming proficient” at image compositionality, or whether that’s what was intended in the first place. But the bettors agreed at the start of the bet that the resolution criteria would serve as a sufficient proxy for the broader purpose of the bet.
From LessWrong https://www.lesswrong.com/w/betting#:~:text=present%20among%20scholars.-,Operationalization%20for%20Bets,are%20present%20for%20prediction%20markets. :
Operationalization for Bets
Operationalizing a belief is the practice of transforming a belief into a bet with a clear, unambiguous resolution criteria. Sometimes this can be difficult, but there can be ways around some difficulties as explained in Tricky Bets and Truth-Tracking Fields. The same challenges are present for prediction markets.
Yes exactly: I am saying that the belief “image compositionality will be solved by date X” was not well operationalized in this bet.
I'm struggling with my friendships, so I'd love to hear some anecdata. What are your friendships like? By that I mean things like:
How often do you guys meet?
What do you guys do when you meet up?
What are you talking about?
What are you texting about?
And how does all of that make you feel?
(NB: Obviously I'm not asking for details of the talks - "talking about a mutual friend's struggles" rather than "about Marissa's failing marriage".)
I recently moved back to my home state from college, and so lost a lot of my social contacts very quickly, but for my old friends here:
meet up once every month, maybe,
text a lot, but usually shallowly. in one case there's occasional in-depth relationship/money/life stuff (mostly hers, I'm quite stable in comparison) and in another, random bugs we see, or people-watching.
also a lot of updating on mutual acquaintances' doings and sayings.
in-person talks are more serious, and more rambly.
I wish I saw them more often, and had friends who shared more interests (sff, certain visual novels, kink, all topics I don't really have a reliable conversation partner for).
~45M, married, 10-year-old twins. Shortly after our kids turned one, I was laid off. We ended up having to leave our home city where we had spent most of our lives, and had intended to stay. We left behind basically an entire network of friends, relatives, acquaintances, people-I-sort-of-knew, people I sort-of recognized, etc.
My friend group back home has somewhat splintered along class, economic, and general lifestyle lines, as those of us with more education and income have grown and matured as people, while others are stuck in the same mentality of their 20s. It's been painful to become alienated from people I was once closer to. We have an ongoing text thread that started during the pandemic, but is largely limited to (what I consider) lame sports talk or the occasional mindless political meme from someone I have stopped expecting better from.
I don't want to sever that link, as weak as it is, but when I return home for visits I'm more selective about who I ask to hang out, often preferring one on one or small group dinners at nicer restaurants with the guys who are closer to me in outlook, income, interests. We talk about life challenges, work, parenting, investing, recent or upcoming camping or vacation trips, old girlfriends.
My long-time best friend lives many states away, is married but childless, and so has a much different lifestyle than I do. We talk every few months, text sporadically, and hopefully will get a chance to meet up for a ski trip this winter after many years of not visiting.
As a busy working parent of school-aged children, my local 'friend' group is a few other dads who I don't mind hanging out with for a hike or bike ride, or a burger and beer, but otherwise any free time I have I'd rather spend alone, and it's hard to develop friendships in those bite sized chunks of time. I do these social hikes or bike rides maybe once a month at most?
I do sometimes wish I had a more active and satisfying social life, but there are just so many factors that work against developing the type of closer friendships that were possible when I was younger, and I ultimately don't have much to complain about.
It depends on the friends. For our (my husband's and my) closest friends, we see them 1-2 times a week. We play board games, eat together and chat - about work, mutual friends, family, religion and politics, whatever's on our minds. We don't text/email often outside of that, except to arrange things and/or for something particularly noteworthy. The case is similar for the friends I see only every few months (dependent on how much they like board games!).
We also have a group of friends we watch anime and movies with, in which case it's usually most of a day, hosted by one of us. We watch the films and eat and chat, usually about geeky things. That takes place once every couple of months or so.
In both cases, I enjoy the time with people and look forward to the arranged dates, but it is also nice to come home (or to wave goodbye) and relax afterwards.
I recently moved to a different country because of work, so my situation probably doesn't generalize.
> How often do you guys meet?
About once every 2 weeks; either I go back home, or friends come here, usually for a weekend. Also I sometimes call friends to talk about what's up (about once a week). Note: These numbers are for all friends together; I meet each individual friend about once every 1-2 months.
> What do you guys do when you meet up?
When they come here, we do vacation-stuff (hiking, boating, museums etc). When I go there, we usually cook and watch anime/movies or we go to concerts/events.
> What are you talking about?
When we are together we talk about hobbies (e.g. I do 3D printing and show them some recent projects and failures) and work most of the time. Sometimes a friend talks about something no one else cares about, and then we carefully tell them to shut up.
> What are you texting about?
Weekend plans and memes.
> And how does all of that make you feel?
Good but exhausted. I have to force myself to socialize, because when I isolate myself for too long, I become bitter and angry.
I've tried understanding the "adding more lanes to a road doesn't improve traffic" argument that many urban planning types and fans of public transportation like to advocate, but I just don't get it.
As I understand the argument: If you are a frustrated commuter with an hour long drive, you may want an additional lane added to your route to handle more traffic. But this won't actually improve your commute, because the added ability to handle more vehicles will cause people who take other roads to divert to the widened road.
I get how this wouldn't improve my individual commute. But from a utilitarian perspective, isn't the commute of the people switching to the widened road improved? Surely *someone's* commute got better, or else no one would switch routes.
What am I not understanding?
> Surely *someone's* commute got better, or else no one would switch routes.
It's 2025. You're ascribing agency to a decision most folks let their satnav make. To keep the route-finding cost tractable, meanwhile, the satnav will prioritise bigger roads unless the trip is very short.
I agree that the “induced demand” argument can be overstated, like a mantra among some urbanist types. At its core, it’s just a standard demand curve story: lowering the time cost of driving (by adding lanes) makes more people want to drive, which can eat away at the initial time savings. This can be short run (shifting people from other roads) but there are also long-run effects (where you live, work and your general transport habits). But that doesn’t mean no one benefits—some people do shift routes or travel times, and those decisions usually reflect an individual gain.
That said, it’s possible to imagine situations where traffic gets worse overall, especially over the long term—because on top of the demand curve story above, road expansion can make alternatives worse. Expanding supply in the 'driving' market can have secondary effects in the market for alternative transport options. Turning a modest arterial into a six-lane highway might make walking or cycling feel unsafe or unpleasant, or undermine the viability of nearby public transport. That can push even more people into cars, adding further to congestion.
So while the effect isn’t automatic or uniform, the concern isn’t just about traffic on a single road—it’s about how network-level changes shape travel choices over time.
It can also be worse from a utilitarian perspective:
- Option 1 (example with public transport): Old road was congested with 10k commuters (39 min each), Public transport served 20k commuters (40 min each). Widened road is now congested with 15k commuters (39 min each), Public transport serves 15k commuters (40 min each). And public transport only runs 75% of the trains to account for reduced demand, so they are as packed as before.
- Option 2: Braess' paradox: https://en.wikipedia.org/wiki/Braess%27_paradox
I think that a lot of urban design theory is best thought of in terms of "how to stop the non-elite classes from being so uppity" rather than how to actually make things better for them.
There was a time when the non-elite classes were swanning around in big V8s with tail fins and living in massive houses in the suburbs. This was unacceptably uppity, so the goal of the next sixty years has been to put them back in their place -- living in apartments, commuting on public transport, and always competing against the third-world hordes.
The exact content of the excuse used not to build more roads doesn't really matter, all that matters is that we don't build more roads.
You'll notice that the people who are keenest on the "induced demand" argument as applied to roads will almost never apply it to housing; they claim that cramming more people into our cities will reduce housing prices but it always seems to increase them.
I question the premise of this argument. The idea that America’s “non-elites” were once living large with V8s and big houses, only to be pushed into apartments and buses doesn't really hold up.
In reality, more Americans—especially working- and middle-class—own cars, drive farther, and drive larger vehicles today than in your 'big V8s with tail fins' era (the 50s/60s?). Trucks and SUVs dominate the market. Car ownership is nearly universal (in 1960 only 20% of households had two or more vehicles: https://www.bts.gov/archive/publications/passenger_travel_2015/chapter2/fig2_8 ).
And “massive houses in the suburbs”? New homes today are twice the size of the average house in the 1950s, while living space per person has increased even more.
I think that it is best understood as elites trying to fix one problem and creating a bunch more.
Your non-elites stop having kids because they're more educated and wealthier. Oh no! So you raise immigration rates to ensure your country is still growing. This further raises housing pressure, so you dial up immigration rates even more.
Meanwhile, your non-elites are getting paid too much, and you are a kind-hearted elite and approve of this, until one day you notice it's driving up inflation. Oh no! So you create a reserve bank and instruct them to fiddle with interest rates until fewer people have jobs and unemployment goes above NAIRU (non-accelerating inflation rate of unemployment) again. This causes the cash rate to drop to zero for decades, which triggers a huge wealth transfer as all the other elites start buying up assets using their free cash, and gradually results in a wealth transfer from young to old in the form of mortgages.
Your non-elites can no longer afford to live near their jobs. It's hard to get a job, because you deliberately raised the unemployment rate, AND it's hard to buy a house, because you deliberately lowered the inflation rate, so they're living an hour away from their workplace. This has climate implications, and also in your tiny wizened elite heart you feel a little twinge of sadness for them. Oh no! So you start building more lanes on your freeways and/or introducing mass transit systems. Now you have an escalating series of carbon-burning clogged arteries and a bus system that everyone hates because it goes at 2.5x walking speed and involves 3 changeovers to get anywhere.
You go back to the drawing board and start talking to your elite friends about the problem, but none of you are willing to lower unemployment and raise the inflation rate, so the best you can come up with is 15 minute cities. The proles think this is a conspiracy to prevent them driving anywhere, which in some ways it is, since you aren't providing an economic environment with lots of jobs so they can get one close by OR fixing the burgeoning house prices, you're just magically hoping they can find work nearby and building more public transport for their hour-long commutes. Oh no! You try to explain this to them but they think you're talking down to them, which you are, and they elect a wave of populists, who are still elites and will thus also fail to fix the problems you caused.
Repeat ad infinitum.
The answer involves game theory! A very handwavy version is that people end up in a prisoner's dilemma. Everyone can end up better off if you remove the defect-defect equilibrium.
See also the quite fun Veritasium video about it: https://www.youtube.com/watch?v=-QTkPfq7w1A (it looks like a physics video about springs, but just wait -- it all ties together!)
When I was with an organization planning freeways, our rule of thumb was along the lines of "If you build a new freeway connecting X and Y, expect it to fill up within 5-10 years if there are sufficient jobs at one end and housing at the other". This *almost* always worked. When it failed it was usually because either the jobs or the housing were being oversubscribed. (I.e., e.g., if you've got two freeways serving the same housing, each person can only use one of them to commute.)
Note, however, that this assumes that population in the area is growing. We found that to be true, but it isn't true at all times and places.
You might be interested in Braess' paradox (https://en.wikipedia.org/wiki/Braess%27_paradox), which is a seemingly paradoxical result that adding more links in a network (roads in a road network, power lines in the electricity grid, etc.) can actually slow the overall flow. The Wikipedia article gives an idealized example where adding a single new road can increase everyone's travel time from 65 minutes to 80 minutes (so, under some idealized assumptions, literally everyone's commute is worse in this model). Note that this is not the same thing as induced demand (which is a separate concept) - the result holds true even when the total number of travelers remains fixed.
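If it helps to see the arithmetic, here's a minimal sketch of the idealized example from that Wikipedia article (4000 drivers; two congestible links that take T/100 minutes when T cars use them; two fixed 45-minute links):

```python
# Braess' paradox, following the Wikipedia example: 4000 drivers go from Start to End.
# Route 1: Start->A takes T/100 min (T = cars on that link), then A->End takes 45 min.
# Route 2: Start->B takes 45 min, then B->End takes T/100 min.

drivers = 4000

# Without a shortcut the two routes are symmetric, so traffic splits evenly:
per_route = drivers / 2
time_without = per_route / 100 + 45             # 20 + 45 = 65 minutes each

# Add a free (0-minute) shortcut from A to B. Start->A->B->End is now at least
# as fast as either original route for each individual driver, so at
# equilibrium everyone piles onto it:
time_with = drivers / 100 + 0 + drivers / 100   # 40 + 0 + 40 = 80 minutes each

print(f"without shortcut: {time_without:.0f} min")  # 65
print(f"with shortcut:    {time_with:.0f} min")     # 80
```

The bad equilibrium is stable: a lone driver who switches back to Start->A->End faces 40 + 45 = 85 minutes, worse than the 80 everyone currently suffers.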
I think the 'induced demand' claim usually cited probably isn't true.
If we want to use supply and demand language, there's a much simpler model that fits the data just as well: Demand for driving is highly elastic. If that's true, a rightward shift in the supply curve (new lanes) will mostly lead to more quantity supplied, without much drop in price (time spent in traffic). That seems to explain the observation that existing commuters don't benefit all that much from road expansion. But as you say, someone absolutely still benefits - all the people making the new trips.
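A toy numeric version of that story (the functional forms and constants are invented purely for illustration):

```python
# Assumed demand curve:   trips = A - B * travel_time   (large B = very elastic)
# Assumed congestion:     travel_time = t0 + trips / capacity
# Solving the two simultaneously gives the equilibrium.

def equilibrium(capacity, A=10_000, B=8_000, t0=0.5):
    trips = (A - B * t0) / (1 + B / capacity)
    return trips, t0 + trips / capacity

for cap in (4_000, 8_000):   # before vs. after adding lanes
    trips, t = equilibrium(cap)
    print(f"capacity {cap}: {trips:.0f} trips, {t:.2f} h travel time")
# capacity 4000: 2000 trips, 1.00 h
# capacity 8000: 3000 trips, 0.88 h
```

Doubling capacity here produces 50% more trips but only ~12% less travel time: incumbent commuters barely notice, yet each new trip is a real gain to whoever takes it.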
More speculatively, I wonder if another issue is mis-identification of the binding constraint. Often when I get stuck in highway traffic, the underlying issue is that exits are overwhelmed, and the line doesn't fully clear each light cycle. If you expand a highway by more than you expand the connected roads, you may have not addressed the underlying scarcity. I can't be the first person to have thought of this, but I don't think I've ever heard any of the urbanists bring it up either. If anyone knows of any reading I could do on this, I would be interested!
What specific claim do you think isn’t true? Because I’ve generally understood induced demand in the same way you describe in your second paragraph—that is, as a demand response to a kind of "price," where the price is time spent in traffic (assuming no time-of-use road charging). The demand response can be short-run (e.g. commuters switching from a train, or choosing to drive at a more convenient time), or long-run (e.g. changing habits, buying a house further from the city, or getting a job closer to home—all of which increase overall transport demand).
The typical urbanist/public transport advocate position is that investing in modes that don’t contribute—or contribute much less—to congestion is more effective than it might seem at first glance. Instead of the leaky bucket of road investments, where new traffic erodes any time-savings for existing users, better public transit can improve outcomes for both transit and road users. Faster, more frequent service benefits transit users directly, while those who shift modes help reduce congestion for everyone else.
As you say elsewhere:
"I agree that the “induced demand” argument can be overstated, like a mantra among some urbanist types. At its core, it’s just a standard demand curve story: lowering the time cost of driving (by adding lanes) makes more people want to drive, which can eat away at the initial time savings. "
I think we're in agreement about the underlying dynamics (a rightward shift in supply moves you down the demand curve).
I've always taken the 'induced demand' people to be making a much stronger claim, that essentially a rightward shift of the supply curve leads to a rightward shift of the entire demand _curve_, not merely quantity demanded. This is what I'm skeptical of.
If that's not what people mean by induced demand, then I've misunderstood, and I don't have any quarrel with the idea other than that it's silly to make up a new phrase to describe a bog standard, first week of Micro 101 supply shift.
Edit:
Just to be explicit, I'm not trying to make any broader criticism of urbanism or expanding non-congestion contributing modes of transport, both of which I'm pretty on board with. I'm making a more narrow (possibly pedantic) point about terminology and using the right model for clarity of thinking and communication.
1) On the economics itself:
A standard micro 101 explanation can miss an important dynamic - the long run demand curve. Over time, this can indeed cause a rightward shift in the short run demand curve.
For example, a long-run demand response is that a big new car-dependent suburb is made possible and built at the end of the new motorway - here the 'short run' demand curve has shifted right and, really, it's probably permanent. In fact, 'induced demand' has been defined by some as consisting of only this long-run effect of shifting the short-run demand curve, with short-run effects (e.g., people changing routes) called 'induced traffic' instead: https://journals.sagepub.com/doi/10.3141/1659-09
Even the short-run demand curve is a bit of a 'weird' supply and demand situation. The Y axis isn't price (instead something like 'time wasted/inconvenience'), and that 'price' stays at zero most of the time, until a threshold is reached and there is congestion.
You can explain this phenomenon using existing, normal economic language as I have there, but it is more complex than a 'bog standard, first week of Micro 101 supply shift'.
2) On terminology/clarity of thinking:
I think 'moving along the demand curve' isn't the intuitive way the public thinks about traffic, especially since there are no prices involved. They simply see the road shoulder and imagine: if that was another lane I could just skip the queue. Induced demand is a catchy term that doesn't require graphing out supply and demand lines.
Relatedly, 'induced demand' is a concept commonly deployed against the 'infrastructure engineering' paradigm, not so much against economics. There you'd assess 'maximum demand' and build your network to accommodate it, e.g., for water pipes or electricity distribution lines; induced demand isn't really a relevant concept there, but it really is for roads.
I don't know about other areas, but where I worked the local roads and the freeways were planned and built by different groups, that didn't always talk to each other when they were friendly. And they weren't always friendly.
"Doesn't improve" is too strong, the right answer is "may or may not improve, depending on the details" - but no, it is not the case that net utility necessarily goes up. Here's a toy counterexample.
Consider a transportation network from A to B with two parallel routes and a fixed T passengers. Route 1 takes 1 hour no matter how many people take it (so it might be, for example, a subway route with capacity well over T). Route 2 takes a+b*T(2) time, where T(2) is the number of passengers who decide to take route 2 (a congestible highway). At equilibrium passengers will be indifferent between these, so T(2) will be (1-a)/b, with a travel time of exactly one hour. (Suppose, for convenience, that our constants are such that this is less than T.)
Now widen Route 2, so that it takes time a+b'*T(2). Now T(2) at equilibrium is (1-a)/b', with a travel time of ... exactly one hour.
No one is better off.
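Plugging made-up constants into that toy model makes the result easy to check:

```python
# Route 1 (the uncongestible subway) always takes 1 hour regardless of load.
# Route 2 (the highway) takes a + b * t2 hours, where t2 = passengers choosing it.
# Constants are invented for illustration; assume total passengers T > 10000.

a, b = 0.5, 1e-4                 # 30 min free-flow plus a congestion term
t2 = (1 - a) / b                 # indifference condition: a + b*t2 = 1 hour
print(t2, a + b * t2)            # 5000.0 1.0

b_widened = 5e-5                 # widening halves the congestion coefficient
t2_widened = (1 - a) / b_widened
print(t2_widened, a + b_widened * t2_widened)   # 10000.0 1.0
```

Twice as many cars fit on the widened highway, but every traveler, on either route, still spends exactly one hour.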
You're assuming that all road trips are demanded inelastically. Some road demand is inelastic, while some is quite elastic. A commute is towards the inelastic end of the spectrum, although it can be shifted in time (e.g. a white collar worker working from 10:00 AM to 6:30 PM to avoid the worst of rush hour). However, there are extremely elastic trips: For example, someone might prefer a further restaurant or store over a closer one, but not so much that they're willing to make the trip if there's traffic.
If all demand for road travel were perfectly inelastic (e.g. people only travel to commute, and nobody has any flexibility over when or how to do that), adding more lanes really would reduce traffic. But it doesn't, because there's a whole galaxy of less productive trips that are only demanded if there's not too much traffic.
I think what I don't understand is something like:
For induced demand to happen, there has to be some set of people who wouldn't tolerate the road before because it had too much traffic, but will tolerate the road now.
But the induced demand claim is that the road has exactly as much traffic as before.
So why would people who wouldn't tolerate that level of traffic before tolerate it now?
This has a lot in common with your argument in the 'Change my mind: Density Increases Local But Decreases Global Prices' post.
There, basically, more housing induces demand for housing, meaning over the long term building more translates to higher prices. Similarly, the induced demand people can point to the Katy Freeway and credibly say it really looks like expanding road capacity ends up creating ever-worse traffic (and making fodder for 'just one more lane' jokes).
The counter is that the only mechanism for new construction to attract new people is for prices (either house prices, or the 'price' paid in congestion wait times) to come down. You have two effects - in the static demand model more supply leads to lower prices, but in the dynamic model demand can increase such that you ultimately end up with higher prices than you started with.
In each case, the answer seems to depend on the exact time horizon and situation you are talking about. Building a highway in the middle of nowhere, or apartment blocks in an empty desert doesn't automatically lead to higher prices.
I think this is Zeno's paradox type thinking. The marginal driver who wouldn't tolerate that level of traffic plus whatever they personally would contribute to it has a higher tolerance for traffic on the bigger road, because their personal contribution is proportionally smaller.
I think people make a stronger claim than is backed by evidence. At first, a new road improves travel times; eventually, demand expands to fill the road, bringing travel times back to what they were before. From Wikipedia:
A 2004 meta-analysis, which took in dozens of previously published studies, confirmed this. It found that on average, a 10 percent increase in lane miles induces an immediate 4 percent increase in vehicle miles travelled, which climbs to 10 percent – the entire new capacity – in a few years.
Demand for roads tends to increase over time, so you'd end up with worse travel times a few years later, despite having built the road. But this is paradoxical only if you ignore several years of growth in the city when making your comparison. Maybe there's an example where induced demand really wiped out a new road's travel-time gains near-instantly, but I have not seen it described.
The word "exactly" is probably doing too much. A road network curve is somewhat like a battery discharge curve in that there's a huge stretch in the middle in which huge gains or losses in bandwidth are barely noticeable in latency. The most famous example of this is London's roads. London has a very comprehensive subway system and a very large number of Taxis. It's observed that a typical taxi trip will be not that much faster than that same trip by tube. This is because taxis are subject to negative feedback: the more of a durable advantage taxis have over the tube, the more people will prefer them, the more traffic there is, until taxis are only just faster than the tube.
So long as some portion of the transit market is subject to a feedback mechanism, the invisible hand of the market can exploit incredibly small differences to use all the available bandwidth.
Jevons's paradox (originally an observation that the cheaper coal got, the more uses people found for it, and overall usage took off) is somewhat related and might help you appreciate the induced demand phenomenon from a different angle.
I'm not sure how relevant Jevons's paradox is here (another commenter pointed out that the more directly relevant paradox is Braess's, though apparently it was first discovered by Pigou of Pigouvian Taxes fame) but I did a writeup of Jevons's paradox if anyone's interested: https://agifriday.substack.com/p/jevons
The main problem with adding more lanes isn't the inefficiency of more lanes, it's the inefficiency of *cars*. Cars take up a lot of space per person they're able to carry. So if you add an extra lane, take the length of that extra bit of road and calculate how many cars can fit in there. Then how many people fit in those cars. It's really not that many!
Compare to a hypothetical universe where there are no cars, only buses. Suddenly, that same amount of extra road can serve many more extra people. And that's discounting the fact that really, the argument for adding an extra lane would be moot, because buses are efficient enough that there probably wouldn't be congestion in the first place. And buses are still pretty inefficient compared to other forms of public transportation like trains and subways.
So that's the problem. Yes, adding more lanes does technically marginally improve the lives of some amount of people even if commute times are still the same. But that amount is very small, compared to how many lives could be improved if the same amount of money was invested in public transportation. And that's before even talking about the other effects that car dependency can have on cities: pollution, urban sprawl, accidents, etc.
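For a rough sense of the magnitudes, here's a back-of-envelope sketch; every figure in it is an assumption for illustration, not a measurement:

```python
# People moved per lane per hour, cars vs. buses (all inputs assumed).

headway_s = 2.0                          # assumed following gap between cars
cars_per_hour = 3600 / headway_s         # ~1800 cars per lane-hour
persons_per_car = 1.5                    # assumed average occupancy
car_throughput = cars_per_hour * persons_per_car      # ~2700 people/hour

buses_per_hour = 60                      # assumed: one bus per minute
persons_per_bus = 50                     # assumed load, well below crush capacity
bus_throughput = buses_per_hour * persons_per_bus     # ~3000 people/hour

print(f"cars:  ~{car_throughput:.0f} people per lane-hour")
print(f"buses: ~{bus_throughput:.0f} people per lane-hour")
```

Sixty buses use only a small fraction of the lane's ~1800 vehicle slots per hour, so the ceiling for bus throughput is several times higher; how much of that you capture in practice depends on occupancy and routing.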
> But that amount is very small, compared to how many lives could be improved if the same amount of money was invested in public transportation.
No matter how much you spend on public transport, it's not going to stop in my front driveway at the exact time I'm ready to leave, nor is it going to go directly to my destination.
First of all, I question your premise that any improvements to a form of transportation are nil unless that mode picks you up right at your doorstep at the exact moment you want to leave.
Second, even if you would never, ever use public transportation, you should *still* prefer more public transportation over more lanes, because again, that's just objectively more efficient. Many, perhaps even most, people are not that inflexible, and will pick the public transportation option if it's available, convenient, and cheap. Better public transportation = less congestion, including for the people who are still in cars for whatever reason, be it necessity or preference. Compare the experience of driving in Amsterdam and in LA: which one do you think is the more enjoyable drive?
Amsterdam to LA is a bad comparison: Amsterdam is a million people to LA's thirteen million. Of course LA sucks; every city with 13 million people is terrible, due either to excessive sprawl or excessive density, and usually both.
We should compare driving in Amsterdam to driving in a US city with comparable population, like Omaha.
Seoul. Seoul is a good comparison. I lived there for over 4 years and I never felt the need to drive. I knew people who drove there, and they never seemed to face major issues with congestion; in fact, I do not remember ever witnessing serious congestion except on Lunar New Year and Buddha's birthday.
I mean, ok, sure! That seems fair enough. How's *that* comparison, then?
A cursory search for "how's driving in Omaha" certainly doesn't make it sound pleasant. Almost every single result on the first page is negative! (And none are *positive*; there are just some neutral, non-experience-related results.)
Does your experience differ from that?
Honestly I've never driven in Omaha, but I've never driven in Amsterdam either.
Googling for "driving in Amsterdam" mostly brings up warnings not to bother as well.
Looking for cities where "driving in..." actually gives positive results, all I could find was Canberra, which is less than half the size of Omaha or Amsterdam. Ultimately once a city gets beyond about half a million people it's going to suck to get around, and we need to stop building such huge cities.
Buses are more space-efficient in terms of lanes if they're going directly where you want to go (or at least as directly as the road network permits) and they're running full or close to full most of the time, or at least at peak times when road capacity is the bottleneck. If they're running half empty, that cuts their efficiency advantage by a fair amount. And if the routes meander about town, or if you need to travel a fair distance out of your way to a hub to make a transfer, that cuts their efficiency advantage still further. Buses still probably come out ahead in all but the most pathological cases, but it's probably more like a 2-3x capacity improvement rather than the 4x numbers I've seen from very pro-transit sources, which ignore route efficiency and assume a steady stream of buses running at 100% capacity.
That's fair. That's the reason buses probably shouldn't be the backbone of public transportation anywhere; they should be more of a last-mile solution.
And really, most of the time you don't *want* buses to be running at full capacity. You want some slack to allow for random peaks, and also just to make the ride more comfortable overall. A bus filled to the brim can be quite an unpleasant experience.
This isn't the most common argument, but one issue is that in many places the bottleneck isn't an expandable main road. E.g., if you're driving along an intercity highway but the city you're driving to has too many congested small roads to handle the traffic, you'll (at best) get a bit further down the highway before getting stuck waiting to enter the city, and (at worst) have to wait even longer, because the extra highway lane convinced more people to drive toward a bottleneck (city streets) that can't actually handle more traffic. And city streets are often impractical to expand.
(This is the issue with the road I've been stuck on most, between Jerusalem and Tel Aviv. Adding a lane to it wouldn't really help, especially since the topography probably doesn't allow that in the tight sections anyway).
More generally, the space needed for cars scales superlinearly (somewhere between linearly and quadratically) in the number of vehicles, since adding highway capacity requires multiple road expansions within the target cities, more parking, and so on. It's not (usually) strictly true that adding a lane makes traffic actively worse, but it does in general help much less than you'd expect (since a congested highway is a symptom of an overburdened system, and you may not be able to add capacity to the rest of it).
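A toy way to see the bottleneck point (numbers invented): a route's throughput is capped by its tightest link, so widening a link that isn't the binding one changes nothing.

```python
# Route as a chain of links; throughput = the minimum capacity along it.
route = {"highway": 6000, "arterial": 2200, "city_streets": 1400}  # veh/hour

def throughput(links):
    return min(links.values())

print(throughput(route))   # 1400, set by the city streets
route["highway"] += 2000   # add a lane to the highway
print(throughput(route))   # still 1400: the binding constraint didn't move
```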
Near me there is a junction between two highways. You can take an offramp from one highway that merges into the other highway's left lane.
At some point somebody noticed that the ramp was wide enough to fit another lane. And it was divided. Now you had two lanes on the off/onramp that simultaneously merged into the same one lane of the target highway. The target highway was not expanded; you just had two ramp lanes merging into each other immediately as they merged into the target lane.
This caused truly incredible traffic jams.
The problem was ultimately solved by eliminating the extra lane from the ramp.
It was a pretty stunning example of lane expansion not always being a good idea, but as you note, I don't think this is the kind of thing people are generally talking about.
There are of course plenty of examples where adding extra capacity to roads in the right place has fixed a traffic problem and that traffic problem has never come back. These examples are uninteresting and don't get discussed much, but I can think of a few that I've seen locally over the course of my lifetime.
I have seen examples of exactly this phenomenon at various times over the years as Chicago's expressways have been rehabbed/reconfigured/etc. It can be counterintuitive and maddening how much a seemingly-small change in lane configurations creates new persistent bottlenecks.
Edit: actually, it's probably not equivalent, just related
Check out Steve Mould's video on a related topic (additional road, not additional lane).
https://youtu.be/Cg73j3QYRJc?si=-D8VBcbZGNmjZOXK
As I understand it, yes, individual participants can benefit in the short run, but the theory says that total demand goes up - people move further out into the suburbs, stop using public transport, etc. - until eventually the new equilibrium is the old equilibrium, except with an extra lane.
The usual term is "induced demand". After equilibrium is achieved, it need not be the case that anyone's commute got better, but it must still be the case that someone's *life* got better.
That person might be someone who moved to the region after the lanes were expanded, making the question of whether you should vote for local lane expansion potentially hazy.
So apparently in New York State, where I live, a huge amount of elementary-level education, including assessments, is done using Chromebooks. I have children entering elementary school soon and, bluntly, I am convinced that this is a *terrible* idea and want them to be using physical books and paper for everything short of writing papers at the high school level. (Where they should still be using physical textbooks).
Does anyone have experience with or awareness of public school districts in New York or neighboring states that *don’t* use laptops at the elementary level?
I know some people have turned to private education to avoid this.
It is a horrible idea.
The only other thing I have for you is to try and join a group, or get active in the school, in a way that allows you to advocate for this.
I know I am on the same page, and looking for a group.
As someone who had the Chromebooks for the end of my public education (my school district got them very early on), I have some firsthand examples of why they are not great.
- Very slow. I regularly spent ten minutes waiting for things to load. This is worse than ten minutes of lost time, because in that amount of time I would take out a book and lose any focus on and interest in the activity.
- They're very locked down, so you can't download any software on them. That means if it's not made by Google or available online, you're out of luck. This severely restricted options for teaching computer programming.
- The evaluation software was buggy, slow, and deeply frustrating to use. I don't think it cost me very many marks, but if my teacher had been less willing to help, it easily could have.
- The usual classroom-management and productivity problems with letting everyone use screens. This certainly affected me sometimes, although selfishly, other students playing online games didn't distract me the way having them talking in the back of the class did.
- Privacy stuff. It turned out they were monitoring all our activity but didn't tell us. As far as I know they only used it to counsel kids who they thought were trying to kill themselves (i.e. were researching suicide in social studies class, or writing a story with a murder in it), but it still felt very violating. (Although having an assumption of privacy violated in a relatively safe setting probably brought my paranoia closer to healthy levels, so it might have been good after all.)
Were these the sort of reasons you think it's a terrible idea? Or do you have other concerns?
Mostly I think students are distractible and don't learn effectively from screens and that they displace healthier, more engaging and appropriate forms of learning and socialization--so chiefly the classroom-management and productivity problems alluded to. The other stuff sounds bad too, however -- thanks for sharing your experience.
There's various evidence, of varying reliability, that on-screen learning is less effective. AFAIK, there's no explanation that's both consistent and convincing, so there are probably several contributing factors that vary in their significance.
OTOH, they're probably going to need to deal with learning stuff off a computer as they grow up, so perhaps it's justifiable. (i.e. part of what they're learning is learning to learn from a computer.)
"Using this proprietary software requires accepting an End-User License Agreement with a private entity, the terms of which, on behalf of my child, I do not accept. Now, we are still legally entitled to our taxpayer-funded public education. How do we proceed?"
It's high time a family came along that's ornery enough to make this argument. Perhaps reach out to the Electronic Frontier Foundation for advice.
I think you'd not only need to be ornery, but to be quite wealthy, and to have a good firm of lawyers as friends.
Yeah this is the annoying thing; anyone with that kind of money will prefer to just opt out of the system by switching to private schools. This is where public advocacy orgs like the EFF come in.
> I am convinced that this is a *terrible* idea
It is.
This is kind of a life hack, but assuming you are in public school and the school has a requirement to educate your child, if your child abuses the Chromebook and gets it taken away, the teachers will find a way to fall back to pencil and paper.
My concern is that while I suspect we could try to get our kids away from laptops (e.g., not agreeing to whatever school policies are re: Chromebooks), they would just get no attention while the teachers focused on students with more pliable parents who will be sat in front of laptops.
Try the shtetls? If nothing else, there you have a religious argument for "why paper textbooks are good" -- the kids can read on Shabbos.
I saw this question on twitter and thought it was interesting:
https://x.com/GrantSlatton/status/1944089586084311198
"Suppose you took 10,000 optimally selected people and dumped them into a region that had adequate forests, fields, and mountains for mining iron and coal, all in a 100 mile radius
They start with only 1 season of food supplies
How fast could they bootstrap to tech circa 1900?"
Answers in the comments ranged from 2 years to 10k years, both of which somehow seem plausible to me. It seems like thousands of technically capable people should be able to get a lot done in a few years, but then again there must have been some major bottlenecks in the process, given that it took thousands of years in real life.
A lot of answers got sidetracked by whether they'd just die from hunger or exposure, so say they start with seeds to plant, and the climate is mild enough that the community is guaranteed to survive if they dedicate at least 90% of their man-hours to farming and shelter at the beginning, becoming more efficient as their tech advances from there.
Considering that even "1900s" tech needs things like copper, rubber, zinc, tin, gold, platinum, sulphur, phosphorus, chlorine, iodine, etc., you're going to need mountains with more than iron and coal.
I think you could put together some kind of civilization (although goodness knows where you'd find enough people with experience in bloomery furnaces, just for one example), but barring access to the outside world I think it tops out at a rather wonky, impoverished iron-age settlement and then eventually collapses once the forests are all cut down and the fields exhausted.
https://en.wikipedia.org/wiki/Tunnel_in_the_Sky
A lot of this depends on what "tech ca. 1900" means. If it means having everything we historically had in 1900, from telescopes to telephones to torpedoes, that's almost certainly going to have to wait until you've built up a population of millions, just to support all of the necessary specialties. It's much more practical if the victory condition is "has most but not all of the stuff you'd expect to find in a town of 10,000 on the frontier of a ~1900 civilization"
Even for that, you're going to need way more than just forests, fields, iron, and coal. There are at least dozens and probably hundreds of distinct minerals you're going to need, and there's almost certainly no place where they are all found within a 100-mile radius. So we're either imagining a specialty "arena" that was set up for just this purpose, or we're going to have to give these people a continent - and unless they also have a map, things are going to be slowed down by the need to explore a continent.
You're probably also going to need seeds for a decent supply of modern-ish crops, because speedrunning the selective breeding of e.g. maize from its neolithic ancestors is not going to be fast even if you do know exactly what you are trying to do. Or else you're going to have to figure out how to make a "ca. 1900" civilization whose cuisine is based entirely on unmodified natural foodstuffs, and I'm not sure how plausible that is going to be.
Assuming suitable starting conditions, the first priority is going to be getting some sort of agriculture up and running, unless the forests and fields are fecund enough to support a population of 10,000 hunter-gatherers in close proximity.
You *may* also need to establish a stone toolmaking culture as an intermediary; it's going to take a while before you get metals of any sort. Which means you need a suitable sort of stone in your arena; most common minerals are not at all suitable.
From there, pick the best toolkits you can build from scratch (maybe with stone tools) in every critical area, and pursue those directly and vigorously. It's going to look kind of silly hammering out your first ingot of wrought iron with a stone hammer and anvil, and you're going to want to make an iron hammerhead and anvil ASAP, but it's probably doable. But there's also figuring out what sort of pottery and textiles you're going to make to meet your immediate needs, and whether your initial construction material will be wood or brick or stone. Get to work on all the necessary things, without too much duplication of effort.
After that, make a complete list of all the technologies you're going to need to declare victory, and set up a minimalist tech tree that gets you to each of them. Some of them you'll be able to make directly with your initial toolkit, others you'll need to make the tools to make the tools, but even there you should be thinking in terms of leapfrogging centuries or even millennia of slow historical progress.
With only 10,000 people, you may not be able to make steady progress on all fronts, so consider e.g. having half your surplus labor for three months devoted to building a glassworks, which will then be turned over to maybe two master glassmakers and three apprentices. Or just run it at scale for a few months to make all the glass a population of ten thousand will need for a decade, and then mothball it.
You're probably going to want to develop some sort of paper-equivalent early on, and have everyone write down everything they can about everything they know. That's going to be vital if this project stretches on for more than a generation, but people can forget a lot on the timescale of even a decade or two, and it's sometimes more efficient to say "go look it up in the library" than to pull one of the few people who grok Thing X away from their work to explain X to someone who kind of sort of needs to understand part of it.
Finally, social organization is going to be vital; 10,000 people is too many for rule by informal consensus. For best performance in a few-decade sprint to victory, I'd suggest something centralized and hierarchical but not inflexibly so, maybe a sort of feudalism. There's almost certainly a multiplicity of organizations that would work well enough, but you need to pick one and get complete buy-in from the start. And your "optimally selected" people need to be selected for compatibility with whatever you choose - don't put a bunch of wannabe democrats, capitalists, or socialists under a feudal lord, and in general get the right natural leader : natural follower ratio with a minimum of troublemakers.
>A lot of this depends on what "tech ca. 1900" means. If it means having everything we historically had in 1900, from telescopes to telephones to torpedoes, that's almost certainly going to have to wait until you've built up a population of millions, just to support all of the necessary specialties.
Agreed!
>It's much more practical if the victory condition is "has most but not all of the stuff you'd expect to find in a town of 10,000 on the frontier of a ~1900 civilization"
Hmm - does the town doctor in that town have a microscope? How about aspirin?
I think so much depends on what they get to bring with them, and what the flora and fauna (and weather) of the destination are, and whether they have to worry about defense against other intelligences. But my instinct is that 10,000 people is too few to do this as a speed run, and this will have to be a multi-generation project, even with your more reasonable victory condition.
Admittedly, my intuition here is heavily influenced by https://en.wikipedia.org/wiki/Tunnel_in_the_Sky
Doing this with 10,000 people is going to lean very heavily on the "optimally selected" part. 10,000 randomly selected people absolutely won't be enough. 10,000 randomly selected college graduates, or college professors, or Eagle Scouts, or any other such thing, almost certainly won't be enough. But I wouldn't rule out making it work with 10,000 people selected specifically for this job.
Another key question is going to be whether these people will be given time to study for the "test", and/or bring a notebook (encyclopedia, whatever). If the rule is something like "everybody goes through naked", I'd want everyone to visit a good tattoo artist beforehand.
I mean, for starters we are promised nearby iron and coal, but that 1900 town presumably also had oil, rubber, copper etc, not to mention at least some medicine, and likely none of those are available near where our intrepid 10,000 were dropped, and presumably they can’t just trade for that stuff because there is no one to trade with. To just find all the relevant materials is going to require exploring and settling much of a continent…
I haven't had time to digest this subthread yet, but that tweet looks a lot like a thread I posted on DSL, called "Olympic Techbuilding":
https://www.datasecretslox.com/index.php/topic,6977.msg271474.html
Part of the problem here is that the time required can depend a lot on what the starting parameters are. I went through a lot of trouble to set the whole thing up as a sporting event, meaning a lot of rules to cut out any boring boondoggles, such as not just freak famines, but also someone choosing to go Attila or Genghis on everyone else, or even a long period of waiting for a Francis Bacon to come along and enlighten everyone on What Truth Is.
I even included a mechanism for generating more labor.
No one seems to have mentioned the colonisation of places like Australia. Granted, they were given some supplies of tools and so on, but they weren't 10,000 and optimally selected. I'd expect that they could get to 1800 within a few years, and after that it's really about scaling up and refining production methods.
Sydney was founded with 1,500 people in 1788, but didn't become self-sufficient in food until 1804. The first metal mines in Australia didn't get going until 1841, halfway across the country (coal mining started in 1799). Until 1804 they were dependent on rations being shipped in, which bodes poorly for a colony started with a single year of supplies.
The real problem is finding a fuel more efficient than wood and utilizing it. Without it, no industrial revolution.
Also, the scenario expects that people will cooperate instead of forming gangs and attempting to enslave the rest.
I think these are covered in the question: '10,000 optimally selected people' are presumably capable of working together; and 'mountains for mining iron and coal' are nearby.
I think that both extremes are a bit unlikely, given that the people are "optimally selected", which I assume means they have enough mineralogy knowledge, agricultural knowledge, etc., and that they've got a charismatic person they agree to be led by, and...
Ideally, and without any setbacks, they should be able to get to 1900 equivalence in about a century. But it wouldn't be easy. (Copper and tin are more important than iron for getting started, though. Bronze is your starter metal.)
But this *is* being optimistic. There aren't any domestic animals. Plowing is going to be ... difficult. Developing domestic animals is a multi-generation project. So the tech might be 1900 level, but tractors will be the most important tech. Getting started is going to depend on hunting and gathering. And the local predators are not going to be friendly. (You didn't mention sulfur and saltpeter... so forget guns.)
All this feeds into what the "optimal group of people"'s composition is. You need medics, but also fletchers, bow-makers, etc. Spears are relatively easy, but not what you want to hunt with.
Why is bronze your starter metal, when you know how to make iron and steel and have ample supplies of iron ore and coal? Yes, you need a somewhat hotter furnace, but "optimally selected people" are going to know how to build such a furnace from scratch.
You seem to be thinking in terms of rapidly speedrunning the historical tech tree, but the winning path is almost certainly to skip most of that tree.
ETA: You're still going to need copper, and quite a few other things that aren't naturally found within a 100 mile radius. But the copper is for e.g. electric wiring, not because you need to go through a Mini Bronze Age.
Bronze is easier to work with and more flexible (except for specialty steels, which you won't have). And you won't have decent electric wiring anyway until you get plastics, because rubber isn't native, unless your site is a tropical jungle. Varnishes and cloth work for special circumstances, but not generally.
Skipping most of the basic techs isn't a winning move, because they are the foundation on which the others rest.
If you were starting from a "collapsed tech civilization" I could see building from glass and ceramics, but actually making good glass is tricky. Soda glass is reasonable, but IIRC transparent panes of glass require zinc during manufacture. (Well, if you're doing it the easy way.)
OTOH, if you had a good medieval blacksmith available, you might be able to go directly to working with iron. I think it would still be a mistake. Iron probably isn't a reasonable target except on a really small scale until you've got your first steam engine designed, but I have to admit that good steel is lighter than an equally strong bronze, and it's certainly harder.
One thing you *could* shortcut, though, is bee-keeping. You're still going to have the problem of the bees not being domesticated, but bee-keeping is an excellent source of sugar with minimal effort. I believe it wasn't mastered in Northern Europe until after the Middle Ages, but it doesn't require anything fancy, and it was done in ancient Egypt.
Bronze is easier to work with than iron or steel, but as soon as we had iron we pretty much stopped using bronze except in a few specialized niches. Which, for the full 1900s tech stack, you'll *eventually* need, but it seems likely that you can put that off, maybe to the final sprint to the finish. Iron and steel, you're going to want to have ASAP.
And you're *going* to have electric wiring by the end of the process, because that's part of the stated victory conditions - too much of c.1900 technology is electrified. That doesn't necessarily mean rubber; there are other workable insulators you can use where you need insulation, and a surprising amount of 1900-era electrification was uninsulated except for glass or porcelain standoffs.
More generally, I think you are greatly overstating the extent to which early technologies are foundational to later ones. Bronze isn't "foundational" to iron and steel, it's just what we happen to have used until we figured out basic ironworking. Yes, the first Hittites to build an iron-forge probably used bronze tools to make it, but then they mostly did away with bronze. Meanwhile, the first people to build a bronze-smelting furnace will have used stone tools for *that*, and I'm pretty sure those tools would have sufficed for the first crude iron-forge. Bronze historically preceded iron, but that's not the same as being foundational to iron.
People stopped using bronze because copper is not common and tin is rare, and they switched to iron because iron is comparatively common and easy to find. If there is plenty of tin (or zinc, for brass) nearby, then bronze is superior to iron in just about every respect (until you have figured out a puddling furnace or Bessemer converter to make high-quality steel).
Of course the prompt says they have iron and coal, and doesn't mention how much tin and copper they have. So you're probably right, unless there's easily accessible copper and tin they'll probably just start with wrought iron and work from there.
If that were the case, then when we mostly switched from bronze to iron all the people who had been using bronze all along, and so by definition could afford it in spite of the scarcity, would have continued using the superior bronze and left the cheaper iron to the less fortunate. We'd see e.g. the elite using bronze swords while the plebs got cheap iron spear-points.
We don't see that in the historic or archaeological record, because iron is not only cheaper than bronze, it is actually, really, yes it is even if we're talking crappy ancient wrought iron, *better* than bronze for most purposes.
Cheaper + better + required by the premise = why are we bothering with a detour to bronze rather than going straight to the ironworking that will get us more quickly to the rest of the things we need?
I'd definitely be on the later end of these estimates.
So much tech in 1900 was a result of economies of scale, global trade, and generations of regional specialisation. In a population of 10,000 perfectly selected individuals, even over multiple generations, I doubt you could even get the economies of scale necessary to produce a usable quantity of quality steel, let alone functional steel machinery.
My guess is that there would be a phase of extreme progress to begin with, where all the individuals' stored knowledge on early tech development has great payoffs for the producers themselves and other members of the community.
Then I'd guess that you'd start to stagnate at a kind of hybrid pre/early-modern stage, because of both the practical difficulty of progress without scale, and the lack of incentives.
I mean, technically you could get a furnace and start producing glass, but someone's going to have to spend all day hiking to gather enough sand and fuel for your next crazy experiment to make 1% purer glass for some increasingly abstract future gain. Unless it's really clear that something will provide a short-term boost to your viability as a civilisation, it'd probably feel like a better use of time to marginally improve the things that are actually affecting your immediate quality of life.
I think that metallurgy will be the top priority after establishing preliminary food and water supplies. Everything else we want to do (including efficient farming and decent shelter) lies downstream of getting decent metal tools, so we'll have rudimentary metal tools in a month and a production line making hoes, saws, hammers and nails within a year.
Once we start producing materials, we're going to need engines to move them around somehow. I don't know which of steam, ICE, or electric power would be the easiest to bootstrap, but I suspect it might be electric (powered by big generators; batteries will require exotic materials and may take time).
I don't think we'll ever be at something that looks like "1900", we'll probably skip steam entirely but have a long time to build houses that don't look like shacks.
Steel is just dirty blacksmithing. Scandinavia had carbon steel in the Middle Ages, not that they were incredibly consistent or anything.
https://www.medievalware.com/blog/medieval-steel-bloomery/
Distilling is enough of a carrot (as is fermenting) that you can see some decent motion in the biological fields, well beyond "medieval" stages, just from incentives alone.
A lot of ramping up is just knowing that a thing can be done.
I expect that the wheel would be re-invented pretty quickly.
Same for crop rotation.
And stirrups.
And the horse collar (don't choke the horse while it is pulling things).
...
2 years seems pretty fast to go from virgin land to functioning railroads, but 10k years seems WAY too slow.
And you might get some things from post-1900 before you got some things from pre-1900. Working automobiles probably make more sense for a population of this size than railroads. Also, you can use the car technology for tractors. So maybe you don't get railroads at all because they don't make sense here.
Where do you get the domesticated horses and the cattle? It won't be that easy.
Also, the horse collar wasn't invented until the Middle Ages. It's not a simple invention, so if you don't bring along the details of how to build it, you won't have it.
I suppose wild horses might be present, but breaking a wild horse is a specialized skill, so you need to bring along folks who have that skill.
Railroads are a lot easier than cars. Especially if you have domesticated horses. With light cars and a small area you can use wooden tracks. They won't be SP railroad cars, but you don't have the population to justify those anyway.
Yeah, the wheel is easy, but using it for transport requires roads. So the first wheel you get is going to be a potter's wheel.
It's not just knowing that something can be done, though that sure helps. It's all the "supporting technologies". Once you get away from the basics, there isn't anyone who knows them all. (Given a bunch of flax plants, how do you make a rope?)
But SOME sort of engine will be very important, because without domestic animals ploughing is ... difficult.
I'd put in a spirited defence for the first wheel being a wheelbarrow. In the absence of domesticated animals, a wheelbarrow is a liberating tool indeed. And it doesn't need a road.
If you have spent any time on a farm lugging stuff around in buckets/by hand, you'd see what I mean.
Also historically people have pulled different interpretations of single share ploughs in various civilisations. It's not pretty and it certainly isn't fun, but it has been done.
Oxen are probably better than horses as farm workers (smarter, more stable, more stamina, less fragile overall, and more edible at the end of the day), but horses are more versatile for travel.
Either way, the problem of domestication is similar - and there is a world of difference between breaking a genuinely wild horse and domesticating a species.
Oxen are definitely better for plowing, but that requires domesticated cattle, not aurochs.
The argument for "wheelbarrow", however, is very good. It could even be just an appropriate log with a couple of spikes driven into it, and it would help a lot (though you'd probably need to drag the ... barrow? backwards. (So another use for rope.)
I think you'd start with a flat platform on top of a couple of long straight sticks. But how to fasten the pieces together? Wooden spikes are probably the best. But that implies a drill, so you need strong twine, which you can use in the same bow-drill that you use for starting a fire. (I believe tinderboxes require spring steel.)
If you don't have draught animals, consider taking a leaf out of the book of civilisations which had the same problems:
https://www.historyonthenet.com/mayan-farming
Your main problems will be not starving to death in the first year between eating your supplies and harvesting what you've planted, and not dying of exposure. If the climate is mild enough that you aren't going to freeze to death, die of heatstroke, or be swept away by floods, tornadoes or the like, you've got an advantage. Food and shelter are your immediate problems and will take a few years to have a stable, reliable base before you start expanding your tech tree (think of the reason for Thanksgiving and the legend about the starving colonists).
Two years is way too optimistic a time frame if you're really starting from scratch (the only way that works is if your colony starts off already equipped with current-level machinery and fuel supplies to do the mining, digging, etc.)
This response makes sense to me.
I'll add that this experimental group would have huge advantages compared to historical-real-life from knowing some big things _not_ to do (particularly the germ theory of disease and all of its practical implications).
Sanitation is a very, very big "force multiplier" of the number of people. In that, say, you don't need to count on 10% of the people dying young of typhoid.
It sounds like you might be very interested in the recent EconTalk episode about what capitalism is. They go into an aside about why size of market is important. Basically, the degree of specialization that is possible in an economy is directly tied to how large the market is. Assuming that this experiment demands that everything be done entirely internally to the group, then lots of things will simply require a larger market/population before there is enough room to specialize into the next tech upgrade. It doesn't matter if you know how to make the next thing if you are forced to spend all your time growing food to survive.
Agreed. Just as a back-of-the-envelope calculation, the population of what is now Britain in 1000AD is estimated to be something like 2 million people, which works out to about 8 people per km2. Your 100-mile-radius circle comes out to ~81,400 km2, which implies a population of something like 600,000-700,000 people. Barring access to the outside world, I just don't think that this is large enough for most industrial technologies to be viable.
Here's another way to think about it:
- 1900s farming produced something like 13-14 bushels of wheat per acre on good land (average land was more like 6.5). In most places, wheat is an annual crop.
- 1 bushel gets you 27kg of wholewheat flour, so really good farmland could yield something like 6670kg per km2.
- wholewheat provides 237cal per 100g according to the FDA, and you need 2500cal per average person per day. Let's assume that bread provides half the calories for your population, so each person needs ~185kg of wholewheat flour per year.
- the available agricultural land is something like 70% of your total land area if you're lucky, of which about a third is suitable for crops.
- taking our 81400km2 again, this means that around 18990km2 is suitable for wheat, yielding about 127 000 tonnes of wholewheat flour.
- putting it all together, the 100 mile circle could, at best, support something like 686 000 people.
It's hard to find solid numbers on how much land a single person can farm (not least because preindustrial farms invariably had more people working them than was economically necessary), but one estimate I saw was 40 acres for a one-man operation with two horses and a plough. This means that our little experiment is going to need something in the order of 100-120 000 full-time farmers and double that number of horses. Add in their families, and around 85% of your population will be working the land. Which leaves maybe 100 000 people to do everything else needed to make a 1900s tech base work.
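A quick script to sanity-check that chain, using only the figures given above (all of which are the comment's own assumptions): the 6,670 kg/km2 step appears to have dropped the bushels-per-acre factor, since carrying 13.5 bu/acre through gives roughly 90 tonnes/km2 and a ceiling closer to 9 million people, which would also shrink the farming-share estimate considerably. The market-size point survives either way, but the binding constraint looks more like labour than land.

```python
# Recomputing the wheat arithmetic with the comment's own inputs.
ACRES_PER_KM2 = 247.1

bushels_per_acre = 13.5        # "really good land" per the figures above
kg_flour_per_bushel = 27.0
kg_per_person_year = 185.0     # half of 2,500 kcal/day from bread

area_km2 = 81_400              # the 100-mile-radius circle
wheat_km2 = area_km2 * 0.70 / 3        # ~18,990 km2 suitable for crops

kg_per_km2 = bushels_per_acre * kg_flour_per_bushel * ACRES_PER_KM2
total_tonnes = wheat_km2 * kg_per_km2 / 1000
print(f"{kg_per_km2:,.0f} kg/km2")                    # ~90,000, not 6,670
print(f"{total_tonnes * 1000 / kg_per_person_year:,.0f} people")  # ~9.2M
```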
>They go into an aside about why size of market is important. Basically, the degree of specialization that is possible in an economy is directly tied to how large the market is.
Thank you!
There are about 1,500 blast furnaces in the world today. To put it another way, it takes a market of about 5 million people to support one blast furnace. And 1900 technology includes blast furnaces. I don't think a group of 10,000 people could support 1900 technology. And there is approximately the same problem with dozens of other specialties and their special equipment: glass, ceramics, the chemical industry, machine tools...
I think this is the right answer. Britain had roughly 10 million people in 1800 and if you can sell your coal to 10 million people you have ample resources to build a mine. (And this is ignoring global markets.) You don't need to worry about horses or metal or carts or even manpower that much.
Ctrl-F "pencil" didn't find this, so it's worth thinking about how much work would be needed before they could build one single pencil.
https://cdn.mises.org/I%20Pencil.pdf or https://www.youtube.com/watch?v=67tHtpac5ws if your attention span only runs to 2:30 of video
Rather than zerg-rushing towards technology, they need to grow their population. 5x growth each generation would get them to about 6 million in 4 generations, which is probably enough to support whatever they need. For at least the first generation, all technology generated is primarily for supporting population.
*EDIT* Do we have animals available? Both as workhorses and as food.
*EDIT* One technology that genuinely takes *time* instead of effort is crop breeding. If we need to go from ancient maize to modern corn, we can surely do things better the second time around, but it's still a very slow process until we invent gene editing.
That's an interesting point. Ancient societies absolutely could not sustain 5x growth per generation, mostly due to high mortality rates, but that might be something that really is _knowledge_-gated and not tech-gated. If you know about germ theory, even if you can't do anything more sophisticated than boil rags, wash hands, and keep your water supply separated from your waste, how much can you decrease child mortality?
-edit- But yeah, the main point is that the answer is absolutely not "within one generation". And once you are past one generation, the most important goal of the first generation is rushing as fast as possible to some pretty efficient means of printing, so that they can record as much of their knowledge as they possibly can and the project doesn't have to re-invent everything (I'm assuming this experiment is about how fast you could recreate everything with foreknowledge). If you don't manage to get enough things recorded in the first generation, then you are very nearly back to square zero.
My impression is that the big child mortality improvements happened from sanitation and nutrition, not from antibiotics or surgery or whatever. Making sure your water supply is well-separated from your sewage outfall, getting people to cook and store food properly, getting everyone to wash their hands before handling food or dressing a wound and after taking a crap or touching a sick person--I think that gives you a bunch of big wins.
Another thing that is surprisingly important is glasses. Every craftsman loses his close-in vision around age 40, and unless you can give him glasses, you lose much of the value of his skill and training. Plus, you're going to need magnifying glasses, microscopes, and telescopes to get further down the tech tree.
Old maternal mortality was about 1:100 but I wonder how much humans have adapted to obstetrics in the past 200 years. If you can afford bigger baby heads nature will random walk towards them.
You can write your essential knowledge down on clay tablets if you need to. But you need enough written material to teach literacy to the next generation.
Bring a set of holy books in the same language as your reference materials, and make sure everyone believes that being able to read the holy books is a critical requirement for being a proper believer.
The coal/iron/steel is the easy part, honestly. Those are "big bulk things" that you don't have to get "precisely right."
Try cheesemaking, or distilling, or any number of the "practical chemistry" bits of technology you'd need to get to "technology circa 1900". Not to mention you'd need glass for that. Also, 1900 is early enough for galoshes, so you'd need rubber.
And we haven't even touched clothing (textile mills) or "the great big bombs" that are steam engines. Waterwheels and Windmills seem easy enough to put together, though, so "practical automation" is doable.
1. You select 10,000 people who are ideologically aligned, mentally stable, and knowledgeable about the problems at hand (agriculture, manufacturing, etc.; maybe skip medicine since this is a speedrun). This is a tough optimisation problem, so you have to sacrifice some knowledge of e.g. science in favour of people who are a bit less antisocial.
2. Prefer people who are better at establishing strong institutions and educating others, so that the next generation can learn how to do the labour.
3. 10,000 people won't be enough to reach 1900 without kids; there will be a bottleneck in mining, science, or manufacturing somewhere.
So I think probably they could get it done in 120 years, or about 6 generations of kids.
Don't skip medicine; with only 10,000 people you aren't going to have a lot of redundancy in a lot of vital skills and knowledge so you can't afford lots of unnecessary deaths.
I agree, but starting out with germ theory will already put them way ahead of the curve on the medicine front.
Germs, insect vectors, genes, vitamins/essential nutrients, and how the basic organ systems of the body work give you a lot of benefits over what basically anyone in the world before the 19th century knew.
Someone started building a real-life tech tree, which may yield some relevant data to the question:
https://www.historicaltechtree.com/
I both really like that and really hate that I can't zoom out so I have to spend a lot of time just scrolling past screens of nothing.
There's a joke to be had about decent web design being a recent tech.
Well, my previous comment was only partially useful: the nearest tech can be in the past, and it sometimes jumps backwards.
On a large screen, I can click and drag on the bottom, moving to the left, which gets closer to the present.
Maybe you spotted this, but just in case, once you start scrolling, a link appears with the following text: "Jump to nearest tech"
Dammit, you're going to make me reinstall Civilization IV... stay away, demon, it's summertime! I should be walking my dog on beaches!
If they were optimally selected, it wouldn't take especially long before you could have basic ironworking producing tools and weapons. Maybe less than a year. From there it's a pretty straight shot upward to steam power, but I think precise ironworking, and the difficulty of steel production, would involve a lot of trial and error. I'd be really surprised if anyone had both enough iron and enough precision to produce an engine within a decade, but if they're optimally selected... who knows?
The main problem would be food and access to resources. While it doesn't matter if it takes a year or two to start making more complex iron tools, it does matter whether you can get enough food production set up for 10,000 people (starting with no tools). I'd say the answer is likely no. Farming is very intensive work, and 10,000 people is too many to be in one place, so you'll have different groups at different levels of food production capability.
When you're starving you'll necessarily steal food to survive, and when you're almost starving you'll fight to make sure no one steals yours. Then you have social organization, centralization of food storage, tribes; and if that becomes intense enough, it might invalidate any technological progress. While I'm convinced optimally selected modern people could survive, I doubt their children could receive enough knowledge orally to do much that their parents weren't already capable of. If it's just a vague idea of "you can mine coal, the black rock, out of the ground and burn it" and "if you produce iron in this specific way it's stronger", that wouldn't be enough to restart civilization, and it might decay over more generations to exactly as much knowledge as is useful for living in such an environment.
> If they were optimally selected, it wouldn’t take especially long before you could have basic ironworking producing tools and weapons. Maybe less than a year.
Huh? With 10K people, who is going to mine the iron ore and the coal? And without any tools, how are they going to mine it? Even if they are optimally selected to have knowledge of blacksmithing, mining, smelting, etc., the iron and coal are largely going to be inaccessible without tools.
But food production and shelter would have to take priority over everything else. Doing a little googling and LLMing, plus some simple math, suggests it would require between 8,000 and 10,000 acres of wheat using 19th-century cultivars to feed 10K people for a year. But first, you have to clear the land, plow it, and seed it. How do you clear the land if you don't have iron tools? Also, domesticated animals would probably be required to help clear and plow the land; otherwise, you've got 10,000 people scratching in the dirt with sticks. Unless they were given the seed stock and tools up front, I think your latter scenario is more likely. A lot of optimal people are going to starve and die the first year. Unless there were plentiful non-ag food sources to survive on, I wouldn't expect to see a 19th-century level of technology for centuries (and only if they had some way to preserve the knowledge of their pre-settlement ancestors).
I think you are severely underestimating how hospitable and food-rich pre-Malthusian land is. If this land has never known humans, there are likely to be several species that are incredibly efficient to hunt: dodo, giant sloth, passenger pigeon, Galapagos tortoises, manatees, Caribbean monk seals, fish in general, great auks, mammoths, etc.
Europeans who first came to the Americas found an almost limitless bounty of cod in the coasts off Newfoundland. It required decent nets, which the natives didn't have, but with fairly rudimentary technology you can find a relatively low-effort, sustainable supply of fish.
Steel is 500 CE technology (it's dirty blacksmithing; the Scandinavians did it often and well). Steel to specification is much harder. Engines are much harder than that (can you do them with cast iron?) -- but do you need engines if you have water/wind mills? (That'll get you automation, at least.)
I'm thinking that electric motors may be a lot easier than steam or internal combustion engines.
Just to clarify the question: are the "optimally selected people" able to overcome mutual conflicts, and resist burning most of the resources in various zero-sum games, such as "who is the boss" and "who can have sex with whom"?
It's not originally my scenario, but I think it's more interesting to assume they're very dedicated to the mission and able to cooperate.
Right, I think the "10,000 optimally selected people" would have to be something like deeply faithful practicing Roman Catholics, with the right genetic distribution so only 1% had 99th-percentile IQ, and everyone willing to be super obedient to the religious authority structure.
In absence of contact with Rome, I would expect deeply religious Catholics to be prone to sectarianism/heresy. No central authority means slow (or even fast) divergence on dogmatic issues.
You may be right, but then maybe the answer is Orthodox Christianity, which has essentially the same belief structure besides rejecting the pope.
You could probably also pull it off with Islam as your control structure.
I don't think you need to go so hard to get good development. There are enough people who don't defect in real-life prisoner's dilemmas while also understanding the game theory, are in good health, and are physically and mentally capable. "Optimal" will be even better than that group, but I'd be surprised if the optimal selection is a cult or very faithful people.
I... don't really think you need smart people for this. You need "butt-basic idiots" -- the folks who have learned how to do things the hard way, because it's fun. Smart people are good at solving "problems you've never seen before," but I don't think that's what this is.
Even motivated midwits would work -- remember, with 10,000 people, you can have an "expert" on every link of the chain. Exposure shouldn't be an issue, you can always grab 10 Amish (or "Ron Swanson-esque" survivalists) who can order the rest of the folks about.
1900... Steam and locomotives. Mills, and upgraded farming.
Yeah, I think you may be right. You want reasonably intelligent persons selected much more for their combination of existing knowledge plus work ethic and pro-social norms. Though you may want one person tasked with getting this stuff written down, lest the knowledge die off. I don't think you could do this in a single human lifespan; you'd need cycles to bootstrap in the technology. It's not just about knowing how, but having the necessary tools, which have to go through iterations. You can't go straight from raw iron to a lathe capable of micron-level precision. If your super-knowledgeable persons were in their 20s... maybe you could pull it off in 40 years.
Of course you'd need cycles, but you don't need micron-level precision to get to 1900's technology. I think the iron/steel/coal troika is doable within... maybe 5 years? Most of the steps there don't involve "precision" so much as "and now we make pig iron." (Pulling this from that Polish guy's science fiction, so I could be wrong... but I don't think I am.)
But... then you have "water purification" (aka BEER). Either you have the yeasts or you don't (and, I believe, unlike bread, you need specific strains).
Distilling (also a pre-1900 thing I think) would require glassware.
And we have Yogurt! And Cheese! (If you don't think these are technology, boy, howdy). And sausage.
Basic looms/spinning are pretty easy to make. Not sure how they scale into "industrial looms and felting and..."
Reposting this comment from FluffyBuffalo here so people aren't misled. Some amyloid theories are probably wrong, but some are almost certainly not. There is good evidence that amyloids are in fact involved in Alzheimer's in some way, and that anti-amyloid therapies help moderately.
“I think it's important to be careful about what "subscribing to the amyloid hypothesis" really means. The evidence is very strong that amyloids play a significant role in the disease - genetic variants like APOE4 that influence amyloid accumulation are strong risk factors for AD; you see characteristic biomarker levels (in particular Ab42/Ab40 ratio in cerebro-spinal fluid) in AD patients that you don't get in other neurodegenerative diseases; you find the characteristic plaques in deceased patients, etc. That part is sound as far as I can tell, and pointing at a misguided study or two doesn't change that.
The question what exactly the amyloids DO, and whether the buildup of the plaques is the whole story or just a part, is not so clear, and my impression is that researchers are open to the possibility that there's more going on. The new hot topic seems to be CAA: Cerebral Amyloid Angiopathy - it turns out that amyloids also accumulate in blood vessels, weakening the walls and leading to micro-bleeding. (Apparently, this got more attention when it was found that brain bleeding is a not-so-rare side effect of lecanemab.)”
Devansh on the value of reading Hannah Arendt, totalitarianism scholar.
https://artificialintelligencemadesimple.substack.com/p/why-you-should-read-hannah-arendt-5bc
Stupid question time: I've been trying to understand how exactly heritability is defined, but the explanations I've found don't make sense.
Wikipedia tells me that, in simple cases, it can be computed as Variance(Genes)/Variance(Phenotype) where "Variance" means the expected squared distance from the expected value.
But this doesn't depend on any relationship between the two at all! By this explanation, "heritability" is just a measure of whether a set of genes is spread out similarly to the phenotype, not whether those genes explain anything.
Also, variance is measured in units of "distance in the underlying distribution" squared, so you can only take a ratio like this if the distances are commensurable in some way, and I don't see how "polygenic score" and e.g. "has schizophrenia" can be commensurable.
Stats in science are often bad, but I have trouble believing they're *that* bad. Can someone try to explain how this actually works?
> Also, variance is measured in units of "distance in the underlying distribution" squared, so you can only take a ratio like this if the distances are commensurable in some way, and I don't see how "polygenic score" and e.g. "has schizophrenia" can be commensurable.
Heritability is calculated as the ratio of the genetic to the phenotypic variance, with the genetic variance here being the variance of genotypic values. Because genotypic values are simply the expected phenotype for each genotype, they are measured in the same units as the phenotype itself. So both the genetic and phenotypic variances carry identical units by definition.
Heritability is defined for quantitative traits. For binary traits (e.g. “has schizophrenia”), one typically invokes an underlying continuous liability scale. Both genetic and phenotypic variances are then defined on that same liability scale.
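A toy sketch of that definition (made-up numbers; genetic and environmental effects assumed independent):

```python
import numpy as np

# Genotypic values are the expected phenotype for each genotype, so they
# share the phenotype's units (say, centimeters of height).
rng = np.random.default_rng(0)
genotypic_value = rng.normal(170, 6, size=100_000)  # cm
environment = rng.normal(0, 8, size=100_000)        # cm
phenotype = genotypic_value + environment           # cm

# Both variances are in cm^2, so the ratio is dimensionless.
h2 = genotypic_value.var() / phenotype.var()
print(f"h2 ~ {h2:.2f}")  # ~ 36 / (36 + 64) = 0.36
```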
> Also, variance is measured in units of "distance in the underlying distribution" squared, so you can only take a ratio like this if the distances are commensurable in some way
Variance infamously suffers from the problem that the same distribution at different scales has different variance. For example, if you measure everyone's height in inches, and if you measure everyone's height in centimeters, the height-in-centimeters distribution has more variance even though you measured the same set of people and they were all the same height both times.
That's just a rounding issue isn't it? If you measure height in rounded kilometers then everyone is 0km tall, no variance at all.
No, it's not a rounding issue. The amount of variance in a distribution is related to the scale of the numbers. That's just the way variance is.
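A quick illustration with toy numbers:

```python
import numpy as np

# Same four people, same heights, different units:
inches = np.array([60.0, 65.0, 70.0, 75.0])
cm = inches * 2.54

print(inches.var())             # 31.25  (square inches)
print(cm.var())                 # ~201.6 (square centimeters)
print(cm.var() / inches.var())  # 6.4516 == 2.54**2
```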
I found this explanation good:
https://dynomight.net/heritability/
Amazing article! Though reading it leaves me wondering: why on earth would we care about this ratio, given all its weird pathologies? It seems like just considering genetic variance alone, rather than heritability, would be the way to go in 99% of situations.
Heritability isn't for comparing human populations, it's for breeding wheat. Most of the issues go away once you can control the environment and plant fields of genetically identical plants.
Heritability can be tricky to interpret (quantitative genetics isn't the most intuitive field), but that doesn't diminish its usefulness. Yes, heritability has its odd "pathologies" (although most of them are special cases largely irrelevant in most populations), but it is good for its intended purposes, like predicting response to selection, designing GWAS, and so on, whereas raw genetic variance alone provides little actionable insight.
Maybe the most intuitive way of conceiving of heritability is as a "signal-to-noise ratio" estimator for geneticists. Just measuring the signal is less useful.
I agree. I don't understand why people care so much about heritability because it is not interpretable. Also, what we actually care about is the degree to which someone is able to intentionally change their outcomes by manipulating their own environment. Right? Heritability doesn't capture that notion at all.
>Also, what we actually care about is the degree to which someone is able to intentionally change their outcomes by manipulating their own environment. Right?
Heritability is a tool for geneticists, so, indeed, it is not very useful to suggest environmental manipulations!
What are the 3 most important applications of heritability? I understand response to selection; that makes sense (though it doesn't seem that relevant to the study of human traits like educational attainment).
But I don't understand the other reasons to care about heritability. Maybe it hints at whether the root cause of a trait is genetic or environmental...but, still, that provides little insight about treatments, which is what we actually care about. And it doesn't actually directly tell you whether genes are the root cause at all. Maybe it at least hints at it? Maybe it can tell you whether GWAS is worth pursuing at all? I don't know.
Can you therefore explain to me the 3 main reasons people care about heritability?
It is normal for geneticists to care about heritability, and below are three examples of applications:
- in agriculture, for animal or plant breeding, heritability is directly linked to the response to selection (see the sketch after this list)
- in medicine, GWAS are extremely expensive studies used to identify genes involved in a quantitative phenotype. If heritability is very low, no genes will be detected.
- in evolutionary ecology, it is interesting to determine whether a trait relevant to environmental change (for example, resistance to heat) has substantial heritability or not. High heritability means the species has a better chance of adapting to the change.
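Here's that first application in numbers, via the breeder's equation (toy figures, purely illustrative):

```python
# Breeder's equation: response to selection R equals heritability times the
# selection differential S. All numbers below are made up.
h2 = 0.4    # assumed heritability of, say, grain yield
S = 10.0    # selected parents average 10 units above the population mean
R = h2 * S  # expected shift of the offspring mean
print(R)    # 4.0 -- the next generation moves 40% of the way toward the parents
```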
But heritability is in my opinion almost entirely irrelevant outside of genetics (with one counter-example below), and is extremely easy to misinterpret. The most frequent falsehoods around heritability are 'if a trait is strongly heritable, then environmental changes cannot change the trait' and 'if a trait is strongly heritable within a group, then differences between groups are due to genetic differences'.
My personal counter-example to the general uselessness of heritability outside genetics studies would be that heritability studies showed that most traits in humans (height, weight, personality) have little shared-environment effect, i.e., little parental effect is detected on the adult children of average families (this is not true, obviously, if the children are mistreated). Thus changes must be implemented at the societal, not familial, level.
Emma probably can give you a better answer than I can, but here you go...
1. The people who are most obsessed with heritability are the HBD folks. If your philosophy regards humans as having superior vs inferior phenotypes, and holds that certain arbitrary groups of humans (i.e., races) have superior phenotypes vs other groups, then you're motivated to use allele frequencies and heritability to give your opinions an aura of scientific validity.
2. Many diseases have a genetic origin, and certain populations have higher rates of the alleles that cause these diseases, and thus higher risks of developing them. For instance, autosomal recessive diseases like Cystic Fibrosis (predominant in European populations, but not Finns or Sardinians), Sickle Cell Disease (common in populations where malaria is endemic: people of African, Mediterranean, Middle Eastern, and Indian ancestry), and Tay-Sachs Disease (most prevalent in Ashkenazi Jewish, French-Canadian, and Cajun populations). Carrier screening can test whether the parents carry these recessive genes, but both parents must be tested to see if they carry a mutation in the same autosomal recessive gene. If both are carriers, each child has: a 25% chance of inheriting two mutated copies (affected); a 50% chance of being a carrier (one copy); and a 25% chance of inheriting two normal copies (a quick enumeration of that arithmetic follows this list). So, heritability is important in these circumstances.
3. The effectiveness of some drugs can depend on the presence or absence of certain genes. I don't know of any examples off the top of my head, but some drugs are more likely to work well on some populations than others (for instance, most big Pharma RCTs have been done using European populations, and some of the meds that seem to work well for Europeans may not work as well for other populations).
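And the 25/50/25 arithmetic from point 2, spelled out as a quick enumeration:

```python
from itertools import product

# Two carrier parents, genotype "Aa", each passing one allele at random.
children = ["".join(pair) for pair in product("Aa", repeat=2)]  # AA, Aa, aA, aa

affected = sum(c == "aa" for c in children) / 4            # two mutated copies
carrier = sum(set(c) == {"A", "a"} for c in children) / 4  # one copy
clear = sum(c == "AA" for c in children) / 4               # zero copies
print(affected, carrier, clear)                            # 0.25 0.5 0.25
```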
The Alzheimer mouse review on Friday reminded me of Thrifty Printing Inc, whose business model of online photo-processing for cornershops didn't take off, apparently, so naturally the company pivoted, to drug development for such diseases as Alzheimer's.
But Anavex Life Sciences, that's the new name, doesn't seem much respected by the stock market. Was the pivot too ambitious? Or perhaps Anavex is seen as unserious by the professionals after hiring a runway model with no business experience as Director of Business Development and Investor Relations (Nell Rebowe). Or it's the hairstyle of the CEO (Dr. Christopher Missling). Or his presentation skills (that last one actually has some merit).
A recent Economist article, titled "The Alzheimer's drug pipeline is healthier than you might think", did not even bother to mention that Anavex has a pill under evaluation by the European Medicines Agency. Yet what will happen to the stock if it is approved in six months or so?
On the other hand, withdrawal or rejection would most likely send the stock crashing (not "certainly," because another drug's phase 2 trial for schizophrenia will report soon). To me, Anavex's pill appears disease-modifying (though not a cure) and safe (unlike the anti-amyloid drugs), but certain language in the company's latest 10-Q filing does strike me as ominous. Also, shareholders are up against high short interest, including Martin Shkreli, who has called Anavex "another retail trap" (referring to Cassava Sciences, popular with retail investors, whose Alzheimer's drug failed its phase 3 trial; that company's history involves fraud allegations), and whose strategy of shorting Alzheimer's biotechs just scored another victory when INmune Bio's phase 2 trial failed to convince.
Anavex's pill is meant to stimulate autophagy and is not related to the amyloid hypothesis, as far as I know (I don't know much, but was still tempted to complain on Friday when someone called Leqembi a "proof of principle" for the hypothesis --- if people try for decades to develop drugs on the basis of a paradigm, wouldn't you expect them, even if they are mistaken, to come up with something eventually that shows an accidental effect, via some other mechanism?).
I assumed this was some kind of weird metaphor, but I looked it up and every word is true. Please feel free to post more content like this.
Most of what I currently know about biotech markets is already contained or hinted at in that post --- I definitely got lucky in terms of weirdness factor when I decided to look into Anavex. The most adjacent other of the (few) companies I try to follow is Hims & Hers, whose stock chart you displayed in the Ozempocalypse post in March to illustrate the statement "Let's take a second to think of the real victims here: telehealth company stockholders": I was thinking about posting in an Open Thread once their share price climbs to new records, which I thought might be funny, just a few months after the Ozempocalypse sent it down by half; but it has missed the February high by a whisker several times and is now again 25 percent down.
Two things in recent Irish news coverage, and the fun one first:
(1) "Go fly a kite!" may be good advice if you want to generate energy
https://www.rte.ie/news/ireland/2025/0714/1523349-kite-flying-electricity/
"A project in Co Mayo is generating renewable electricity through the flying of kites, which its operator has described as a potential "game changer" in the wind energy sector.
...The site, which is the first designated airborne wind energy test site in the world, is being operated by Kitepower, a zero-emissions energy solutions spin-off from Delft University in the Netherlands.
Kitepower's system employs a yo-yo effect, where a kite measuring 60 sq m is flown at altitudes of up to 425m, attached to a rope that is wound around a drum - which itself is connected to a ground-based generator.
The kites can generate 2.5 to 4 tonnes of force on the tether.
The pull from this force then rotates the ground-based drum at a high speed.
This rotation then generates electricity that can be stored in a battery system for deployment wherever and whenever it is needed.
The kites are flown using the knowledge and skills of kitesurfing professionals, combined with a highly specialised computerised GPS-guided steering system.
They fly upwards repeatedly in a figure of eight pattern for periods of 45 seconds.
The flight pattern is important because it forces the kites to behave like sails on a boat, maximising the pull of the wind to increase speed so electricity can be generated.
After 45 seconds, the kites are levelled up so that the pull from the wind is momentarily minimised.
This enables the tether to be wound back in, using only a fraction of the electricity generated when it was being spun out.
The result is a net gain in renewable power at the simple cost of flying a kite.
Then the cycle is repeated, again and again, potentially for hours on end."
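Rough arithmetic for that cycle, for the curious: only the tether force and the 45-second phase come from the article; every other number below is my own assumption.

```python
# Back-of-the-envelope for the yo-yo cycle described above.
g = 9.81
force_out = 3000 * g                  # ~3 tonnes of tether tension (N), per the article
speed_out = 2.0                       # assumed reel-out speed (m/s)
t_out = 45.0                          # reel-out phase (s), per the article

force_in = 300 * g                    # assumed tension on the depowered kite (N)
speed_in = 6.0                        # assumed faster reel-in speed (m/s)
t_in = t_out * speed_out / speed_in   # wind back the same length of tether

energy_out = force_out * speed_out * t_out  # J generated while unspooling
energy_in = force_in * speed_in * t_in      # J spent winding back in

net_kwh = (energy_out - energy_in) / 3.6e6
print(f"net ~ {net_kwh:.2f} kWh per ~{t_out + t_in:.0f}-second cycle")  # ~0.66 kWh / 60 s
```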
(2) Well, looks like AI is *already* taking er jerbs (at least if you're a graduate in finance):
https://www.rte.ie/news/analysis-and-comment/2025/0713/1523109-ai-job-losses-ireland/
"Also this week, the latest 'Employment Monitor' from recruitment firm Morgan McKinley Ireland found notable reductions in graduate hiring by major firms in the accountancy and finance sectors because of the adoption of AI.
And on Thursday, AIB announced a major AI rollout for staff in conjunction with Microsoft Ireland, sparking concerns from trade unions.
Morgan McKinley Ireland's Employment Monitor for the second quarter of the year was published on Thursday.
The recruitment firm said that the standout development of the quarter was the significant impact of AI and automation, particularly within the accountancy and finance sectors.
"The notable reduction in graduate hiring by major firms, driven by AI capabilities, highlights potential challenges ahead," the report found.
"Companies are increasingly leveraging AI capabilities to automate routine tasks such as accounts payable, accounts receivable, credit control, and payroll."
...[Allied Irish Bank] announced a new artificial intelligence rollout for staff in conjunction with Microsoft Ireland on Thursday.
The bank said the new tools will reduce time spent on repetitive tasks, freeing up employees for higher-value work.
The plan will involve the widespread deployment of Microsoft 365 Copilot, embedding AI into everyday tools like Outlook, Word, Excel, Teams, and PowerPoint.
...Last month, the Chief Executive of AIB Colin Hunt took part in a panel discussion at a Bloomberg event in Dublin.
Asked what impact AI will have on staffing numbers at the bank over the next five years, Mr Hunt said it may lead to a small reduction in net headcount.
"I do think that there are certain manual processes that we do now that will be done by AI in the future, and probably net headcount will be broadly stable with a slight downward bias maybe," Mr Hunt said."
Who knew that the real knife-ears were the ones we created along the way! (See "The Rings of Power": "Elf ships on our shore; Elf workers taking your trades. Workers who don't sleep, don't tire, don't age. I say, the Queen's either blind or an Elf lover, just like her father.")
Is this not "fourth power of wind speed"? Because that's the issue, if you want "renewable energy" to stop generating more greenhouse gas (via the necessity of "keeping the grid stable" through spikes and dips -- the natural gas required for this is significantly more than "if we didn't use wind power at all.")
> the natural gas required for this is significantly more than "if we didn't use wind power at all."
Citation needed. And this is off grid fooling about.
That story about graduates not finding jobs has been repeated across British papers for the last few months. If it is a trend, then AI does not bode well.
Mostly it's worrying to think about what all these graduates were being hired to do. I can see why graphic design would be toast, but the only other things AI can seemingly do are "google this and write about it," "take this information and reformat it," and "spitball arbitrarily about this topic."
Maybe more jobs are bullshit jobs than people had previously realised? In the sense that nothing would go wrong if the whole job disappeared.
Except for the whole mass unemployment thing and the fact that few if any societies are prepared to handle that.
What do you guys think about my sick invention?
https://x.com/SailAcross_/status/1858697261888192657
Pitfalls, criticism welcome.
Isn't this basically a linear actuator (as you describe as not solving the problem), albeit embedded in rubber? I suspect Melvin is right, that this will be too weak. Fundamentally, at reasonable current levels, magnetic forces in smallish-human-muscle-sized devices are weaker than we need, so, yes, we make motors that spin, reusing the same volume many times, and put in mechanical transmissions that increase torque while decreasing number of revolutions.
With "invention", do you mean you built a working prototype, or modeled one in some detail?
I think some version of this will work, but I suspect it will be super weak and not very durable.
Sorry to be laughing about the tooth claims, because vision problems are no joke, but yeah. There's always something.
It's very unlikely the fancy tooth treatment is to blame, but it's not impossible, and it's only when you have real users out in the wild, as it were, that these one-in-a-million things crop up. Now we know the problems Big Pharma and the FDA face!
I don't know what you find so funny. If this was a normal FDA trial, they would have tested it in some small group of people first, nothing would have happened, they would have approved it and done post-marketing surveillance, and someone would have (hopefully) detected the side effect. Now . . . it got tested in a small group of people first, nothing happened, it got sold, there's unofficial "post-marketing surveillance" by users, and someone has (possibly) detected a side effect. What's the difference that makes one process dignified and the other funny?
Well, I am black-hearted, so it is funny to me.
"This new discovery means no more cavities! Mothers will pass on the tooth pixies to their babies with a kiss! Dentists will go out of business!"
*blink*
*blink*
*blink*
"Oh, no, the tooth pixies are making me blind!"
I think no prediction market would have considered "will the tooth pixies make me blind?" as an option had anyone set up a market.
It may well be that the tooth pixies have nothing to do with that unfortunate person's problem. It may also be that they do. It's only when any new drug or process is turned loose on more than the small test group that such anomalies crop up. Biology, and nature, are not controllable in the way we desire. We can screw nature up, sure. We can control it to an extent. But every time we are so sure we know precisely what to do and how to do it, nature comes back at us.
Upon actually reading the link, oh boy.
"Yeah, I've been long-term taking dodgy drugs but it was definitely the tooth pixies what did my eyesight in".
You don't think perchance maybe it was the kratom habit (plus whatever else he was dosing himself up with, because druggies* never do just the one thing) that might be responsible here?
* Yes if you're taking stuff like this, you are a druggie. "It's a health supplement, not a drug!" If it's got a street name, it's a drug.
https://www.drugs.com/kratom.html#:~:text=Kratom's%20common%20or%20street%20names,leaf%2C%20nauclea%2C%20Nauclea%20speciosa.
"Kratom’s common or street names are Thang, Krypton, Kakuam, Thom, Ketum, Biak-Biak (common name in Thailand), Mitragyna speciosa, mitragynine extract, biak-biak, cratom, gratom, ithang, kakuam, katawn, kedemba, ketum, krathom, krton, mambog, madat, Maeng da leaf, nauclea, Nauclea speciosa."
If manufacturers are selling it with spicy names, it's a drug not a supplement:
"The FDA continues to seize adulterated dietary supplements containing kratom. Seized brand names have included: Boosted Kratom, The Devil’s Kratom, Terra Kratom, Sembuh, Bio Botanical, and El Diablo."
Caveat, I have not been following Lumina or their testing very closely at all, and this is not exactly my topic of expertise, only adjacent to it.
Phase III trials can be fairly big (that's why they are expensive), so if this is a 1-in-1000 adverse event (AE), it's possible it could have been caught. When you know you have followed N enrolled patients for x amount of time, it is a bit easier to evaluate whether a report of a suspected AE is truly a rare AE or possibly the first report of a not-so-uncommon one. I think this is the main difference compared to if this had happened in a trial: there would be a more informative context in which to evaluate whether it is rare bad luck unrelated to the product, a rare AE related to the product (and, say, the patient's unique genetics), or the first sign of a serious problem with the product.
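For scale, the chance of catching a rare AE grows quickly with trial size (assuming independent patients; the trial sizes are illustrative):

```python
# Probability of seeing at least one case of a 1-in-1000 AE among n patients.
p = 1 / 1000
for n in (100, 1000, 3000):
    print(n, f"{1 - (1 - p) ** n:.0%}")
# 100 -> 10%, 1000 -> 63%, 3000 -> 95%
```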
Also, if a rare AE takes place during a trial or with a novel prescription med, reporting is not dependent on the user having a reaction, being savvy enough about biochemical pathways to realize the product may be to blame, and posting about their experiences on the internet (where they may or may not provide all the relevant medical background). If the product is a causative factor, the chain of events that happened here would have been a kind of weird coincidence. If the product is not ultimately causative, with larger trials having taken place, it could be easier to say so.
Phase IV and post-marketing surveillance come in many shapes, so it is true that if a serious side effect presents after the product has been launched, the practical difference may not be *that* great. However, reports of suspected AEs are supposed to be centrally collected, so hopefully it would be a bit easier to evaluate the report(s) and contextualize them. But ultimately, yes, it is kind of definitionally difficult to catch rare AEs, or AEs that take a long time to manifest, until after the product has been launched and used by the wider population.
Exactly this.
Without centralized and organized collection of data this becomes loosely collected anecdotes. The type of stories that anti-vax and snake oil narratives thrive on. We need less of this and more of rigorous science.
If the FDA decided to restrict itself purely to safety studies whose goal was to discover and catalogue what the safety risks of a drug are, and then it were up to doctors, in conjunction with their patients, to decide if the efficacy was worth the risks (and costs), then nearly all of my issues with the FDA would go away. But they don't merely catalogue risks. They also are the ones who decide what everyone's risk-to-reward function should be, and that's not a task that any government agency should be doing.
>They also are the ones who decide what everyone's risk-to-reward function should be, and that's not a task that any government agency should be doing.
Are there any limits to this statement? Taken at face value, that would roll back decades if not centuries of safety regulation across the board, to disastrous effect on public health. Asbestos? It's cheap, effective, and in 20 years it will be someone else's problem. Wearing seatbelts? If you feel like it. Installing seatbelts to begin with? Nah, that could be hundreds of dollars of profit instead. Building codes? I'll let my cousin do the electric wiring, he always wanted to try his hand at it. To be continued ad infinitum.
The primary limit, and the difference between my statement and your examples, is impact on the individual vs impact on the public. There is a big difference between regulations that limit what I can do to myself and regulations about what I can do to others. Negative externalities are a place where I think the government has a role. Asbestos should probably be banned/highly regulated not because individuals shouldn't be allowed to take risks but because its use often impacts third parties who don't get a say. It being illegal for me to take an experimental drug when I am aware of the risks? Why? Why is the government allowed to tell me that it's too dangerous?
>Why is the government allowed to tell me that it's too dangerous?
Because the society you live in, as a whole and over time, converged on the decision to give your government that authority. Your entire existence as a citizen is embedded in a (practically, mostly) invisible web of such decisions. There's always push and pull on the margins which you can engage in; if and when positions like yours become societal consensus, good for you, then you'll get what you want. Until then, you'll have to live with the status quo or vote with your feet.
I'm not confused about how we ended up here. I'm asking about a from-first-principles justification for the status quo, with the implication being "there isn't one, and therefore I argue that the status quo is bad".
You seem to be making an argument (feel free to correct if this is not the case) that the outcome I want is not possible without also throwing out the baby (aka, it's not possible to allow people to take experimental drugs without also allowing them to make asbestos baby blankets). To whatever extent you are making that argument: I disagree. I think we can both allow personal liberty while regulating externalities.
I think the first-principles argument for consumer protection regulation is "consumers cannot reasonably be expected to learn about the risks of everything they buy." There are too many things to research, and researching product safety and effectiveness is often difficult or requires specialized knowledge. Therefore, we outsource those tasks to a dedicated government agency that has the necessary skills, which then boils it down to a simple yes/no decision that requires no effort from consumers to follow.
Like, in a world populated by Homo Economicus, where everyone perfectly comprehends the risk and reward profiles of every drug they buy, the FDA would not be necessary, but that's not the world we live in. We live in a world where people buy random herbal supplements from a guy on Tiktok yelling about toxins in the vaccines. Some amount of paternalism really is warranted.
Google tells me that "first principles" are "the fundamental concepts or assumptions on which a theory, system, or method is based." If you agree with that definition, then it should be clear that the FDA has first principles: The laws under which it was founded and now operates. If you continue asking for the principles of how those laws are justified and so on, you'll eventually bottom out with "because that's how societies organize themselves to function at scale" and there is, I believe, no further useful justification; similar to the anthropic principle, you don't see many societies that do not do so because those tend to cease existing sooner rather than later. You're not going to find any cosmic truth underneath it, any more than you'll find it underneath your own assertion that you should be allowed to take any drug you want. That, in my humble opinion, is your first principle or lack thereof, and it applies equally to you and the FDA.
>You seem to be making an argument (feel free to correct if this is not the case) that the outcome I want is not possible without also throwing out the baby
I actually argued the opposite. I wrote "there is always push and pull at the margins", which means you can seek to change things that fall outside the broad consensus while others will seek to keep them.
> I think we can both allow personal liberty while regulating externalities.
I agree. That statement is just quite different from the one I disagreed with in my first reply, isn't it?
Reading about the twin studies... twins are similar not only because of genes but also because of a shared initial history in the womb. And not all twins are the same; it depends on the developmental stage at which they split into two different embryos (see "mirror twins").
CTRL+F the section starting "2: Fraternal And Identical Twins Have Equally Concordant Uterine Environments" at https://www.astralcodexten.com/p/missing-heritability-much-more-than
If you like road trips and you still believe in the American Dream, join us in April 2026 for The Great Route 66 Centennial Convergence. We're driving from Chicago to LA over the course of a few weeks. There will be never-before-seen mysteries, side quests, and prizes. It costs nothing. Find more details on Facebook, Instagram, and Youtube.
Is there a website or mailing list?
No but you can ask me for updates, my username at gmail.
Interviewees and video co-hosts wanted. We won't just be talking about hot rods and midcentury architecture. We'll be discussing the American Dream, capitalism, and human progress. It might even be a paid gig if you're charismatic, pleasing to look at, or slightly famous.
Put me in coach
Friendly warning: replying "put me in coach" to an advertisement for a cross-country trip might have different results than you expected.
Right that’s an important comma to forget
> It might even be a paid gig if you're charismatic, pleasing to look at, or slightly famous
Dammit.
I'm a fresh grad about to start a Data Science-esque role for a big UK insurer. Never having worked in corporate environment, I'd like to ask the ACX community for sage advice on questions such as:
How can I learn the most?
How can I negotiate my salary throughout?
How do I handle 'office politics'?
or any questions someone in my position should think about.
Thanks in advance for your wisdom!
If your office politics involve poisoning each other, just quit. If poisoning each other has been banned as being "not within the spirit of office politics" -- just quit.
I'm confused. They should quit if there's poisoning, and also if the poisoning is banned?
Yes. Having enough poisonings that "it's been banned" implies that your workplace has gone beyond "toxic" into some strange new country.
Ah Makes sense, cheers
This took me a long time to learn via experience, as it's not necessarily intuitive to rational people.
Aside from complying with things like critical regulatory requirements which might get you arrested for violating, you really only have one real job at work:
Make your boss [1] happy.
That is often very difficult for rational people leaving the merit-based school system and entering the social hierarchy of the workplace to fully grasp. You may have felt that school was your "job" and that your teachers were your "bosses," but this wasn't actually the case; as a student, you were a *customer buying a product,* which meant you had an absolute, enforceable *right* to receive clear expectations and an objective measure of your performance *as part of the product you were purchasing.*
In the case of your employer, they are the customer purchasing your time and attention... in the way they want you to provide those things. Your paycheck and career aren't going to be based on you scrupulously, objectively performing all the duties of your job description or doing what's best for the company as a whole. Practically speaking, there's no way to really track that, and people who are determined not to see proof of your good work won't look at the evidence anyway. The people making decisions about your role at the company are going to be making them based on how they *FEEL* about you, which means your first priority needs to be making your boss *FEEL* good about you.
Keep.
Your Boss.
Happy.
So if your boss is very attached to a dumb process that you just KNOW could be vastly improved, the *real* job of your job - the one which may get you promoted or saved from layoffs - is to make them *FEEL* good by surrendering to their dumb process. If your boss is overly-emotional and sensitive to criticism/rejection and perceives suggestions/corrections/warnings as personal attacks, your *real* job at all times is to make them *FEEL* good by avoiding triggering their negative emotions, even if that means allowing them to damage the company.
And remember that your grandboss and above haven't fixed or fired your crappy boss because there's something about your boss which makes *them* happy enough to want to keep your boss around. Don't assume *they* want to fix problems for the good of the company, either.
Surrender to the stupidity. That's what your customer wants, and that's your job now.
[1] "Boss" doesn't necessarily refer to your direct supervisor (although it probably will), but rather the persons/people above you who have the power to promote or fire you, including your grandboss, etc.
Ha!
That's sort of true, but there is a lot of hubris in starting a new job or career and thinking that you understand everything immediately and that everything you don't approve of is stupid.
OP, if there is something that you don't agree with, I suggest the path of detached indifference, not judgment; people can smell smug superiority a mile away.
Curiosity is fine too. "Hey, I was wondering why we do things this way instead of this other way". Asking those questions once in a while is a great way to learn how decisions are really made in the company.
Asking the right questions enhances your "teachable youngster" status, while carrying the possibility that one day you'll actually make a good and workable suggestion and get some of the credit for the actual change.
true!
All of that is true, and let me add to it.
Keep in mind what your trajectory over the next few years should be. Ideally you'll go from a teachable youngster who is easy to work with and only needs to be told anything once, to a junior colleague who can be trusted with small responsibilities, to an actual peer who knows the ropes and can be trusted with big challenges.
But right now you are _just_ the teachable youngster, and you need to embrace that role. Be conspicuously eager to learn and respectful of advice from your older colleagues. Do anything you are told to do, or even anything it is suggested you might do, unless it is literally illegal. And if you haven't been given anything specific to do and everyone seems really busy, at the very least don't get in the way.
You may find something to take to heart in this essay:
https://www.benning.army.mil/infantry/199th/ocs/content/pdf/Message%20to%20Garcia.pdf
Office politics doesn't need to be tricky, especially as a junior; it's just the effect of companies being staffed with fallible flesh-and-blood human beings. Focus on making sure that people know you, and they like you, and allow that to be more important than being right all the time, and you'll probably do fine.
A few things I've learned from experience:
Similar to one of Wasteland Firebird's points: long term you'll do better if you hop sideways and upward between employers, as opposed to spending years and years with the same one. This is because with each job change you should get a step up in position and salary, as well as a wider experience of the industry (and you don't necessarily have to stay with insurance companies - data science skills are fairly transferable). But of course you shouldn't overdo the job-hopping frequency, or potential employers will wonder why you can't stay in any one place for long.
Also, people tend to be better at office politics in inverse proportion to their technical ability: A weak performer needs to be crafty at office politics to offset their technical shortcomings, and conversely a tech guru, recognised as such, can often afford to disdain the politics.
When starting a new job, you should be suspicious of a colleague who seems too solicitous of your welfare when it isn't their assigned duty. It's similar to prison, although I've never had any experience of that, in that shortly after you arrive some apparently helpful fellow convict will sidle up to you and offer to provide anything you need and show you the ropes and so on. But ultimately it's for their benefit not yours!
When a company downsizes, the bean counters take zero account of the competence or otherwise of those being laid off.
> When starting a new job, you should be suspicious of a colleague who seems too solicitous of your welfare when it isn't their assigned duty.
I often do that
> But ultimately it's for their benefit not yours!
My "benefit" is that it makes the new guy feel welcome. Also talking to people is more fun than actual work. Also I guess it also makes my own boss and grand-boss happy when the new people come to me with their problems and not to them (boss and grand-boss).
Yeah, I feel like some of this advice is too cynical for many of the places I've worked at.
I agree. Jeez, most people just are not particularly sneaky and evil. Pretty much all the people I've met who have seemed unusually kind and helpful early on have turned out to be people who actually are just unusually kind and helpful.
What exactly are the risks of such an overly helpful colleague?
Also, is the last paragraph complete? They don't look at the competence, so what do they look at?
Hypothetically, they're setting you up to be against people who aren't actually a hazard to you.
Learning: Most of what I've learned, I learned inevitably on the job, or from switching jobs a lot. I haven't done much work in my spare time to keep up with the industry. It seems kind of silly to do that because you never really know what's going to be expected of you in your next job anyway.
Salary: The best way to get raises is just to take whatever you get, at first, and always be looking around for something that pays more. You don't even necessarily have to take the job that pays more if you don't want, you can get an offer and bring it back to show it to your current employer, and ask for a raise then. If you feel weird asking for a raise, don't ask for one. Just tell them you're leaving, explain why, and let them get the idea to make you a counteroffer. Eventually, you'll get to where you earn and/or have "enough" money. Figure out what that means for yourself. Then, the good part comes. Keep changing jobs, but this time, only change into jobs that make you happier! Sooner or later you'll get into a job where you simply can't realistically expect to find anything better. That's where I am now. I am hoping this will be the last job I ever have to take seriously.
Office politics: Even as a nerdy person who used to have no social skills, and who is probably, like many of us here, what might probably be diagnosed as a "person on the autism spectrum," I have to say, I actually really enjoy office politics. "Soft power" is a big thing. A lot of times you don't have any authority to actually make people do things, but you can get people to do things anyway. Ideally, you can figure out how to express the thing you want to do in terms people will appreciate, and then they'll join you in your quest. That requires learning real empathy, which is a difficult thing in itself. But when that doesn't work, sometimes there are little clever tricks you can pull that let you get your way. For example, there's a project I've been wanting to do, but no one is allotting my team any time to do it. So instead, I've looked around for other similar projects that were allotted time. I've treated those projects like they are higher priority than they actually are, and seen them through to completion. After a couple years, my dream project is 80% complete, and we haven't even technically worked on it at all yet.
Another fun trick I've used multiple times: You need to do something. You want to do it in a simple way. People with power over the project insist that the project must be done in a very complicated way that will take far too much time. Quietly complete the project on time, in the simple way. Get it as far as you can take it, so all they have to do is say, "Fine, good enough, ship it." The choice you've now forced on them is to do it your way, or to deliver it late.
Me and Lord Hobo Boom Sauce have been ruminating tonight about having an ACX site where we put up photos of ourselves as kids. Seems there would be very little risk of doxxing ourselves, so long as it's pix of us as *young* kids. And it would nudge us all a bit away from the godawful online illusion that the entity we are talking to via texts consists of a funky little fake name and a cluster of, like, 5 opinions. Anyone like this idea, or have I just gone down 11 notches in everybody's respect just for suggesting it?
Even though it looks like it won't happen, I think it would help for sure. When I was 16-22 years old (I'm 25 now), I spent a lot of time in facebook meme groups and made lots of online friends that way. It being facebook, it was less common for people to use alt accounts, so we got to know each other's faces and video chatted often. I've even been able to meet a few of them in person on multiple occasions.
As a result, I haven't been able to think of anyone online as just a two-dimensional fake name on a screen since, and until reading your comment, I forgot that people do.
When I was on Facebook someone posted high school yearbook photos of me and 3 of my male classmates. My wife couldn't tell which one was mine. Yep, that's what guys looked like that year. We all looked equally dorky. Same dopey haircut and horn-rimmed glasses. For a bit of context, I was 27 when we first met.
> And it would nudge us all a bit away from the godawful online illusion that the entity we are talking to via texts consist of a funky little fake name and a cluster of, like, 5 opinions.
How is that godawful? That's the entire point of talking to strangers online. If I wanted to talk to "real people", I'd go outside.
<That's the entire point of talking to strangers online. If I wanted to talk to "real people", I'd go outside.
Look, this clearly is not going to happen -- other respondents' concerns about doxxing themselves are substantial, and actually I had not even thought about age progression, but now I see the problem with my idea. But regarding your point: the photos would not have turned posters here into "real people," just nudged your sense of them a couple centimeters in that direction. And, assuming it had that effect on you, the beneficiary would not be you; it would be the people you write responses to, who might then get responses that are a bit more thoughtful and considerate.
I'm pretty sure that would piss me off more, to be honest. I really do not like children.
I would be less sure of the young pictures not being able to be aged up accurately enough to be identifiable.
I always assumed your photo here was of the actual you. It isn't?
To paraphrase Lincoln's joke, if I were using a fake photo, would I be using this one? It is, but I'm not worried about the photo being used to find my real name.
Seems like that would summon pedophiles.
I'm generally very dubious about putting up any kind of identifiable information because someone out there will pick up on it and try to identify real world you. I know that sounds paranoid, and I don't really have much to lose if someone figures out "oh Deiseach is really so-and-so" (apart from my jealously-guarded privacy) but for people who *do* have something to lose, I would be way more careful.
People *have* gone after Scott and those associated with him, see Sneerclub and the infamous Cade Metz story. It's not beyond the bounds of possibility that someone with a grudge about rationalists/Scott/SSC/ACX/TheMotte/LessWrong/you name it would latch on to anything like photos and try to work out who you are in real life and then send nasty little emails to your employer about "were you aware that Eremolalos is involved with right-wing fascists and racists?" (the HBD/IQ stuff is catnip to people with axes to grind).
I'm not saying "don't do it", just "be very very sure about the level of security".
I'm mildly intrigued, but I don't think I have any pictures of myself from a young age. Presumably my parents had some, but I think my mother's scant few pictures of me as a little one went onto the "toss" pile when I was sorting her assets after her passing. I don't mind sharing more recent pictures, though, from when I lived somewhere else entirely (an attitude that can probably be surmised from my avatar).
An essay on the transformation of academic tutors into Turing cops, marking into an imitation game, and AI-generated homework into the trigger for the technological singularity:
https://open.substack.com/pub/vincentl3/p/a-modest-software-patch-for-preventing?r=b9rct&utm_medium=ios
Maybe we'll see a return to the old-fashioned system of verbal examinations, whereby the student links via Zoom to an AI interviewer, which then fires questions at the student, who is required to make immediate extempore replies without referring to notes.
Current fashionable buzzword in universities for this kind of thing is “secure assessment,” betraying the academy’s very own AI control problem.
Hey Vincent, I read it, and have a non-ironic suggestion:
-It is probably possible to train an AI to be an excellent judge of whether there is AI contribution to an essay, and how much, and whether it's in the main idea, the overall structure, or individual sentences or paragraphs (a toy sketch of the idea follows these two points). AIs -- which overall fucking suck, IMHO -- are pattern-identification geniuses. They are better than professionals at identifying melanoma, various retinal diseases, etc., from scans. You might need to hire somebody to do a bit of extra training to improve an LLM's ability to recognize these things, but I'm pretty sure it would be possible.
-OK, so then tell the students that if they submit something that scores more than a certain low percent on AI content, you will not read the piece and it will be graded by AI. I actually think that would discourage people quite a bit from turning in AI-contaminated work. I had a professor who would grade late essays, but did not put any comments on them. I really wanted to get comments, so never turned in things late to that prof, even though it would not have harmed my grade, and I was in general actually fairly bad about turning in papers late.
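Here's the toy sketch promised above, just to make the classifier idea concrete (two strings stand in for what would need to be large, carefully matched corpora; scikit-learn assumed):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: in reality you'd need thousands of labeled essays.
texts = ["an essay a student wrote by hand ...", "an essay ChatGPT produced ..."]
labels = [0, 1]  # 0 = human, 1 = AI

# Fit a simple text classifier and score a new piece of writing.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)
print(detector.predict_proba(["some new essay to score"])[0])
```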
There are existing AI detectors that check for this, but the current models often give false negatives or wildly divergent responses. These could be improved of course, so I do agree with your suggestion. I guess the loophole then becomes that students could ask their AIs to write in ways that avoid the telltale signs. So we would still get an AI arms race of sorts.
I don't know if simply "asking" the AI to write without telltale signs works, but this looks like it does:
https://www.reddit.com/r/LocalLLaMA/comments/1lnrd1t/you_can_just_rl_a_model_to_beat_any_ai_detectors/
I went to the link but it's so long I couldn't take the time. But wouldn't it be possible to just keep updating the AI training so that it recognizes the products of systems like the one described at the Reddit link? Seems like you could take a bunch of AI generated essays, run half of them through the Reddit hide-the-AI algorithm or whatever it is, and then use that set to train the AI to recognize AI disguised this way?
Okay, the idea of the post I linked is that you can take the "% AI" score from these "detectors," and work to minimize that.
As someone pointed out in the comments, "Ah, its funny people rediscovering GANs." Generative Adversarial Networks, which were very popular (and the state-of-the-art) a few years ago, before diffusion models: you might have heard of "This Person Does Not Exist," for example.
The idea behind them is precisely what you're suggesting: that you can train a detector and generator in parallel, and recursively improve each one by using the other's outputs.
So yeah, your scheme should work for a few iterations, but I think there's an asymmetry favoring the generator, since perfect imitation of human writing is possible. Even if the generators don't QUITE reach perfection, you can drive the detection rate low enough (or equivalently, the false positive rate high enough) that your detectors become unusable in comparison.
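For the curious, the adversarial loop in miniature, on toy 1-D data rather than text (a sketch assuming PyTorch, not anyone's actual detector):

```python
import torch
import torch.nn as nn

# "Human writing" is just a Gaussian here; the point is the training dynamic.
real_dist = torch.distributions.Normal(3.0, 1.0)
G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))  # detector
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = real_dist.sample((64, 1))
    fake = G(torch.randn(64, 1))

    # Detector step: score real samples high, generated samples low.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the freshly updated detector.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```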
Wait, though, I had an idea for a sort of watermark for AI prose. So somebody comes up with, say, 10000 things that happen infrequently in human-produced prose. Let's call them -- bread crumbs, after Hansel and Gretel. They'd be things like, say, the 3rd word of the 8th sentence starts with a c. So these would not be things that are *very* rare, because the very rare things will be things that sound or look a bit odd to the reader, or would be hard to work into the prose. These would be easy to embed in the prose, and the AI would have a system prompt to embed as many of these as possible without making serious changes in what it would have written anyway. What we'd be interested in would be the ratio of the total number of bread crumbs in a piece to the piece's word count. So for a given 5000-word chunk of human-written prose, there would be an average number of bread crumbs that occur naturally, and then a nice bell curve around it, and for AI-written prose the number of bread crumbs written using this watermark system would be several standard deviations above that mean. So counting bread crumbs/total words in a piece would let us calculate the probability that the piece was written by AI.
A nice feature of this kind of watermark is that the more AI-written bits there are in the piece, the higher the crumb standard deviation score will be, so people who maybe used AI for research and just included a few AI sentences from the research would not get high crumb scores.
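The arithmetic, with every rate made up, would look something like:

```python
import math

# Model crumb hits as binomial: n opportunities per piece, probability
# p_human of a hit in natural prose, p_marked under the watermarking prompt.
n = 2000          # crumb opportunities in a ~5000-word piece (assumed)
p_human = 0.05    # assumed natural background rate
p_marked = 0.10   # assumed rate the watermark aims for

mean = n * p_human
sd = math.sqrt(n * p_human * (1 - p_human))

fully_ai = n * p_marked                  # a piece written entirely by AI
print((fully_ai - mean) / sd)            # ~10 sigma above the human mean

half_ai = n * (p_human + p_marked) / 2   # a piece with ~half its text watermarked
print((half_ai - mean) / sd)             # ~5 sigma: the score scales with AI share
```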
Of course the list of crumbs would have to be kept secret by the AI companies, but they're no stranger to keeping secrets.
OK, Shankar, go ahead and tell me the tragic flaw in this idea. I can take it.
Yeah, I get it. I asked GPT-4 how good AI detectors are and it said they're really bad: both false positive and false negative rates are high, and even light editing of an AI-written piece will lead many detectors to classify it as AI-free. It mentioned several new approaches that are being tried, but overall it seems like my idea just would not work in practice.
"Please put in a few misspelled homophones."
I'm sorry, my guardrails forbid homophonia.
I chuckled.
> It is probably possible to train an AI to be an excellent judge whether there is AI contribution to an essay,
I just tried that by asking chatGPT about cats and then in a different instance asking it the probability of the cat text being AI. It was fairly certain it was.