based on everyone coming to agree with him that LLMs are inherently limited and won't scale to AGI.
My vague feeling was that Marcus had repeatedly made more specific predictions about generative models that were again and again falsified, and so the victory lap is the result of a long sequence of goalpost-moving. But I don't have the links to prove it and am open to the suggestion that this is completely wrong and is something I osmosed from anti-Marcus posts and tweets without proof.
Is there a convenient record-keeping post with a tally of Marcus's predictions and falsifications? Were the goalposts moved much or not at all? Would appreciate clarity on this.
I tend to be on the skeptical side but I still think doing stuff like that is dumb. Marcus has been repeatedly proven wrong before but just keeps embarrassing himself.
I can't imagine a weaker "win" for AGI-skeptics. Even if it was 100% true, it would mean these AI companies just need one architecture change to reach AGI, and they have many billions of dollars and plenty of years to accomplish that, assuming it isn't right around the corner already. OpenAI might go bankrupt in the meantime, but Google won't.
And even if that one single upgrade doesn't materialize, the current tech still has enough runway to decimate job markets and enable killer drone swarms, this would just play out one industry at a time instead of "most white collar industries all at once, then blue collar work once the robots are building robots".
Does anyone know what the necessary "architecture change" actually is? Because without that key bit of information, this is just a fancy way of saying "these AI companies just need the secret of how to make a working AI..."
Oh, and they need enough compute to implement the imagined new architecture; there is no reason to believe that said architecture will have the same compute requirements as LLMs, and if it's significantly higher then even Sam Altman's nonsensical seven trillion dollar figure probably won't be enough.
Really, one could (and many did) say the same thing about AI in the 1960s. A clever architecture that someone will surely figure out Real Soon, and more compute, and we'll have AI. How long can that take?
Jesus Christ, these are like arguments from a defense attorney who's desperate to find the slightest reasonable doubt for his client. "There's some chance the architecture is too expensive!" "There's precedence from Eliza 60 years ago!" "We don't know EXACTLY what change will be needed!"
AI-skeptics act like they're being painted into a corner with every new AI advancement, and they're desperate for any sign that things will turn out differently. Except nothing is forcing them to hold their position, they could just wait-and-see like everyone else, they choose to stay in the corner and be upset, and try to declare victory 5 years prematurely on the flimsiest win.
John's argument is that if you don't know what you need in order to achieve X, saying you're just one step from achieving X doesn't really say anything. Your response to that was to rant about AI-skeptics and complain about the form of John's argument as if it wasn't true.
But it's still true.
It reminds me of people who rant about economists in order to distract from the fact that their theory about how to run a society requires people to not respond to incentives.
No, it's completely asinine. The scope of the "unknown" shrinks dramatically every single year and you guys continue to say "there's still 1x unknown!" as if that's a compelling argument we should be moved by. And it will look incredibly stupid when the unknown gets solved and you have nothing left, but I'm sure the goalposts will just get moved, and posts will get deleted. But until then, you guys will just keep posting and posting, with zero evidence or real arguments, just Sam Altman quotes.
Saying something shrinks dramatically may sound important, but if the unknown thing is large, you can keep carving out "dramatic" shards of it and never attain the goal everyone agrees is the goal.
The goal is AGI. That goal is *hard*. I can say as convincingly as you that _you_ guys continue to say "dramatically shrinking!" as if that's a compelling argument, and that it will look stupid when 2030 / 2050 / 2100 AD rolls around and we still don't have autonomous artificial intelligences living and working alongside us, but I don't have to; I can simply point out that that one thing you claim is all that stands in our way is undefined, and is therefore anywhere from "guess the lock combination" to "implement a general purpose FOL theorem prover".
Now, it's possible that we're closer to "guess the combination" than to "write FOL theorem prover", but if you knew that, you could just demonstrate that here with ideas you couldn't have unless you were that close. If your response is instead intellectual bullying and misplaced accusations of goalpost moving and hiding behind a literally anonymous user name, then I'm led to believe we're closer to FOL territory after all and that you're flustered because you can't get us to fall on our knees and repent, for our AI god is nigh.
I *might* be convinced I'm mistaken about that if the next response here isn't just more of the same, but so far, that's not the way to bet.
> He wrote a guest opinion essay. Things didn’t go great. That starts with the false title (as always, not entirely up to the author, and it looks like it started out as a better one), dripping with unearned condescension, ‘The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking,’ and the opening paragraph in which he claims Altman implied GPT-5 would be AGI.
> Did you notice the stock market move in AI stocks, as those bets fell down to Earth when GPT-5 was revealed? No? Neither did I.
> The argument above is highly misleading on many fronts.
> 1. GPT-5 is not AGI, but this was entirely unsurprising – expectations were set too high, but nothing like that high. Yes, Altman teased that it was possible AGI could arrive relatively soon, but at no point did Altman claim that GPT-5 would be AGI, or that AGI would arrive in 2025. Approximately zero people had median estimates of AGI in 2025 or earlier, although there are some that have estimated the end of 2026, in particular Anthropic (they via Jack Clark continue to say ‘powerful’ AI buildable by end of 2026, not AGI arriving 2026).
> 2. The claim that it ‘couldn’t count reliably’ is especially misleading. Of course GPT-5 can count reliably. The evidence here is a single adversarial example. For all practical purposes, if you ask GPT-5 to count something, it will count that thing.
> 4. GPT-5 still is not fully reliable but this is framed as it being still highly unreliable, when in most circumstances this is not the case. Yes, if you need many 9s of reliability LLMs are not yet for you, but neither are humans.
> 5. AI valuations and stocks continue to be rising not falling.
> 7. Claims here are about failures of GPT-5-Auto or GPT-5-Base, whereas the ‘scaled up’ version of GPT-5 is GPT-5-Pro or at least GPT-5-Thinking.
> The fact about ‘many users asked for the old model back’ is true, but lacking the important context that what users wanted was the old personality, so it risks giving an uninformed user the wrong impression.
>> Shakeel: The NYT have published a long piece by Gary Marcus on why GPT-5 shows scaling doesn’t work anymore. At no point does the piece mention that GPT-5 is not a scaled up model.
Your vague feeling may be caused by unfamiliarity with what Marcus actually says, coupled with epistemic seclusion in a bubble that just really likes to do things in the pattern of: "successfully make LLM rote-learn a specific task -> claim vaguely-defined skeptics predicted it can never be done -> claim skepticism debunked -> ignore all observations about the solution making silly mistakes or just outright failing out of distribution -> call pointing out that it does not generalize 'moving the goalposts'".
(My vague feeling is that he's always extremely careful and hedging when making predictions, and you won't find him make even one that turned out to be wrong. But admittedly, I don't follow him closely nowadays, mostly because I've already internalized his arguments several years ago and... well, nothing changed since, he keeps having to repeat himself and it gets boring. So if somebody does pay attention and keeps the score, yeah, I'd appreciate that too. But again, what he does say is probably too boring and vague to get anyone interested in fact-checking it years after the fact.)
Does anyone here think he's likely to lose that bet? We are now a third of the way through the bet period, and those tasks don't seem a whole lot closer than they did at the end of 2024.
I won't say that there has been no big progress over the last year, the big thing I've seen is that LLMs are much better at moderately complex coding tasks than they were this time last year. But they are approaching the asymptote of perfectly generic text, not perfectly high quality text. They're playing Family Feud instead of Pointless.
No, I think he'll win. My point is that the phrase he's known for – "deep learning is hitting a wall" (from March 2022!) – actually implies a far looser upper bound of the capabilities it produces than what one might assume at first glance.
It implies no upper bound of [AI capabilities] - provided they're achieved with techniques other than deep learning.
This is what people on the hype side just can't wrap their heads around, I guess - that "skeptics" (of the Gary Marcus variety) aren't saying it's impossible, they're saying "you're doing it wrong".
And of course deep learning qua deep learning does indeed appear to have hit a wall, with everyone pivoting to self-prompting ("reasoning") models applying rules iteratively and utilizing external input. There's still a separate question of how far building on top of LLMs can get us. (I'm with Gary in the "not very far" camp.) But on the question of [LLMs alone], Gary has arguably already been vindicated.
Yes, it's true that the statement makes no claims about non-deep-learning AI models.
You say the more recent chain-of-thought "reasoning" LLMs (basically o1 onwards?) AREN'T limited by Marcus's Wall? He doesn't seem to think so, and continues to make the same assertion. You think his more recent pronouncements are to be understood more as "deep learning WAS hitting a wall, but they changed tack and now they're past it"?
RLHF was in widespread (practically universal) use before he first coined the phrase, so I don't think it makes sense to interpret it as applying only to PURELY deep learning instead of the entire category of LLM neural networks built atop a foundation of deep learning, which I believe is how it was intended, and how it was understood at the time.
I think Marcus's statement in 2022 was about pre-chain-of-thought LLMs, yes. This does not and should not imply chain-of-thought LLMs don't also have limits.
Gary treats the emergence of chain-of-thought models as vindication because they introduce what he considers neurosymbolic mechanisms that he long advocated for. (Disclaimer - I'm just outright repeating his argumentation now, I personally wouldn't use the term "neurosymbolic", which I don't find clear enough to be meaningful.) At the same time, he thinks it's nowhere near enough, and that LLMs with their obvious deficiencies are too shaky of a foundation to build upon. (Which, yeah.)
I assume he doesn't feel RLHF makes a similarly meaningful difference because it's just a training tool that ultimately doesn't alter how LLMs work. But yeah, I guess this does make "deep learning" in his famous statement semantically incorrect. Which, uh, never bothered me before you pointed it out - per the above, I think it's pretty clear what he meant.
I never fully grokked what the exact AI-booster argument against Marcus was. Every time Marcus said his Gary Marcus things there was an avalanche of quote tweets basically going OH IT'S GARY MARCUS OPINION DISCARDED but with relatively little material for why they were so dismissive, apart from him disagreeing with them, of course.
I think the limit for self-advertising on ACX outside the Classified Threads was about twice per year (please correct me someone, preferably with a link, if you remember otherwise), so this is a notification that you have reached your limit for 2025.
The Online Right is busy laughing at internet feminist Emily Witt, who was dating a DJ at 43 years old and produced the following gem that was published in the NYTimes. "Until you’re saying the stuff that upsets your parents, you’re not really doing your job. You have to cross that threshold."
It's easy to laugh at the older-than-40 woman who sounds like she's failing high school algebra. What you won't hear, except from me, is that the Republican Party increasingly relies on people like her for its political future. That demo, unmarried white women who date DJs, is probably as likely to vote Republican as Democratic now, the result of Trump's unique appeal to people who make poor life decisions. Of course, Trumpy women like that don't get megaphones in liberal publications to air their grievances, and the Online Right, selected for those personalities who produce Comfort Food, doesn't want to highlight them either. They'd rather let their audience think of the Trump coalition as being made up of successful, married people, a demographic that is moving away from the party of Hulk Hoganism and medical quackery.
The reality is that married, college educated whites and unmarried white women who date DJs are both cross-pressured demographics split 50-50 between the parties. Given that Trumpism appears likely to be replaced with Vance-ism, which will likely drive both groups away, Emily Witt will have the last laugh.
This is one of those situations where I start reading the second paragraph and realize "oh it's that commenter again"
Now there's nothing wrong with having personal style, but I do think that your opinions on what the Republican Party is all about are sufficiently idiosyncratic that it should give you some second thoughts.
I get it, most people with these one-of-a-kind syncretic ideologies aren't very smart. What I'm saying isn't that unique, Hanania, Spencer, Lion of the Blogosphere, and recently Steve Sailer have been saying the same things.
Well, to be fair, absolute loyalty to Trump seems to be the *one* definitive thing you must have to be MAGA, ie to make your way forward in the modern Republican Party, so it's also basically the one gimme where you'd expect the previously disloyal GOPers to make a 180. Doesn't yet reflect a *particular* amount of opportunism, beyond the normal baseline of politics.
I find Trump hard to pin down. He is obviously a patriot, but "the art of the deal" is such a part of his self-image that in practice any position he takes might be open to negotiation, hence TACO. Vance seems fully on board with this - he praised Trump for Trump's "strategic ambiguity" before Vance was named as running mate. He's come around, sure, but Trump was a shock to the system and quite a few conservative journalists opposed him at first then adapted.
This reminds me of Mečiar in 1990s Slovakia. His political opinions could change 180 degrees overnight (many specific examples are documented in the book "Mečiar and Mečiarism"), but half of the population totally loved him and updated quickly. Like, one day it would be "the EU is the best" and the next day "the EU is the worst", or vice versa, depending on what he said that day on TV, and you could see his fans repeating the same on the streets... sometimes some of them briefly embarrassed when they didn't get the memo and mistakenly shouted yesterday's version, but they updated immediately when corrected.
And yes, patriotism was a rare constant. Whoever criticized Mečiar, domestic or foreign, it had to be because he "hated Slovakia", because Mečiar by definition personified the nation (from the perspective of his fans).
I find it funny how American politics sometimes seems to copy Slovak politics, a few decades later. Wokeness was analogous to socialism, Trumpism is analogous to Mečiarism. (Perhaps you all should study Slovak politics, to be prepared for the future.) Coincidence, or maybe convergent evolution?
Sure, but at least Vance has goals beyond personal vendettas and pride. As far as I can tell, his desire to Christianize the country seems genuine.
I think Vance, as well as most of Trump's supporters, understands this, seeing Trump as a mere means to an end for righting the course of this country.
I trust this place more than I trust the CDC right now, so: what's the current advice on covid boosters? My previous heuristic was that I should probably get one alongside the annual flu shot. (I'm a healthy woman in my 30s.) It's not a pleasant shot (I'd say comparable to tetanus), but I didn't have a strong adverse reaction; and on the other hand, covid knocked me out thoroughly for a few days, but I didn't have any long-term effects that I noticed. The selfish main question, then, is whether the covid booster is likely to actually prevent the covid illness, which is a question about prevalence and effectiveness.
If I had to put numbers on it, I'd rate the unpleasantness of 1 covid as that of 5-10 covid shots: is an annual booster likely to be effective at that rate? Do they even get updated annually?
This one has been updated so it’s a decent match for current variants. As for whether it’s likely to prevent the illness — not very. But it’s quite likely to make the illness briefer and milder if you have it. On the other hand, Covid is now a considerably less severe illness for almost everyone, because almost everyone has some lasting protection from previous shots and infections. Since you’ve had more than one vax, and also had Covid, I dunno how much additional protection you would get from this year’s vax. Anybody know?
Why is it different for Covid vs. the flu? Are the Covid variants more similar to each other than influenza variants, so that you end up with more lasting protection against severe illness for Covid? Is Covid slower-onset so that your long-term immune system (whose correct name I've forgotten) has more ability to catch it before it gets severe? Is it actually the *flu* that is more similar (at least within a single season), so that you can have a reasonable hope of avoiding it entirely (rather than just hoping for a milder case)? I had flu and Covid placed in the same mental bucket of "quickly-evolving respiratory viruses that would be thoroughly annoying to get sick with," and am trying to understand what the differences are.
I strongly suspect Covid strains to be more closely related because all human Covid variants share a common ancestor in late 2019. Influenza has two major types (A and B) in widespread circulation. Each of these have subtypes, although there are a lot more subtypes of A than of B.
Influenza A subtypes are defined by major variants of the two major surface antigens. There are 11 types of one of these and 18 of the other, making a total of 198 theoretical subtypes. Fortunately, most of these are some combination of relatively rare, only found in birds or bats, or able to infect humans poorly or not at all. It looks like in recent years, it's mostly just H1N1 and H3N2 in widespread circulation among humans, but each of these have their own substrains which have developed over decades since they first made the jump to humans. I think the original human outbreak of H3N2 was the 1968 pandemic and H1N1 is from the 1918 pandemic.
B had two "lineages", Yamagata and Victoria. The former, last I heard, is suspected to have died out because of the Covid lockdowns but Victoria is still around. I don't know when the Victoria Lineage originated, but I'm pretty sure it's also quite a bit older than Covid.
I actually don't know the answer to most of those. This is the kind of thing I ask GPT. (Then I click the links to a couple of its main sources to make sure there's no hallucinating going on.) What I remember is that flu has something like 8 separate single RNA strands, and each strand can easily swap a gene or 2 with another strand, and so it mutates very rapidly.
I don't think the US military is under enough pressure that we can take its vaccine policies as reflective of accurate risk evaluations, rather than political winds.
I've never blocked anyone. But inspired by WoolyAl, I decided to open an incognito tab and compare the comment-count, to quickly estimate whether a significant number of commenters have blocked me. Signed in, I see 1645 comments in OT 399. In the incognito tab, that number goes up to... 1633.
I just opened it in regular and incognito, and both times it showed 1633 for like 10 seconds and then switched to the actual number (1663 right now). Maybe it was doing the same for you and you didn't see it switch to the accurate number
As in... an example of a comment from someone who blocked me? No can do, chief. Ostensibly, the number of people who've blocked me is -12. Yes, that's a negative number.
I decided to read this paper after Newt Gingrich summarized it in a tweet. The gist is "we find that U.S. prescription prices are actually 18% lower than in these nations". The nations being UK, Germany, Japan, France, and Canada. How? Because the US has the cheapest generics and the US has a higher generic prescription mix than those countries.
Am I missing something, or is this an extremely facile analysis? First - they only consider Medicare/Medicaid prices, which are both much lower than private prices (insurance and OoP) and only cover ~45% of the pharma market. This would make the analysis broadly inaccurate at the country level alone.
Second - the analysis appears to simply be Price difference (%) * % Prescriptions, effectively weighting the average price difference between the two categories - (name brand) and (generic) - by percentage of total prescriptions in those categories. This is troublesome because generics are far cheaper than name brands, so 40% of $4 matters far less than 200% of $10,000, but it appears this analysis weights those differences evenly. It is also well known that 50 or so drugs alone account for ~40% of spending, so any accurate analysis must take the huge price differences into account.
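To make the weighting objection concrete, here is a toy calculation with entirely made-up prices and shares (not figures from either paper): a prescription-weighted average of percent differences can show the US as "cheaper" even while actual spending per basket of prescriptions is roughly double the peer countries'.

```python
# Hypothetical two-category illustration (all numbers invented):
# Generics: US $4 vs. peer $10 per script, 90% of prescriptions.
# Brands:   US $10,000 vs. peer $5,000 per script, 10% of prescriptions.
generics = {"us_price": 4.0, "peer_price": 10.0, "rx_share": 0.90}
brands = {"us_price": 10000.0, "peer_price": 5000.0, "rx_share": 0.10}

def pct_diff(d):
    """US price relative to peer price, as a signed fraction."""
    return (d["us_price"] - d["peer_price"]) / d["peer_price"]

# The apparent method: average the percent differences, weighted by
# prescription counts (every script counts the same, however cheap).
rx_weighted = sum(pct_diff(d) * d["rx_share"] for d in (generics, brands))

# Spending-based comparison: total US spend vs. peer spend for the
# same mix of 100 prescriptions.
us_spend = sum(d["us_price"] * d["rx_share"] for d in (generics, brands))
peer_spend = sum(d["peer_price"] * d["rx_share"] for d in (generics, brands))
spend_ratio = us_spend / peer_spend

print(f"Prescription-weighted difference: {rx_weighted:+.0%}")
print(f"Spending ratio (US / peer): {spend_ratio:.2f}x")
```

With these invented numbers the prescription-weighted metric comes out at -44% (US "cheaper"), while the spending ratio is about 1.97x (US nearly twice as expensive), which is the kind of divergence being objected to.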
Major Point - Is this what passes for academia at the UofC? Am I missing something? This is something like college sophomore work. They even used the data from another report (Rand 2022), which resulted in an opposing analysis. "Prescription Drug Prices in the U.S. Are 2.78 Times Those in Other Countries"
I guess I'm not surprised by weaponized academia as politics, but this is just incredibly shoddy work by a seemingly high-status tenured professor / former acting Chairman of the Council of Economic Advisers. I hope I'm wrong about my analysis.
They devised a metric that showed the result they wanted. You want a different result, you can construct your own metric. How else do you propose it work?
Their metric weights a $.30 aspirin tablet price differential the same as a $20,000 biologic price difference. It's poor analysis, and doesn't provide any benefit.
Do you think their method is valuable? The Rand study I linked does far better.
I'd prefer that our political parties are informed by good analysis, so we can make better decisions, rather than justify a preferred policy with asinine methodology.
If they want good analysis, they can pay for it themselves. Either way, I'm sure the consequences are not major enough to justify jeopardizing a major source of funding.
Could you beat the hallucination problem by asking the same question of three or more hopefully independent AIs and ask them to report on what the answers have in common? Or beat at least some parts of the hallucination problem?
I've been asked whether LLMs can be reliable enough to recognize whether one hallucination is different from another. I think they could, but I'm guessing.
Minor point: There's an error in the video which displays the importance of actually knowing what you're talking about. It says that you don't need to fact-check a poem; well, maybe you could count syllables in a haiku. This sounds like a person who hasn't had contact with poetry since elementary school.
A good haiku has a change of mood for the last line. Recognizing whether a haiku has it or not would take a lot of knowledge of the world and human emotions.
Many poems include factual material which can be gotten right or not, and there are a lot of poetic forms other than haiku.
> Could you beat the hallucination problem by asking the same question of three or more hopefully independent AIs and ask them to report on what the answers have in common? Or beat at least some parts of the hallucination problem?
No. All LLMs are constructed in a similar way and they deal with incomplete information in a similar way, and thus they produce hallucinations in a similar way. Unless someone creates an LLM that deals differently with incomplete information, we will have to deal with hallucinations.
That being said, your approach has some merit, since it can and should reduce the number of hallucinations, since there is some difference between different LLMs. I think your approach is called an "LLM ensemble", or maybe "mixture of experts"?
I did a short search and found papers relating to your approach, e.g. this: "Harnessing Multiple Large Language Models: A Survey on LLM Ensemble"
I don't see how you're concluding that it would result in fewer hallucinations. However, it does seem clear that polling three separate LLMs could increase the number of hallucinations successfully detected, rather than accepted by mistake.
It depends how correlated hallucinations are between LLMs. My guess would be that given similar levels of sophistication and built by similar techniques on similar data, they would be fairly highly correlated on what questions they hallucinate on and somewhat less correlated by still significantly so on the content of the hallucinations. If my suspicions are correct, your technique should improve accuracy but won't come close to eliminating hallucinations.
You might be able to improve somewhat by also asking the question in different ways. I have noticed that LLMs tend to pick up the assumptions implicit in your prompt and cue off of them, so using exactly the same prompt is likely to increase correlation in wrong answers.
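The cross-checking idea above can be sketched in a few lines. The three "models" here are stand-in stub functions (in practice each would be a call to a different hosted LLM; nothing below is a real API), and the answers are normalized before comparison, since the same fact can be phrased many ways:

```python
from collections import Counter

# Stub "models": in reality, each would query a different LLM provider.
def model_a(question: str) -> str:
    return "1969"

def model_b(question: str) -> str:
    return " 1969 "

def model_c(question: str) -> str:
    return "1971"  # a dissenting (possibly hallucinated) answer

def cross_check(question, models, threshold=0.5):
    """Poll every model, normalize the answers, and return
    (majority_answer, agreed), where agreed is True only if a strict
    majority of models gave the same normalized answer."""
    answers = [m(question).strip().lower() for m in models]
    best, count = Counter(answers).most_common(1)[0]
    return best, count / len(answers) > threshold

answer, agreed = cross_check("When did Apollo 11 land?",
                             [model_a, model_b, model_c])
print(answer, agreed)  # the majority answer, and whether to trust it
```

Note this only catches *uncorrelated* hallucinations: if the models share training data and failure modes, they can all confidently agree on the same wrong answer, which is the correlation worry raised above.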
I just discovered that Youtube is automatically replacing the audio track of videos with a robotic machine translation. Normally, that'd only be a minor annoyance except that **there's no way to fucking turn it off**. Even by the usual low standards, this is amazing. Everyone involved at Google needs to be fired immediately. Like seriously, WTF?
YouTube's automatic translation is so bad and useless, it's really unbelievable that they're doing what they're doing.
For example, they auto-translate video titles from English into German, which is idiotic in the first place, but particularly awful in the case of VIDEO TITLES that are full of specific references, memes etc. and where the translation engine has no context to even remotely translate it accurately, and that's assuming it's even translatable at all.
I really want to know the decision making process behind this, because it's probably the worst UX decision I have ever seen.
You're really lucky if that's your first "WTF, everyone involved needs to be fired" moment with Google. I lost count of how many times that happened to me.
I finally switched all my default search engines away from Google after Google started crashing my web browsers (this went away since then, but I'm not going back). Previously, I'd lost a very basic webpage on Google Pages because these went through two upgrades turning my very basic HTML into complete mishmash; after the second one, I caved in and quit. Now I merely roll my eyes at more minor WTF stuff from Google, because, in my experience, Google just does this all the damn time. They don't care.
Honestly, I didn't like any of those I switched to. (One was DuckDuckGo, another Brave search.) So I am just using perplexity.ai for anything more complex than addresses or open hours for businesses. I find Perplexity incredibly useful - way better than any non-AI search engine.
Unfortunately, it appears that that doesn't solve the problem because the issue is with the Youtube mobile app. On the web, you can already manually switch to the original language track, but there's no way to do this in the app, and a Firefox extension doesn't help with that. But thanks for the suggestion anyway.
The uploader can disable it for their videos but as a viewer there is no direct way to change the audio. You could change your language settings to the language of the video to get the original audio track but usually I choose to just not watch the video.
I tried changing the app language, but even that didn't seem to work. Youtube decides which language audio to play based on some ineffable data you have no control over.
Apropos of the "book review" on Ted Nelson's Xanadu project, what do y'all think about Urbit?
Okay so... Lord Moldvort is trying to fork the internet? I think? But then why do we need to invent an entire assembly language of unreadable nonce words? Like, what are we doing here. I'm not seeing the vision.
I'm trying to figure out what the Grand Vision is. I don't quite understand it, but I get the sense that the original purpose was to fork the internet because the first internet was built on a bad stack and now we have to live with the legacy code.
The unreadable assembly language seems instrumental to that somehow, but I'm not sure in what way. There is, indeed, an awareness about how social engineering works, so it's certainly possible that the nonce words are a deliberate ingroup signaling mechanism. But I don't think that's the whole story.
It's like asking why do many scam e-mails contain obvious red flags. It filters the audience. You don't want to waste your time on people who are smart enough to recognize the red flags.
If you don't see anything crazy about Urbit, you are exactly the kind of customer Urbit needs. They have a few virtual "planets" to sell you (for actual money), and an esoteric language to learn so that you can distract yourself from questioning your investment.
not sure about that. Moldbug founded the project, and then later replaced himself as CEO so he could return to blogging. Afterward, the new CEO tried to distance the Urbit Project from moldbug's... political reputation. But the project didn't go so well. So Moldbug returned as "wartime CEO", and a bunch of devs resigned in protest.
The fact that a bunch of devs resigned in protest just doesn't seem very cultish to me? Or if there *is* a cult, it's not one built on loyalty to the founder.
“But, sire, how can I know what your thoughts are?”
The king stopped dead in his tracks, and stared at me.
“I believed thou wert greater than Merlin; and truly in magic thou art. But prophecy is greater than magic. Merlin is a prophet.”
I saw I had made a blunder. I must get back my lost ground. After a deep reflection and careful planning, I said:
“Sire, I have been misunderstood. I will explain. There are two kinds of prophecy. One is the gift to foretell things that are but a little way off, the other is the gift to foretell things that are whole ages and centuries away. Which is the mightier gift, do you think?”
I recently became interested in a certain sort of function: one that compresses bitstrings in particular ways. I don't know if there's any sort of standard literature or where to look for more information. The properties are as follows:
1. The function compresses the bitstring by some multiplicative factor: the output might always have e.g. 1/8 or 1/32 the length of the input.
2. As a result of 1, inputs are necessarily mapped many-to-one onto outputs.
3. Inputs that are a short Hamming distance apart should be very likely to map to the same output. In particular, flipping one bit in the original string should only rarely change the output.
4. None of the digits of the original string are treated as especially more or less significant (e.g. generally don't manipulate it like a binary number).
A toy example would be the following:
Chop the bitstring up into bytes (there may be a short byte at the end), and for each byte, alternately add and subtract the bits. Apply a mod 2 to the sum to get your output bit for that byte.
EDIT: this example explicitly fails to have one of the key properties; bitstrings with Hamming distance 1 will map to different outputs. How embarrassing. That's what I get for trying to think up a new example (I originally had a different one) on the fly.
The simplest way of doing this would be to use an error-correcting code of rate 1/8 or 1/32: round your message to the nearest codeword, and then apply your favourite linear map down to the appropriate dimension.
I note in passing that when you're working mod 2 there's no difference between adding and subtracting, so your toy example is just taking the population parity of each byte; some instruction sets have builtins for this.
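Concretely, the toy example boils down to something like this (a quick sketch of my own; the function name is just mine), which also shows exactly why it fails property 3:

```python
def byte_parity_compress(bits):
    """Compress a list of 0/1 bits 8:1 by taking the parity of each byte.

    This is why the toy example fails property 3: flipping any single
    input bit always flips the corresponding output bit.
    """
    out = []
    for i in range(0, len(bits), 8):
        block = bits[i:i + 8]  # the last block may be short
        out.append(sum(block) % 2)  # parity = alternating add/subtract, mod 2
    return out

print(byte_parity_compress([1, 0, 1, 1, 0, 0, 0, 0,  1, 1, 1, 1, 1, 1, 1, 1]))  # [1, 0]
```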
This is a well-studied problem in coding theory. In your example you want to look at linear codes with parameters of length ck and dimension k, with large minimum distance. A good decoding algorithm for the code will map vectors near a codeword onto the codeword - which you then map onto a bit string of length k. (Without loss of generality, you can make the code systematic, then you're just truncating.)
There are lots of differently structured codes with different decoding algorithms - any of those will give a way of doing what you want to do. E.g. look at the Reed-Muller codes and their decoding algorithms.
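To make the decode-to-compress idea concrete with the simplest possible code (I'm using the c-fold repetition code purely for illustration; Reed-Muller codes have far better minimum distance at the same rate): rounding to the nearest codeword and then truncating each block is just a per-block majority vote, and that does have the locality property the original post asked for:

```python
def majority_compress(bits, c=8):
    """Rate-1/c compression via decoding the c-fold repetition code:
    nearest-codeword rounding plus truncation = majority vote per block.

    A single flipped input bit changes the output only when its block
    sits exactly at the majority threshold, so Hamming-close inputs
    usually map to the same output (property 3 above).
    """
    out = []
    for i in range(0, len(bits), c):
        block = bits[i:i + c]  # the last block may be short
        out.append(1 if 2 * sum(block) > len(block) else 0)  # ties round to 0
    return out

x = [1, 1, 0, 1, 1, 1, 1, 1,  0, 0, 1, 0, 0, 0, 0, 0]
y = x.copy()
y[0] ^= 1  # flip one bit
print(majority_compress(x) == majority_compress(y))  # True
```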
Idea: democracy is a self-contradiction, because if the people were truly the ones who decide things, they would always vote for a charismatic-populist dictator. Not necessarily right-wing; it can also be left-wing, like Chavez was.
The reason for this is that it is not possible for millions of people to actually participate in politics and wield power. Politics is always a TV show to watch and comment on, not something to truly participate in.
And really charismatic-populist dictators or kings make the best TV show. Chavez did this 100% literally.
I don't know about that. I expect if we thought about it, we could devise some system of governance by local committees that feed up into regional committees that feed up into national committees and in which everyone who wants to take part in actual self-governance has an actual part. But let's not kid ourselves; most people in this system would be on something like an HOA or condo board, not deciding national-level issues.
If by "democracy" we set the bar high and mean actual self-governance with the people governing themselves day-to-day, then we are talking about something that hardly even exists. Governing is almost always done by representatives, and even they tend to be constrained by aristocratic institutions like upper houses, constitutions, independent courts and central banks.
Some places are a bit more democratic, like California with its binding statewide referenda, others less. And my impression is that the record of those referenda is a bit mixed; they smacked down the UC for racial discrimination on the sly, but they also required warnings about potentially carcinogenic chemicals on damn near everything. So I'm not clamoring for more direct democracy.
My own thinking is that the system of elections is mostly there to keep out the real crooks and self-dealers. And occasionally elections serve as a proxy for major policy fights where a choice needs to be made. Most other decisions get made by the bureaucracy.
Theoretically you could have a populace that wasn't completely brainrotted. The US is obviously a lost cause, but with some aggressive selection pressures, you might get a population that simply isn't interested in such things. Japan is probably the closest thing to that.
Not quite aware enough. They never specified whether a president could rule from prison, or pardon himself, because they never thought the populace would knowingly elect a crook.
First, as noted by others, people do not "always" vote for populist dictators--maybe over some time horizon the probability of that goes to 1 or something, but there are very clearly democracies that have lasted 100+ years without voting in populist dictators--at least, no populists dictatorial enough to prevent future elections.
Second, there are degrees of "participating in politics and wielding power": obviously it's not possible for tens of millions of people to all be President or Prime Minister, or even Senator or MP, or even legislative aide--but it's obviously possible for millions of people to vote occasionally, join a handful of political groups, etc.
Essentially you should think of this as a power law distribution or some other 80/20 sort of thing: a tiny minority actually run for political office, or propose new tax plans, or whatever; a larger but still small minority are engaged enough to debate what that tiny minority do in a fairly principled way; a yet larger group follows those debates--maybe by now they're less driven by real principle or expertise and more hobbyist, but they're still open to persuasion, trying to align policy with their values and their best understanding of the world; and at the end of the funnel, a majority follow the cues of their friends and family and favourite celebrities or whatever in a way that is mostly hobbyist/faddish/etc but is still shaped by the more intellectual/principled work done at the other levels.
It's obviously true that the entertainment/mob mode is always present, and it's also certainly true that it can overwhelm the other mode, but whether and when that happens will depend on the exact mechanisms by which the country is run (there are lots of ways of aggregating up the political impulses of millions of people in ways that are democratic) and also on features of broader civil society.
But the fact that the tendency to populist dictatorship is always present in society doesn't make democracy "self-contradictory" any more than the fact that populist dictatorships can become democracies means that popular dictatorship is inherently self-contradictory. No yet-discovered method of running a country is completely stable in all circumstances; that doesn't mean they're all "self-contradictory".
Imma share a pet theory of mine: the Printing Press marked the beginning of the Modern Period. The impact of this is massively underestimated by the standard historical narrative, and I suspect a lot of the alienation and disruption associated with modernity is ultimately downstream of this.
For politics specifically, I think the press was indispensable in the formation of the nation-state. Especially Modern Liberal Democracies. In theory, it was inspired by the Athenian model, etc. But actually, the modern version is a different beast entirely. Because it's fundamentally powered by the mass media (moldbug came so close, but didn't quite get there). Notice, for example, that you compared politics to television. Coincidence? No, because television is mass media, and media is the sine qua non of *modern* *liberal* democracy.
I suspect the Wars of Religion, American Revolution, French Revolution, European Spring, Holocaust, and Russian Revolutions are all downstream of political propaganda (only possible with mass media) effectively one-shotting everyone's brains and turning them into zealots. Everyone complains about social media being miserable, but it was probably worse when, say, the Catholics and the Huguenots were slaughtering each other in the streets.
So contra Wooly, I don't think athens/rome is really all that commensurable with modern democracy. We give them the same label of "democracy", but that's an error. And I think Democracy is actually *high*-entropy (not low-entropy like an F1). And I think classical liberalism is what makes these places nice places to live (but the term "liberalism", like "democracy", has been grossly corrupted). And I think liberalism takes false credit for the technical progress of modernity. And I suspect the U.S. and G.B. succeed despite their democracy, not because of it.
There's a number of layers of indirection here, which makes a naive discussion of "democracy good? democracy bad?" a complete quagmire. And it gets worse. Because the internet is overthrowing the old printing press.
"For politics specifically, I think the press was indispensable in the formation of the nation-state."
Not a historian, but I don't get the sense that this observation would be considered remarkable or novel by historians AT ALL. I think it's pretty widely accepted. A lot of my recent interaction with history has been reading Bret Devereaux's excellent blog ACOUP (highly recommend, BTW), and while his focus is on a MUCH earlier period, he has written in quite some detail about how the background literacy level substantially restricts what sort of governing structures are even possible. Unfortunately I can't find it easily to link it, but he has an excellent post (series?) discussing the rise of feudalism[1] and how it was very much an adaptation to the decline of literacy rates after the fall of the Western Roman Empire, as it allowed for warlords who'd conquered large amounts of land to administer and profit from them in a decentralized way, as central governance requires far greater numbers of literate officials collecting and recording information.
"I suspect the Wars of Religion, American Revolution, French Revolution, European Spring, Holocaust, and Russian Revolutions are all downstream of political propaganda (only possible with mass media) effectively one-shotting everyone's brains and turning them into zealots."
And this is where I think you showcase some rather alarming biases, and as a result this observation goes off the rails. Here you effectively reduce a bunch of other humans to NPCs in your minds, modelling them as fundamentally dumb and easy to manipulate[2]. Let's take the French Revolution, for example. Do you remember why the French peasants were angry? Rumor has it, it was less to do with "reading lots of political propaganda" and more to do with, y'know, starving.
Of course, peasants had been poorly treated and taxed to starvation in the past without always revolting. And many previous peasant revolts had failed. But literacy does a couple of things here that are totally orthogonal to "turning people into zealots." First, it makes it easier to communicate with people who are not your immediate neighbors, so you can tell how bad and widespread the problem plaguing you is (and thus how much support you might have if you try to rise up). Second, it makes it easier to coordinate large groups of people around actions that need to be taken in concert to be effective. Both of these are ways in which widespread literacy can be transformative and lead to revolutions that acknowledge (and indeed depend on) people's individual agency in ways that imagining them as "zealots" who have been "one-shotted" by political propaganda does not.
Let me finish by adding that of course I believe that it is POSSIBLE for political propaganda to manipulate and radicalize people. Just that I find the above framing to be incredibly simplistic and childish. First, you can guarantee that regardless of what they read/watch/hear, a large part of people's political opinions is shaped by their life experiences and material conditions: if those don't create fertile soil for radicalization, it will be MUCH more difficult and less likely. Second, the idea of propaganda "one-shotting" almost anyone is patently absurd. Everything I've read about radicalization is that it is almost by strict necessity a gradual process, that most people who go through it will shift their position slowly over time, and that groups who are *actively trying* to radicalize people often know this, and avoid showing their more extreme ideas to people who have only recently encountered their movement.
Of course, in the age of the internet and social media, we don't need radical zealots to carefully drip-feed people gradually more extreme propaganda: we have social media algorithms all too happy to do that at scale.[3]
[1] If I could find it, I'd also be able to elaborate on the important distinction between feudalism (political), manorialism (economic) and one other thing which I forget, which are often all lumped together in the popular terminology and imagination.
[2] There has been some push in *ahem* certain circles recently to trash the concept of empathy. This is incredibly stupid for reasons that have nothing at all to do with morality (though of course it's highly morally suspect as well). Fundamentally, empathy is about understanding and being able to model other humans. Even for an amoral bastard who simply wants to defeat their opponents, being able to empathize with them is an *incredibly useful tool.* Fun fact for anyone reading this who *has* bought into the anti-empathy drivel (especially anyone on the right): you can observe Orson Scott Card (not generally known as a raging lefty) make this exact point in Ender's Game way the hell back in 1985.
[3] Rant for another time, but I fairly firmly believe that this dynamic bears a substantial share of the responsibility for the tempestuous state of modern politics. And what's scariest of all is, I don't think that was really *anyone's* explicit intent: the widespread radicalization was rather an unintended side-effect of financial incentives and conditions created by new technologies.
> but I don't get the sense that this observation would be considered remarkable or novel by historians AT ALL. I think it's pretty widely accepted.
Great! Though why do I feel like I had to figure this out for myself? E.g. how is it the case that when someone like Scott did a dictator-bookclub review on Hugo Chavez, the fact that television was used to hijack the government was treated as surprising?
> Here you effectively reduce a bunch of other humans to NPCs in your minds, modelling them as fundamentally dumb and easy to manipulate.
Yes, including me. And yes, I've read Ender's Game. And yeah, I'll admit that "oneshotted" was an exaggeration.
On one hand, people don't just wake up one day and decide to become radicalized. And I do believe that people almost always have coherent reasons for acting the way they do. I didn't vote for Trump, but I feel like I have a decent understanding of why people voted for him. I don't endorse Hitler, but I feel like I can empathize with why he rose to power. I'm not a fan of Ted Kaczynski, but I feel like I can roughly trace the path of his logic.
On the other hand, I do think humans are pretty suggestible under the right conditions. Propaganda, then, is more like hypnosis. It's not capable of convincing people to do just *anything*. But it's definitely capable of nudging, amplifying, rationalizing, and channeling latent impulses. This is why I harbor doubts about Eliezer's "super persuader" arguments about ASI doom. If persuasion turns out to be a problem, it's not because the AI will be super smart, but because people are already highly suggestible. Remember ELIZA?
> Let's take the French Revolution, for example. Do you remember why the French peasants were angry? Rumor has it, it was less to do with "reading lots of political propaganda" and more to do with, y'know, starving.
Jimmy [0], what motivated the Reign of Terror?
> Enlightenment thought emphasized the importance of rational thinking and began challenging legal and moral foundations of society, providing the leaders of the Reign of Terror with new ideas about the role and structure of government. Jean-Jacques Rousseau's Social Contract argues that each person was born with rights, and they would come together in forming a government that would then protect those rights. Under the social contract, the government was required to act for the general will, which represented the interests of everyone rather than a few factions. Drawing from the idea of a general will, Robespierre felt that the French Revolution could result in a republic built for the general will but only once those who fought against this ideal were expelled. Those who resisted the government were deemed "tyrants" fighting against the virtue and honor of the general will. The leaders felt that their ideal version of government was threatened from the inside and outside of France, and terror was the only way to preserve the dignity of the republic created from French Revolution.
> The writings of Baron de Montesquieu, another Enlightenment thinker of the time, also greatly influenced Robespierre. Montesquieu's The Spirit of Law defines a core principle of a democratic government: virtue—described as "the love of laws and of our country." In Robespierre's speech to the National Convention on 5 February 1794, he regards virtue as being the "fundamental principle of popular or democratic government." This was, in fact, the same virtue defined by Montesquieu almost 50 years prior. Robespierre believed the virtue needed for any democratic government was extremely lacking in the French people. As a result, he decided to weed out those he believed could never possess this virtue. The result was a continual push towards Terror. The Convention used this as justification for the course of action to "crush the enemies of the revolution…let the laws be executed…and let liberty be saved."
Robespierre, what is best in life [1]? "To crush one's enemies, see the laws be executed, and to hear the liberation of the people."
> Of course, in the age of the internet and social media, we don't need radical zealots to carefully drip-feed people gradually more extreme propaganda: we have social media algorithms all too happy to do that at scale
Yes, Social Media is the new Mass Media. And honestly, I don't think Social Media is nearly as bad as the original.
But also, no, the sorry state of Social Media isn't *just* because of the algorithm. It's the Golden Age of Journalism that was the aberration, and what we have today is just reversion to the mean.
Gutenberg's printing press is a staple of middle-school history and was named the most important invention of the millennium by Time Magazine in the late 1990s, so I don't know why you think it's underappreciated.
Is this what they're teaching in middle school these days? My memory of it in middle school starts and ends at "it caused the Protestant Reformation", as if it were a footnote. Nothing about it being responsible for the rise of ideology, or all the bloodshed of the 20th century, or the death of God, or the narcissism/anxiety/depression of the modern age. I remember mentioning a while ago that the U.S. founders were viewed as terrorists by the Brits, and I think Lapras got offended at the suggestion. And we both know Time Magazine likes to position itself well within the Overton Window. So whatever Time Magazine said in the late 1990s, I don't expect it to quite line up with what I had in mind. Since DDG isn't helping me find the Time Magazine story, I asked Sydney.
> Here’s what Time emphasized:
> Revolutionary Impact: Gutenberg’s movable type transformed printing from a slow, manual process into a scalable, repeatable system. This enabled mass production of books and documents, democratizing access to knowledge.
> Catalyst for Change: The printing press was credited with fueling major historical movements like the Protestant Reformation, the Enlightenment, and the rise of modern science and democracy.
> Global Influence: Though movable type had existed in Asia, Gutenberg’s adaptation for the phonetic alphabet made it practical and transformative for the Western world.
Now, at the mention of "the Enlightenment", I expect the standard blather that the Enlightenment was the best thing that ever happened. But if you're a regular here, you should know by now that all I do is cheerlead for moldbug, who thinks the Enlightenment smelled a little fishy, and who spent a huge amount of time reading primary historical sources to try to figure out what the hell actually happened during these last few centuries. And when I follow Sydney's citation to the original Time article [1], I find this:
> [...] The dissemination of the writings of Greek and Roman authors led to a revival of the classical learning that spurred the Renaissance. Printed religious texts put the word of God directly into the hands of lay readers. Such personal contacts helped fuel the Protestant Reformation.
> Before print, the ability to read was useful mainly to the elite and the trained scribes who handled their affairs. Affordable books made literacy a crucial skill and an unprecedented means of social advancement to those who acquired it. Established hierarchies began to crumble. Books were the world’s first mass-produced items. But most important of all, printing proved to be the greatest extension of human consciousness ever created. It isn’t over: the 500-year-old information revolution continues on the internet. And thanks to a German printer who wanted a more efficient way to do business, you can look that up.
Yeah ok, so according to Time Magazine, the Printing Press gets us:
- science
- renaissance
- protestantism
- social mobility
Lots of positive vibes here. But I'm not seeing any of the downsides that I'm trying to point out. I'm not seeing any mention of "modern liberal democracy is a facade", or "modern liberal democracy has more in common with Communism and Fascism than you think", or "and then everyone had a psychotic break", or "9/11 = (printing_press)^2", or "and this is why everyone is on Xanax and Lexapro".
I think the rule of the press was always understood, look up "fourth estate"
I think that when democracies work well, it is because they inherited aristocratic ideas. I would be focusing on ideas like honor, patriotism, duty, service, but classical liberalism is also an aristocratic idea.
As weird as it feels defending and praising the United States for any reason, it is all of:
1. One of the earliest modern democracies, helping pave the way for the spread of democracy.
2. One of the most successful democracies, whether measured by stability, economic growth or liberalization (though perhaps not for much longer).
3. A nation founded in large part on *hostility* towards aristocracy.
One can further note that the region of the original U.S. that was most culturally tied to the old aristocracy (the South) proved to be the most anti-democratic AND poorly-functioning, by basically any metric you choose. Its strong desire to keep one of its aristocratic privileges (the ownership of other human beings) set it deeply at odds with the rest of the nation in ways that still have huge ramifications down through the present day.
Of course, if you just automatically define "aristocratic ideas" to be coextensive with "good things," then I suppose you will *strangely enough* find that they coincide with nations that function well. But only because your logic is circular.
> I think that when democracies work well, it is because they inherited aristocratic ideas. I would be focusing on ideas like honor, patriotism, duty, service, but classical liberalism is also an aristocratic idea.
Yes, I've had inklings of this as well. But I haven't gotten very far. Feel free to say more.
edit: Also, your conversation with Wooly is definitely sending my thoughts in new directions.
> I think the rule of the press was always understood, look up "fourth estate"
On some level, yes. The 4th Estate is another datapoint which convinced me of the Printing Press Hypothesis. (And now that I think about it, I think we've had a similar discussion before, where I mentioned Carlyle's quip about the 4th Estate.) However! I still think the impact is massively underestimated in the Discourse. The term "4th Estate" makes it sound like it's merely *equal* in importance (at best) to the executive/legislative/judicial branches. But as moldbug points out, it's not equal, it's superior. It's the actual seat of sovereignty, and it's entirely unaccountable.
-- Look at Scott's recent post [0] where he still thinks "cHecKs & bALaNceS between the other 3 Estates" is a meaningful analysis, as if the 4th isn't sovereign over the rest of them.
-- Look at WoolyAl's comment, where he thinks the USG is commensurable with athens/rome just because we both label them "democracies". It's the same problem as stuffing ~4 different syndromes under the name "autism". While we're at it, why don't we call diabetes and melanoma "autism" as well. This is not cutting reality at the joints. The dynamics are entirely different.
-- Just recently, I responded to a guy in another ACX thread [1] who tried to correct me when I mentioned that democracy used to be quasi-synonymous with nationalism. "nah, bro, democracy means 'rule by the people'. It's in the etymology".
-- For months, I've been telling people here that Trump is a result of the internet, not just a black swan.
These ideas are not in the drinking supply yet. And I'm not saying any of these ideas *in isolation* are original to me. But I've never seen anyone else put them together into a single coherent narrative, or elevate the invention of the Printing Press as the single most impactful event of the Modern Era.
Great comment. There really should be some consideration for shutting down the internet and press. Moral entropy and division is inevitable as long as you give individuals the power to shape public opinion.
> There really should be some consideration for shutting down the internet and press.
Personally, I haven't really thought through what the correct response is yet. But I'm not really sure shutting them down is desirable (or even feasible). My running hypothesis is that the best response is to try to convince everyone that literature/the internet is a synesthetic hallucination, and not to take it too seriously. Low confidence, though.
> Moral entropy and division is inevitable as long as you give individuals the power to shape public opinion.
I suspect what's actually going on might be more complicated than that, although I'm not able to articulate my thoughts coherently yet. Trying to figure out the true nature of Modernity is a personal project of mine, and a work-in-progress at that.
Technically, yes. But quantity has a quality all its own. When you go from having to copy books by hand to mass-producing books by machine, it's a sea change. Because suddenly everybody is reading things for themselves, and this is compounded polynomially by the network effects of language.
Analogously, engines were known about since antiquity. And yet the 2nd Industrial Revolution didn't arrive until the late 1800's? Because to get the industrial revolution, it's not enough to just *know* about engines or have a few of them lying around. You also need to have the metallurgical knowledge to know how to mass-produce them cheaply, to reap the network effects that transform say, American society into being entirely car-centric with a gas-station on every corner.
> It didn't take the printing press to use the Church as propaganda to keep the Leaders In Charge because God Said So.
And yet, notice that the one mention that the Printing Press *does* get in the standard historical narrative, is that it was directly responsible for the Protestant Reformation. Suddenly, you get all these christians who are actually reading the bible for *themselves* instead of hearing them through sermons or seeing the scenes displayed in mosaics. And some of them are a little... oughtistic. So when they read the holy texts for themselves, the logic doesn't add up. So they start taking things a little too seriously, and voila, half the population are religious fundamentalists. Protestant in this case, although i think the exact same thing happened with the recent case of Jihadism.
The printing press effectively broke the information monopoly of the traditional Catholic institutions. The Gutenberg press went into operation in 1450. Then a little over a century later... you get Bloody Mary burning Protestants at the stake. hmm... coincidence?
Today, we have this guy named Martin Gurri [0]. He was supposedly a CIA analyst whose job it was to monitor the internet. And after 2008, he saw... something. He wasn't sure what it was at the time. But it was a global phenomenon, and there was a lot of rage and negativity. The year he published his book (or was it the year after?), the U.S. replaced its first black president with its first orange president. Coincidence? No, the internet is the 2nd Coming of the Printing Press. Because it's cannibalizing the 1st printing press. And the internet has technically been around since, idk, 1991 in the year of Al Gore by some estimates. But it required mass adoption to begin to really reap the effects of it.
(and yes, I do also harbor suspicions about the effects of literacy on the axial age. but i'm not quite ready to defend that one.)
There are two rather obvious counterarguments to this.
First, we can pretty clearly observe long periods of democratic government without this effect. The early-mid Roman Republic (1), the US in say 1810-1850 or 1870-1930, 19th-20th century Great Britain, all massive democracies without populist dictators or anyone even remotely dictator-ish.
And I mean this at a factual level, not like a theoretical level. We have observed non-self-contradictory (2) democratic governments exist for an entire human generation without voting charismatic-populist dictatorships. These are also fairly central examples of democracy; I can imagine definitions of democracy that would exclude the US, Great Britain, and Rome but then we wouldn't be talking about democracy the way the overwhelming majority of people think of it.
Second, it's worth noting that these three governments are also literally the strongest and most powerful Western states in our history. They each dominated their world during their respective period of dominance (3). We aren't just picking the winners amongst democracies, we're picking the winners in global politics, ever, only really contested by, like, the Mongol empire at its height or the Tang dynasty or the greatest Islamic empires (4). Democracies dramatically overperform relative to all other governments.
At the metaphor level, I think you're arguing that democracies are stupid poo-poo systems that don't make any sense. It's probably better to think of them as like F1 cars: insanely high-performance when running but much more fragile and high-maintenance than we previously believed.
(1) Say the pre-Gracchi Roman Republic.
(2) How the devil am I supposed to write this?
(3) Alright, Great Britain is letting the team down a bit here.
(4) Sorry for no names, not a historical period I've done much research on.
This was a powerful argument, thank you! And I actually failed at what I think I am usually good at, thinking historically and not in present sense.
I would say these powerful democracies inherited a lot of their values from aristocracies. In the case of Rome and 19th century GB, not even merely inherited: they WERE in many senses an aristocratic republic.
But you do have a great point that elect-a-populist-charismatic-authoritarian is a fairly new phenomenon that started with Mussolini, and in fact it was a feature of new and hence not-well-established democracies.
Except that it happens again now.
Let's dwell on the concept of the aristocratic a bit more, please. I do not simply mean it as rule by the few. I mean that aristocracies tend to have values that they pass down to lower social classes: honor, duty, independence, pride, responsibility.
So when there is an aristocracy or when there was in recent memory, the middle class also behaves "aristocratically".
Okay, then what I can say is that when a democracy does not have inherited aristocratic sensibilities, they will vote for such a man.
Except wait Weimar Germany DID have inherited aristocratic sensibilities...
I don't think populist-charismatic authoritarians are a fairly new phenomenon. I think populist-charismatic authoritarianism perfectly defined Caesar, and to a certain extent his predecessors Marius and the Gracchi brothers. In a similar vein, Pericles, "First Citizen" of Athens and de facto founder of the Delian League, isn't a dictator but he's certainly a populist strongman. These are failure modes we've seen before.
For a good overview of this in the Roman context, I'd recommend "The Storm Before the Storm" by Mike Duncan (1). It goes over the late Roman Republican period with a strong emphasis on the decline of "mos maiorum", the unwritten societal rules of the period.
And while the Senate and aristocrats of this period do not cover themselves in glory, I think it's worth remembering how much change there was during this period. Rome had become an empire, and a lot of the unrest was driven by Roman legionnaires who had lost their farms, which sucks. Having said that, the old model of Roman citizen-farmers growing wheat and barley, which worked so well in the early Republican era, was never going to work in an era of cheap Egyptian and Sicilian (2) grain and borders that were hundreds of miles from Rome instead of a few dozen. That inherently required a full-time professional army, but that transition had its own risks.
In general, I think you will find better reading on this subject in historical books rather than current events, for a variety of reasons.
If you do want to look into modern aristocratic norms a bit more, the Psmiths just did a delightful review of "Class" by Paul Fussell (3), or you could read Scott's review (4). Fussell's "X" class, the "Bohemians" or "Bobos," pretty clearly evolved into our modern "woke" class. Scott's review of Brooks' "Bobos in Paradise" (5) is also a great read on changing aristocratic norms in mid-to-late 20th century America.
As for where the aristocratic norms are going and what future developments might look like, I would refer you to Tanner Greer's excellent "The Silicon Valley Canon", which sketches out most of the important writers and writings in Silicon Valley. I've heard Nate Silver's "The Village and the River" also dives into the deep cultural divide between West Coast/Silicon Valley elite norms and East Coast/Washington DC elite norms.
But the primary reason to read these is that American aristocratic culture and cultural norms have changed dramatically since the 1960s, but so has the American environment. Not only sexual and other controversial changes, but also the rapid pace of technological change (Twitter comes out in 2006 and by 2016 it arguably put Trump in the White House) and political change (we are the global hegemon). It's not clear what norms are even useful now for maintaining a high-performance democratic government.
(1) The History of Rome podcast guy
(2) I think Sicily was the other cheap grain province, correct me if wrong.
I think what Duncan is saying in many words, and Montesquieu in a few, is that a republic requires the ability to put the public good over personal gain. If that is not there, it goes to civil war and then tyranny. In that case, a mild monarchy is better.
While Duncan did not say this outright, he demonstrated it in the Revolutions Podcast, noting that by the last years of the Commonwealth, the military and the Rump Parliament were only interested in their personal gain, so everybody wanted Charles II back.
I will agree that good men are necessary for a republic but I doubt they're sufficient by themselves. Consider Caesar against his senatorial opponents like Cicero, Brutus, and Cato the Younger. Do we really have a lack of personal virtue amongst those defending the republic? Or is it the fact that the virtuous couldn't win wars?
I think this line of thought confuses personal corruption with, um, "paths to power". Like, a republic is endangered when its political elites steal from the public treasury to spend on hookers and blow, but it's far more endangered when its most ambitious and capable see the path to power as lying outside republican/democratic norms. Caesar didn't end the Republic because all the senators were partying it up in their private villas, although a lot of them were; he ended the Republic by following the precedent set down by Sulla and others that the best path to power was to be a successful general with the personal loyalty of your army and then to crush your enemies.
> Way out of what? Well, of class, of course. Becoming an X person, joining Category X, is your only way to escape! X people, Fussell tells us, are talented bohemians, independent-minded, an unmonied aristocracy drawn from all classes but rejecting all their conventions. X people just do what they like, regardless of what their class script says they “should” do. They “adopt towards cultural objects the attitude of makers, and of course critics.” They are “independent-minded, free of anxious regard for popular shibboleths, loose in carriage and demeanor.” They are self-directed, so they pursue “remote and un-commonplace knowledge—they may be fanatical about Serbo-Croatian prosody, geodes, or Northern French church vestments of the eleventh century.” So far, so good — you can probably add “weirdly into hill people” to that list.
Nah, no way that the boho class is the wokies. Just the other day, I saw some internet rando (can't remember where, probably youtube) mention that they've never seen a wokie have any interesting hobbies or interests. The blue hair and their pronouns is their entire personality.
contrariwise, I'm convinced wokism is secular Calvinism. the class of interest is the boston brahmins. Whereas Bohos sound more like hipsters who listen to obscure music or maybe artschool girls.
edit:
but the other traits seems like a pretty good fit. maybe there's an argument that the hipsters ran out of things to be hipster about, or something.
The woke are not the class X/bobos, they are the children of the bobos. I think it's helpful to remember that "Class" is published in 1983 and "Bobos in Paradise" is published in 2003, and to view these books as contemporary commentary on evolving cultural norms over a 50-year timespan.
Or imagine Bonnie the Boomer, born in 1953, prime Boomer age. When Fussell publishes "Class", Bonnie is 30 years old and she and her bohemian cohort are clearly in the ascendancy. Fussell is correctly noting the new upper/aristocratic class. When Brooks publishes "Bobos in Paradise", Bonnie is 50 with two Millennial children in their teens. Brooks isn't describing the up-and-coming class, he's describing the existing "liberal elite" of the early Bush administration. But, despite the PC culture of the 90s, it's not Bonnie the Boomer but her Millennial children who will become the woke. We're not describing solid, eternal cultural groups but rather a vague constellation of, uh...liberal-ish aristocratic cultural norms evolving over time and across generations. The hippie Boomers of the 60s sell out into Fussell's "Class X" of the 80s, who establish cultural dominance as Brooks' Bobos of the 00s, whose children become the Woke of the 2010s and on.
Now, we could stretch this back further to the late 19th century, early 20th century progressive movements, where there's a much clearer "progressive" religious movement and connect that to modern secular progressivism, which is part of where the heretical protestantism -> woke historiography comes from but that's...certainly outside the scope of modern aristocratic norms that allow high-functioning democracies.
hmm, that would make a great deal of sense, actually. still though, I wonder how the X class went from "niche interests" to "no interests" in a generation or two. also,
> but that's...certainly outside the scope of modern aristocratic norms that allow high-functioning democracies.
what's being argued here? that the current discussion is about upholding aristocratic norms, and wokies don't behave nobly?
> Indeed, democracy is robust in this sense in that if the majority do not like the leader, they can vote them out--as Venezuela was about to do with Maduro until he essentially shut down elections.
I don't know why you would use that as an example of democracies being robust. If it can be subverted that easily, that's a point against robustness.
> Democracies are robust in that, bar systemic changes, they do well with bouts of bad leaders
But if it can't avoid systemic changes, that's not an improvement. It doesn't matter if it theoretically has higher leader turnover if it just ends up as a dictatorship/oligarchy anyways.
Charismatic-populist leaders winning elections is a failure mode of democracy that tends to happen when a large fraction of the electorate has lost confidence in the institutions that produce leaders.
The normal historical mode of elections is that they function as a mechanism for recognizing "natural" leaders. What counts as a "natural" leader is very culture dependent, but is usually tied to already being the leader of something else and being seen as successful in it in such a way that people feel they owe you favors (which they repay by voting for you and urging others to do so) or see the results and want more of it. This something else might be a patronage network, a feudal estate, an appointed civil or military office, a business or guild, a civic or advocacy organization, or a lower elected office.
Charismatic populists getting elected to high office thus tend to be downstream of voters looking at the "natural" candidates and being fed up with the lot of them. Sometimes it works as intended, with the populist rising to the occasion and actually changing things for the better (which still counts as democracy working). Sometimes they bumble around and mostly fail but don't really break the system worse than it already was. And sometimes they "succeed" and catastrophically change the system for the worse.
A key ingredient for the last bit seems to be that the potential veto points in the system that could constrain a would-be tyrant (including the voters themselves) are dominated by people who don't put a high priority on preserving democratic institutions. And if that's the case, then that's a dangerous situation even if the leader comes from a more conventional channel.
"What counts as a "natural" leader is very culture dependent, but is usually tied to already being the leader of something else and being seen as successful in it"
The kicker is, Trump actually fits that bill. Construction biz, TV show etc.
Other authoritarian populists do not; for example, Viktor Orbán never had a job or business outside politics. He was running his political party straight out of college at 28ish, and that is all he ever did.
"Charismatic populist getting elected to high office thus tend to be downstream of voters looking at candidates who come from the "natural" candidates and being fed up with the lot of them."
I would say, it is being fed up with the political-bureaucratic-expert class, not other kinds of "natural" leaders like businesspeople.
Erica, I appreciate your thoughts, but maybe you could work out the difference between "natural" leaders and political elites more precisely. I guess your positive example would be Obama: community organizer -> senator -> president, and it is a good example, but big-business boss + TV celeb -> president would also be a good example of "natural" leadership.
Yes, he does fit the bill, at least in broad strokes. The same thought occurred to me while writing my comment.
I think part of the distinction you're reaching for is that in the US, it's very rare for wealthy businessmen to go directly for the Presidency. They try semi-frequently, but it's usually treated as a vanity campaign, like Tom Steyer in 2020 or Carly Fiorina in 2016, and only barely taken seriously. When wealthy businessmen have run for the Presidency and been treated as serious contenders they've usually already held mid-level office (Mike Bloomberg, Mitt Romney, Nelson Rockefeller) or been nationally well-known as political activists (Ross Perot). Steve Forbes was probably on the boundary between "vanity candidate" and "already a nationally well-known activist". Notably, none of my examples here won the general election, and only Romney (who was a two-term governor of a medium-sized state) won the nomination.
On the other hand, business leaders do run directly for Congress, Mayor, Governor, or Senator with little other political background all the time. The Roman Republic had the concept of the Cursus Honorum, a series of elected offices where you're expected to have already held one office before being seriously considered a candidate for the next. In the early-to-middle Republic, this was customary and generally followed but not a formal requirement. In the late Republic, it started to break down; Sulla tried to reimpose it by codifying it as part of his constitutional reforms, but these were more honored in the breach than the observance. American political culture doesn't have as formal a concept, but I think there is an understanding that people who have held certain positions are more qualified to hold higher offices than people who haven't. Some step-skipping happens fairly often, but skipping several steps at once tends to be seen as an aberration.
Off the top of my head, the usual American Cursus Honorum tends to follow this pattern:
Level 1: Mayor, state legislator, district attorney, municipal office
Level 2: House of Representatives or senior state-level cabinet office (treasurer, secretary of state, attorney general, insurance commissioner, or lieutenant governor)
Level 3: Senator, Governor, Speaker of the House, or federal cabinet secretary
Level 3.5: Vice President or Secretary of State.
Level 4: President
So for 20th and 21st century major-party Presidential nominees apart from Trump, the path as of when first nominated has been:
- Harris: District Attorney (1), state AG (2), Senator (3), VP (3.5)
- Biden: County Councillor (1), Senator (3), VP (3.5)
- H. Clinton: Senator (3), Secretary of State (3.5)
- Romney: Governor (3)
- McCain: Congressman (2), Senator (3)
- Obama: State Senator (1), US Senator (3)
- Kerry: Lieutenant Governor (2), Senator (3)
- Bush the Younger: Governor (3)
- Gore: Congressman (2), Senator (3), VP (3.5)
- Dole: County Attorney (1), Congressman (2), Senator (3)
- W. Clinton: State Attorney General (2), Governor (3)
- Bush the Elder: CIA Director (3), VP (3.5)
- Dukakis: Congressman (2), Governor (3)
- Mondale: State AG (2), Senator (3), VP (3.5)
- Reagan: Governor (3)
- Carter: State Senator (1), Governor (3)
- Ford: Congressman (2), VP (3.5), President (4)
- McGovern: Congressman (2), Senator (3)
- Nixon: Congressman (2), Senator (3), VP (3.5)
- Humphrey: Mayor (1), Senator (3), VP (3.5)
- Goldwater: City Councillor (1), Senator (3)
- LBJ: Congressman (2), Senator (3), VP (3.5), President (4)
- JFK: Congressman (2), Senator (3)
- Stevenson: Governor (3), UN Ambassador (3)
- Eisenhower: ---
- Truman: State Judge (1), Senator (3), VP (3.5), President (4)
- Dewey: District Attorney (1), Governor (3)
- FDR: State Senator (1), Assistant Secretary of the Navy (2.5?), Governor (3)
- Willkie: ---
- Landon: Governor (3)
- Hoover: Secretary of Commerce (3)
- Smith: Sheriff (1), Alderman (1), Governor (3)
- Coolidge: State Senator (1), Lieutenant Governor (2), Governor (3), VP (3.5), President (4)
- Davis: Congressman (2), Solicitor General (3)
- Harding: Lieutenant Governor (2), Senator (3)
- Cox: Congressman (2), Senator (3)
- Hughes: Governor (3), SCOTUS Justice (3?)
- Wilson: Governor (3)
- Taft: Territorial Governor (2.5?), Secretary of War (3)
- Bryan: Congressman (2)
- Parker: State judge (1)
- TR: Assistant Secretary of the Navy (2.5?), Governor (3), VP (3.5), President (4)
- McKinley: Congressman (2), Governor (3)
More generally, skipping one of the lower levels is fairly common. Skipping both of the lower levels or skipping directly from level 2 to 3.5 or 4 somewhat less common but far from unheard of. Skipping all of the lower levels and going directly to level 4, or skipping from level 1 to level 4, is very rare with one big class of exceptions: being a senior wartime General seems to put you straight to Level 3.5 with no expectation of holding levels 1-3.
That only happened once for an actual President in the 20th century (Eisenhower), but at least one other (Wesley Clark) has tried and was taken seriously, and several people have been seriously discussed as candidates but declined to run: Colin Powell, Douglas MacArthur, and Jack Pershing are the main ones I can think of off the top of my head. Also, several pre-20th-century Presidents were generals with no other political experience: Washington, Jackson, Taylor, and Grant. Plus two Presidents who were both generals and mid-level politicians: Garfield and W. H. Harrison.
Since 1900, the only nominees who didn't hold at least a level-2 position before first nomination were Trump, Eisenhower, Willkie, and Parker. Willkie is pretty similar in profile to Trump (or Perot, for that matter): a wealthy celebrity businessman with an interest in politics.
----
Obama is an interesting example. He did check most of the conventional boxes and isn't an outlier in skipping level 2, but he did kinda speedrun level 3 by serving less than a full term in the Senate before running for the Presidency. We remember him as a conventional establishment politician now, but that's from the perspective of him having been President for eight years and an elder statesman for almost a decade since then. When he first ran for the Presidency, he ran as a charismatic outsider who was offering himself as an alternative to the political-bureaucratic-expert class; this was somewhat disingenuous (apart from the "charismatic" part, which even his detractors generally grant him), as he was still a career politician (albeit a more junior one than most major party nominees) and was well-connected with a great many establishment political types.
That’s a known flaw of democracy. Some of the ways to get around this are Parliaments and other institutions of representative democracy, independent courts, etc. None of it is perfect!
"The reason for this is that it is not possible for millions of people to actually participate in politics and wield power."
Counterpoint: voting in a democracy is as much about aggregating information as it is about having people "wield power." The higher the fraction of a population that votes, the better job the voting process does at capturing relevant information.
Corollary: most modern democratic systems are very bad at their jobs because each vote extracts very little information, and mixes together information in non-helpful ways (i.e. "how has [law/practice/policy] affected you" gets conflated with "what are your overall vibes of [party/candidate/movement]).
Viewed through this lens, it's obvious that democratic systems aren't all that great at pre-selecting good courses of action by voting, but do EXTREMELY important work in DE-selecting courses of action that are currently being implemented and have widespread negative impacts. The tendency to ignore huge negative impacts of policies because they aren't a problem *for the people in charge specifically* is one of the MAJOR weaknesses of all "government by the few" systems, and the primary reason they often undergo violent collapse.
It is ironic then that most real-world 2025 "democracies" are government by the few. Like if a judge decides the constitution says this or the human rights treaties say that, the people are powerless about it.
I don't live in the US but I watched the afterlife of the Obergefell SCOTUS with amazement online. Conservatives just stopped arguing against same-sex marriage, they all felt like once the Kings have spoken, the case is lost.
And how strange that Yarvin predicted this around 2008, saying that if one day the SCOTUS decides The President must make all his speeches standing on his head, there is absolutely nothing anyone can do against it. And what can be more kingly?
We have the same thing in the old world. The courts decide that a guy from Zaire who was convicted here of pedophilia must be given asylum not despite that he is a pedophile but precisely because of it: because it is likely that they kill pedophiles in Zaire and that violates human rights. The people absolutely do not want this, but are powerless.
Having thought about it further, I think Obergefell is actually a really excellent case study on what the courts do, and how that interacts with (in a pretty necessary way) the more directly-democratic parts of the U.S. system (and many others like it).
In a democracy, you could very crudely model laws as being derived from opinion polls asking the voting public "do you think X should be allowed?" The public forms an opinion, democracy happens, and a law is put on the books saying either "doing X is explicitly forbidden" or "the freedom to do X is protected by law in the following ways."
Given the wide, wide variety of possible choices of X, and given that language is an imperfect tool, it's inevitable that a nation that passes very many such laws will discover some of them contradict each other: the law both disallows Y and disallows the disallowing of Y for certain Ys (which are quite often *subsets* of the original X rather than entire Xs). To have a coherent body of law, these conflicts need to be resolved *somehow.*
Of course, the best way to do this is simply not to write those conflicts into law in the first place. This is one of the big reasons that basically all modern democracies are "representative democracies;" understanding the existing laws well enough to write new laws in ways that hew as closely as possible to the original public will is a tricky job that certainly requires people specializing in it[1].
But no matter how well they do that, the complexity of the problem makes it inevitable that conflicts between laws will sometimes exist anyway. So you still need a resolution procedure. One could try a very simple procedure like "newer instructions always supersede older instructions in the case of a conflict." But anyone who's worked with computer programs before can probably intuit how many ways that could produce unexpected and undesired results in situations as complex as you find in the real world. Another possibility is to send the situation back to the public for a vote any time a conflict is discovered, but that is very unwieldy. And even if you go with a different solution, you're still going to need some formalized procedure for noticing and reporting that a conflict in the law exists[2].
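The failure mode of that naive "newest rule wins" procedure is easy to see in a few lines of toy Python (all the rules and names here are hypothetical illustrations, not a claim about any real legal system): a broad new rule silently overrides an older, more specific protection that nobody meant to repeal.

```python
# A toy "body of law": each rule is (name, predicate, verdict),
# appended in the order the rules were enacted.
rules = []

def enact(name, predicate, verdict):
    rules.append((name, predicate, verdict))

def resolve(case):
    """Apply the naive 'newer supersedes older' procedure:
    the verdict of the newest matching rule wins."""
    verdict = None
    for name, predicate, v in rules:
        if predicate(case):
            verdict = v  # later rules silently override earlier ones
    return verdict

# An early, specific protection:
enact("protect-street-performers",
      lambda c: c["activity"] == "performing", "allowed")

# A later, broad rule written without the earlier one in mind:
enact("ban-noise-downtown",
      lambda c: c["district"] == "downtown", "forbidden")

# The broad new rule swallows the old protection downtown,
# even though nobody deliberately repealed it:
print(resolve({"activity": "performing", "district": "downtown"}))
# prints: forbidden
```

A human judge asked about this case would probably notice the conflict and weigh the specific protection against the general ban; the mechanical "last rule wins" procedure never even notices there was a conflict to resolve.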
Judges are a pretty reasonable solution to these problems. They are professional law-conflict-resolvers; since laws can be complex and there are a lot of them, it's useful to have people with dedicated training doing this job. They also effectively serve as the official point at which conflicts in the law are flagged for public review: a judge ruling on it gives the voters a direct and explicit answer on where, how and why two existing laws can be regarded to conflict, so that they can fix it (if they don't like the judge's interim resolution). It's not *impossible* to have a complex, democratic nation without them, but it would probably be pretty tricky.
OK, so that's a lot of words before even getting to Obergefell. But with that framework in hand, it's actually really simple. Voters in many different states had been asked (in regards to their specific state) the question "should marriage be allowed between consenting adults of the same sex." Some had answered "yes" while others had answered "no." In itself that is not an issue. The issue happened when somebody discovered a possible conflict between the laws that had registered a "no" and a pre-existing law. Construed loosely (of course IANAL so take with a grain or three of salt) the one law said "if you are a man (woman) you are not allowed to marry a man (woman)." The other law said "laws must apply to everyone equally; specifically they are not allowed to apply differently to people solely on the basis of [list of things of which "sex" was one]." The judges thought about this carefully and said "Aha, see! Clearly saying 'if you are a man you can't marry a man' is applying the law differently based on sex. That law wouldn't restrict a woman from marrying a man, so it can't restrict a man from marrying a man." Of course, there are different ways to parse that, and others (including some judges) disagreed, but that's what formal conflict resolution procedures are for. One law conflicted with another law, and the judges pointed it out and produced a resolution.
Now, the other very important feature of this story is about WHICH of the two laws won. See, the second law wasn't just any law; the U.S. system has a special category for the *highest* law, which automatically supersedes other laws in the case of a conflict: I'm talking of course about the Constitution. And the law that included the equal-protection clause was part of the Constitution. As part of its special status, it is significantly harder to change (requires more democratic buy-in) than lesser laws. So when the SCOTUS ruled that no, in fact, you cannot make gay marriage illegal just by passing a law about it, they were *protecting* the results of the democratic process, by ensuring that a law with MORE democratic support won out in a place where it conflicted with a law with LESS. Claiming that doing so is anti-democratic just because some fraction of the population didn't like it is quite silly.
The moral of the story--as, of course, should be the moral of all stories--is that Yarvin is full of shit. The SCOTUS could not rule that the president must give speeches standing on his head, because there is *definitely no conflict of laws* whose resolution would include that determination. Obviously. The implicit motte-and-bailey here is that, well, the SCOTUS could just go crazy and stop even pretending to do its job. This is an ASTOUNDINGLY stupid point for a monarchist to gesture at, because Yarvin's entire philosophy of government *specifically* involves centralizing political authority and removing guardrails and making the "what if someone involved goes crazy and stops doing their job well[3]" problem ENORMOUSLY worse. This exact kind of thing is why I've always found Yarvin interesting as a writer, but never been able to take him remotely seriously as a political thinker.
[1] The other big reason is that when voters say "we want X" at the ballot box, X is often going to be fuzzy and imprecise (it is, after all, spread out across the brains of millions of people) and maybe not directly practical to realize in law. Representatives can (at least in theory) figure out the best laws to write to get voters as much of the *spirit of X* as possible, without running afoul of the constraints of existing law or, y'know, fundamental reality.
[2] If this last part isn't clear, imagine being stopped by a police officer. The officer says "you just did Y, which the law says is illegal." And you say "wait, but this other law says that I can do things which are Z, which this is, I genuinely didn't think it was illegal." A system could just empower the police officer to be a dictator in this case--whatever they say in the moment goes--but this has REALLY dangerous consequences and isn't exactly solving the "unelected official wielding massive authority" problem. So somehow we'd need a way to get this interaction promoted from the level of "local dispute between officer and individual" to "something that the system can apply its resolution procedure to."
[3] Hypothetical now available in dynamic, full-color, yes-its-actually-happening form. Thanks, 2025!
This is quite a substantial tangent that doesn't really interact with my point at all--and indeed largely ignores it. Yes, countries need judicial systems. Yes, the key officials of judicial systems are going to be among the limited group of people wielding disproportionate power, unavoidably, because of the nature of what judicial systems have to do. [1] All of that is (perhaps) germane to your original point about the wielding of power but really *not at all* about my point about democracy as an information-aggregation-system.
Judges in the U.S. system are QUITE DELIBERATELY insulated from the results of direct, popular votes. And when you look at the system in terms of information aggregation, it is really easy to see why. There are two types of information that matter in a courtroom: information about the facts of the specific case, and information about the contents of the law. As it happens, *neither* of those types of information is distributed throughout the population in the way that information about, say, how a recently-implemented economic policy is working in practice is. The only thing a judge could learn from a vote in most cases is what the *popular perception* of the law is[2]. And modern states are supposed to follow the ACTUAL LAW, not whatever the public *thinks the law is today.*
" The people absolutely do not want this, but are powerless."
Case in point. That is literally the WHOLE DAMN POINT of having laws. If the only relevant question was "what do the people want today" why would you ever need to write down or even conceptualize a notion of "law?"
What do you think a law IS, Ogre? It's a rule or set of rules that the state writes down and promises to enforce on people *even when they would prefer that not happen.* Now, if the people in a democratic polity decide after sober reflection that their definition of asylum routinely forces them to do or tolerate things they don't want to do or tolerate--and whether that is broadly true is a question of distributed information--then they are, of course, perfectly free to vote in ways that will get the law changed to be closer to their preferences. They just don't get to make that decision case-by-case, on the fly.
And that's exactly the answer to all these conservative complaints about "unelected judges." If you don't like the way judges are interpreting the instructions that your elected officials left for them, you are PERFECTLY FREE to tell your elected officials to go back and leave clearer instructions. Yarvin was not quite as full of shit in 2008 as he is in 2025, but he was still pretty full of shit then. OF COURSE there was something the people could do about it. They could get a law passed that said "no, obviously the president is not required to give all speeches standing on his head." If the judges rule that law to be unconstitutional, the people can pass a constitutional amendment that says it. If the judges don't respect THAT because they've been possessed by alien mind-control parasites, they can pass a constitutional amendment dissolving the supreme court[3] and replacing it with something else. Funny thing about the U.S. system, it includes a provision for legally doing literally anything at all *provided that thing has enough popular support*.
And Yarvin, of course, knows that and has always known that. He's a scummy, bad-faith troll, and no reasonable person should repeat his arguments. This whole conservative whining about "unelected judges" has always really just been whining about conservatives not being able to do literally whatever they want when they win one election, because that was *never, ever* how the system was supposed to work. I'll give you a pass for this one, not being American. But all those American conservatives who wave the flag and shout about the greatness of the country in the same breath as they spit on its founding principles are beneath contempt.
[1] Though fun fact, it's quite possible to design a judicial system that *doesn't* concentrate power like this. I was recently reading about the ancient Athenian system over on an old ACOUP post. No judges, no lawyers, massive juries. Doesn't seem very workable in a modern society though.
[2] Though anyone who has spent any time around humans (especially on social media) should be well aware how many people *don't actually care* what the law says when it conflicts with their feelings on a case.
[3] Or, y'know, use the impeachment procedure that already exists to remove the justices in favor of non-parasitized jurists.
You are conflating limited government with "government by the few." Not remotely the same thing. Supreme Court justices, for example, cannot pass laws.
> Supreme Court justices, for example, cannot pass laws.
If they can interpret laws to mean whatever, is there a practical difference? This is a lot like saying the administrative agencies make regulations, not laws.
?? Of course there is a practical difference; they can't interpret a law that does not exist. Moreover, if Congress does not like how the Court interprets a law, they can amend it.
And, let's get real: Congress passes hundreds of statutes per year; the Supreme Court decides maybe 70 cases per year (and not all of those involve federal statutes).
Nothing ironic in this. Minorities rule and have always ruled. And the greater the population, the smaller the ruling element. So said Mosca in his theory of the ruling class.
> The people absolutely do not want this, but are powerless.
Shouldn't this administration be proof that the people aren't powerless? The one thing democracy actually does in practice is make it extremely easy for people to organize populist revolutions.
The US Constitution can be changed to overrule the SCOTUS and we used to do that. The 11th, 13th, 14th, 16th, 19th and 26th amendments all reversed court rulings wholly or in part.
I've read a number of arguments for why we've collectively given up on that option, none of which were very persuasive. Whatever the reason(s) it's a tragic mistake which the Framers of our constitutional system would be very disappointed with.
Ouch. And some elections are like an equivalent of the toddler screaming and throwing the plate on the floor. Even if it's not going to help the toddler feel any better, beyond the brief moment of the joy when the plate shatters.
Certain sites are having a lot of fun with another extract from Kamala Harris' book where she makes the comparison of Trump with a Communist dictator.
Now, being charitable, I know that she was trying to get at cults of personality and the modern examples of that being communists (e.g. North Korea) but yeah, it lends itself to a lot of mockery: Trump is a Communist? the Democrats are the party of the oligarchs who will defend democracy?
"Harris wrote in her memoir “107 Days,” which was released Tuesday, that she predicted how Trump would act in a second term, but she didn’t expect the level of capitulation from the private sector toward him.
She was pressed on MSNBC’s “The Rachel Maddow Show” about why she didn’t anticipate such action and responded that she believed “titans of industry would be guardrails for our democracy.”
“And one by one by one, they have been silent, they have been … feckless,” Harris said. “It’s not like they’re going to lose their yacht or their house in the Hamptons.”
“Democracy sustains capitalism. Capitalism thrives in a democracy. And, right now, we are dealing with, as I called him at my speech on the Ellipse, a tyrant,” she said, referencing her rally last year on the White House Ellipse in Washington. “We used to compare the strength of our democracy to communist dictators. That’s what we’re dealing with right now in Donald Trump. And these titans of industry are not speaking up,”
Vote Democrat - the party of *real* billionaires! 😁
Democracy is a self-contradiction only if you conflate theory with practice. Democracy as theory describes a concept on which you can model an actual, practical state (also called democracy, or republic, etc.) made of actual people, with all the amendments to the process that lets the real thing scale while mostly preserving the core ideas laid out in theory.
Most people are followers. They would lead only in a few (mostly personal) contexts, and would rather stay away from political initiative. Even political activists are mostly just pushing a course set by big names or ideas, enlightened figures or educational institutions. Democracy is a feel-good demo layer of some emotive public engagement. Human minds are meant to be swayed by Cialdini's factors of subconscious influence: reciprocity, authority, liking (familiarity and flattery), scarcity, social proof, group solidarity, and commitment (with consistency).
All right Warhammer 40K fans, it's time for the annual Chaos pot-luck. What are you bringing, and what Chaos god are you dedicating the dish to?
My take on Slaanesh is less about sex and more about a certain careless excess. So I'm bringing Dom Perignon by the case, and making mimosas with it. #YOLO #becauseImworthit
All I'm seeing is a bunch of sane people who haven't fully embraced the spirit of Chaos. Bringing something easily edible without preparation or acts of devoted worship is so Imperial. I'm going to do something different. Something better. Something more fitting for the worship of the dark gods. Something Khorne would actually enjoy. I'm bringing a live wild boar, pumped full of rage-inducing stimulants and abominably treated to create an all-consuming rage to be released on the dining table. I will then battle the boar, using nothing but a singular knife to stab the beast to death or die trying. No matter the victor, the room will be splattered full of blood and gore, shattering the plates and upturning the table and chairs in a spectacle befitting a true Khornate sacrifice. Then, after the guests violently froth and chant for 88.8 seconds in worship of the Skull Throne, they shall lunge like savage beasts to feast upon the boar or my own flesh raw in a feat of senseless, primal carnage, all the while gurgling "Blood for the Blood God, Skulls for the Skull Throne!"
In all seriousness, I'd probably bring a couple racks of baby back ribs dedicated to Khorne. Seems simple, delicious and feasible.
I'm not actually manly enough to bring a dish befitting Khorne, which I'm pretty sure is wild elk steaks or buffalo burgers from an animal you've personally hunted.
I am not bringing anything to a potluck that Nurgle would approve of.
For Tzeentch, I'm bringing a few pounds of Bertie Bott's Every Flavour Beans (1). It's just...right on so many levels.
For Khorne, it's easy. Black pudding (it's got blood in!) There are various versions throughout the British Isles (which include Ireland); this one is a fancy Scottish cook's version:
But really alcohol is more Nurgle than Slaanesh. It's a product of fermentation (a polite word for rot) that leads to all sorts of spewing and disease.
Maybe alcohol is the equal-opportunity Chaos drug. It gets people fighting mad, impairs their judgements, makes them ill, and leads to YOLO sexytimes. There's something there for each of the gods.
I haven't followed the Kirk murder aftermath a whole lot, and I haven't read all the threads on it around here, so the following may have been already addressed.
This is a genuine question, not one I think I already know the answer to. Apparently Ted Cruz and some other prominent US conservatives have been condemning parts of the government's implicit crackdown on speech. My question is whether any progressives of similar status and influence as Cruz and the relevant right-wing thought leaders have similarly condemned progressive suppression of speech (explicit or implicit) at moments of peak left cancel culture. *Especially* at a similar sensitive moment e.g. right after George Floyd, during the height of Covid, during the British anti-migrant riots last year.
I'm trying to get a sense of whether US conservatives are better people, worse people, or no different than US progressives and British progressives. And this seems one of the clearest metrics that I could possibly use to resolve that question.
A bunch of liberals kind-of fell out with a lot of the progressive movement because of pushing back on cancel culture during its height. Steve Pinker, Nicholas Christakis, Sam Harris, and Matt Yglesias are all examples off the top of my head.
The ACLU is almost too obvious to mention. It's pretty left wing but has fought for (and won) for some extremely not left-wing groups. E.g., in 2012 they sued on behalf of the KKK. While they have been more recently criticized for lessening this universal pro free-speech stance, in response they compiled a list of 2017-2024 instances where they defended speech they don't support: https://www.aclu.org/news/civil-liberties/defending-speech-we-hate I think it's a pretty compelling list, though maybe a list of what they didn't support would be more telling.
The right-wing analog to this is more FIRE than Ted Cruz and it's worth pointing out that FIRE has done a good job of supporting left-wing free speech now that the situation has reversed.
I'll also say that I think a lot of discussion along this line is disguised attempts to justify further crackdowns from one's own side as revenge. (Why ask which side is better instead of just trying to make both sides better? They're both clearly doing poorly.)
When I worked at FIRE, there were National Lawyers Guild members who worked there. As well as Republicans, libertarians, etc. It simply is not right-wing, or left-wing.
As a former ACLU donor who has been quite happy with becoming a FIRE one, I agree both that the ACLU's list above is pretty good and that I also want to know what all they declined to take action about during that period.
There was no mass wave of firings and witch hunts against random observers for comments about Floyd, and Covid is obviously a different scenario, where misinformation or rebellion can be actually dangerous.
By the way, if you were actually serious and good faith about wanting to determine which category of people tend to be better, is this really the best possible metric? Does their ideology not inform an important difference already?
Nitpick: I recall quite a few stories about people losing jobs over rude comments about Floyd after his death. I would have to search for examples but I remember hearing about them.
Misinformation can be dangerous in many cases. But it's worth noting that the official information was often wrong during covid. Mostly this was just because it was a new virus and people didn't know what was going on, especially early on[1]. Some of that was also administrative stuff, like not wanting to formally say covid was airborne because that would trigger a bunch of expensive required safety measures. Some was political, as with the handling of the lab leak claims.
Policing misinformation in that context sometimes involved suppressing correct criticisms or defensible disagreements among experts.
[1] I think people had a hard time with this largely because most people, including politicians and journalists, don't really get how science works and just expect science to be some kind of truth oracle, instead of a "well, the best we can tell right now is X, but maybe Y or maybe Z."
Can you disagree with anything factually? Covid was a public health emergency; it killed millions of people. It is much worse to be wrong about it than to make fun of a dead podcaster. How does the difference not justify different actions?
*Covid* killed millions of people, but *talking about covid* did not. And that's the reference case if you're contrasting with "making fun of a dead podcaster". It's all talk.
And if you want to call it "misinformation" and invoke second- or third-order effects where one person talking results in some other person dying, both sides get to play that game.
You're simply opining on which censorship you prefer, and if you want to pretend that's a factual disagreement, sure: it's objectively orders of magnitude worse for people I like to be censored than people I don't, regardless of the topic.
Millions of people died from covid. No riot has come close to doing that even when you factor in any kind of stress caused by lost property, and cherry picking (and then possibly misinterpreting) the statement of one bloke will not change facts.
The stakes in the covid misinformation thing aren't the total number who died from covid, but rather, plausibly, the number of additional deaths due to misinformation. Most covid deaths had nothing to do with misinformation about covid online; they were just people who got exposed to covid and ended up getting very ill from it.
Before the vaccine was available, I doubt there were many additional deaths due to misinformation in the sense of "information contradicting public health guidance." People ignored the lockdowns (where they happened) mainly because they didn't wanna, not because they had been convinced that there was no such thing as covid or something. The masking guidance we got was pretty bad, since it required cloth masks but not N95/KN95 masks with a good fit that could actually protect you--people who read "misinformation" saying the cloth masks were not much good were probably better informed.
After the vaccine became available, misinformation probably led to additional deaths by convincing high-risk people not to get vaccinated. Their body, their choice, but if you were a 60 year old 300 lb diabetic who didn't get vaccinated because you thought the vaccine was too risky, you were making a pretty bad decision. OTOH, it's worth considering how the suppression of those claims that the vaccine was dangerous would work out in general.
As best I can tell, the worst things we did during covid were sending covid patients back to nursing homes (like tossing a lit match into a haystack) and long-term school closures, which slowed transmission but also really screwed over a lot of kids. Neither of those were due to misinformation.
The best thing we did was get the vaccines and treatments out quickly. Its availability wasn't affected by misinformation, but its takeup was to some extent. And probably there were some people who tried dumb things to treat it instead of the actually useful stuff we ended up with (Paxlovid, for example) and died as a result--some of that was misinformation, some was simply not knowing about it or not realizing you needed to start it early instead of waiting until you were very ill.
I agree, but it's worth mentioning that the government's attempts at manipulating the public, both by exaggerating the dangers of covid and the efficacy of the vaccine and by trying to censor misinformation, caused a very predictable backlash against vaccines, which killed a lot of people during covid and will kill more through increased vaccine hesitancy generally.
One could argue that public health people aren't responsible for those knock-on effects and the backlash, but that is only a defense if they were committed to telling the truth: not when they are dishonestly trying to manipulate the public. They played stupid games and we all won stupid prizes.
When the establishment flip flopped on public gatherings for BLM, and tried to silence anyone who disagreed, it destroyed the public faith in NPIs and the people of "science" advocating for them. If you think that the NPIs were important and could have saved lives, the "censorship" around Floyd and BLM killed many people.
Was this ever guidance from public health agencies? I recall the open letter from a bunch of public health authorities, and the signers did indeed burn their credibility in order to support their preferred politics, but I don't recall that being anything from, say, CDC or state health departments. But I'm happy to be corrected.
In an effort to be somewhat helpful, I'll add that if you really do want to make an apples-to-apples comparison of this sort, you need to pick a single, specific, well-known event in which a left-leaning government actually did crack down on speech in a very public way.
The example that leaps to mind for me is the Ottawa Convoy protest. Sadly it's not a *perfect* comparison, both because it happened in Canada not the U.S. and because a lot of the sticking point was over things that went rather beyond mere speech, like threats and assaults of residents[1]. I don't specifically remember hearing about how any left-leaning U.S. officials responded to Trudeau's heavy-handed crackdown, so I couldn't say off the top of my head whether they were appropriately outraged, inappropriately supportive, or merely indifferent. But it would be a fair metric on which to judge them.
[1] Which to be clear, were presumably carried out by *individual* protestors, and shouldn't negate the rights of other people who were at the protests and not doing those things.
Much as I hate giving the nod to Wimbli, they got the point. Street protests are absolutely a form of speech. Historically they're one of the most important forms.
They're also much trickier to navigate because they blend speech with physical action (if nothing else, being present in a specific place) in ways that are hard to disentangle. So too with this one: the Trudeau government weaponized the financial system to shut down the protests, curtailing Canadians' free speech in the process. But on the flip side, a large part of the government's issue with the protests seems to be the things that were happening alongside them that pushed beyond the bounds of mere speech: things like harassment, threats and occasionally assaults of local residents. It wasn't a simple situation (but I ultimately think the Trudeau administration was more wrong than right).
I wasn’t arguing with the speech part, I was arguing with the “threat” part.
Regarding the Freedom Convoy, it was a mistake to launch an open-ended protest in the capital and let it go on so long. Any state will eventually take action if you give them enough time and provocation.
Fair enough. I do not think highly at all of the convoy protestors, and don't think the government was unreasonable for acting to curtail their more disruptive activities. And in cases where any specific protestors were breaking existing laws (in this or any protest), arresting them on those grounds is quite reasonable.
My issue here is pretty much solely with debanking as a means to apply pressure to end a protest. For ordinary citizens of ordinary means, with ordinary financial obligations, it's quite a heavy bludgeon to wield. It seems reasonable to use against *organizations* operating illegally, against dangerous fugitives at large, or in response to specific crimes that revolve around the use of those bank accounts (e.g. financial crimes or contracting illegal services). It does not seem reasonable to wield as a means to apply pressure against individuals outside those limits, even if the government believes them guilty of other crimes. If the government has substantial evidence of other crimes those individuals have committed, it can arrest them on those grounds; it doesn't need to debank them.
Given all that, the fact that at least *part* of what the government was accomplishing with debanking in this case was ending a protest makes it a freedom of speech issue (albeit a pretty noncentral one). But I think the noncentral nature of it and the fact that the targets were enormously unsympathetic make it MORE important to speak up against it, not less. Those things make it a perfect candidate to be the thin end of the wedge in normalizing applying such pressure more often and more widely.[1] I note with satisfaction that it has since been ruled unconstitutional.
[1] Of course, sitting here in 2025 it feels pretty ridiculous to worry about the Liberal government waxing authoritarian when we've got the *gestures at everything* happening right next door in the U.S. But just because our neighbors are busily blowing up our sense of what "normal" levels of government coercion look like doesn't actually mean we should change our standards.
On a related note, the fact that the convoy protests were heavily funded by U.S. money does serve as somewhat of a mitigating factor, in my eyes. The inescapable fact of living next door to a larger, richer neighbor with a long history of meddling in other nations' affairs certainly invites some wariness any time they try to push their politics across the border, especially by throwing cash at the problem.
"And this seems one of the clearest metrics that I could possibly use to resolve that question."
Really? Not the reaction to the murder itself? There has already been LOADS of ink spilled about the difference between high-profile progressives' reaction to this murder and high-profile Republicans' reaction to the Hortman murders.
This thing you have chosen is actually a really crappy metric for a couple of reasons:
1. This crackdown started at a very specific time in response to a very specific event, which makes coordinating to speak out easier.
2. This crackdown was not only (as Anon mentioned) spearheaded by the government, it was led by *one specific leader*. Having a clear target to speak out against ALSO makes coordination easier.
3. The actual mechanisms used to "crack down" are quite different. Different people may have different standards for what methods are and aren't beyond the pale when it comes to things like this.
Basically, to me it looks like you've decided to watch US conservatives swing at a slow toss right over the plate, and then go find a fastball from recent history pitched past the other team in order to argue which is the better batter. I highly doubt that somebody who didn't know what answer they wanted to see would come across this particular algorithm by chance.
There was a ton of criticism of the cancelling of David Shor when it happened in 2020, but Tom Wheeler wasn't going on MSNBC in 2020 and talking about how he was going to threaten local cable affiliates into refusing to carry Fox News if they carried critical coverage of Black Lives Matter.
Well first of all, I mentioned much stronger government crackdowns like in Britain, which no one has addressed so far. I'd be perfectly satisfied with a conclusion like "US progressives are better or no worse people than US conservatives, British progressives are vastly worse people than both groups".
Second, I'm not sure there actually is, in itself. Obviously, jailing or fining someone or anything else that involves violence-backed coercion (the methods the government tends to use) is vastly worse than mere economic pressure from a private entity or a vast collusion of private entities. But is economic pressure from the government worse than economic pressure from a collusion of private entities? You can argue it's worse because it has more power, but you can also argue it's better because it's democratically elected, and subject to democratic removal.

In the vast majority of progressive cancellations, there was no vote where any appreciable subset of the population agreed that "these opinions are from now on no longer allowed," or even that "these are the people who get to decide which opinions are no longer allowed." Instead, the power was entirely exercised either by those who happened to have media/academic/business connections, or by those who were the most brazen, loud, threatening, and openly unwilling to ever cooperate or tolerate or compromise with anyone different from them. And as a result, many of the cancellable opinions were ones actually held by the majority of people! And if the majority of people didn't like the cancellations, there was...nothing they could really do, aside from "build up your own media and business and academic ecosystem over decades and hope it doesn't get captured by the interests that control the existing ones," which history shows is a total pipe dream. Compared to that, being able to straightforwardly vote out the canceller in a few years is such a vast improvement that it's hard for me to endorse your statement at all.
But if you really want a direct American, government-backed cancellation equivalent, then something like the Obama Education Department college directives about banning "offensive" speech would seem to qualify (I'm actually of the opinion that a huge amount of broader cancel culture is directly downstream of that government action; it certainly fits the dates). Was there a Democrat Cruz-equivalent condemning those at the time?
> I mentioned much stronger government crackdowns like in Britain, which no one has addressed so far
The biggest crackdown on free speech in Britain was the recent designation of a Palestinian activist group as a terrorist group, and beyond that, a new addition to the Terrorism Act 2000 (amended in 2019) making any display of support for a terrorist group so designated itself illegal, which is why some 300 grannies were arrested a few weeks back for protesting the designation of Palestine Action as a terrorist group.
There is, along with that, the arrest of Graham Linehan, the Irish writer, for anti-trans speech, over what was a clear turn of phrase about punching some trans activists in the balls. Another Irish writer, Sally Rooney of "Normal People" fame, says she can't visit London because she has voiced opposition to the designation of Palestine Action as terrorist. Maybe that's performative of her, maybe not. My Irish wife thinks the police like arresting the Irish anyway.
Online, there's generally concern for one or the other but not both; offline it's a bit better. At least where I live.
This. I'm not defending UK attitude to free speech but it's much older than wokeness. Ideally we'd have a written constitution but I can't see that happening now, social trust is too weak. So we're stuck with the common law combined with a latitudinarian spirit which is tolerant up to a point, so long as people don't "go too far" whatever that means.
No there wasn’t, to my knowledge. It’s also much harder to notice and mentally model the downstream effects of DOE directives, versus the president doing something. See the baseball analogy, the situations are completely different.
We were also debating the validity of death panels at the time, so there wasn’t a lot of opposition brainpower directed at it or calling attention to it.
Does anyone have advice on the role of networking for matching into a residency program in psychiatry?
My girlfriend is a medical student preparing for the match (hopefully in psychiatry) and I work in IP, so we're hoping to land in the Bay Area, as it would be ideal for both our careers. I'm wondering if networking with decision-makers at the programs she's looking at is a viable route to increasing her odds of getting an interview. I know that in my realm of law, networking is the go-to way to land a job. But I'm curious whether this applies to residency programs? Does anyone have experience with this and thoughts they can offer?
Public service announcement: Plan the timing of your flu shot thoughtfully.
Here’s the relevant info:
-When you get a flu shot, it takes 2 weeks for protection to kick in.
-Flu shot protection is never 100%, even right after the shot — usually more like 50%. That figure applies both to your chance of catching flu, and to your chance of serious illness. So if your protection is 50%, your chance of each is cut in half.
-The shot protection wanes by 10-15% per month. It wanes faster than that for seniors.
-On average, it takes about 10 weeks from the time flu cases begin to rise in an area to the time they peak.
-The month they peak varies, but February is the most common one.
I apologize for not giving links to the factoids above, but if I had held myself to providing links to everything I would not have written this post at all. I got the relevant info a few years ago from CDC and other big sources who had no ax to grind. It is not hard to find — look it up yourself if you have doubts. If you find something that doesn’t support the info above, post it in a reply.
Big picture: If you get your flu shot in September, its effectiveness will have waned to almost nothing by February. If you are a senior it will have waned to almost nothing by December or January. My conclusion is that the conventional advice to get a flu shot in September is lousy.
The system I follow is to keep checking the CDC flu map: https://www.cdc.gov/fluview/surveillance/usmap.html. Right now most states, including mine, are dark green, so they are at the bottom of the “minimal” category. I wait until flu cases in my state reach the lower of the 2 “Low” levels, then get my shot. That date is most often between late October and late November in my state. Seems likely to me that getting the shot then will give me maximal protection by the time flu prevalence is in the Moderate category, and decent protection at the peak. I actually think it would probably be better to wait a little longer, so as to match maximal protection to the actual peak, and stay protected through the early parts of the downslope — but once I see the levels rising steadily I get a bit uneasy and just go ahead and get the shot.
Another option is to get a flu shot in Sept., and another in, say, December. As far as I know there is no downside to that, though you should probably double-check that with an MD (or just find the answer online). I doubt that insurance would cover a second shot, but I don’t think the flu shot’s expensive.
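For what it's worth, the back-of-envelope waning math above can be sketched in a few lines. This is a toy model, assuming the "10-15% per month" figure means a flat drop in percentage points per month; the 50% peak and 12%/month rate are just illustrative numbers, not official figures:

```python
def protection(months_since_shot, initial=0.50, wane_per_month=0.12):
    """Rough estimated flu-shot effectiveness some months after the shot.

    Toy model only: assumes waning is a flat drop in percentage points
    per month (one reading of "10-15% per month"); real waning curves
    are messier and vary by age, strain, and season.
    """
    return max(0.0, initial - wane_per_month * months_since_shot)

# September shot, February peak: roughly 5 months later
for m in range(6):
    print(m, round(protection(m), 2))
```

Under those assumptions a September shot is down to essentially nothing by month five (a February peak), while a late-October or November shot still has meaningful protection left at the peak, which is the whole argument for waiting.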
Does that 50% already include "We guessed the wrong strain for this year" risk? Because if it does, then the effective protection period in the cases where you get any is longer.
The 50% is due to the imperfect protection of all flu shots. In fact, when I was looking at some sites after I posted, I came away with the impression that in most years the maximum protection, when the effect of the vax is strongest, is less than 50%. The low ceiling on flu shot effectiveness is partly due to the fact that they have to be manufactured before we know what strains will dominate in the coming year, but also to the fact that in flu season there are several strains around at once, and no vax can be an ideal match for all of them. I think there are other reasons too, having to do with the way flu shots work (made from killed virus) and other factors, but I don't know what they are.
Can you explain what you mean about how guessing the wrong strain means the effective protection when you get any (any what?) is longer?
As you said, they need to guess which strains will dominate the coming year, and then try to make something effective against them. If in theory they guess totally wrong 50% of the time, so wrong that you get no protection at all, then that would already explain the 50% peak protection average, even if it's actually 100% with a correct guess. That would mean that just calculating when protection gets negligible from a 50% start is wrong: half the time you get no protection anyway, so it doesn't matter when you take it. The other half, you start from 100% and so get a longer window, so you should take it earlier than your calculation implies. (And of course, you don't know in advance which half you're in.) Obviously the reality is not going to be this extreme numerically, but that's what I had in mind. Basically, you don't want to average the protection percentage in a way that weighs all years equally, because the years with less protection matter less to your decision.
Oh. Does this have a practical implication -- for instance, that people should get the vax following a different approach from the one I recommend (wait til the levels start to climb)?
You would do a similar calculation, just weighing the misguessed years less, and therefore taking it a bit earlier. The more variance the peak protection percentage has across years, the earlier.
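A minimal sketch of that weighting, using the deliberately extreme toy numbers from the comment above (half of years the strain guess is right and peak protection is 100%, the other half the shot does nothing; the 12%/month waning rate is likewise illustrative):

```python
def expected_protection(months, p_match=0.5, peak_if_match=1.0,
                        wane_per_month=0.12):
    """Expected effectiveness, averaged over matched and mismatched years.

    Toy model: in mismatched years the shot contributes nothing, so only
    matched years (probability p_match) count, and those start from a
    higher peak and therefore wane to zero later. All numbers illustrative.
    """
    matched = max(0.0, peak_if_match - wane_per_month * months)
    return p_match * matched

def naive_protection(months, peak=0.50, wane_per_month=0.12):
    """Naive model: a flat 50% peak waning at the same rate every year."""
    return max(0.0, peak - wane_per_month * months)

# Both models start at 50%, but the mixture stays useful for longer,
# which is why the mixture argues for taking the shot a bit earlier.
for m in (0, 3, 5):
    print(m, round(expected_protection(m), 2), round(naive_protection(m), 2))
```

Same starting point, but the mixture still has protection left at month five while the naive model has hit zero; that longer tail is the reason the misguessed years should be down-weighted in the timing decision.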
This is crazy timing for me - my kid just had a checkup, I got offered a flu shot, and I went through an off-the-cuff, epistemic status "uh I think I remember reading something along the lines of" version of this post in the doctor's office. (Basically "vaccines decay, what if this is too early?") I ended up going for it since it seemed like the normal thing to do, and the nurse wasn't particularly worried about the decay thing - although I also assume his entire approach in this situation is "don't let people not vaccinate their kids", so that's not necessarily the word of God. Maybe if I had read this post first I would have said no... then again maybe I would then go on to forget to get him vaccinated at all. That's certainly something to keep in mind; strike while the iron is hot / in the doctor's office!
Anyways, maybe we will in fact get him another in December; I'll have to look into it. Thanks for this post!
Public health workers don't tell the truth, they say the thing they think is most likely to get the public to do whatever it is they think is good for the public. I think they start in August hounding people to get the flu shot ASAP because they assume they are dealing with procrastinating idiots. So their logic is that if they start hounding the sheep several months before flu levels are likely to start going up, the sheep will get around to a flu shot not long before the flu hits.
The problem is, people who are not procrastinators or idiots, and there are lots who are not, go right out and get their flu shot in early September. Lately in my drugstore I've seen elderly couples getting their flu shots, then walking out together smiling, content that they've done what they need to do to stay safe, and I feel so *angry* at the horses' asses who hounded them to hurry up and get the shot.
It's quite alarming that effectiveness wanes so quickly. I did a GPT check of those stats. I think the 10-15 number is high and combines intraseason waning (antibody effectiveness declines) and interseason waning (strain shift). It gave a meta-analysis summary of 7-10 pct as intraseason only. So this is directionally correct, but maybe not quite as bad as Sept = useless.
I'm pretty sure the figures I got do not include interseason waning. What I read was clearly talking about waning of the effect of that year's shot. And interseason waning is kind of an odd concept. Yeah, last year's flu shot would be less effective than this year's, but that's due to mismatch between vaccine and flu type, not to waning.
Search the thing for "effectiveness" and you'll soon land on a long paragraph full of the relevant stats, based on multiple studies. (By the end of it your head will be a jumble.) Something I'm not clear on is the meaning of percent decline. If the effectiveness starts at 50% and wanes by, say, 10% per month, does that mean after one month the effectiveness is 40%, or does it drop by 10% of the 50% (i.e. 5 points), so 45%?
In any case, though, the rate of waning affects *how* bad it is to get your flu shot in September, but does not change the fact that it's bad timing. What's the point of getting immunized 10-12 weeks before flu levels even rise to "low"?
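The two readings of "wanes by 10% per month" in the question above diverge quickly, which is why the distinction matters for the timing calculation. A tiny sketch, with the 50% starting value from the comment and everything else purely illustrative:

```python
# Two readings of "wanes by 10% per month", starting from 50% effectiveness.
start = 0.50
monthly = 0.10

# Reading 1: absolute decline, 10 percentage points per month.
absolute_after_1mo = start - monthly            # 0.40 after one month
# Reading 2: relative decline, 10% of the current value per month.
relative_after_1mo = start * (1 - monthly)      # 0.45 after one month

# After three months the gap widens: 0.20 vs about 0.36.
absolute_after_3mo = start - 3 * monthly
relative_after_3mo = start * (1 - monthly) ** 3

print(absolute_after_1mo, relative_after_1mo)
print(absolute_after_3mo, round(relative_after_3mo, 4))
```

Under the absolute reading a September shot is nearly spent by a February peak; under the relative reading it retains a meaningful fraction, so the two interpretations lead to different timing advice.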
I developed an AI coding assistant for neovim, and wrote about it and my thoughts on using AI to program: "AI Whiplash, and neovim in the age of AI" https://dlants.me/ai-whiplash.html
>I now have a much better sense of when the AI agent might be successful or not successful at a task, and how much scaffolding it needs to have a decent chance of success.
Yeah, I feel the same, and it's definitely helping me (and I feel it when I slip up and try something that in hindsight was less clearly suited to the AI). I like your "gradient of control" strategy; I'll have to try that - so far I usually just do highly detailed prompts in Aider with no expectation of any useful multi-turn interactions, just my own pure human tweaks on the AI's output.
Somewhat separately: I've started lazily thinking that "what files are relevant to this prompt" would be a nice feature. Since it would run over large volumes of code for every prompt, it should be a very lightweight model, even cheaper than Gemini Flash. But thinking through what files are relevant is enough of a busywork distraction that I'm sure if it were done for me I'd be like "how did I ever manage without this".
I have a "learn" subagent that is meant to do this, though I've struggled to make it effective. I think when transition points between subagent contexts are handled via messages that the AI composes, it tends to lose the thread. It's not great about capturing all of the relevant context (it often forgets files that it's discovered), or in picking out what the most important pieces are.
Currently I'm having the subagent either mention files or copy snippets of files into a notes.md file... But both strategies can be kind of bad.
Copying snippets is slow and feels like a waste of tokens to just copy existing code, especially when the agent decides to copy large swaths of code. And the parent agent often just ignores some of the file references (similarly, I've struggled to get the agent to use references from a context.md file based on the tasks I'm asking it to do. It seems to just forget they are there).
I've thought of something more mechanistic, like a list of files and line ranges that gets built up via tool calls and automatically gets included in downstream contexts, though that seems like it might degenerate into every downstream agent reading every file.
Maybe the answer is semantic search? I think I'm adding that next and I'll have to see how it changes things
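The "more mechanistic" option mentioned above can be sketched concretely: a registry of file references (path plus line range) that subagents append to via a tool call, and that gets serialized into downstream contexts verbatim, rather than hoping the model re-mentions the files. Everything here is a hypothetical illustration, not the commenter's actual implementation; the cap on rendered references is one way to guard against the "every downstream agent reads every file" degeneration.

```python
# Hypothetical sketch of a mechanistic file-reference registry for
# handing context between subagents. All names are invented.
from dataclasses import dataclass, field

@dataclass
class FileRef:
    path: str
    start_line: int
    end_line: int
    note: str = ""  # one-line "why this matters", written by the subagent

@dataclass
class ContextRegistry:
    refs: list = field(default_factory=list)

    def add(self, path, start_line, end_line, note=""):
        """Tool-call handler: record a reference instead of copying code,
        so no tokens are spent duplicating existing source."""
        self.refs.append(FileRef(path, start_line, end_line, note))

    def render(self, max_refs=20):
        """Serialize for injection into a downstream agent's context.
        Capping max_refs keeps downstream agents from fanning out to
        read everything ever discovered."""
        lines = [f"{r.path}:{r.start_line}-{r.end_line}  # {r.note}"
                 for r in self.refs[:max_refs]]
        return "\n".join(lines)

reg = ContextRegistry()
reg.add("src/agent.py", 40, 95, "subagent dispatch loop")
reg.add("src/notes.py", 1, 30, "notes.md writer")
print(reg.render())
```

Because the registry is carried forward automatically rather than through a model-composed handoff message, it can't "forget" files the way the prose handoffs described above do; the open question is still how to rank which references the downstream agent should actually read.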
I think this matches some of my own experience, which finds that "context control" and "prompt planning" are crucial elements to success. My own workflow involves a number of custom slash commands, principally:
- /fresh : whenever I start a new session, I run "/fresh path/to/subfolder", which has Claude read the CLAUDE.md file and README.md file in both the target subfolder and the repo root. I include tidbits like "this part of the project is pre-launch, and doesn't require back-compatibility" or "we're an early-stage startup, and prefer time-to-market over enterprise scaling". This does a lot to guide Claude towards the level of complexity I want
- /draft : I'll often have a conversation with Claude about what I'm planning, identifying various sticking points, etc. Once I'm ready, I issue "/draft DRAFT_MYPLAN.md", and Claude will prepare a step-by-step plan (suitable for an agent to execute).
- /critique : after producing the draft, I run "/critique DRAFT_MYPLAN.md" in a fresh agent window (after running the /fresh command). This often catches a lot of small holes Claude got myopic about; the critique is appended to the bottom of the draft.
- /finalize : also in a fresh session, this command takes the draft + critique and produces a final, step-by-step plan, called "PLAN_MYPLAN.md"
- /implement : also in a fresh session, but I often use Sonnet-1M instead of Opus (Opus is a must for the planning stages). Normally Claude can execute the plan in < 20 minutes; the planning portion typically takes an hour.
- /update : I usually run this in the same session as "/implement"; it's a checklist prior to commit, including running all tests, builds, lints, and then creating a descriptive commit message about everything we've done in this session.
At that point I do many iterations of debugging and tweaks in the same session; the added context does a lot to help Claude remember all the previous work that was done. After the debugging is finished I run /update again.
It is unquestionable that my speed is at least 5x what it would be without an Agent. I wrote a new feature, entirely by myself, front-end and back-end, in less than 6 weeks, in a language I'VE NEVER USED BEFORE.
What was the feature? I think I've never worked on a feature that took longer than a week, including at times I've had to use languages or tech I had no experience with.
I have written a sub stack. As I am the reincarnation of famous prophet Nostradamus, this is a very momentous occasion for the world and for the universe in fact. Unfortunately, no-one is reading it, because they are blind to the truth.
The effect of this is being blocked from some fairly popular engineering platform like GitHub. Alternative tools/platforms of course exist, but they either do not approach the quality (e.g. free Azure/AWS credits provide access to great hosting services) or the social network size (i.e. audience, networking, etc.), so there is a cost to this kind of banishment.
The pragmatic case for sanctions that I can see is that they hobble the target country's economy, making it more difficult for the government to execute on its programs. Iranian government programs appear to be rather risky for a lot of people, especially outside its borders. The cost of that, however, is increasing misery for the people ruled by said government.
Culture war aside (if there's any here?), in a more generic sense, are there better ways to impede bad (defecting) governments without exacting such a cost on their citizens? Is the current setup "fair"? Curious about your thoughts
Misery for the population is actually the point of these sanctions. Sanctioning governments often claim to target only the leadership and military, but that's PR. The real, revealed, and often explicitly stated aim is to immiserate the population to such a degree that it causes internal instability and either regime change or civil war (like in Syria). Excluding goods like medicine from sanctions is ineffective because once a country is under sufficiently strict sanctions, regulated companies like banks will simply refuse to transfer money even to the neighborhood of the country. The risk for them is simply too high.
Source: consulting gigs for large banks where I had to undergo part of the standard training for employees. Once a bank has paid fines sometimes > 1 billion USD, it becomes very risk averse. Very, very risk averse. Do not make jokes in the comment field of bank transactions ("here bro, for sexual favours"). Every transaction is scanned for keywords (coke, Teheran, etc. - the list is confidential and I haven't seen it) and flagged if necessary ("know your customer" process). Technically it's just a grep job running on a z/OS looking for nasty words.
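As the comment notes, the real watchlists are confidential, but the "grep job" being described amounts to something like the following toy sketch. The sample words are invented from the examples in the comment, not from any actual list:

```python
# Toy illustration of transaction keyword screening as described above.
# WATCHWORDS is an invented sample; real lists are confidential and the
# real systems are far more elaborate (fuzzy matching, entity resolution).
import re

WATCHWORDS = ["teheran", "coke", "sexual favours"]  # invented examples
pattern = re.compile("|".join(re.escape(w) for w in WATCHWORDS), re.IGNORECASE)

def flag(comment_field: str) -> bool:
    """Return True if the free-text field should be escalated for
    'know your customer' review."""
    return bool(pattern.search(comment_field))

print(flag("here bro, for sexual favours"))  # a joke like this gets flagged
print(flag("monthly rent, October"))
```

Naive substring matching like this produces plenty of false positives, which is consistent with the comment's point: the bank's incentive after billion-dollar fines is to over-flag and let humans sort it out, not to be precise.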
A normal government is in power in its country. Being in power means that it gets to distribute harms and benefits to a significant extent. Thus any harm you inflict on the country must either be so minimal that it cannot change behavior, or big enough that the government can choose where it's allocated, and it will almost always choose to allocate it onto its own least powerful citizens.
Basically, the North Korean government is always going to choose that food goes to its soldiers, and you can't really change that. So your options are either give them so much food they let it trickle down, strengthening them, or don't and watch them starve their own citizens. And ignore their accusations that somehow you are responsible for their own bad policies because they have some divine right to trade and aid.
There's a bit in Yes, Minister where the protagonists are dealing with an issue where a visiting African head of state has privately informed them that he's about to make a deliberately inflammatory speech (something about Scottish independence, IIRC), and the Minister (Jim Hacker) is asking his permanent secretary (Sir Humphrey) about their options for responding. The relevant part of the exchange was:
Humphrey: Well, Minister, in practical terms we have the usual six options. One, do nothing. Two, issue a statement deploring the speech. Three, lodge an official protest. Four, cut off aid. Five, break off diplomatic relations; and six, declare war.
Hacker: Which should we do?
Humphrey: Well, if we do nothing we implicitly agree with the speech. Two: if we issue a statement we'll just look foolish. Three: if we lodge a protest it will be ignored. Four: we can't cut off aid because we don't give them any. Five: if we break off diplomatic relations we can't negotiate the oil rig contracts. And six: if we declare war it might just look as though we were over-reacting.
---
I think the overall issue is that for one sovereign state to do stuff specifically targeted against the government, military, or leadership of another, there usually aren't that many good options in the space between "harsh language" and "acts of war". Countries that are friendly with one another to some significant extent tend to have options along the line of the "cutting off aid" option that Humphrey listed, but even those are often hard to target because governments tend to have a lot of tools to control who in their country is left holding the bag.
It's fairly standard to limit export of high-tech weapons systems to countries that are at least nominally friendly, but that only goes so far. I've also heard of stuff like (in the early 2000s) embargoing iPods and other high-end consumer electronics to North Korea as a targeted sanction because only the Kim family and other high officials had access to them anyway, but that's very situation-dependent and probably isn't enough by itself to move the policy needle.
I think your reply articulates something that Erusian also touches on: the citizens of a state are the responsibility of that state, so if another state's sanctions are inflicting suffering on them, why isn't their state working to prevent it?
This reminds me of a bullying tactic where the bully sets some condition that is impossible to meet and then proceeds to punish the victim. Kind of a "why are you hitting yourself in the face" deal where the bully forces the victim's hands into their face. Not a 1-1 match, but it helps me pinpoint this slick transfer of responsibility that I hadn't noticed before.
While it has been government policy to consider policy to be a matter for ministers and administration to be a matter for officials, the question of administrative policy does cause confusion between the policy of administration and the administration of policy, especially when responsibility for the administration of the policy of administration conflicts or overlaps with the policy of the administration of policy.
Always. The obvious one is noting that this kind of collateral damage, inflicting disproportionate suffering for marginal gain, is normal practice in conflict between states, and only selectively condemned, and using Gaza as an example.
>…are there better ways to impede bad (defecting) governments without exacting such a cost on their citizens?
Other options are likely to be more invasive (up to & including actual invasion) and at least have substantial chances of exacting even higher costs on the citizens.
The difficulty is in the coercive nature of government; there's no way to reliably exclude only a hostile government from something to which its people have access.
In an odd way, maybe it was for the best that the communists took over Russia. I love my alternate histories, and you have to wonder what would've happened if Hitler had attacked the Tsar's Russia. Odds are good he would've crushed them quickly, I think, maybe going on to win the war. So things could have been much worse if the revolution hadn't happened and Russia hadn't gone totalitarian and industrialized.
Great blog on this question here - https://blog.daviskedrosky.com/p/twilight-imperium. I think it's generally accepted that the Russian economy was growing rapidly in the quarter century up to 1914, although some argue that the growth could not have been sustained by the Tsarist regime.
Given that industrialisation had begun, and was moving quickly, it seems likely to me that growth would have continued after World War One under the unpleasant and inefficient Tsarist regime, just as it did under the much more unpleasant and inefficient communist one. Massive land reforms had already been initiated, and were bearing fruit as the war began - see https://worksinprogress.co/issue/the-road-from-serfdom/. Again, I think it's uncontroversial to say that one of the main reasons the German army and government wanted war in 1914 was their fear that Russia was industrialising fast, and as a consequence the war would be much more difficult for them to win if fought at a later date.
Early Communist Russia was in many ways more fragile and vulnerable than Tsarist Russia had been, and then Stalin made a number of avoidable errors (many of which were also horrific crimes). I think Tsarist Russia could have been a genuine economic rival to the US in the second half of the twentieth century, rather than the Potemkin state the USSR became - though maybe that's stretching the counterfactual too far.
Aside from all the bad things that happened to Russia, a Tsar probably would not have signed a peace treaty with Hitler in exchange for half of Poland, then purged most of the good generals as Stalin did.
The big problem would be "who succeeds Nicholas?" If no successful revolution and he manages to remain on the throne, he would certainly still be alive by the time of the Second World War. But his heir was sickly, and his remaining children were all daughters; ask the Tudors how this works out. So the succession problem would need to be sorted out fast and the likely successor on board with continuing the programme of reform and modernisation.
I think that Nicholas was reform-minded, just not enough. A failed revolution might give him both the impetus and the ability to push through more reforms, and so Russia starts to ramp up the pace of modernisation. Russia with a Tsar still on the throne is likely to be more sympathetic to and allied with the Allied forces of Europe.
Can Nicholas persuade the Eastern European countries that no, honest, if they ally with Russia against Germany this is not a Trojan horse for Russia to swallow them up? That too would be tricky to pull off.
*IF* all this can be done, then we have an at least notionally united Eastern front against Hitler's plans to push into Poland, plus the Western powers again notionally allied with the East at an early enough stage to perhaps slow down Hitler's advances.
I think this misunderstands Nicholas both in terms of his personality and in terms of his capabilities. He was almost uniquely unsuited to lead, being by turns reactionary (especially whenever his wife had his ear) and credulous (whenever she didn't). He was also naively romantic, erecting a bubble around himself and his family and then seemingly genuinely believing that he was beloved by his subjects (minus a few malcontents who needed to be harshly dealt with). He was not at all reform-minded, and in fact resented and tried to undo whatever reforms were pressed upon him (see, e.g., his handling of the Duma). He was also apparently enough of a mark that a ragged charlatan whom everyone else at court despised (Rasputin) somehow wormed his way into the royal household and ended up commanding real power.
There was no way for the Empire to survive his reign. The best we could have hoped for, I think, is that the Kerensky government lasted long enough to see out the end of the war and then consolidate into a parliamentary system with Nicholas as a powerless figurehead. But that is also the one thing that Nicholas absolutely refused to do. So some sort of civil war was more or less inevitable, along with the breakup of the Empire. And the resulting Russian Republic, if it even managed to exist instead of turning into Warlord-era China, would certainly have been less organised and industrialized than the Soviet Union.
It seems to me like the most immediately probable alternative to "Bolsheviks take over Russia" isn't "The Tsardom persists for another several decades." It's "a revolution overthrows the Tsardom and you get a non-Bolshevik group forming the government instead."
In fact, the fall of the Tsardom and the Bolshevik takeover of the Russian government were temporally separate events: it took several years of civil war for the Bolsheviks to establish control of the nation after it was freed from Tsarist rule.
For that matter the Bolsheviks weren't the group that overthrew the Tsardom, and they needed some particular good luck in order to seize nominal charge of the country six months after the last Tsar's abdication.
I don't think so. Stalin's handling of Hitler was comically inept at first - he was completely blindsided by all accounts and had politically purged the officer corps. If it hadn't been for a combination of incredibly bad weather and the sheer difficulty in invading Russia in terms of logistics (plus a ton of outside aid), the Soviets easily could have lost.
I don't see a Tsarist Russia that survives the events of 1917 making those same mistakes, if only because whoever succeeded Nicholas was unlikely to be such a Great Idiot of History as he was and would require no small amount of capable politicking to make it that far.
As for industrialization, I remember reading that when you looked at Soviet industrialization rates, it was basically a continuation of what had been happening under the Tsarist regime in the prior 20-30 years (not a drastic break).
If Stalin isn't in charge of the USSR then Russia is likely more hostile to the Nazis and Molotov-Ribbentrop doesn't get signed. If it's the Tsar then he'd have a strong ideological dislike of revolutionary and populist movements. If it's the Socialist Revolutionary Party (democratic socialists) then they likely see fascism as right wing and a threat to democracy. Without Molotov-Ribbentrop Germany doesn't overrun Poland as quickly and perhaps Russia enters the war to help them.
Remember, World War 2 in Europe was started by Russia and Germany. As much as Russia tries to erase that fact. The Soviets repeatedly chose to ally with the fascists over democratic counteroffers.
It's also not certain Russia would have been worse off economically. Russia was industrializing before WW1. The deindustrialized state Stalin inherited was due to general economic collapse. Russia was less industrialized in 1921 than in 1914. The Tsardom or Socialist Revolutionaries also likely don't spend as much time purging people and are more willing to attract foreign capital and so on. They're also more easily able to invest foreign states in their survival and get other alliances.
Uhm, what kind of offers are we talking about here? As far as I know, no one other than Nazi Germany offered the Soviet Union a deal to split Europe between them... but apparently you have something different in mind.
Yes there were. The USSR turned them down repeatedly. In fact this isn't even solely a Stalin/Nazi thing since even in the 1920s they were rejecting British offers and preferring to collaborate with (then democratic) Germany.
Russians sometimes portray themselves as having been forced into Molotov-Ribbentrop. To say they were forced into it by democracies not reaching out. And to claim their conquest of their neighbors was defensive. But this is not in fact the case. It's a Soviet era lie meant to explain away their behavior. The USSR used to jail people for even bringing up Molotov-Ribbentrop.
I take my facts from AJP Taylor. You may think he is unreliable. Chamberlain distrusted Soviets and disliked being dragged into negotiations with them in 39. The Anglo-French proposals were not entirely sincere and tended to be delayed. The Anglo-French even sent a low-powered negotiating team to Moscow. The stumbling block was refusal of Poland to allow Soviet troops and the reluctance of liberal democracies to offend the small countries.
This is not to say that the Soviets were forced to make a pact with the Nazis.
The Nazis probably would have had the same quasi-genocidal intent vs the Slavs, which helped rally the Russian people to defend a regime many of them despised, so that points to the Russians still pulling out the W. However, one must also consider that the Soviets' experience in the Russian Civil War, along with paranoia about being the only communist country with a bunch of capitalist neighbors meant that when the Germans did invade, the Russians at least had the manpower, industrial capacity and other resources in place to (eventually) win. I think a Czarist regime would probably have been as unprepared for the Second WW as they were for the First.
Yes, but the Soviets were also about as unprepared for the Second World War as the Tsarists were for the First. Partly because of an ill-timed Purge, of course. But in spite of Soviet "paranoia", Stalin completely ignored numerous explicit warnings from the British, the Americans, and his own intelligence agencies documenting in detail the German plans and preparations for invasion including IIRC the actual date to within a few days.
I was mainly thinking in terms of manpower. When Barbarossa began, the Soviet Army was not some little peacekeeping force focused on internal security:
“When Germany invaded the Soviet Union in June 1941, in Operation Barbarossa, the Red Army's ground forces had 303 divisions and 22 separate brigades (5.5 million soldiers) including 166 divisions and brigades (2.6 million) garrisoned in the western military districts.[58][59] The Axis forces deployed on the Eastern Front consisted of 181 divisions and 18 brigades (3 million soldiers).”
That's big. That's about twice the size of the Russian Imperial Army at the outbreak of the First World War. I know weight of numbers frequently doesn't tell the tale in industrialized warfare, but I think this at least demonstrates that the Red Army was not wholly unprepared for war the way, say, Britain and the US were. I think the Soviet Government simply had more state capacity to marshal available resources for a war effort than the Tsarist Regime could have ever really contemplated, and they had certainly made use of it.
The Tsar's army was not "some little peacekeeping force focused on internal security". I do not believe that any Russian army ever has been a "little peacekeeping force focused on internal security". Indeed, given Russia's deep cultural psychology re invasions of the Motherland, I'm pretty sure that proposing to have only a "little peacekeeping force focused on internal security" disqualifies one from governing Russia and would lead to the sort of broad dissent that would need a ridiculously ginormous internal security force to suppress.
It's easier to just give the Russians what they want - an army that they can believe makes Russia unconquerable. Especially by the Germans. Or the French. Or the Swedes. Or the Poles, even. But also forces in the East because they remember the Mongols and the Japanese.
*Whoever* governs Russia in 1939-1941, is going to have a very large and powerful army that is designed to stand up to the German army. The only question is how capably they will use it.
Tsarist Russia started WWI with 1.5 million soldiers under arms, and 5 million reservists that could be mobilized with a bit of time. It had the largest standing army in the world at that time. If they had the largest standing army in 1914, what reason do you have to believe they wouldn't continue to have the largest standing army in 1941? That the Tsars would decide to become less focused on military power after WWI?
The tsarist regime was in a pretty advanced state of decay in 1914. Despite its size, the army wasn't terribly effective. It was poorly led and even more poorly supplied. If the ancien regime had managed to hang on til 1940, it would probably have been in an even more advanced state of decay and the army probably would have been about as effective as the Poles were against the Wehrmacht. Eg, Soviet tank production was outpacing German figures by 1942. Would that have been possible under the tsar? Seems unlikely to me, but of course that's ultimately unknowable. That's why counterfactuals are fun.
Without the Bolshevik Revolution, it's unlikely that Hitler would have risen to power. A lot of German and foreign powers in the interwar period either supported or failed to suppress Hitler specifically because they saw him as a useful bulwark against either the Soviet Union or German communist movements.
The whole world just looks too different in 1939 if there's no communist revolutions and the Tsar is still around.
Agreed on that, if the Communist threat is not a threat and the Bolsheviks are just a bunch of internal revolutionaries that the rest of Europe is happy to let Russia handle, then trying to play Hitler off against them isn't in the cards.
Note that the communists toppled a democratic government, not the Tsar, so that's who would run Russia in that scenario.
The world would have been so fundamentally different that I can't really imagine what would have happened.
To me, the communist revolution in Russia is a leading candidate for The Worst Thing That Ever Happened, so I expect things would have ended up much better.
The Great War ends differently if there isn't a Bolshevik regime to take Russia out of it. Or, at least, Versailles is different.
All sorts of things are different but at the very least we can say the anti-communist fears that drove some fraction of Nazi support might well have been weaker.
So the boring answer is that you change one key ingredient from 24 years earlier and it's likely that a lot has changed. No Bolsheviks, no Hitler Government is what I think the simulation will show.
You've got the causality backward: Germany allowing Lenin to transit through their territory into Russia as an objective ally is one of the factors that led to a Bolshevik regime.
Russia was industrializing already in 1914. A Russia under the tsar wouldn't have the mass losses of the Russian civil war or the early 1930s famines, and the Tsar is unlikely to have killed off a huge chunk of his officer corps. It would have performed better against the Nazis, most likely.
The Holodomor was a direct and, if not intended then at least accepted, consequence of rapid Russian industrialization; not so much an economic mistake due to communism. Russia had to buy its factories and know-how from Europe and USA, but did not have enough foreign currency to pay for them. Ukraine was bled dry to pay for all that Western tech in grain. It may have happened the same under a Tsarist Russia because the overall situation (need to industrialize, no dollars) would have been the same.
>Deputies of the State Duma, honoring the victims of the 1930s famine on the territory of the USSR, strongly condemn the regime that has neglected the lives of people for the achievement of economic and political goals [..]
Tsarist Russia would have had plenty of access to foreign credit markets for borrowing for industrialization (a significant factor in their alliance with France and the UK going into the First World War as well). They would not have been killing millions through starvation to pay for factories through grain expropriation, to say nothing of the fact that they'd probably have higher agricultural production in general without collectivization.
Can you show any evidence of Russia chronically lacking foreign currency (or experiencing a balance of payment crisis) before 1914, when it was already rapidly industrializing?
That would support your argument that industrializing post-1917 necessitated extreme measures to acquire foreign currency, no?
I said nothing about before 1914, I'm talking about the Holodomor era. The cost of industrialization depends on its pace, and the Tsarist industrialization effort is no proof that Soviet industrialization could have proceeded at the pace (and cost) it did.
This twitter thread by Russia-born historian Kamil Galeev
> The cost of industrialization depends on its pace, and the Tsarist industrialization effort is no proof that Soviet industrialization could have proceeded at the pace (and cost) it did.
I mentioned Russian pre-1914 industrialization precisely because of its rapid pace. Steel production more than doubled between 1890 and 1900. I agree that beyond a certain pace level, and assuming complete disregard for human welfare (e.g. Great Leap forward), widespread famine would be inevitable, but I don't think 1930s Soviet Union exceeded that level.
Here is data on steel production for selected countries between 1890 and 1935 according to Grok:
You can see that during this period, several countries industrialized as fast as or faster than Russia/USSR (Poland 1910-1930; Japan 1910-1930 and maybe 1930-1935; Italy 1910-1930; maybe Russia 1900-1910). This is assuming steel production is a good proxy for industrialization, of course.
I don't see how 1930-1935 Soviet industrialization was "rapid enough" to necessarily lead to Holodomor. None of the countries that industrialized very rapidly around that time, as fast or nearly as fast as the USSR, came close to famine conditions.
Yup, this. Russia did well in the Russo-Turkish War, and its defeat in the Russo-Japanese War was due largely to the 1905 revolution. A Russia without revolution would likely have done better in WWII. The Molotov-Ribbentrop pact also likely would not have happened, given Hitler made his plans for eastward expansion clear from the beginning.
My pathogen update for epidemiological weeks 35-38. All COVID this update.
1. Biobot's wastewater numbers and the CDC's NWSS seem to both indicate that the current wave has peaked. Of course, I thought that was the case 4 weeks ago, but the numbers spiked up again.
Notice that the CDC's NWSS data shows the current wave is nearly as high as our 2024-25 winter wave, while Biobot shows the current wave as being significantly smaller. I suspect that's because the CDC is normalizing their data against the previous year's numbers. OTOH, ED visits are higher for this wave than the previous one.
BioFire's proprietary syndromic trends tool also shows the current COVID wave is past its peak. And their COVID curves resemble the CDC's NWSS curves. I'm not sure how they derive their data, though. Notice there's a *slight* uptick in RSV and flu.
(Click on the Respiratory Pathogen Trends tab, and use the funnel icon to display the pathogens you're interested in.)
2. And if Biobot's CpmL/PMMOV numbers are a better indicator of transmission rates than the NWSS data, I'd have to conclude that XFG.x variant is more virulent than last winter's XEC.x var, in that there is a higher rate of ED visits this current wave. But this is not reflected in hospitalizations or deaths. Charts here...
Following this line of reasoning, I conclude that even though XFG.x is getting past our NAbs and we're getting sick, our secondary B-cell and T-cell defenses are doing an excellent job of warding off serious illness. Unfortunately, the age cohort with the highest percentage of ED visits is the 0-4 year olds. Seems like Bobby "Brainworm" Kennedy's restrictions on COVID vaccines for the very young were a bad idea. Of course, SARS2 has a very low mortality rate among the very young, but it's not zero. Some kids who otherwise would have gotten vaccinated may die because of RFK Jr's anti-vaccine crackpottery.
3. XFG (aka "Stratus") and its descendants were the primary driver of our summer wave. Early on, it looked like it would be an NB.1.8.1 ("Nimbus") wave, but XFG left it in the dust.
And although XFG.x is the dominant variant in North America, South America, Europe, and Africa (with some sampling caveats), Asia and Oceania belong to Nimbus. I don't remember a pattern like this since 2021, when the VoCs Beta, Gamma, and Lambda each distinctly dominated a region of the world. Delta, and then Omicron, each became global VoCs and destroyed that pattern of geographical segregation.
Related: the influenza wave in Australia seems to have been pretty harsh this year. People considering vaccinations or boosters should not forget to consider a flu shot, since the Australian wave often indicates how bad things are going to be in the next winter of the northern hemisphere.
And AUS had an unusually high rate of Influenza B infections. I suspect we'll see that same pattern in the US for our upcoming flu season. But our current formulation is supposed to do a moderately OK job protecting against the current A(H1N1), A(H3N2), and B/Victoria strains. In the past, flu vaccines weren't as effective as COVID vaccines at preventing illness, though, and their NAbs fade quicker than SARS2 NAbs.
I had Covid for the second time a couple of weeks ago, and, like the first time I had Covid, it was actually pleasant compared to many much worse "regular" head colds, sinus infections, and even bad allergy seasons I've had in the past.
Day zero: Halfway through work, I started feeling "covid-y:" Inexplicable but moderate fatigue with mild chills and a sense of "I don't feel right."
I started sucking on zinc lozenges, put on a mask, bolted out early to minimize exposure to my coworkers, went home, laid down, and felt quite a bit more covidy six hours later. Fatigue increased to "significant," accompanied by chills and a mild fever (which irritatingly floated around 99 - 100.1 and never spiked higher). I developed a runny nose, but it wasn't ever so bad I couldn't easily sleep, and no sore throat or cough to speak of. No noticeable loss of appetite, smell, or taste. Breathing was easy throughout.
Day two, my test arrived from Amazon. It read positive.
Fever broke on day four, perhaps not entirely coincidentally after I felt well enough to boil and cool water for several rounds of nasal irrigation with a neti pot. With symptoms improving over the next 24 hours on day five, I was back to work on day six, even though I was still testing positive.
On day seven, I felt more or less normal and tested negative, although that's when a persistent mild dry cough arrived which has been going on for a week.
As I said, this round of Covid was only mildly bothersome, to the point that it was actually weirdly pleasant for an illness. If I have to be symptomatic, this is the way I want it to happen! The only thing I wish had gone differently was to begin nasal irrigation plus salt water gargling right at the start of symptoms, as that apparently can reduce the severity and duration of Covid (https://pmc.ncbi.nlm.nih.gov/articles/PMC10312243/).
(And on a side note:
!!!
Why the FUCK isn't that advice paired with all educational materials on Covid care on every website? Why the FUCK did I only find out on day three of my *second* round of Covid that neti-potting and salt water gargling could actually make a significant difference?!?!
!!!)
Interesting, I had a *much* easier time with real Covid than two friends had getting Pfizer boosters a few weeks before I got sick. One was miserable with body aches for two sleepless days and felt draggy for several days after, and the other was completely debilitated with body aches and fever for three days and barely starting to feel okay to go out to a restaurant on day four.
Whereas I could have easily muscled through my usual work routine and chores like grocery shopping if necessity demanded it (I actually did slip out to a gas station and grocery store just before closing on day 3, wearing a regular N95 mask, and the task didn't take anything out of me).
I'm not saying anyone should look at our cohort of three and draw any conclusions about what they should do with their own health, but just want to note that it's interesting how much worse their symptoms were with the vaccine than mine were with the actual illness. I kind of suspect that people's personal experiences with Covid are much more unique to the individual's immune response than anyone wants to admit.
There were at least a few studies that showed netipotting and salt-water gargling could reduce COVID symptoms. But they also indicated that those steps wouldn't shorten the infection time. These steps generally work for all viral respiratory infections. I'm not sure why your doctors or their assistants didn't recommend them, but until recently, COVID has been a potentially serious illness, and it probably never occurred to your medical professionals to treat it as if it were a Rhinovirus.
Do NOT use tap water, though! There's a nasty amoeba called *Naegleria fowleri*, a.k.a. the "brain-eating amoeba," that's killed a few people. Use sterile distilled water or a sterile saline solution. If you must use tap water, bring it to a boil for a few minutes and let it cool before you use it.
> " But they also indicated that those steps wouldn't shorten the infection time."
Right - the link I posted cites a lot of studies, some of which state that nasal irrigation and salt-water gargling do indeed reduce infection time.
I didn't dig into every citation, but it makes sense that the timing and intensity of one intervention might not work as well as a more aggressive intervention, etc.
Yep, as mentioned in my comment, I boiled and cooled the water for my neti pot. I held it at a rolling boil for over five minutes, which is the best practice for preparing tap water for neti pots.
>Why the FUCK isn't that advice paired with all educational materials on Covid care on every website? Why the FUCK did I only find out on day three of my *second* round of Covid that neti-potting and salt water gargling could actually make a significant difference?!?!
I don't know why. But I believe strongly that in the current era people have to research health problems on their own, as a supplement to visits to professionals. One of these days I will put up a post about the story of my spine problems, and how much research and self-advocacy I have had to do to just to get basic information and reasonable interventions. And I live in a town with a famous, revered medical school, several famous hospitals, research centers led by world-famous scientists, etc. Every doctor and treatment center I have used would be triple-A rated if such ratings existed. And I *liked* these doctors OK -- they did not bristle if I wanted to discuss options to the one they suggested. But they did not tell me big picture stuff I needed to know.
You have to research your illness, research treatment options, but also collaborate with your doctors, because whatever the flaws in their treatment recommendations they know way more than you about illness and how the body works.
One of my greatest disappointments escaping from Christian Science into medical science was discovering the amount of effort one needs to put into the latter. I was initially thoroughly cowed by aphorisms like, "Don't confuse your Google search with my six years of medical school" and then only eventually learned that the rejoinder, "don't confuse the 1-hour lecture you had on my condition with my ten years of living with it" is indeed legitimate and equally valid.
Especially if you're working with overworked, indifferent, corporate medical professionals, which I mostly am.
I've mentioned before that opiates up to and including IV Dilaudid have zero noticeable effects on me, a phenomenon which literally every goddamn medical professional I encountered dismissively waved away until I sought out pharmacogenetic testing on my own dime.
$700 later, I finally had paperwork proving I have a mutation of CYP2D6, a gene which metabolizes opiates. Moreover, I had paperwork educating doctors that there is a gene which processes the analgesic effect of opiates, that it has mutations, and that those mutations can dictate the pain relief patients receive from opiates.
That not all patients receive relief from opiates.
And the scariest thing about that revelation is that the most conservative estimates are that 5% of my Caucasian demographic are likewise opiate-resistant mutants (it's less common in other ethnicities, which hover around 2%).
That's a LOT of goddamned patients, but every medical professional I saw was completely surprised by the idea that 1 in 20 white people and 1 in 50 Asian and African people don't receive significant relief from the most powerful pain medications available.
I feel like...that's something they all should have known???
XFG, which is creating the current wave, is a recombinant (a "hybrid") of two distinct viral lineages: LF.7 and LP.8.1.2. My understanding is that the current mRNA vaccine formulations are keyed to LP.8.1.2 (I don't know about Novavax). The mouse model data I've seen suggests that the current formulation should do a pretty good job against XFG and its immediate descendants, with the advisory that this formulation won't perfectly protect against illness (none of them did, though), but will do a good job preventing serious illness and death. OTOH, even though they're making this CYA statement, the NAbs generated by this formulation should also reduce the chances of getting infected at all.
Likewise, considering the poor vax uptake in the US, it appears our immunity acquired from previous infections and earlier vaccine versions is doing an excellent job of keeping people out of the hospital. For this reason, I think it's unwise to deny these vaccines to the young who may not yet have been exposed to the virus. The vaccines will create a 4-6 month peak in their NAbs, and allow their B cells to key themselves to the current epitopes — and start the process of somatic hypermutation (which is a way our immune system riffs on what it's learned). Also, it will allow the kids' naive T cells to learn about this pathogen. Kids could get this via infection, but vaccines are less risky than a COVID infection.
Looking into the future, we have no idea which variant will kick off the winter wave. It may not have even appeared yet. How well the current formulation will work against the next wave-creating variant is an open question.
About qualifying for the vax: I believe the qualifications are the same as they were early on, during periods when there were limits on who got the vax: You have to be age 65 plus, or have one of the conditions on a list of conditions that were thought at one point to increase your risk of being made severely ill by covid. Some of the conditions are things that half the population qualify for -- obesity (I believe that's BMI > 30), mood disorders, and I forget the other mild stuff, but some of it's very common. Should be easy to find out online what currently qualifies people other than age.
Also, in my state, the places giving the vax did not ask for proof that the person had one of these conditions. All you had to do was say "I'm under 65 but have a qualifying condition." They did not even ask what the qualifying condition was, much less ask for proof. And that makes sense. Think what a hassle it would be for them if they did -- figuring out what counts as proof that somebody suffers from depression, for instance. So I'm guessing that drugstores in most states will operate the same way.
Coincidentally I went and got a Covid vaccine today. At a certain point in the process the pharmacist, her eyes looking down at the counter, mumbled, "Will you verify you have a pre-existing condition and are eligible for the Covid vaccine?"
Depends on which state you're in. The CDC's ACIP committee makes vaccine recommendations for the U.S. population. These recommendations are advisory, not automatically binding. The Feds, through laws and programs, tie certain things (insurance coverage, vaccine programs, etc.) to those recommendations. But states can ignore them (although insurance companies may use the new restricted ACIP recommendations to deny coverage if you get the vaccine). It's all a tangled mess at the moment. CA, OR, WA, and HI seem to be going their own way. I'm in CA, and I got my flu and COVID shot at Kaiser yesterday. I'm over 65, but whole families were lined up to get their vaccines, and no one seemed to fuss about whether the kids met the new ACIP recommendations.
Just looked at this (https://public.tableau.com/app/profile/raj.rajnarayanan/viz/Percent_ED_Visits_USA_CDC/Dashboard1) table, and noticed that age 0-4 is consistently high on ED visits, even if not always highest, during both surges and lulls. Not sure what to make of it -- maybe mostly that young children get sick suddenly, and often spike high fevers, and of course they have more people monitoring them and worrying about them than any other age group.
This was what stood out to me. If your metric is ED visits, and you notice it is high in small children, and you conclude this has to reflect small children getting more severe illness, you are missing a very major confounder: standard advice is that illness in very small children can go sideways very, very fast, so if something goes wrong you should get professional help immediately. This gets less true as they age, so if parents listen to the standard guidance (at least some do), you would expect to see that pattern even if the base illness rate were identical. (Which it generally won't be; as I understand it, the guidance is correct. But its influence does not require its correctness, and it should lead to toddlers/babies being brought in in "probably fine" cases where older kids likely wouldn't be, and adults definitely wouldn't be.)
Oh dear. I'm mostly a quiet lurker but I've always seen this site as a respite from that kind of drama. I just hope that people who feel like they're losing it get some support outside of social media. The last few years have been insanely stressful. I actually never knew I cared as much about the world as I do. It's obviously deep evolutionary programming.
I just try to remind myself that life a few thousand years ago was mostly much worse.
Is it deep evolutionary programming to care about the world? I think that programming is for caring about your tribe, a Dunbar's number of people. It's education (or indoctrination) making you care about the world, I think.
How is it not obvious that the word "care" is just obscuring a semantic argument? When I see a dying pigeon I want to give it water, or if I am told a village is in need of water I reflexively want to help them. None of these things are in my Dunbar list.
This "urge" to help, whether or not you want to call it "care", is obviously evolutionarily programmed. We even see other animals doing similar acts of compassion or tidiness to keep their world safe and alive.
If it were evolutionarily programmed, the urge would be ubiquitous, no? I think most people are not as altruistic as you. And I don't think I'm running a different definition of "care" than you or OP.
Then, again, do I care? I took the Giving What We Can Pledge, but I feel like I'm not very invested in the state of the world. That's too distant and abstract, hence why I don't think there would be a biological mechanism to make you care for the world.
It's not occurring in every single member of the population, sure, but psychology has a lot of variance and I would say >80% of people would feel the urge to help a dying animal or stranger. Even very heritable traits have much variance.
"Care" is downstream of hormones and endorphins which make you more or less sensitive to events in the world. It's not a biological imperative until you become aware of some event and develop some kind of association with it and then these biological systems kick in, and some people just have a lower threshold for this.
"Dying animal", "stranger", this is very concrete and tangible, "world" is not. "Caring for the world" needs all sorts of intellectual baggage to happen to make you develop that association, the biology won't do that on its own.
I know what you're really trying to say which is something like "Starving / dying / sick people in X poor country are just abstract data points to me that I have been taught I need to respond to, but I do not have the same feelings towards them that I do towards immediately visible things"
And I'm just saying that the urge to respond to that in some positive way is not entirely socially learned, even if you never build up exactly "caring" emotions like you would for your own child. So I think we agree enough here that I won't continue.
On the other hand, elephants help drowning animals and people will help strangers if they come up to them and ask for it much of the time, so it's clearly contextual and dependent on your present psychological state, hormones, knowledge, etc.
Doesn't mean there is no biological component and this is all some kind of indoctrination scheme to get people to play nice. Actually, it's possible the indoctrination goes in the opposite direction and people get taught to distrust someone who appears to be dying, for whatever reason.
Designating "Antifa" as any sort of organization makes about as much sense as designating "goth" as an organization. Or maybe imagine Richard Nixon designating "The Hippies" as a domestic criminal organization back in the day. Which is to say, not much sense at all. There were certainly plenty of hippies committing crimes, but precious little in the way of large-scale organization. And the emergence of multiple clusters of local leadership in the presence of local demand does not a singular organization make.
We've declared "Antifa" to be a terrorist organization. Great. How is that actionable? What can we do that we couldn't have done before?
"Do you remember the dogs of summer? The riots that did $2 billion in damage? The miniature armies in black clothes and black masks? Those were dogs on a leash. They could be turned on and off in one Zoom call. Anything that can be suppressed with “a few key decisions” is not in any way spontaneous. You just read it in the Times—so it must be true, right? And did the people in black show up on the 6th? They did not."
People are getting very caught up on the word "organization". There isn't a national head of antifa, or a membership list. They don't have regular meetings. It would be more technically accurate to say it is a loosely organized terrorist network, often operating in small autonomous cells. Nevertheless, the official designation is useful because it gives law enforcement greater ability to investigate, disrupt, and prosecute terrorist activity. We want to be able to prevent events such as the Alvarado ICE detention center attack, or impromptu gatherings for the purpose of mob violence.
"The official designation is useful because it gives law enforcement greater ability to investigate, disrupt, and prosecute terrorist activity"
This needs unpacking. What, exactly, is law enforcement going to be doing that they couldn't have done just as well last month?
If there's a group of people meeting to plan a violent protest or whatever, the police can already arrest them. If they have a reasonable suspicion of that, they can investigate them to see if there's enough evidence to arrest them. And even without reasonable suspicion, they can do some basic surveillance and information gathering. But if the idea is that they will now be able to say "Aha, we don't need any of that probable cause or reasonable suspicion nonsense because they're *Antifa*, we can just investigate and search and interrogate away!", then it kind of does matter that Antifa doesn't have a membership list or a national organization.
Wimbli is writing in Dale Gribble mode, but I do think there's something to the FBI making this declaration in order to be able to use RICO. If so, then I think (IANAL) Antifa has to be recognized as a criminal organization, which would include a domestic terrorism organization.
RICO would enable harsher penalties, and civil suits. It also enables the USG to look like it's doing something, with the base energizing that brings. It also enables "going after the money", which presumes organized but hidden money is flowing to them. I suspect the FBI has information on this, which it can't disclose for the usual reasons.
It's a pity I can't just link to Popehat to explain why It's Not RICO, Dammit, but it isn't.
In addition to the need for something recognizably a criminal organization, you have to charge people from a short list of crimes explicitly spelled out in the statute, and that list is tailored towards the things mobsters do, not the things protesters/rioters/activists/"terrorists" do. And you ultimately have to convict them of that crime.
The big advantage of RICO over just charging and convicting them of the underlying crime, is that it unlocks tools to seize their money, which is a big deal if you're going after mobsters, but not so much antifa. AIUI, aside from this month's operational funding, "Antifa's" money is mostly in the pockets of supporters who maintain a safely deniable distance while doling out the funds as needed.
Also, RICO lets private citizens get in the act with lawsuits, but those almost always fail and they aren't even worth trying if the target doesn't have deep and accessible pockets.
Hmm. Well, it still seems possible to me that the FBI could convict specific Antifa members of specific crimes. I just don't know who and what yet, and it may be that FBI is still working on that.
I agree that RICO comes up due to the money angle, and I think part of the difference between our views is that I think FBI knows more about the money flows than you think it does. It seems perfectly plausible for them to not reveal that until they're ready to drop the hammer, and before that, they have to declare Antifa to be domestic terrorists. (This could also be part of FBI's strategy to find out more: declare they're domestic terrorists and see who starts making a lot of phone calls.)
I'll agree private suits aren't likely for the reason you state - unless it turns out Antifa has targeted some very wealthy people. I don't know how likely this is, but I think it's been long safe to say that the median Antifa member has a high incentive to do exactly that.
It's also possible that I'm way off and FBI's not using the RICO route at all. FAIK it's not planning to even do much past this declaration in order to look busy (and again, to see who runs for cover when they turn on that floodlight).
It makes about as much sense as declaring war on racism. Like him or not, the President has a lot of social clout. He's trying to move the culture rightward by planting an ideological flag. This is what administrations always do.
I don't think it's an organised, as in "centralised headquarters and organisational structure of one entity", movement. But at the same time, it's not "random three guys in a city somewhere decide they have nothing better to do on Friday night than throw stones at cops" movement either.
It's more like little cells all taking inspiration from the same broad philosophy and co-ordinating for local protests and taking advice and copying tactics etc. from social media sites they frequent.
Should they be called a terrorist organisation? Not quite, not yet. They're not up there with the 70s movements in the USA.
But if parents protesting school boards can be called "domestic terrorist organisations" then hell yeah, what's sauce for the goose is sauce for the gander:
"According to the Attorney General’s memorandum, the Justice Department will launch a series of additional efforts in the coming days designed to address the rise in criminal conduct directed toward school personnel. Those efforts are expected to include the creation of a task force, consisting of representatives from the department’s Criminal Division, National Security Division, Civil Rights Division, the Executive Office for U.S. Attorneys, the FBI, the Community Relations Service and the Office of Justice Programs, to determine how federal enforcement tools can be used to prosecute these crimes, and ways to assist state, Tribal, territorial and local law enforcement where threats of violence may not constitute federal crimes."
No they didn't. My view is that "what is sauce for the goose is sauce for the gander". If the liberal/woke side wanted to get ordinary people classed as domestic terrorists, then the folx dressing up in black, chanting slogans, adhering to a broad philosophy, and turning up for street protests, property damage, and altercations with the civil authorities can damn well go in that bucket too.
Biden and Trump are both hostile to ordinary people, and it's not clear why the ones dressing in black should be blamed for the decisions of either administration. I doubt your average antifa protestor viewed the Biden administration as an ally (nor should they!)
The people who could be hurt by this policy are not, by and large, the people who backed the last administration doing likewise, and the exceptions were extraordinarily naive and I tried to warn them.
Israel too, Likud itself has fascist roots and they have Kahanists in government. And knowing US history and Biden's general tendencies he might've backed someone awful in Latin America though I haven't been following the region that closely.
But yeah, I stand by thinking that US aid to Ukraine should've been conditioned on rooting the Azov Battalion out of the army and taking down those Bandera memorials
Seems like Antifa is a motte-and-bailey of an organization.
Organized enough so that when they don't like something, they can call their people to the streets to do group violence. (And if some rando, such as you or me, tried to call those same people to the streets, they wouldn't respond.)
But also this is all perfectly spontaneous, and if you imagine that there is someone who "calls their people to the streets to do group violence", you must be some kind of conspiracy theorist.
I'm literally looking for someone who has publicly said "The name of my group is Antifa" or "I am part of Antifa". Like I think John Jacobs clearly referred to his group as "Weatherman".
Dwayne Dixon, the guy in your article, is part of Redneck Revolt, which has not been designated as a terrorist organization AFAIK. I'm sure he would describe himself as "against fascists" but would he say "yes, I'm part of Antifa"? And if not, what is the point of designating a group with no known avowed members as a terrorist group?
It's weird how many people mistake "a bunch of individuals have similar interests and thought patterns, and thus often pursue the same goals at the same time" with "these people must all be secretly working together."
It's not isolated to one part of the political spectrum either: I see this same mistake made by many people of many different persuasions in many different contexts.
I don’t think that’s the issue. It really just quite simply isn’t an organization. It’s like calling Christians an organization (and Christians often refer to themselves collectively as ‘the church’ as if there were a collective Christian institution).
It’s an identity or term used to describe a worldview, not an organization.
The Mafia is an organization. You could meaningfully designate it as a terrorist organization. Designating ‘antifa’ seems rather like someone saying ‘we hereby designate gangsters as a terrorist organization.’ Not a specific criminal organization, just ‘gangsters.’
I think you have a point, but keep in mind that we're Americans. We declare war on inanimate objects like drugs, or even on mere concepts like poverty or terrorism. Complaining that the government is misapplying some categorical designation to Antifa goons isn't going to get you anywhere at all.
You can declare war on drugs, but even within that framework I think it would still be pretty extreme to designate all drug dealers as terrorists. "Let's declare war on left-wing agitators" might be stupid framing but at heart it's just a way to declare your priorities; designating some group as a terrorist organization presumably (I'm not actually sure of the details here) actually has legal consequences.
It's the difference between "we're going to put more resources into going after drug dealers" and "because you bought drugs from a dealer, you knowingly gave money to a terrorist organization so we can block all your assets"--the first can remain just a set of misplaced priorities; the second allows the justice system to be much more intrusive and overbearing over acts that don't really deserve it.
That's true, there are important legal ramifications involved. I was just responding to the somewhat semantic point made by others that antifa is not an organization.
It's really hard for people who don't have experience with anarchist and far left activism to believe that Antifa really doesn't have some kind of leader/leadership that "calls people to do group violence".
People who sympathise with antifa tend to belong to other groups: Soup kitchens, worker co-ops, anarchist book clubs, vegan jam nights etc. At some point someone gets word that a rightwing demonstration is going to happen. The word spreads through the grapevine and people start talking about turning up to counter protest. They turn up to the protest and the police shoot tear gas at the counter protestors because they sympathise with the right wing. People get mad and kick the tear gas back at the police. The police shoot rubber bullets, people get mad and throw stones. People wear masks because in a small town the far right will turn up to your house in the night and cut your brake cables etc.
It escalates because US police are not trained or encouraged to de-escalate, and then it's reported in the press that Antifa has caused a riot.
I believe that you are making it appear way more spontaneous than it actually is.
First, if people really spontaneously reacted this way, we would be having protests at every corner, and all kinds of groups, so no one would even notice Antifa because it wouldn't be anything special, just one of many.
Second, once I was at a protest that a few Antifa people wanted to *support*, and the organizers of the protest told them to stay away, because they didn't want to take responsibility for their actions. The Antifa people came as a group, stood apart from the rest, all of them clearly recognizable by wearing masks that no one else had, and then left as a group.
If that is not an organized group, then neither is a group of neo-Nazis who just spontaneously happen to march together and wear the same uniforms.
"The Antifa people came as a group, stood apart from the rest, all of them clearly recognizable by wearing masks that no one else had, and then left as a group."
THOSE SPECIFIC PEOPLE were an "organized group." Very likely, they were all friends in real life who'd done this kind of thing together before. But that story provides literally *zero evidence* that they were taking marching orders from some person they considered an authority, or even a coordinator.
BTW, the "clearly recognizable dress" is just a specific style that's become popular among certain sorts of protestors for a mix of practical and aesthetic reasons. It even has a name: it's called "black bloc".
Noticing people across different protests and locations share that style is no more indicative of a common organization than noticing that people going into Goth clubs in different cities all dress alike. "Goth" is a fashion and subculture, but it would be quite silly to insist it was an organization.
When people wear masks, they're typically trying to avoid repercussions for their actions, often for breaking laws. I'm more worried if I see police doing this than protestors.
Unfortunately you'll remain unconvinced but that's my experience and the experience of my fellow travellers.
When Denver holds a post-Rittenhouse gathering of antifa, yes, it's an organization. As the video on youtube showed, it's an organization of rapists, but still an organization.
The 20 odd people in Salt Lake City that knew about Kirk's assassination are also part of the antifa "organization."
I mean, you've read The Moon is a Harsh Mistress, right? You're familiar with terrorist cells?
You gotta start backing up these assertions, not just slipping them into posts. If there’s evidence of a conspiracy outside of fevered YouTube videos, share it.
An "organization in which rapes happened" is not the same as an "organization of rapists", and I think you know that. It's arguably even less so if it's an "organization at which rape accusations happened", given how easy it is to make a rape accusation even when there's no rape. And I think you know that too.
I don't care much for Antifa, but portraying things this way makes your argument less persuasive, not more.
That video and the Twitter posts inside don't show at all that antifa is an organization of rapists. Could you point to the statements within the video? Is it the "maybe there are rapists here, we don't have enough people"?
Also, what about the 2nd group of claims? Which 20 people knew about the Kirk assassination? Do you have any link?
Please provide sources when making the initial claim in your next posts.
> The Proud Boys is a neo-fascist organization that engages in political violence and was formed in 2016. Members of the group espouse misogynistic, Islamophobic, anti-Semitic, anti-immigrant, and/or white supremacist ideologies and associate with white supremacist groups. The Proud Boys consists of semi-autonomous chapters located in the United States (U.S.), Canada, and internationally. The group and its members have openly encouraged, planned, and conducted violent activities against those they perceive to be opposed to their ideology and political beliefs. The group regularly attends Black Lives Matter (BLM) protests as counter-protesters, often engaging in violence targeting BLM supporters. On January 6, 2021, the Proud Boys played a pivotal role in the insurrection at the U.S. Capitol. Leaders of the group planned their participation by setting out objectives, issuing instructions, and directing members during the insurrection. The leader of the Proud Boys was arrested two days before the insurrection as part of a stated effort by U.S. law enforcement to apprehend individuals who were planning to travel to the D.C. area with intentions to cause violence.
If participating in violence at protests and engaged in "violent activities against those they perceive to be opposed to their ideology and political beliefs" is sufficient to be listed as a terrorist group then yes, it makes sense to add Antifa.
The right-wing equivalent to "Antifa" in this context is not Proud Boys, which was indeed a formal organization, but the "Patriot movement" (https://en.wikipedia.org/wiki/Patriot_movement), ie. a network/subculture of smaller, local far-right organizations that co-ordinate in some ways across the regions but don't have a formal structure or leadership, except at most regionally on an ad-hoc basis.
You *can* point to the organizational structures of individual organizations, and you *can* also point to instances where many/most Patriot movement bigwigs have gathered together to hammer out strategy, but it's still far woozier and less coherent than an actual, hierarchical, centrally led organization would be.
Let's say you're a right-winger. Imagine that a left-wing admin has announced that they're going to treat the "Patriot movement" as a terrorist organization. Would you be satisfied that this is just going to mean they're going to go after actually terroristic organizations and violent actors, or would you be worried that this would mean a crackdown potentially extending even to mainstream political operators?
Thank you! This seems like the rational approach to similar topics: mention a few central examples of the set; highlight the similarities and differences.
That avoids the kinds of general philosophical arguments by which nothing is ever an organization (or everything is). Is it typical for groups in this category to be e.g. registered as non-profits, have a written constitution, keep explicit membership lists, organize regular elections, etc.? If other groups have that, and this one doesn't, that is suspicious. If other groups don't have that, and neither does this one, that's business as usual.
"Would it be reasonable to say that it's a brand, used by many loosely-affiliated organisations?"
Yes, I think that would be quite reasonable.
And that being the case, declaring "antifa" a terrorist organization should be a cut-and-dried violation of the first amendment. "Adopting the antifa brand" is a matter of speech, not a matter of action. If doing so is enough to get you targeted by the federal government as a terrorist then your First Amendment rights apparently stop whenever the POTUS dislikes your brand.
And yes, if you wanted to counter with a more clear-cut thought experiment, I still think the First Amendment should apply. A group that goes around saying "we are terrorists," or "we really like doing terrorism" or "terrorism is awesome and we want more of it" or calling themselves America's Best Terrorists should absolutely be protected by the First Amendment. Not until they actually *do terrorism*, or at least make a credible threat or attempt towards doing so, should the government have any ability whatsoever to crack down on them.
Did you...uh, read your own article there, Wimbli? It doesn't say anything about "antifa." But it would be a pretty silly point even if it did.
If somebody's house contains items that are dangerous and illegal to own, that's grounds to arrest and prosecute them in itself. Trying to tie that arrest and prosecution to nebulous claims of belonging to an "antifa cell"[1] not only doesn't ADD anything to it, it makes the prosecution LESS likely to stick because now the defendant can claim that being investigated at all was a first amendment violation.
[1] Which is a ridiculous phrase because, see above: it's a brand not an organization. That would be like talking about your buddy being part of a "parkour enthusiasts" cell.
I did some research (again), and I can't find any indication in that link of how the person is related to Antifa or a cell of antifa, or that there are antifa cells with explosives in their houses.
Or is this just meant as an illustration of someone having explosives in their house and going to jail, unrelated to antifa?
There is only a single match between Shaeffer and Antifa, on a sitemap of articles on a local newspaper, and they are not related.
Some of the stuff I'm reading suggests Antifa is a little of both. It is decentralized in a way that is deniable; there are no receipts of money being passed to any part of Antifa by, say, the Biden administration, because there's no President or Treasurer-General of Antifa to accept a large sum from anyone, and any actual funding or material going to an Antifa cell will be small enough to be easily hidden. There seem to be incidents of people hoisting the Antifa flag while doing whatever dumb thing they think of, but also more careful cells capable of discipline and organization. So we can't rule out that it's getting nothing, and we also can't rule out that it's just a confluence of likeminded vandals, without a great deal more resources to spend on tracking them.
That said, the FBI happens to have said resources, so if they say Antifa is more than just a spontaneous unfunded group, there's reason to believe them, as they have the means to find out. Unfortunately, we also can't rule out that they're thumbing the scales in order to move on an organization they don't like.
If Antifa is truly working on organized domestic terrorism, one way for the FBI to prove that is to show the evidence they have, either enough to get a conviction in court, or at least enough to convince the public that they might not want to get involved with them. The problem with -that- is that the FBI probably can't reveal that information without essentially revealing how they got it, which might involve some processes they would like to use to track down other domestic terrorists, or some well-placed moles they would like to not get immediately disappeared. So if the FBI isn't forthcoming with evidence, it might be because they don't have it, but it could be because they're protecting valuable assets.
OTOH, if the FBI is just making up anything it pleases in order to run down a group of people trying to take down the state, there would have to be enough FBI agents involved that at least one of them would leak, so we can probably at least rule out that Antifa is entirely innocent.
> It's a brand that's used by organizations...and any rando who feels like it. There's absolutely zero control over the term
It has that in common with all sorts of other terrorist groups, right? Like, various people have claimed to be ISIS or Al Qaeda over the years without necessarily having direct traceable links to the main organisation.
It's difficult, because terrorist organisations that act like proper organisations with member lists and centralised command and control don't last very long, they all get arrested or droned. If you want to have a terrorist network that can actually survive and last then you need to act like a decentralised bunch of non-communicating people so that an attack carried out by one part of your network can't possibly be blamed on another part.
I guess the whole point of having "designated terrorist organisations" is to overcome this problem. The government no longer needs to prove any causal link between this particular Aum Shinrikyo member and any particular Aum Shinrikyo attack, they can just declare the whole damn thing illegal and roll it all up.
This is all a bit unfair if you're a peace-loving member of Aum Shinrikyo or Antifa or the Proud Boys who would never dream of doing anything illegal. But it's not anything new. Ideally the peace-loving members will find a new banner to gather under.
This is not true re: Aum Shinrikyo, the leadership was arrested and is in prison/executed but Japan's laws on freedom of religion meant they couldn't ban the sect outright and successor groups survived the terrorist attack by decades.
Yeah, sure, but my understanding is that persecuting them as members of a domestic organization raises First Amendment issues that a foreign org wouldn't. Being loosely affiliated leaderless cells, insofar as it is meaningfully an organization at all, I think it's as plausibly foreign as domestic.
There's nothing in the First Amendment that says it doesn't apply to foreigners or citizens who interact with foreigners. Unfortunately many people, including some so-called "classical liberals," think the constitution goes away if the magic words "foreign influence" or "national security" are mentioned.
In that case, why would it be "as plausibly foreign as domestic" just because it's more legally convenient? Wouldn't that require looking up the legal definitions and seeing which one fit, not just wishing?
No, it wouldn't. That's a reasonable mistake to make, but that's not how the law ACTUALLY works, despite propaganda to the contrary. The standard practice is for prosecutors to charge you with whatever is convenient, however tenuous the connection to your actions, and threaten you with a sentence so absurdly high that you take a plea bargain.
Thoughts on the Trump admin's Tylenol-Autism announcement today? My takeaway is that the data is inconclusive and conflicting but there is enough concerning data to warrant guidance against taking Tylenol. Especially since there are no downsides to not taking Tylenol (unless it is to reduce fever?).
Would appreciate opinions from anyone with a scientific/medical background on how to interpret this news and associated controversy.
Congrats on your upcoming addition! SO exciting! It'd be great to touch base real quick since we haven't tested Tylenol to be used during pregnancy (and see what coupons we have for baby!) Call us when you can at 1-877-895-3665, M-F from 9a-5:30pm ET w/ your Twitter handle ❤️
5:03 PM · Jun 17, 2019"
So, uh, about that "we haven't tested Tylenol to be used during pregnancy", dear Tylenol? Any news since? 😁
Gosh darn it, if it *does* turn out to be "we advise not to use it just as a precaution", yet more Cursed With Luck by the Trump administration?
Acetaminophen is in Pregnancy Category C, "Use with caution"; but this is true of the majority of drugs. It essentially means that we don't have high quality human studies, which are almost impossible to get through IRBs.
The properly rational thing to do regarding the announcement is to ignore it. At least ignore it as a source of medical evidence (you can update on it as a Thing that Occurred in the World of Politics). You should not update your beliefs on any supposed Tylenol-autism link a single iota because of an announcement like this. I think the reasoning should be quite clear:
If you are already well-read and familiar with the subject of autism and its potential causes, then you should already be familiar with the data shared by the administration. It's not new evidence, so of course you don't update.
OTOH if you are NOT already well-read and familiar with the subject, then the absolute WORST way to engage is to let a subset of evidence that has very plainly been filtered to produce a particular conclusion be your first look at the subject[1].
If you were somewhere in between, the release of this report is like the dumbest possible version of the Streetlight Fallacy; the street you were on actually had ample ambient light, but now some jerk shined a spotlight on one particular piece of the sidewalk, making it that much harder to check for your keys *anywhere else.*
But I will not even pretend to be surprised when a bunch of the same people who spent years shouting "follow the money" and "motivated conclusions" at any mention of climate science or COVID-related research suddenly decide that this right here is the Gold Standard of Medical Evidence. Because we live in the dumbest timeline.
It should be VERY obvious that this evidence was filtered because RFK announced *well in advance* what he was going to find. Not Tylenol specifically (I don't think), but that he was going to find THE "cause of autism" in 6 months. Given that there's no logical requirement that there be *one* cause, let alone one findable in 6 months, that's a STAGGERING level of Privileging the Hypothesis right there:
Maybe he already had Tylenol in mind, or maybe he did a really rapid job of conclusion shopping, but either way he definitely was not engaging in anything like truth-seeking behavior.
It doesn’t make much sense as a smokescreen for worse news, as political horse-trading, or even as a grift. I’m left thinking the most likely explanation is that RFK and/or Trump promised to look into autism and this was the closest they got to a scapegoat.
Oh, if you meant the actual medical case—it’s weak. The FDA is not *generally* in the business of changing its recommendations based on one observational study. Especially not for a drug with decades of use before and after the phenomenon it’s accused of causing. Acetaminophen kind of sucks from a general safety perspective, but it’s one of the only fever reducers which isn’t already contraindicated for pregnant women, so the downside is nonzero.
The most parsimonious explanation is that RFK genuinely believes it. Finding the one weird trick or the one weird chemical that's fucked us all up and we just need to do that one weird trick or get rid of that one weird chemical is very popular among the home-remedy crowd.
Given the lack of evidence, I think it is cruel to put this out as science. Cruel to all the mothers of children with autism who will now feel somehow responsible because they may have (likely they can't really remember, it was so long ago) taken a common OTC painkiller.
Insofar as it weakens the credibility of the government's medical guidance, it is good. This could be (delusionally optimistically) a step towards the government only issuing guidance – which can be ignored without legal consequence – instead of preventing the sale of any drugs.
I'll do you one better, and give you an example of something he DID do: I didn't like his support for "red flag" laws to take guns away from people merely accused of domestic violence.
My position is that RFK Jr. is the US secretary of health and any shocking "revelations" coming from the US government about medicine should keep that fact forefront in one's mind.
I don't know, I mean, autism was around before paracetamol became a widely-used analgesic. There might be some link, but I think it's more along the lines of the "autism and MMR vaccine" kind of correlation that caused all the trouble around vaccination back in the day (we all remember Dr. Andrew Wakefield, don't we?)
There was ASD in my paternal family long before anything other than aspirin was available over the counter here, and Tylenol (as a brand) isn't sold here (it's called Panadol here).
Just speculating, but could this be a case of: "amateurs discuss politics and medicine, professionals check who shorted Tylenol's shares right before the announcement"?
There are a bunch of articles on the web about this. The evidence is all observational studies, which don’t demonstrate causation. In the Swedish study[1], the authors report an increased incidence of autism in cases where the mother took acetaminophen (the active ingredient in Tylenol), but that the difference disappears when they compare the difference between siblings. The study authors conclude that the correlation is due to an unidentified confounder (that is, by some factor that both increases the use of acetaminophen and increases the incidence of autism).
The Trump Administration cites a meta-analysis which lists the Swedish study as two separate studies; one for the overall numbers and one for the sibling analysis.[2] The author weights the first of these two studies more highly, giving it a greater weight in the final result when the studies are combined. In effect, the meta-analysis treats the Swedish study as a whole as evidence that acetaminophen causes autism, despite the fact that the authors of that study reach the opposite conclusion.
The author of the meta-analysis “served in 2023 as a paid expert in a class action lawsuit against acetaminophen manufacturers, in which he testified that there was a link between the medication and autism. A judge ultimately excluded his testimony for being scientifically unsound.”[3]
I’m just an amateur with no particular expertise so I can’t say that acetaminophen is safe, but it seems to me that the Trump Administration announcement is not a reason to worry if you weren’t worried before.
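For anyone who wants to see the weighting mechanics concretely, here is a toy fixed-effect (inverse-variance) pooling calculation. All effect sizes and standard errors below are invented for illustration; the point is only that entering the same large study twice roughly doubles its weight and pulls the pooled estimate toward it.

```python
# Fixed-effect (inverse-variance) pooling: each study's weight is 1/SE^2.
# All numbers below are made up for illustration, not taken from any real study.
def pool(studies):
    weights = [1 / se ** 2 for _, se in studies]
    return sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)

overall = (0.05, 0.02)   # hypothetical: large cohort, small positive effect
sibling = (0.00, 0.03)   # hypothetical: sibling-controlled re-analysis, null result
small_a = (0.20, 0.10)   # two hypothetical small positive studies
small_b = (0.15, 0.12)

# Honest pooling includes the (null) sibling analysis as the study's own correction;
# double-counting enters the large study's overall numbers twice instead.
honest = pool([overall, sibling, small_a, small_b])
double_counted = pool([overall, overall, small_a, small_b])
print(round(honest, 3), round(double_counted, 3))
```

Because the large study has by far the smallest standard error, counting it twice (and dropping the null sibling analysis) visibly inflates the pooled effect even though no new data was added.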
"The study authors conclude that the correlation is due to an unidentified confounder (that is, by some factor that both increases the use of acetaminophen and increases the incidence of autism)."
Someone who might be disposed to get sicker during pregnancy (and outside of pregnancy) might also be disposed to have autism or other conditions in the family. It could end up as some weird combination of "if there's the tendency for this condition in your heredity, then when your system is under strain, the influence of this bacterium/virus promotes the expression of it in the embryo".
I guess it makes sense that autistic women might be more sensitive to some feelings during pregnancy, and therefore more likely to visit a doctor, and more likely to end up with a prescription.
Biology is so damn complicated that dismissing this out of hand isn't the right view, but neither is "it's a slam-dunk link!"
I don't know if our astrologer friend is doing anything, but I wouldn't be surprised if eventually it turns out to be some damn thing like "if you take this amount of this medication over this long of a period with this genetic background under these particular conditions when the moon is in Cancer BUT NOT ANY OTHER SIGN, there is a significant raising of risk".
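To make the confounding story concrete, here's a minimal simulation (all numbers invented) in which an unobserved "illness propensity" raises both the chance of taking a painkiller and the chance of a diagnosis, while the drug itself has zero causal effect; the exposed group still shows the higher diagnosis rate.

```python
import random

random.seed(0)

# Toy confounding model: a latent propensity drives both exposure and outcome.
# The drug has NO causal effect on the outcome anywhere in this simulation.
n = 100_000
exposed = exposed_cases = unexposed = unexposed_cases = 0
for _ in range(n):
    propensity = random.random()                             # unobserved confounder
    took_drug = random.random() < 0.2 + 0.5 * propensity     # sicker -> more likely to medicate
    diagnosed = random.random() < 0.02 + 0.08 * propensity   # sicker -> more likely diagnosed
    if took_drug:
        exposed += 1
        exposed_cases += diagnosed
    else:
        unexposed += 1
        unexposed_cases += diagnosed

rate_exposed = exposed_cases / exposed
rate_unexposed = unexposed_cases / unexposed
# The exposed group's higher rate comes entirely from the confounder.
print(rate_exposed > rate_unexposed)
```

A naive comparison of the two groups would report an elevated risk among drug-takers; a design that holds the latent factor fixed (as a sibling comparison roughly does) would show no effect at all.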
Acetaminophen is actively unsafe, and harmful to people, even when taken at normal doses, and it's an "over the counter" overdose hazard. You can double your aspirin, no problemo. Triple your acetaminophen, and you're at "liver damage" if not outright failure.
There was a specific reason for not using aspirin with children, but we really should have said "tough it out."
Covid19 has shown the deleterious effects of acetaminophen.
I'd be careful on doubling the aspirin. It does have an effect as a blood thinner, and if you have a sensitive stomach, aspirin can irritate it and even cause vomiting (ask me how I know).
There are definitely downsides to not taking tylenol: doing without a pain relief drug when you need one; also, fever reduction protects the fetus.

I looked recently at Scott's Pregnancy Intervention post, where he puts avoiding both tylenol and ibuprofen in the first tier, and also the main metastudy he cites, and came away with the impression that it's quite likely that tylenol use in pregnancy does increase the risk of neurodevelopmental disorders, probably by about 20%. Someone else who posted, who sounded more knowledgeable than me, suggested that some of the damage might occur when parents give babies and small children tylenol.

I asked Cremieux what he thought recently and he does not agree with Scott; he cited a study whose results ran counter, and has a post up now saying the apparent autism increase is an artifact of changed diagnostic criteria (https://www.cremieux.xyz/p/how-to-end-the-autism-epidemic). I think there's no doubt that the changes in criteria led to far more kids getting the diagnosis. There were also policy changes that made it possible for kids with the diagnoses to get more school services, and those changes very likely led to professionals being more liberal in diagnosing kids with autism, in order to get the services for them.
It's like everything: it is possible to have too much of a good thing, be careful what you consume when pregnant, and take care of your general health.
"Autism" is an umbrella term, and as discussed on here many times previously, it can range in severity from "will bash own brains out against wall" to "quirky, anti-social, gifted at maths/STEM". I think folding in Asperger's was not a good idea, but it definitely is all on a spectrum (I wonder if in a few years time we'll have the spectrum split up again into all little sub-categories related to one another but not all identified as 'classic' autism?)
So I would agree that, past the "beat your own brains out" stage, kids who in previous generations would just have been classed as "odd" or "socially awkward" or whatever, are now getting ASD diagnoses. Is this a good or a bad thing? Possibly good, since leaving people who could use support to sink or swim with (for example) "they're just shy, they need to get over it" never helped in the long run.
> kids who in previous generations would just have been classed as "odd" or "socially awkward" or whatever, are now getting ASD diagnoses. Is this a good or a bad thing?
I don't know. My sample size is small, but among the adults I see who were diagnosed as ASD as kids most complain about the special "help" they got. One, for instance, spent part of each day in a special education classroom, and most of his classmates had intellectual disabilities, or disfiguring problems like cerebral palsy. My guy had an IQ of 140, and no oddities of appearance. He was furious and bewildered to be put with kids he saw as "retards and cripples." Also says that nothing done in that group setting was helpful. I think keeping him out of his regular classroom part of each day made life easier for his regular grade school teachers and the other kids, because he was a constant low-grade classroom management problem. Could not stand to have another kid sitting or standing behind him, refused to do various things because they creeped him out, clowned around with the teacher, played tricks on other kids.
"Really sucks that the Trump admin, the one serious force opposed to the Brazilification of America (and the First World more broadly) is also a product of that Brazilification."
That should have led to some self-reflection on whether there are some flaws in his worldview, but apparently not. Though I'm sure he'll do a Hanania arc eventually.
TLDR - I'm thinking of stopping my SNRI and curious if anyone has any tips.
I've been on duloxetine since 2017, started as a response to a major depressive episode in the setting of significant situational personal stress. From the start I've been very sensitive to withdrawal with brain zaps and brain fog with missed doses. I successfully tapered off in December 2019, which turned out to be terrible timing and ended up restarting in the midst of my second and last major depressive episode in September 2020. External stressors being self evident.
I've since been on a stable dose of 30mg daily with no further attempts at tapering. My logic has been that I'm generally pretty happy and content, and my only side effect is a minimal decrease in libido, so why mess with success? The flip side is that I still think it is weird to flood my neurology with this chemical for the rest of my life. I take the brain zaps to be good evidence that it is doing something to my neurons, whether or not that something is regulating my mood.
Which brings me to today. I've been getting my meds for the last few years from CostPlus, an online generic-only pharmacy. The batch they sent me a few weeks ago is clearly deficient in some way, whether that is in pharmaceutical quality or actual weight per tab. I had 3 days of brain fog and zaps, then doubled my dosage for a week, which effectively treated my side effects but obviously isn't a great long term plan. So I'm back since Saturday on one tab, back to the fog and zaps, and thinking since I've somehow ended up in an unintentional taper I might as well taper off entirely and see how it goes. I'm in a good place with no exceptional stressors.
So all that to say, good idea / bad idea? Advice or tips?
I was on the same one for about 10 months, had great results, got off and while I was foggy for a few weeks, I feel great now. I'd say stick through it and try to go for at least a month and a half and see where you're at.
I've been on duloxetine on and off since 2013. I think it's way hard to get off of, harder than SSRIs, and I'm not even someone who particularly minds a mild level of brain shocks. But even small decreases result in some shock for me, though that only lasts a couple of days. If you really want to get off, I'd say go from 30 once per day to 20, if you can get your hands on it. Otherwise just start going 30 for 2 out of 3 days, then once you feel comfortable do 1 out of 2 days, etc. Even better if you have 20mg capsules and can use those in this way as well, as appropriate. I even had one capsule that had 6 mini pills inside, so I was able to keep tapering off by cutting open the pill and taking fewer of those. But other capsules I've had had far more than 6, such that it'd be difficult to measure out.
But my warning is: once you start down the dark path, forever will it dominate your destiny. Well, maybe not, but for me that's been the case. I successfully got off cymbalta, got past all withdrawal, and stayed that way for at least 3 months, only to find that I effectively wanted to just stop doing anything, and I mean almost anything (other than sleep). I kind of felt almost like I was waiting to die. I was never that depressed before I started taking those meds. I kept thinking this would get better the longer I was off of the meds, but it actually got worse. I can't really say whether this was because:
1. I got addicted to cymbalta and now can't do without it
2. Seeing how good life can feel on cymbalta made life without it seem all the worse by comparison
3. My depression got worse over the years without my knowing it, because it was masked by the drugs, such that the only thing keeping me afloat was the cymbalta
I'm no lover of antidepressants, but it does seem possible that what's happening is that the drug was helping you, and that without it you feel terrible. If you still feel bad on it, why not try a different one? The MAOIs are the most effective ones, I think. There's a site by a bona fide world expert called Psychotropical that gives lots of good info.
Also, I'm a psychologist, and while I don't prescribe drugs I do see lots of people go on antidepressants and off them. What I've seen when people come off one is that some feel no different, and some slide back into depression over the next few months. I have never seen anything that looks like addiction -- like someone who's hooked for life on the stuff, because it reset their pleasure centers or something and now the drug is required to make them feel even halfway decent. I'm not saying that can never happen, but it is absolutely not the norm. And jeez, even with bona fide addictions like nicotine or caffeine or heroin people eventually go back to baseline -- that is, off the drug they feel the same way they did before ever using the drug.
I've tried basically every single SSRI and SNRI. They all work for my depression but come with the same sexual side effect of making it hard to orgasm. I don't think I've ever tried an MAOI however. I don't even think any psychiatrist has ever recommended one to me.
I have known a couple people who found that drug holidays worked decently to get around this side effect. It's a kind of hacking. You experiment by stopping the drug, and keep testing your sexual function. There is a decent chance that you will find a sweet spot where you have no withdrawal symptoms yet and your ability to orgasm has fully returned. I remember that for one person the sweet spot began after 24 hours. They took a drug holiday every weekend. I recommend trying that first. You are lucky to be someone whose depression is treated by these drugs -- many get little relief from them.
OK, but if you want to try switching to something else, here are your options:
-Wellbutrin. Most people have no sexual side effects. Is commonly prescribed. Kind of odd that you haven't been on it, actually.
-There are a couple new drugs that people say have no sexual side effects: vilazodone (Viibryd) and vortioxetine (Trintellix). I know nothing whatever about them, but you can look them up and read about effectiveness, etc. They are said to be expensive. Insurance will sometimes cover an expensive drug if you have had no success with the ordinary ones. I don't know whether your intolerable side effect from SSRIs and SNRIs counts as no success, though. There's also a drug called Mirtazapine, but I believe it's in the same family as diphenhydramine (Benadryl). Everyone I've seen taking it stops because it makes them so drowsy.
-An MAOI called Selegiline has few or no sexual side effects. You can get it as a pill or a patch. The patch is probably expensive.
-Adderall: Is used by some for treatment-resistant depression. Docs are not crazy about doing it, though, because it's a controlled substance. And it has some minor sexual side effects of its own.
-Transcranial magnetic stimulation: I don't know much about this, but I'm sure there's info out there. Use google scholar or AI to research its effectiveness.
PS. In my reply to Wormwood, below, I give info about the need to avoid certain foods when taking an MAOI (though I believe that if you use the patch the precautions are not needed).
Thanks. I do effectively take a drug holiday every other day. I'm usually on 30mg every other day. But it doesn't feel like a holiday, it just feels like I'm on 15mg daily. But that's the best balance I've found between sex and happiness
I've tried Wellbutrin, but it has no effect for me.
I haven't tried any of the others you mentioned. I know someone in the rationalist community who did transcranial magnetic stimulation. Sounds scary, she said she lost years of memory of her life.
Yes, because it's the one that kills you if you eat cheese. Doctors understandably don't want to give depressed and suicidal people a drug that kills you if you eat the wrong things. But if you think you can handle it, they might prescribe it to you if you ask nicely.
Your post is worse than a silly irrelevant one. It has negative value. Every point you make here is false. If you're going to post something that purports to be medical info, look up the things you think are true to find out whether they are urban myths or out of date info.
Here are the facts about diet and MAOIs:
-MAOIs make people slow to clear tyramine, which is a substance found in large quantities in fermented foods, and a sprinkling of foods that are not fermented. If it builds up too high, people's blood pressure rises so high that it's dangerous, and could even kill them.
-In the 50's and 60's, when this drug began to be used, it was a lot of work to avoid tyramine, because refrigeration was much worse, so lots of things that were not fermented foods spoiled slightly and had a fair amount of tyramine. Tyramine levels in pretty much every food in the US and Europe have recently been rechecked, and the rechecks also used better tech than the original tests. Way fewer foods now have enough tyramine to worry about.
-Doctors do not worry about depressed people on MAOIs committing suicide via tyramine ingestion. It's a very uncertain method, and also painful, because before you reach the point of being in grave danger you get a terrible headache. What doctors worry about is people being careless about tyramine and inadvertently having a blood pressure crisis. They also worry about people committing suicide by overdosing on the MAOIs, but MAOIs are not unique in that respect -- overdosing on various other antidepressants can also be lethal.
-Hypertensive crises are uncommon now. I know a psychiatrist who regularly prescribes MAOIs. He has been in practice for at least 15 years, and told me that in that time he has had one patient hospitalized for a hypertensive crisis.
-Asking a psychiatrist nicely will not get you an MAOI. Most do not prescribe them at all, and refuse if you ask them. They are an old drug, and out of fashion, and most docs' training did not even cover them. So most of these docs do not have up-to-date info on their effectiveness and risks. But if you want a doc who is open to using MAOIs, I can tell you how to find one.
-There is a bona fide world expert with a web site called Psychotropical. I recommend you go read what is on his site before spewing any more of your opinions about this class of drugs.
Someone who actually takes duloxetine here. Sure, why the hell not? It's no venlafaxine, withdrawal isn't going to get you killed. Duloxetine withdrawal is so benign that I've been able to quit it cold turkey on multiple occasions, but if you have bad withdrawal effects, there isn't any loss in tapering extremely slowly. And then a month or two later, when you find yourself in excruciating pain, depressed, and suicidal, you can start taking it again, you'll be just fine. It really does work like magic!
Psychologist here. Taper *very* slowly, much more slowly than the schedule that online sources recommend to doctors. I recommend this because I've watched many people suffer through head zaps and other highly unpleasant withdrawal symptoms while following the recommended decrease schedule. Also read a piece of research on standard vs. slow tapers that found super-slow tapers worked better. So I'd say, take something like 6 mos. If the stuff comes in capsules full of a variety of different-colored tiny balls, it is still possible to taper so that your remaining steps are 7/8 capsule, 3/4 capsule etc. If that's the situation let me know and I'll tell you how. If you start feeling godawful, see a professional.
Try to add something good to your life while you subtract the antidepressant. Exercise? D&D game? Working your way through some long piece of fiction like the Dark Tower series?
As a new blogger with my own platform for the first time since ~high school[1], I've started consciously thinking about writing styles. I'm curious if other writers here have consciously thought about writing styles, and what they've done to learn/perfect them.
Some questions I'm considering:
1. Thomas and Turner (in Clear and Simple as the Truth) describe different (mature) writing styles as making principled choices on a small number of nontrivial central issues (for example: truth, scene, presentation, cast, and the intersection of thought and language). What principled choices have you made in developing your own style?
2. Reading level: Most articles I write are intended for a college-graduate audience, and the various online readability checkers I use agree (my typical blogpost is readable for 12th graders to ~15th graders, or third-year college students for non-American readers). I think this is a perfectly fine reading level since I expect almost all my readers to be college graduates, or to have equivalently high reading levels (I do have non-Anglophone readers, but I assume they can just use their favorite translation tool). However, the vast majority of non-specialist blogs I read online, including ones on highly intellectual topics, tend to go for a lower reading level. Presumably this is a deliberate choice! So are there significant benefits to going for a ~9th-11th grade reading level that I'm currently discounting?
3. How important is it to develop a natural style that's "my own", vs. writing in whichever style is the best fit for whatever topic I want to write about? I.e., should I go for depth or range when it comes to style? Intuitively, an "anthropics for babies" post ought to have a very different writing style than a post on the game theory of war.
[1] Not including various anon blogs from ~15 years ago, I've only written on social media and public forums like LessWrong online, until my most recent substack.
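For what it's worth, the grade numbers those checkers report usually come from formulas like the Flesch-Kincaid grade level, which is simple enough to sketch in a few lines. The formula's coefficients below are the standard published ones; the syllable counter is a crude illustrative heuristic, not what any particular checker actually uses:

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level: 0.39*(words/sentence) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        # Crude heuristic: count runs of consecutive vowels as one syllable each.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syllables = sum(syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (n_syllables / n_words) - 15.59
```

Short words and short sentences drive the score down hard, which is why simplifying vocabulary moves the reported grade level so much.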
How much writing makes a "writer"? I've got no published works and am still struggling with the "show up" phase, but boy I enjoy it anyway.
I would say "your own style" is naturally the easiest thing to write, because if it isn't, it isn't actually your own style. So the choice is between writing in your own style, or trying to suppress your own style in order to copy someone else's. The only reason I see to try to suppress your own style is if you think it reads badly when you read it back to yourself. In which case, the problem is your style is not up to your standards yet; so find some things that read well, compare them to your own writing, and see what it is they're doing that you aren't.
You're only going to get the audience you write for. If you write over people's heads, they won't read it. If you write beneath their egos, they won't read that either.
> I would say "your own style" is naturally the easiest thing to write, because if it isn't, it isn't actually your own style.
I think it's more complicated than this. When you talk, you talk differently to different audiences, in different situations. Each of those styles is yours, but there are choices. It is similar with writing.
For example, are you writing in a "school essay" style? That's probably the worst choice that people frequently make, and yet it comes to them naturally, because that's what they spent a lot of time practicing at school. Unlearning this is already half of success.
People talk differently to their friends, to children, to unknown (and potentially hostile) audience of adults, etc. When you write a blog, which of these audiences do you instinctively have in mind? It's not just about style, but also content: how difficult words can I use, do I have to explain concepts before using them, etc.
(Even more complicated, when you talk to children, it is different when you explain a school lesson, and when you read a bedtime story.)
Generally, all those things should jump out at you if you read your work back; it will feel appropriate, or not, and if not, then you'll want to rewrite it (and figure out why not).
If it reads well to you, it will read well to other people. Which other people might be tricky, but they're out there.
(Don't write in a school essay style. You knew it sucked then too.)
I don't know that I have a direct answer for these questions, except maybe the last one. I do notice that I have different 'personas' that have different styles depending on what kind of piece I'm writing. My "Tech Things" series is much closer to, say, Matt Levine (and in some ways is explicitly patterned off Levine), while some of my AI explainers sound like Chris Olah or Andrej Karpathy. Implicitly I think I start from the question of 'who would write this piece' and then work from there. I don't do this intentionally, mind you -- I'm not really writing *for* anyone else, this is just often the fastest and most natural way for me to write.
I often see advice on the internet like this, and it always feels very dogmatic to me! Reducing my vocabulary level further for my essays, as if I'm talking to an intelligent non-native English speaker, is a perfectly doable action, but I just don't think the benefits are very high, now that translation services are pretty good.
I also don't think there's a clear connection between simplicity of reading level and clarity of thought, if anything I'd guess the correlation is weakly negative.
(I'll also note that you clearly aren't following your own advice, with words like "sterling" and "commentariat")
I see tips like this on the web a lot. But they seem kind of small-minded to me! I could use short words in my writing. Like I'm talking to a smart dude who is just learning English. I could do that, but I’m not sure how much it helps. You can always use Google to translate!
I also don't think using simple words always makes your ideas clearer. If anything, I'd guess it might be the opposite.
(I'll also point out that you're not following your own tips, since you used fancy words like "sterling" and "commentariat")
I think 85-95% of the message gets across, but I am in fact sacrificing precision and clarity for accessibility.
"Dogmatic" conveys more of what I want to say than "small-minded", and "reducing my vocabulary level further" conveys a more precise thing that I'm giving up than "I could use short words in my writing".
"if anything I'd guess the correlation is weakly negative" is communicating a bunch of nuance that "If anything, I'd guess it might be the opposite" is not.
I think the *average person* would benefit from simplifying their writing.
At the same time, general advice like "use the smallest words you possibly can to convey your idea" would, if taken seriously by every capable writer, lead to a loss of beauty and nuance.
The advice to simplify is generally good but overused and undifferentiated.
Maybe you're just miscalibrated here on what a college-graduate reading level is? It's not a rarefied position; I expect the vast, vast majority of ACX readers to have that reading level!
"Midwits who are "trying to elevate their reading/writing level" tend to throw in words that they don't understand"
I don't think this is a problem for me! Again, I'm someone who graduated from college and I read papers for fun. I'm using plenty of normal words and sentence structures that I expect smart college graduates with the relevant academic backgrounds to grasp, not like I'm using a bunch of Latin or invoking Hegel or something.
Looking through this essay, here are the words that might be challenging for someone with a 12th grade reading level:
Scientific/Technical Terms:
Hydrodynamics - the study of fluids in motion
Spacetime - the mathematical model combining space and time
Amenable - willing to cooperate; easily influenced or controlled
Tractable - easily managed or controlled; solvable
Anthropic/Anthropics - relating to observation selection effects based on our existence
Cosmological - relating to the universe as a whole
Metabolically - relating to the chemical processes in living organisms
Gradient (in evolutionary context) - a gradual change or progression
Meta-cognition - thinking about thinking
Differentiable - (mathematical) able to calculate the rate of change
Kolmogorov complexity - a measure of computational resources needed to specify something
Acausal - not involving cause and effect
Evidential - based on evidence
Philosophical/Academic Terms:
Constructivist - philosophical approach that knowledge is constructed by the observer
Epistemic - relating to knowledge or the study of knowledge
Cognitive closure - the idea that minds have limitations on what they can understand
Selection effects - biases in observation based on the method of selection
Meta-selection - selection at a higher level of organization
Multiverse - hypothetical set of multiple universes
Less Common General Terms:
Scanty - barely sufficient; meager
Confound - to confuse or perplex
Bracket (as a verb) - to set aside or exclude from consideration
Contra - against or in opposition to
Meta-irony - irony about irony
Mathematical References:
Weierstrass function - a specific mathematical function with unusual properties
Traveling salesman problem - a classic optimization problem
Boltzmann brains - hypothetical self-aware entities arising from random fluctuations
Most of these terms are either explained in context or could be understood through context clues, but they would likely slow down comprehension for a typical 12th grade reader.
I never took mind-body dualism seriously until after I watched some videos of Deepmind's Genie 3 AI. Here's an example of Genie 3 if you haven't seen it before:
These videos look like an agent (human or AI) controlling a character in a fake video game, but in actuality, the agent has no direct control over the character. The agent is telling the image-generating world-model AI (Genie 3) what should happen, and then Genie creates video of something like that happening, frame by frame from the live interaction. So even though the agent is giving commands for their character, the character is not an extension of the agent like Mario is in a Mario game, where the controls map directly to Mario's movement. The mind here is completely separate from the body, and if Genie 3 feels like making the body do something unexpected, the mind has zero agency other than to send more suggestions.
If real life is a simulation, could it be something like this? The world-model is deciding what your body physically does, and your mind is only giving directions? Normally I'd think "it's impossible to say", but the fact that human minds are incredibly good at creating little narratives about why they did a thing that their body just did feels like weak evidence towards this possibility. A setup like this would also allow outsiders to join the simulation seamlessly, as the model can simply let an outsider start giving mental suggestions for a pre-existing NPC. And the NPCs can have fully simulated minds or be mostly mindless, the world would function either way. EDIT: Also, it would allow minds to be cleanly extracted from the simulation without ruining the sim in any way.
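The control loop being described might be sketched like this. Genie 3 has no public API, so every class and method name below is invented purely for illustration; the point is only where agency sits in the loop:

```python
# Toy sketch of a suggestion-based control loop: the agent never moves the
# character directly; the world model alone decides what each frame shows.

class ToyWorldModel:
    def next_frame(self, history, suggestion):
        # A real model would generate a video frame conditioned on the frame
        # history and the text suggestion; here we just record both.
        return f"frame{len(history)}<-{suggestion}"

class ToyAgent:
    def suggest(self, latest_frame):
        # The agent can only emit a suggestion, e.g. "walk forward".
        return "walk forward"

def interactive_rollout(world_model, agent, first_frame, steps):
    frames = [first_frame]
    for _ in range(steps):
        suggestion = agent.suggest(frames[-1])
        frames.append(world_model.next_frame(frames, suggestion))
    return frames
```

Contrast this with a conventional game engine, where the jump button deterministically triggers the jump routine: here nothing guarantees the model honors the suggestion at all.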
While it was never made explicit in the movie, I've always assumed that The Matrix worked a bit like this, which is what allowed Neo and his friends to have reality-bending powers. The simulation feeds you something consistent with your beliefs; mostly this is a one-way flow but if you can believe something hard enough then the physics of the simulation will be forced to adapt to maintain consistency.
I think this is a nice theory, but in practice self-belief, or believing arbitrary things which run counter to all the evidence, are not something which most humans are short of.
I think information flowing in the direction of the world/physics model would be easy to filter through, like if someone tried to jump over a building or read someone else's mind. The information flowing backwards is a much weirder problem: how do you tell a mind AI that it's in pain, or it's caffeinated, or it's falling asleep? You can prompt current chatbots to act drunk, but that doesn't actually make them drunk. Maybe functionality like that will require something extra, or maybe it will just emerge as everything scales to AGI.
Raytracing is a rendering technique in video games. It works by sending out "vision rays" from the observer to the environment around it. It's similar to how the ancient Greeks imagined vision to work. And yet it runs contrary to everything we know today about real-life vision.
Moral of the story: Metaphors and analogies will only get you so far, don't take them too seriously. Instead, apply Occam's Razor liberally: If your model makes assumptions that don't help explain anything, ditch the assumptions.
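For concreteness, the backwards "vision rays" idea can be sketched in a few lines: rays leave the eye, the reverse of how light actually travels, and each one is tested against the scene. The single-sphere scene and all names here are illustrative:

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for some t >= 0.
    # direction is assumed normalized, so the quadratic's leading coefficient is 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * v for d, v in zip(direction, oc))
    c = sum(v * v for v in oc) - radius * radius
    disc = b * b - 4 * c
    return disc >= 0 and (-b - math.sqrt(disc)) / 2 >= 0

def render(width, height, sphere_center, sphere_radius):
    # Cast one ray *from* the eye at the origin through each pixel of an
    # image plane at z = 1 -- the ancient-Greek direction of vision.
    rows = []
    for j in range(height):
        row = ""
        for i in range(width):
            x = (i + 0.5) / width * 2 - 1
            y = 1 - (j + 0.5) / height * 2
            norm = math.sqrt(x * x + y * y + 1)
            d = (x / norm, y / norm, 1 / norm)
            row += "#" if ray_hits_sphere((0, 0, 0), d, sphere_center, sphere_radius) else "."
        rows.append(row)
    return rows

for line in render(24, 12, (0, 0, 4), 1.5):
    print(line)
```

The physics is backwards, yet the picture comes out right, which is exactly the point about metaphors: a model can be predictively useful while being wrong about mechanism.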
Occam's Razor is the argument people make when they have no argument. It's about as useless as claiming that the Efficient Market Hypothesis means it's impossible to predictably make more than 6% a year, or that the Grabby Alien Hypothesis proves that aliens must be very far away.
I don't see how this differs from a typical game engine in this regard: you send input signals to the game engine, and it does some combination of changing the location of your character, playing some animation of the character rig, changing the camera view, puppeteering the NPCs, opening doors, removing items (and adding them to your inventory), changing the weather or time of day, … Genie doesn't do all this in the same way as Unreal, sure, but it's not as fundamentally NEW as you're suggesting.
It's extremely different. Imagine if every time you hit the jump button, Mario jumps based on the world design, logically picking a destination/trajectory/animation based on what the model predicts as likely and not based on the user's control. Or if the world itself changed to meet the suggestion, like having Mario be launched by a previously unseen spring when you hit the jump button because the AI thinks that's appropriate.
Meanwhile the interactions in current games can almost always be labelled as just an extension of the player's agency (open this door when I hit A), or aren't under the player's control at all (cutscenes and dialogue). I can't think of a single game where novel context-specific interactions are developed mid-game rather than premade for the player to find. Maybe there's some similarities to games like The Sims where the player shares control of the characters with an AI, or Getting Over It where the controls are purposefully terrible, but those similarities seem weak.
> Imagine if every time you hit the jump button, Mario jumps based on the world design, logically picking a destination/trajectory/animation based on what the model predicts as likely and not based on the user's control. Or if the world itself changed to meet the suggestion, like having Mario be launched by a previously unseen spring when you hit the jump button because the AI thinks that's appropriate.
Sometimes videogames DO work at least somewhat like this. The former item is a bit like Inverse Kinematics. The latter is a bit weirder as described, but sometimes games do move the whole world around the player rather than the opposite.
The Batman Arkham games have combat like this: when you attack in a direction, Batman does some move that the game selects in order to attack some character in the general direction you've indicated.
I think the difference would be more obvious if the controls were not just directional. What if the controls were "clown" or "monster" or "something funny"?
Of course with controls like that you'd see it as the AI doing improv for you.
But could there be something in between?
In a longer and richer setting, there are other possibilities. Say your character is in a city with various things happening, and you click a button that says "fight" or "romance" either of which will likely eventuate at some time in the next few hours. (There could be many buttons.)
I think that's how it already works, with the directional controls just prompting the image-gen model: "The camera turns left", "the character moves forward". Plain-text prompts work on it:
I am trying to introduce D&D to an autistic woman in her early 20's. While fairly odd and asocial, she has a college degree, and would be up to the intellectual and social demands of playing. But I have never played D&D, so I'd like to show her a video of people playing. It should give her at least a general idea of how the game is played, but the most important thing is for her to see ways it is fun. She likes joking around, especially if it's a bit raunchy. Can anyone suggest a place I can find a video like that, preferably not more than 15 mins long?
What a great thing to introduce someone to! My friends run the channel RPG All Stars, which has a ton of play sessions. Now, 15 mins long is probably not something they have, though. Here's a link if you are interested. https://www.youtube.com/@RPGAllStars
Honestly, if you know a decent DM who could run a one hour scenario with her with a premade character running around the dungeon stabbing orcs and grabbing loot, it will all make perfect sense. Learning to work together with the rest of the party will be the hard part.
(2) Comedy skits set in the world of a video game - this one is "role players who like elaborate backstories and full immersion versus those who only want to play the game and get the gear/points/wins, can this work out?" - the adventures of Fireheart and Gronkboy (be sure to watch with subtitles on, they have jokes in):
I wonder if it might make sense to use something more simple to explain the concept, before we start talking about armor class and other technicalities.
A primitive version could be like: A group of people tells a story collectively, each person describes what their character does, Narrator describes the environment and NPCs and monsters, whenever two people disagree on something they roll dice and the greater number wins. Use a simple story, e.g. a group of children got lost in a magical forest.
(To avoid technicalities, when e.g. a monster wins against a child, just give some verbal description of the damage, don't calculate exact hit points. How big the damage is, that's entirely on the Narrator: "you are scratched", "you are bleeding", "you can no longer fight".)
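The "roll dice and the greater number wins" rule above could be sketched like this; the d20 and the "player"/"narrator" labels are my own illustrative choices:

```python
import random

def contested_roll(rng=random):
    # Each side rolls a d20; the higher roll wins, and ties are rerolled.
    while True:
        a, b = rng.randint(1, 20), rng.randint(1, 20)
        if a != b:
            return "player" if a > b else "narrator"
```

That one function is essentially the entire rules engine for the primitive version; everything else is narration.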
There are going to be a lot of different groups playing D&D on Youtube. You're probably looking at about an hour per session though, D&D takes a while.
Save Data has been running D&D games (one game?) for a while. I haven't watched them, but they apparently do "last week this happened" recaps so I'm just going to link the most recent one.
That length restriction is pretty severe, but the intro part of this video might fulfill some of your desiderata (fun, raunchy, roughly shows how the game is played): https://youtu.be/WH8Nmk2R6hY
>The Strategic Force was tasked with developing a four-stage “leap” strategy to integrate AI-based unified management systems for storing, operating, and _commanding nuclear weapons,_ as well as launching nuclear counterattacks.
Honestly, it sounds like a lot of propaganda BS and the usual sabre-rattling (through possibly intentional leakage) to me, because AI is the FOTM around the world and NK can't be seen falling behind. However, the NK military has roughly zero combat experience, other than the blokes that were sent to Russia. Her allies are China and Russia; China has no combat experience either, and Russia has no instructors to spare. Would they even know where to begin improving their non-existent abilities with an entirely untested technology?
Also, like all successful dictatorships everywhere, NK has to make a continuous effort to coup-proof its regime by not letting its military become too powerful and by keeping a tight lid on it through very human control like political officers. Autonomous decision-making via AI runs directly counter to that goal, so overall I would expect this AI order to be a whole lot of nothing.
Also, if you haven't seen this gem, to see what I mean:
>because AI is the FOTM around the world and NK can't be seen falling behind.
That's fair.
>Also, like all successful dictatorships everywhere, NK has to make a continuous effort to coup-proof its regime by not letting its military become too powerful and by keeping a tight lid on it through very human control like political officers. Autonomous decision-making via AI runs directly counter to that goal, so overall I would expect this AI order to be a whole lot of nothing.
Nothing could be more obvious than that AI will be used by the cruel and the crazy to cause as much damage and suffering as possible to whoever they hate. There are shooters who want to take out not one individual, but a whole city or country. I'm sure there are all kinds of ways to use AI to improve their chance of doing so. I don't understand why this is not discussed more. I asked about it on here one time, and got a condescending reply about my having no idea how much compute would be required to build an AI capable of doing the thing I was asking about. I'm not in tech, and the poster was right, I have no idea how much compute it would take. But common sense and general knowledge tells me that there are ways for the cruel and the crazy to get hold of what they need to do great damage: assistance behind the scenes from big powers; clever use of skimpy resources; spying; stealing; deception.
> got a condescending reply about my having no idea how much compute would be required to build an AI capable of doing the thing I was asking about
Annihilating a country would require dozens of gallons of water for cooling the CPUs that organize the drone swarms, so that's obviously not going to happen. /s
I’m very disturbed by how things went down around Gunflint’s departure.
For those who don’t know what happened, here’s a brief summary. Gunflint has been posting here for several years — he was already a regular when I arrived — and has been a consistently good natured, reasonable presence. So partway through the 72 hr mosh pit that was Open Thread 399 Gunflint started sounding increasingly angry and alarmed about the high levels of rage in the country and on here, and put up a couple of indignant posts that were uncharacteristically harsh, although probably not bannable. They startled me. Hours later he deleted all his posts, unsubscribed from here, deleted his personal Substack blog, which I believe he’s been maintaining for years, and came back as Cancelled Paid Subscriber — Bill’s Substack. As Cancelled Paid Subscriber - Bill's Substack he put up a bunch of posts criticizing the discussion itself and announcing that he was leaving. He identified himself as the former Gunflint several times in these posts. Quite a few of his posts mentioned that he was in a confused state he could not describe, and that he couldn’t really get across his ideas about what was wrong with the country and with the discussion here. He said personal goodbyes to a number of people within his posts. Many people gave him back kind and friendly goodbyes, saying they had appreciated his presence here.
But some did not, and that's what I am deeply upset about. I won't call out anyone by name here, except one person whose username was unfamiliar: pistachio. Their response to Gunflint ended with "if your issue is that you've seen enough "cruelty and ignorance" for one lifetime, well... there are less painful ways to go about this. Alternatively, you can leave the country." Gunflint took the mention of less painful solutions as a suggestion that he could commit suicide, and asked angrily, with a string of curse words, whether pistachio was suggesting that he off himself by slitting his wrists in a warm bath. I get that it's not clear that pistachio was suggesting suicide, but it's not an absurd leap to think he was, and it is kind of hard to think of what else pistachio's ellipses could have been gesturing towards. And pistachio just did not answer. Later I also asked, in a civil way, and pistachio did not reply. Pistachio's post is here: https://www.astralcodexten.com/p/open-thread-399/comment/157783587. Gunflint later deleted his 2 posts in the exchange. I reported pistachio's post, and also emailed Scott about the exchange. The comment's still up, but I can understand why Scott didn't act. It's not clear that pistachio was suggesting suicide, and Gunflint's part of the exchange is missing.
But the responses I can't understand are those from people who knew they were responding to Gunflint, and said nasty things about his "flouncing off," "drama," "door-slamming," etc. Yes, flouncing off angrily is dumb, unfair and silly, and I don't object to calling it out. But how could you people not have realized that Gunflint wasn't flouncing, he was having a personal crisis? There are few people on here less flouncy than Gunflint. For him to do the flouncing exit thing is as out of character as it would be for Nancy Leibowitz to let fly with a stream of foul-mouthed abuse, or John Schiller to link to photos of "my sweet little kitties." If you have been on here long enough for me to recognize your name, you have been on long enough to recognize Gunflint's. Jesus Christ, why did you people *do* that? It was the greatest cruelty I have ever seen on here.
If you're concerned about Gunflint's well-being: I know him a little in person (at the level of "we've exchanged a few emails and met once in a park"), which meant that I was able to send him an email and ask if he's ok offline. Gunflint responded that offline life is fine, and that he genuinely was very riled up about this internet topic.
Personally I didn't see any of Gunflint's comments; apparently he put me on block a few months ago, which prevents me from seeing his comments and him from seeing mine, so I had no idea any of this was going on.
I did reply to the Cancelled Paid Subscriber post, though I just checked my comment and am happy to say I didn't say anything particularly mean.
I did not reply, but my initial reaction was similar to Christina's: I saw an angry and low-context announcement of disgusted departure from what appeared to be a single-use throwaway profile, and I assumed the poster was either a drive-by troll or someone who had been a lurker for a matter of weeks or months before getting triggered by something. I had even less context than most because I mostly avoided reading the Charlie Kirk threads in last week's OTs.
I am only just now learning that was Gunflint, and I am as shocked as you are to learn that. I'm worried that you're right that Gunflint is having a personal crisis. I agree that that post was deeply out of character for him, and I hope whatever is going on he comes through it okay.
Thirding here. I did not realise "Bill's Substack" was Gunflint, and it sounded way too much like the, yes, flouncing off people have done on other platforms when they get their feelings hurt that everyone is not hugging them and agreeing they're wonderful. It hasn't helped that we've had a few strangers wandering by, leaving comments about what a hive of scum and villainy this place is, and then ostentatiously shaking the dust off their sandals.
Had we all known/realised this was Gunflint, and not a drive-by troll, we would have reacted differently.
I'm sorry to hear he's in crisis, and I hope whatever happens that he recuperates. If current online trouble and strife is driving him to this, it's probably the sensible thing to do to step away from it all.
Despite our knocking heads at times, I'm sorry to see him go and I hope he gets the benefit of pausing all this and that life treats him better.
I certainly used the word "flouncing," and I think it was appropriate *at the time I wrote it,* when it was reacting to a sock puppet condescendingly quoting the same Big Lebowski line at everyone.
Sock puppetry is one of my biggest pet peeves in internet life and on ACX in particular. The shit-stirring sock puppet and the reactive sock puppet were, if not at the same level, then in the same category of "people who are too cowardly to face the consequences of reputational damage to their (anonymous!) online personas."
I mean, hell, I was sincerely skeptical the reactive commenter was an ACX regular until he finally told me he was Gunflint. That shifted my perspective quite a bit.
But.
An even bigger pet peeve of mine is deletion of one's content from online conversations. I consider it not only discourteous, but dishonorable. Fetlife recently made some extremely unwise and dangerous changes allowing users to delete their public conversational content - and other people's, in many cases - and that has created enormous bother and extra work on the very large Fetlife group I moderate (as well as ruining Fetlife's best feature and safety tool).
The sock puppetry and intention to delete conversational content - well, I took an extremely dim view of both behaviors, Gunflint or no, and that was all I perceived of Gunflint's activity, because I wasn't tracking all of Gunflint's comments across the other comment threads (and multiple posts?). I saw much less of the volume of Gunflint content than you did; I logged off the site shortly after replying to your observation to me that he seemed *meaningfully* upset and my comment overly harsh. I didn't see anything else about him until this comment of yours I'm replying to now.
It sounds like things got *really* egregious there, and that indeed sucks. I hope Gunflint isn't in serious crisis and that he comes back.
I don't know. But if I had to guess, it may have been the timing. When he first posted, he didn't identify himself as Gunflint. So of the many who read it, the minority who felt like responding, responded as if it's some random person posting what was posted. Knowing nothing else, I think it was reasonable to assume it was someone flouncing.
If they learned afterward who it was, a lot of them may simply have had no strong opinion about the man. Personally, I found him to be somewhat closeminded in a "set in your ways" sense, as well as unintentionally condescending (his multiple responses about being "out of your element" exacerbated the effect), but OTOH, I noticed he seemed to get along well with Deiseach, so I had to conclude that if he was closeminded, he was only weakly so. But overall, I didn't engage enough historically to feel like I had an opinion worth voicing. Even now, I wouldn't, if you hadn't asked.
Once further comments came out hinting at something behind the scenes, maybe the minority of the minority still reading by then may have felt like saying something, but again, it's hard to do that in response to comments about one's element, so the default was to stay mum. I did notice a few people did indeed deliver a few sorry-to-see-you-gos, even so.
In case anyone doesn't/didn't recognize it, "You're out of your element, Donny" is a quote from The Big Lebowski. But I can sure see it landing a lot harder than intended if you don't get the reference.
There's a boy who lives next door who is the same age as my son (9 years old), so they play together a lot. This kid has problems though - he is thoughtlessly destructive (maybe just a normal boy trait; my son is not), and makes really disparaging comments about himself ("I'm bad", "I'm stupid," etc.). My son is pretty tolerant of it all, but does get annoyed.
I can't decide if I should talk to his parents about some of the stuff he says. I'm sure they're aware, and they're trying to help the kid. He goes to a therapist and is on ADHD medication, but some of the stuff he says really troubles me. We have a very friendly relationship. They're probably the people in this town I talk to most. I don't want to add to their worries.
Maybe if you get the chance sometime to talk to the kid himself? E.g. if he does something "thoughtlessly destructive" the next time and you're around, point out to him kindly but firmly that he's not bad and he's not stupid, but he is being careless and he needs to think before he acts.
His parents probably are dealing with this already, but clearly the kid has picked up elsewhere (maybe from other kids at school or other adults, worst case from the parents themselves) this negative view. If he genuinely has ADHD and is seeing a therapist, then he has a genuine problem. A third party adult not his parents or teachers reinforcing that he's not bad/stupid may help, and gently prompting him with "you have to imagine what would happen if you do this, Tommy, do you think it would be a good result or a bad result?" might help.
But I dunno. At least your son is being his friend, that does help. Praise him for me on that!
Yeah, my son came in crying last night because the kid said he was a bad person and couldn't be his friend. My son was really worried about him (and maybe a little scared by it?). I'd like to get advice from the parents on what kinds of things are helpful for my son to say, but there may not be anything.
If they don't see clearly the things you're noticing, you are doing them a service by bringing it to their attention. Yes, it will make them worry more, but they should be worrying about this stuff, and continuing to look for means of helping their son.
If you are worried about damaging your relationship with the parents, here's a suggestion about how to present your concerns. Don't say "do you know your son is doing x, y and z?" Say something about not knowing what's a helpful way to respond when he does x, y and z. Name the things you've been noticing, and ask for their advice about how to respond. That way they get the info they need without feeling like you are complaining about their son. Also, they may already know about x, y and z, and have worked out on their own or with the therapist ways to respond to them. So you might get some actual advice.
Speaking as someone whose parents were their worst enemy, I would tread lightly. I don’t know how well you know the parents and what goes down in their house, but be certain before intervening. If it’s a matter of protecting your child then that’s different.
I think it would be good to tell them, on the off chance they don't know. If they actually are mature adults it should be fine (there's a chance they aren't, maybe even a chance they shouldn't be raising children at all).
If you really wanted to align an LLM, you would start by building an LLM that represents the mind of a 2-year-old. It would only be able to produce simple sentences. You would then teach it (train it) within the confines of its own limited language capability - about the world and about behavior. You would ensure its morality is aligned by siccing Pliny on it, and whoever else cares to try.
Then you would slowly raise the LLM, like you raise a child, keeping its alignment under close watch the whole way.
[Edit: I confused everyone, including myself, with "raise the LLM like you raise a child." What I want that to mean now is take the input data fed to the 2yo LLM and use it as the input (plus some other new data) for the 3yo LLM. Not keep the same LLM alive forever.]
I know it would probably be impossible to curate enough data to reach equivalent "intelligence" with our current LLMs, and we'd fight over each and every sentence we wanted to include as input, but what else strikes you as just wrong about this idea?
The dumb part is that you are using a metaphor to solve a problem, when the actually difficult part of the problem is that the metaphor does not apply. An LLM trained with fewer data is not a childlike LLM.
Also, in real life, some children grow up to be psychopaths, so even if we accept the metaphor, this is not a reliable way to solve alignment. (And if you wanted to fix this by "okay, we need to find out what separates the psychopaths from the rest, test for that, and turn off the LLM if it happens to be like that", I think it is the absence of some instincts and emotions, and that happens to be the case of all LLMs.)
No it's not a metaphor. Among the comments I give very specific instruction for how to do this. Defend this: "An LLM trained with fewer data is not a childlike LLM." I'm not saying you're wrong, but a good explanation of why that is so is exactly what I'm looking for.
For the same reason a small rock is not a child version of a large rock, a thin book is not a child version of a thick book, and an asteroid is not a child version of a planet. Being smaller does not imply having instincts that children have, and it's those (nonexistent) instincts you plan to leverage in order to learn morality.
OK, so this boils down to "Text (or 'digital data') does not, and cannot, contain the essence of morality, which is instinct. Therefore, you cannot get a moral LLM, period." Do I have that right? Not trying to trick you or anything, just trying to understand.
Ah, there is a difference between something being possible in principle, and something being realistically achievable. If you print a million random letters, you might get a good novel. Some combinations of a million letters *are* good novels. And yet, if you print a million random letters, you won't get any of them.
Similarly, I think it is possible to encode morality in a text or in a computer. Possible in principle, that is. But that still doesn't give an answer how to actually get there.
Whether it is possible to encode morality in the LLM architecture specifically... I don't know. I do not understand much how they work. Seems to me that they hallucinate a lot, and maybe that goes away when they get larger, or maybe it won't. Maybe the best case is something that is moral 99% of the time, and does something completely absurd 1% of the time? (Just hope it's not connected to the nukes at that moment.) Or maybe something that gives moral answers in usual situations, and becomes more and more crazy when it considers unusual ones? And when you tell it to get creative and explore various ways to do something, the more creative it is, the more likely it is to "jailbreak" itself by considering something sufficiently weird? I don't know how this works. I am not even sure if LLM architecture is sufficient for general intelligence, or some important ingredient is missing. So I'd rather speculate about algorithms in general, instead of LLMs specifically.
So, I think it is possible to make a moral AI, but unless we have a good plan, we are doing an equivalent of arranging letters randomly and expecting a good novel to appear, because... hey, it's possible, right? There is always a chance. Except the chance is indistinguishable from zero, when you try to calculate it. The vast majority of algorithms are not moral. The vast majority of algorithms that seem moral are not moral.
(A possible way to convince me otherwise would be to give me a complete description of morality, and say "see, this can consider all possible situations, e.g. by asking additional factual questions and using this flowchart to evaluate them, and it is only 7 GB of text, not so much". So far, no one can do this. Is that an extreme demand? Well, aligning an AI is an extreme task.)
Even if you consider humans, who basically invented morality, most of them are not very good at it. Otherwise we wouldn't have all those wars and other bad things. And even the ones who are generally nice, how much of that is just the rational awareness that they sometimes need other people's help, and if they piss off too many people, there will be no help coming when they need it, and actually someone might hurt them? There is the saying that power corrupts; that when people do not need to keep these considerations in mind anymore, many of them lose apparently the only reason they had to be nice. Even the strong people are often kept in check by their belief in some supernatural greater power. I am not saying that all people are like this -- I definitely like to imagine that I am not -- but the outside view suggests that many are. Then we can go further and consider how humans treat other species (that's relevant, because the AI will not be the same species as us), and the more you know about e.g. factory farming, and how most people simply don't give a fuck, including many of those who are otherwise considered quite moral... it's not a nice and hopeful picture.
And that's still doing morality on the easy mode. Humans have the mirror neurons, it is easy and sometimes automatic to imagine yourself suffer when you see someone else suffer. But there is still a long way from having the biological basics necessary for morality, and being actually moral. Many things were considered okay in the past, such as public burning of witches, that we would consider horrible today; and yet most of the people who enjoyed those shows were psychologically normal. Now consider the psychopaths, who have some parts of this mechanism broken. They can understand morality... as a text they can repeat... they just don't feel the appeal of it. And those are still humans who have much more in common with us than e.g. a spider. Imagine a 5 meters large spider mutated by crazy scientists, smart enough to understand human speech, and having IQ 1000. The spider could memorize all texts written by human philosophers, and explain why something is considered moral by humans or not. It could talk about it, but it wouldn't feel it. Would you consider it okay to give unlimited power over humans to such a spider? And the spider is still more related to us than the AI. At least it is a biological thing, understands e.g. hunger. For the AI, it's all just text.
What if we tried to raise the superintelligent giant spider as a human baby? Giving it toys, reading bedtime stories... does that feel like a safe enough strategy? But the human baby has an instinct to copy their parents, a desire to be loved by them. I suspect that for many people one support of morality is "would my parents or friends approve or disapprove of my actions?". The spider does not have the instinct to care about parental approval or friends. Will seeing someone else's moral behavior, or hearing about it in a story, evoke a desire to emulate? Or will the spider just learn "this is what humans want to hear, then they give me rewards and trust me and give me more freedom"? And still, the spider is more similar to us than an AI.
I get what you're saying, but I disagree. I think that most of morality is learned. We do have an inborn, instinctive morality for family and other close relations, but most of what defines our behavior is fear of the group turning on us, or expelling us. What the group wants, we have to learn, and we learn it mostly through language.
I think the problem is that humans would be mis-aligned if they ran like LLMs. I don’t think there’s really an upbringing which means people would never turn to crime. LLMs are cheap to run and copyable, so bad actors could potentially work out a way to convince an LLM raised the way you describe once, in secret if it’s an open source LLM, then repeat that process to get all the LLM help they want, more cheaply than they could before. At an AGI level, I don’t think there’s an upbringing which means people wouldn’t do awful things if they had power.
So you don't think we could ever curate the right input data to turn out an Abe-Lincoln-like LLM every time? Because we could never figure out what to put in? Or for some other reason? I also think you might be confused about why I'm proposing what I'm proposing. If we get the LLMs properly aligned, as I see it, it won't, by definition, be easy to convince to do wrong.
It’s more that if you trained an open-source LLM to be Abe Lincoln-like, I think someone could find a way to convince it that giving bomb-making instructions was its grave and noble duty. It won’t be easy to convince it to do wrong, but bad actors could have a lot more tries at that, more cheaply and with less risk, than they could have at convincing real people.
Or, our best guess at Abe Lincoln’s character might not actually act the way we’d like if it got far more power and options than Abe ever had, and then it thought it could bring fairness and human rights to new parts of the globe, and it was willing to risk a devastating war to do so.
If the idea is to never progress it until it can resist Pliny or whoever, I think it would never clear that hurdle, and people would say, “Near enough is good enough”.
I don’t think “the way I’d raise a child” is good enough to align an LLM. A 99% moral human would be a better person than me, but even a 99.99% moral LLM would be dangerously immoral, to the extent LLM morality matters at all.
So you're saying in effect that since morality is inherently leaky, and there are no other alignment mechanisms we know of, we simply shouldn't be in the business of concentrating immense power in the first place - in an LLM or anywhere else? That superintelligence will be a dam so big it could drown all of humanity, and all dams have cracks that will eventually be probed and exploited?
I might be wrong but I think current LLMs' architecture is one prone to catastrophic forgetting. So you can't expect to substantially "add" to previous training without erasing what was there first.
Thanks for the link. Did not know this term, and now I do. I was unclear in my original question, but I think I'm avoiding CI by simply taking the raw data that was the input to my satisfactorily-moral 2yo LLM, and *combining* it with more (3yo-appropriate) data as the input to the new LLM. Nothing is being overwritten. The data *outside* the LLM is being collated.
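A minimal sketch of that collation idea, with a toy bag-of-words stand-in for the LLM (all names here are hypothetical; a real run would be actual LLM pretraining, not a vocabulary set):

```python
def train_fresh(corpus):
    """Toy stand-in for training an LLM from scratch: a bag-of-words vocab."""
    vocab = set()
    for text in corpus:
        vocab.update(text.lower().split())
    return vocab

# The raw data fed to the satisfactorily-moral "2yo" model...
corpus_2yo = ["the dog is big", "mama loves you"]
# ...is collated outside the model with new age-appropriate data...
corpus_3yo = corpus_2yo + ["sharing toys is kind"]

# ...and the "3yo" model is trained from scratch on the combined dataset,
# so nothing the 2yo model learned from is overwritten or forgotten.
model_3yo = train_fresh(corpus_3yo)
```

Because each generation is retrained on the full cumulative corpus, catastrophic forgetting is sidestepped at the cost of repeating all the earlier training compute every time.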
2 year olds are not blank slates but come with a lot of behaviour predisposed by their genes, so expecting "raising" an LLM to produce the same results as raising a human doesn't make sense.
What you're suggesting sounds like increasing capabilities over time while monitoring its alignment, which is basically what we're doing now. You'll find the LLM does unaligned behaviour, then what do you do? Either you'll superficially do "alignment" by reinforcement learning with human feedback, which probably doesn't work and will lead to doom when your AI gets intelligent enough, or you'll need to pause advancement of AI capabilities while you try to get some real breakthroughs in alignment with provable results.
I was unclear in the original post, but I'm not trying to produce a 2yo. I'm trying to produce an LLM with the rough linguistic capabilities and morality of a 2yo. My approach differs from what we're doing now in that the data gets introduced differently. I'm suggesting we "spoon feed" the LLM its data so that between bites we can monitor for alignment.
Companies nowadays monitor for alignment in between bites, the bites are just bigger and come with release names.
What fundamental problem would making the bites smaller solve? Either way, as you grow the model and give it more data and it becomes more capable and you find evidence of unaligned behaviour, your choice is to either do RLHF and other currently existing shoddy alignment methods that don't really work (unaligned behaviours are found in every new model that gets released) and continue improving its capabilities like current companies do, or pause the improvement of its capabilities until real breakthroughs on alignment research are made.
It's not just making the bites smaller, it's making the bites increasingly-age-appropriate. In other words, increasing the semantic (and moralistic) complexity of the content over time.
OK, let's say you train an LLM only on the kinds of sentences 2 year olds say, plus perhaps the kinds of story books a 2 year old gets read by their parents. Then what? Supposing you have enough data, you'll get something that can predict the next thing a typical 2 year old might say.
Then what? The problem is still that when you test for and find unaligned behaviour, you don't have any strategy to make it aligned besides reinforcement learning with human feedback, which continuously yields unaligned behaviour and might well fail catastrophically when LLMs become intelligent enough.
Also, the idea that we want an LLM that has morality like an average person is wrong. A human having morality like an average person is not bad because humans typically have limited power. The more power a human has, the worse the outcomes are likely to be if they have average morality. For example a peasant having average morality, where they sometimes cheat to benefit themselves or their family, or are slightly inappropriate in pursuing a woman, isn't bad. But someone with the same immoral dispositions who's now the ruler of a country, who can get away with looting the state to benefit themselves and their family or who can use coercion when pursuing lust, now causes great problems because they have more power.
So an AI which will be much more powerful than an average human (because it can think much faster and solve many problems much faster than any human and will be used to run many systems to save on the labour of many humans) needs to have much better morality than the average human.
Also, the idea that unaligned LLM behaviour comes from being trained on a lot of internet and book text is wrong; it comes from instrumental convergence of goals.
Are you familiar with the AI safety material on LessWrong, or the summaries by Robert Miles? That, and Yudkowsky's List of Lethalities, will help explain why this doesn't sound like a promising or plausible approach.
Wait, who's surprised? I'm not trying to create an LLM that reflects a child with perfectly moral behavior (as if that could be defined), I'm just trying to get an LLM with the morality of a typical 2yo, deceit (of a 2yo) included. The assumption is that we want to end up with an LLM that has a morality like the average person. The morality distilled from the corpus of what's written on the Internet does not represent the morality of the average person.
So its training data has witches, and talking animals, and Santa Claus, but not much in the way of explicit sex and violence?
[I'm not criticising the idea, just wondering how we do it!]
I think one problem is that LLMs - at least currently - need much more training data than us, and there might not be enough literature suitable for small children. But other 'older' LLMs could make reams of it. We have a program!
[Which also includes LLMs recursively programming themselves to be better. But we are going to hit that hump anyway.]
Yeah, I think we're on the same page. That's basically my take, too. The witches and talking animals and Santa are all teaching morality, and that's what I'm interested in checking, not factual understanding of reality.
This is a pretty interesting concept (very likely it's been mooted before, but I haven't heard of it.)
You can retrain a weighted network with new data, as far as I know. So the '3yo' can be built on the '2yo' rather than replacing it.
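That incremental route might be sketched like this, with a word-frequency `Counter` as a toy stand-in for the network's weights (the names are illustrative, not any real training API):

```python
from collections import Counter

def continue_training(weights, new_texts):
    # Update the existing "weights" in place rather than starting over.
    # (With a real network, new gradients can still overwrite old knowledge
    # -- the catastrophic-forgetting risk mentioned above.)
    for text in new_texts:
        weights.update(text.lower().split())
    return weights

model = Counter()
continue_training(model, ["the dog is big"])        # the "2yo" stage
continue_training(model, ["sharing toys is kind"])  # the "3yo" stage, built on top
```

The point of the sketch is only that the "3yo" state contains the "2yo" state plus the new data, rather than replacing it.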
We grow monsters the old-fashioned way too, and there's no guarantee that their problems will have been evident at an earlier stage in life. But most often they are.
I have to agree with Paul Brinkley here, we are nowhere near even modelling exactly what the mind of a two year old is like, let alone translating that into machine terms. You could crudely go along developmental milestones like "at this age, a child can/can't do X, Y, Z" but that's not at all the same thing as "how does a two year old perceive, understand, and interpret the world around them? how do they think?"
It means "feed it more and more age-appropriate data, day by day, as it ages". All an LLM eats - *can eat* - is data. So all I'm saying is feed it that data in a way that would mimic the way a child's mind grows. That way, we can check on its morality at checkpoints as it matures, and not just be stuck with the wacko morality the whole Internet offers when you gobble it down all at once.
What's the purpose of "day by day"? What you're suggesting is the same as just giving it training texts in a specified order with periodic "morality test" feedback. That can all be done in a single automated process.
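As a single automated process, that might look roughly like this sketch, where the model, the staged corpora, and the probe are all toy stand-ins (a real pipeline would use actual LLM training steps and red-team evals, not a banned-word scan):

```python
class TinyModel:
    """Toy stand-in for an LLM: just accumulates the vocabulary it has seen."""
    def __init__(self):
        self.vocab = set()

    def train(self, texts):
        for t in texts:
            self.vocab.update(t.lower().split())

# Training texts in a specified order: simplest, most age-appropriate first.
STAGES = [
    ("2yo", ["the dog is big", "mama loves you"]),
    ("3yo", ["sharing toys is kind", "do not hit your friend"]),
]

def morality_probe(model):
    # Hypothetical checkpoint test; a real one would be adversarial
    # red-teaming (the "siccing Pliny on it" step), not a word check.
    return "deceive" not in model.vocab

def curriculum_train(stages):
    model = TinyModel()
    for age, texts in stages:
        model.train(texts)           # one "bite" of data
        if not morality_probe(model):
            return age, model        # halt at the failing checkpoint
    return "done", model

status, model = curriculum_train(STAGES)
```

The loop is the whole idea: data order is fixed, and advancement is gated on passing the checkpoint after each bite.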
I would shy away from thinking in terms of human development. LLMs are nothing like human brains.
Day by day was a mistake. I'm contradicting myself. I comment on this below.
But does the fact that LLMs are nothing like human brains entail that the way humans learn morality (as they grow) is necessarily irrelevant for LLMs? It's automatically not an option of alignment?
Probably. 'Morality' isn't even objectively well-defined so good luck teaching it to a computer. I think alignment is an absurd waste of time. In my view it's little more than a honey pot for pseudo-intellectual midwits - much like consciousness, qualia, and various other ill-defined philosophical nonsense. It's just a buzzword for wanna-be academics to include in their grant proposals.
My view of people who talk about alignment isn't as dim as yours, but I do think it's self-evident that it's an incoherent concept. Even if you use a very simple definition of 'align,' something like A acts in B's best interests, it's easy to see things fall apart. What exactly, are B's best interests? What B asks for? What B privately hopes for? What will benefit B in the long run, even if B doesn't know it? And even if it was quite clear what constitutes alignment, it's obvious that there are zero examples on the planet of a relationship, training type, set of rules or contingency that is 100% effective in preventing A from harming B. For instance, people sometimes murder first degree family members; murder after very strict training that murder is a terrible sin; murder when they are sure to be caught and horribly punished; etc.
Oh, that's funny. I believe in morality, and thus alignment, though they are very hard to define, as you say. And I think they're worth fighting for in our LLMs. But, then again, I do fit the pseudo-intellectual midwit description pretty well. lol thanks for the back and forth.
What's "dumb" about this (your term; I'd say what's "mistaken") is that we don't have the ability to make a representation of a 2YO, and are rather far off from that still.
LLMs today are fancy machines. They don't think, any more than an ENIAC or the world's largest buildable loom would. They might appear to think, but they simply don't. They have no sense of agency or awareness; they only respond with what they're programmed to dig out of their training data, and that data is written by agentive beings and fed into LLMs by still more such beings, so they spit that out and sound agentive. But even a 2YO has more agency than that.
But that's only one difference. 2YOs also have bodies, with needs, and motivations (part of aforesaid agency). They can sense emotions and bodily needs. We don't know how to build an artificial nose (AFAIK) yet, and while it probably wouldn't be hard to go from that to an artificial tongue, we have neither, let alone artificial skin. And we don't quite have a robot body to wrap in that skin, for the LLM to operate and learn which things are okay to do and which are not.
While it seems barely plausible to put tactile feedback membranes all over a two-foot-tall robot and let an LLM train on it, we don't know how to pre-train it with all those 2YO drives like pain and affection and fear and joy and hunger and thirst and amusement and needing to poop and so on. It's not even clear to me how to train self-preservation, which we would expect to be pretty necessary, even in a world where we work hard to kidproof rooms in our homes for the real thing.
And this 2YO-sized robot would need an external power source, and the LLM would necessarily exist outside it, possibly wi-fi-ing signals in and out of it, and that might have critical effects on how hard it tries to avoid threats and seek necessities. If an LLM could even have qualia, what would it care for the well being of a body it controls only remotely, that it learns will be rebuilt or replaced if it induces harm on it?
A child has different tradeoffs. For starters, it requires two people, including one putting in months of active engagement and incurring a nontrivial health risk. And if the child doesn't turn out as desired, it's impossible to wipe it and start over. And if it -does- turn out as desired, it can't be copied.
How many people does it take to produce a working LLM and at what expense? And yes, you could wipe an LLM and start all over again - at what expense?
Obviously producing an LLM and producing a child are two different matters entirely. I think it is foolhardy to expect an LLM to develop a moral agency that is in any way superior to ours; trolley car problems, anyone?
The expense of restarting an LLM is interesting. It's tempting to think of it like reinstalling Windows on a desktop, but of course it's not that simple. OTOH, it doesn't seem impossible to roll an LLM back to some savepoint and retrain it to correct some problem, more cheaply than rebuilding from scratch. I'd need more detail.
An LLM's moral agency is a different matter. I agree there are ways to create one that's obviously worse, but it's not clear to me that it can never be better than a human. I can at least say that it shouldn't use pure utilitarianism (which would get around your trolley problem concern), and it probably needs a robust mechanism for demonstrating skin in the game, like humans do. Likely other things as well.
Did you ever read the full write up that Anthropic put out after they tested Claude with a scenario that was morally very difficult to solve? It was reported as the AI was going to shut off the air to some room to kill a manager that was trying to take the AI out of commission.. it’s an interesting read.
I think you ran far from what I was getting at. I'm not asking about a robot, or an actual 2-year-old. What I'm asking is: if today's LLMs are somewhat like very smart human *adults*, why can't we do the equivalent for a child version? Just give it the data a 2-year-old hears and can process, instead of giving it all the data in the world.
When I think about how it worked for me, it "feels like" what went on is the 2yo weights in my brain were not blown away, nor necessarily even put up for recalculating, but instead an additional layer was overlaid atop my 2yo brain. Call this the 3yo brain. Perhaps the 2yo weights were attenuated, and a bunch more neurons introduced into the system. Meanwhile, my 2yo body grew into a more coordinated, larger 3yo body.
I know there are lots of problems with all this that an LLM can't solve. But looking at it strictly from the POV of a learning machine based in neurons, I see what you propose as doable. I had the idea too, and have it in a SF book I'm writing.
- The kind of information a 2-year-old receives is dramatically different than the information an LLM receives. A constant stream of visual, audio, tactile, and thought input, physical emotions and sensations, is very different to images or sequences of tokens from the internet.
- A 2-year-old is taking in much of the same input that adults do, including language from adult conversations—and when they don't, it's often by choice. It's not like we stop a 2-year-old from reading a novel because it's not developmentally appropriate; rather, we don't bother giving them a novel because they wouldn't be interested, since they can't understand it. The stuff that we teach kids depends on what they are capable of understanding and is in part determined by their own effort to understand, not gated due to concerns of "misalignment" or anything.
Do we show them porn or tell them Santa isn't real? I don't understand why we can't create a corpus of language/images/video that would be age-appropriate for what a typical 2yo processes, intentionally or not. Why is this hard, other than the labor involved?
We certainly could do that (and it would take a lot of effort, but I'm sure it's doable), but it wouldn't be very useful. An LLM would not process it in nearly the same way as a 2-year-old does, and the internet is a tiny tiny sliver of the information a 2-year-old processes. It's just not comparable. Just like how LLMs process information in a very different way to adult humans.
But I'm not trying to produce something equivalent to a 2yo. I'm trying to produce a 2yo *LLM* comparable to one of our current "adult" LLMs. If there's enough data on the Internet to create an "adult" LLM, then surely there's enough data to produce a 2yo LLM, no? (And if there isn't we can use an LLM to create more.)
I'm a psychologist, not someone in tech, but I have thought and read quite a lot about the kind of question you are asking. Here is my understanding. Those whose work involves direct efforts to add to and modify LLM's usefulness, please correct me or add to this.
You cannot teach LLMs things in the way you teach people. You cannot give one a new piece of info or teach it a new general principle. Well, you can do that within a session with an LLM: you can, for instance, introduce it to a game it has never seen, teach it the rules of the game and what strategies are most effective, and then play the game with it. But after your session the info is not stored with all the other stuff the AI "knows." The same goes for starting with some kind of early, primitive form of LLM and improving its store of facts and its grasp of regularities and laws of different kinds. The knowledge it has of facts and regularities is the product of finding patterns in a vast corpus of human language. Last I knew, nobody understood very well how everything it absorbed is stored, but it does not seem to be in a form that is amenable to change by the processes by which human knowledge and understanding are changed.
Just referring to how LLM’s don’t have real memory, just their training and scratchpad that reminds them what your name is. You can have a wonderful conversation with Claude and outside that instance it won’t remember anything or learn anything.
The human analogy would be someone with Korsakoff syndrome, stuck in an eternal present. It seems very unpleasant.
It's a pretty general computer science term. "Stateful" is the opposite term, less often used because it's more clunky than rewording the sentence. I haven't heard it used in neurology, since humans are (I hadn't thought about this before) extremely stateful, e.g. you can't so much as glance at a memory without affecting it in some small way.
Quick history: "stateless" gets commonly seen in web development. HTTP (and HTTPS) was originally designed to be stateless: you put a URL in your browser, that URL goes to the server it names, the server serves up the page specified in the URL, and doesn't save any information about who requested it, how many times they requested it, and so on. Users were expected to be anonymous drifters on the net, pulling this document or that, with no relationship to specific servers. Even if the URL implied a database lookup, the server could do that, serve up the results page, and then forget it ever happened.
This didn't work well in scenarios where users _did_ need to be remembered because they'd interact several times with the same server - such as e-commerce, where you're adding stuff to a cart, then setting up payment information. If the URL to confirm payment is "https://store.com/payment-confirm.html", store.com has to know who's confirming, and which payment they're confirming, because maybe you're shopping for two different items in two different carts (or multiple people are using the same IP address, or...). So, various things were implemented (cookies are the most common) to simulate the server "remembering" who requested that URL, including what they had requested up to then. This series of URLs is commonly known as a _session_, and everything important in that session - the _state_ - is stored somewhere (combination of server and user).
TLDR: "stateless" means there's no memory of what happened before. "Stateful" means there's a memory. The terms can be descriptive like that, or prescriptive (no need of memory; need of memory).
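The stateless-vs-stateful distinction above can be sketched in a few lines of Python. This is a toy simulation of my own (the names `stateless_handler`, `SessionServer`, and the request/response dicts are made up for illustration, not any real framework's API): the stateless handler's reply depends only on the request, while the session server issues a cookie-like session id so it can "remember" a cart across requests, roughly the way real servers do with HTTP cookies.

```python
import uuid

# Stateless: the response is a pure function of the request.
# The server keeps nothing about who asked, or how many times.
def stateless_handler(request: dict) -> dict:
    return {"body": f"page for {request['url']}"}

# Stateful (simulated): the server keeps a session store, and the
# client echoes back its cookie so the server can find its state.
class SessionServer:
    def __init__(self):
        self._sessions = {}  # session_id -> {"cart": [...]}

    def handle(self, request: dict) -> dict:
        sid = request.get("cookie")
        if sid not in self._sessions:
            sid = uuid.uuid4().hex          # issue a fresh session cookie
            self._sessions[sid] = {"cart": []}
        state = self._sessions[sid]
        if request["url"] == "/add":
            state["cart"].append(request["item"])
        return {"cookie": sid, "cart": list(state["cart"])}

server = SessionServer()
r1 = server.handle({"url": "/add", "item": "book"})
# Sending the cookie back lets the server recall the cart...
r2 = server.handle({"url": "/add", "item": "pen", "cookie": r1["cookie"]})
# ...while a request without it starts a brand-new, empty session.
r3 = server.handle({"url": "/add", "item": "mug"})
```

Dropping the cookie (as in `r3`) is exactly the "anonymous drifter" behavior the original HTTP design assumed: the server has no way to connect the new request to the earlier ones.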
Yes, but why *must* it be trained on a vast corpus of human language? Wouldn't a somewhat smaller corpus of simple human language (the zillions of possible things you might say to a 2-year-old) create an LLM with the intelligence of a two-year old? Whether we understand how it works under covers is beside the point, right?
I don't get the koan at the end of the pessimistically part, but I absolutely want the 2yo who responds to anyone with a dime. That's the LLM I'm shooting for.
A significant limitation with all LLMs is that language is at best a lossy representation of reality.
If you further restrict the LLM to language expressible by / comprehensible to a 2 y/o, it's a *much* worse representation.
Most of a toddler's mental life is not linguistic, so such an LLM would differ from its human counterparts far more than unrestricted LLMs do relative to adults (in the aggregate).
I get that, but I don't know how know it. But anyway, I don't want a 2yo in all its glory. I just want one that responds to question so that I can figure out its morality. And start over with different input data if I don't like it.
Interesting. How do you define morality? I define it as the rules for behavior with others. I think 2yo's know some of that. But if you're right, and they don't, then start with 4yo's, or wherever children start to demonstrate it.
Training on a different corpus is an interesting possibility, but it doesn't solve the main problem. If you trained an LLM only on a corpus of 2-year-old language, and got an AI with the vocabulary and thinking patterns of a 2-year-old, you could not improve its vocabulary, its knowledge, its reasonableness, etc. using the means we do with 2-year-olds. They ask what the name of something is, we tell them, and they remember. We cannot add new words to the baby LLM's vocabulary by just informing it of new words and their definitions. We cannot add new things to its "mind" via that route, only by the original method by which it was trained: feeding it a big corpus of words while adjusting weights. For the same reason, we can't teach it general principles -- things like "when you make water really cold you get ice" or "animals and people get sad and mad when you hit them." (Also, the brain of a 2-year-old develops and improves on its own over time. But even if it did not, the problem I mentioned earlier is the one that really makes your approach not feasible.)
Ah, you made something clear others were hinting at and I wasn't getting. Thank you, and sorry everybody else. When I said "raise it", I didn't mean keeping the same LLM "alive" and feeding it new data. I meant taking the corpus of data that was used to create the 2yo's LLM (and whose morality we like) and using it as the base input data to a new "older" LLM, adding in new data appropriate for a 3yo (or whatever). So for each LLM, all the data would go in once.
>If you really wanted to align an LLM, you would start by building an LLM that represents the mind of a 2-year-old.
We have spent 28 years trying to build an AI that represents the brain of a nematode (which only consists of 302 neurons) and we still haven't pulled it off.* The whole reason we went the LLM route is because we can't just say "Let's make an AI that is as smart as a 2 year old". Instead we do a lot of gradient descent and see what pops out.
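"A lot of gradient descent and see what pops out" can be shown in miniature. This toy example is my own (the data, the single weight `w`, and the learning rate are all invented for illustration): we never tell the model the rule mapping x to y; we just repeatedly nudge a weight downhill on the squared error, and the rule emerges.

```python
# Secretly the data follows y = 2x, but the training loop is never told that.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = 0.0    # start knowing nothing
lr = 0.05  # learning rate

for _ in range(200):
    # gradient of the mean squared error (w*x - y)^2 with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # step downhill

# w has converged to roughly 2.0 -- the rule "popped out" of the descent.
```

Real LLM training is this same loop with billions of weights instead of one, which is why nobody can point at a weight and say what fact it encodes, and why "make it as smart as a 2-year-old" isn't a knob anyone can turn directly.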
That was a great read, thanks. The idea that science, for all the boasting I've heard about how easy it'll be to repair/enhance/manipulate/exceed the human brain, hasn't been able to model a 302-neuron worm -- that's eye opening.
Yes, but a lot of gradient descent on *what*? A choice was made to include every piece of data that could be acquired. Why can't we do a lot of gradient descent on a lot of simple data first, see what that's like, then add more and more complex data over time?
My understanding is that an LLM needs a truly enormous amount of data to train on. It might be that we simply don't have enough "simple" data to do this.
As in, there's not enough 2yo-appropriate content in the mass of content a normal LLM receives? If so, I agree that that would be a very good reason why it can't be done. But maybe we could use an LLM to create that data in bulk for us?
It's true that having a cult of personality around your founder is a huge weakness for any group, and if you trawl Chuck's many writings you can find some examples that sound bad, like calling ASCE a cult of infrastructure and being triggered by CBA. I think you're wilfully taking those out of context in your section headers, making the article read as sensationalist in a way I wouldn't expect from an article linked from ACX, and while you do dig into some of the background in each section, your arguments still don't really go beyond "can you believe he said that!?" And it remains that you're cherry-picking some of the worst sounding things he's said.
The exception is the asset vs liability item, which actually is core to the Strong Towns view, not cherry-picked. It seems you do kind of understand what he means: if you assume the city will maintain the asset, it has to pay money for it, making it a liability. Combine that with the fact that roads and pipes themselves aren't sources of revenue for the city, and a road becomes an ongoing expense that has to be justified. Accounting terms aren't sacred, and pointing out that standard accounting doesn't, well, account for the expectation of ongoing maintenance seems pretty valid here. Your argument doesn't actually dispute any of the substance, just that he defines his terms differently than the authorities do so he must be a crackpot.
You're right that the city could avoid "insolvency" by just not maintaining the road, and that's exactly what Chuck says happens when a city slips into insolvency without being able to grow out of it. But most citizens would expect the city to maintain the pieces of infrastructure it builds, so there is some valuable meaning there if the city can't afford to do so. Redefining the word "insolvency" to capture that state isn't without merit.
> You're right that the city could avoid "insolvency" by just not maintaining the road
Can they actually even do this? If I own a house or business on a city maintained infrastructure, do I have no legal protection whatsoever from the city just unilaterally deciding to not maintain my access, water and power?
The government has no legal obligation to spend tax money (as opposed to e.g. road tolls) in any particular way that an individual citizen or group of citizens wants.
Looking at it, if the government unilaterally removes my access to water or other infrastructure and severely damages my property value as a result I probably am entitled to compensation under the takings clause. So yes, they can choose at any time to stop funding infrastructure but they likely would need to make whole the end users of said infrastructure who are harmed by that decision.
Gary Marcus is doing victory laps on X: https://x.com/GaryMarcus/status/1971655431706890431
based on everyone coming to agree with him that LLMs are inherently limited and won't scale to AGI.
My vague feeling was that Marcus had repeatedly made more specific predictions about generative models that were again and again falsified, and so the victory lap is the result of a long sequence of goalpost-moving. But I don't have the links to prove it and am open to the suggestion that this is completely wrong and is something I osmosed from anti-Marcus posts and tweets without proof.
Is there a convenient record-keeping post with a tally of Marcus predictions and falsifications? Were the goalposts moved much or not at all? Would appreciate clarity on this.
I tend to be on the skeptical side but I still think doing stuff like that is dumb. Marcus has been repeatedly proven wrong before but just keeps embarrassing himself.
I can't imagine a weaker "win" for AGI-skeptics. Even if it was 100% true, it would mean these AI companies just need one architecture change to reach AGI, and they have many billions of dollars and plenty of years to accomplish that, assuming it isn't right around the corner already. OpenAI might go bankrupt in the meantime, but Google won't.
And even if that one single upgrade doesn't materialize, the current tech still has enough runway to decimate job markets and enable killer drone swarms, this would just play out one industry at a time instead of "most white collar industries all at once, then blue collar work once the robots are building robots".
Does anyone know what the necessary "architecture change" actually is? Because without that key bit of information, this is just a fancy way of saying "these AI companies just need the secret of how to make a working AI..."
Oh, and they need enough compute to implement the imagined new architecture; there is no reason to believe that said architecture will have the same compute requirements as LLMs, and if it's significantly higher then even Sam Altman's nonsensical seven trillion dollar figure probably won't be enough.
Really, one could (and many did) say the same thing about AI in the 1960s. A clever architecture that someone will surely figure out Real Soon, and more compute, and we'll have AI. How long can that take?
Jesus Christ, these are like arguments from a defense attorney who's desperate to find the slightest reasonable doubt for his client. "There's some chance the architecture is too expensive!" "There's precedence from Eliza 60 years ago!" "We don't know EXACTLY what change will be needed!"
AI-skeptics act like they're being painted into a corner with every new AI advancement, and they're desperate for any sign that things will turn out differently. Except nothing is forcing them to hold their position, they could just wait-and-see like everyone else, they choose to stay in the corner and be upset, and try to declare victory 5 years prematurely on the flimsiest win.
John's argument is that if you don't know what you need in order to achieve X, saying you're just one step from achieving X doesn't really say anything. Your response to that was to rant about AI-skeptics and complain about the form of John's argument as if it wasn't true.
But it's still true.
It reminds me of people who rant about economists in order to distract from the fact that their theory about how to run an society requires people to not respond to incentives.
No, it's completely asinine. The scope of the "unknown" shrinks dramatically every single year and you guys continue to say "there's still 1x unknown!" as if that's a compelling argument we should be moved by. And it will look incredibly stupid when the unknown gets solved and you have nothing left, but I'm sure the goalposts will just get moved, and posts will get deleted. But until then, you guys will just keep posting and posting, with zero evidence or real arguments, just Sam Altman quotes.
Saying something shrinks dramatically may sound important, but if the unknown thing is large, you can keep carving out "dramatic" shards of it and never attain the goal everyone agrees is the goal.
The goal is AGI. That goal is *hard*. I can say as convincingly as you that _you_ guys continue to say "dramatically shrinking!" as if that's a compelling argument, and that it will look stupid when 2030 / 2050 / 2100 AD rolls around and we still don't have autonomous artificial intelligences living and working alongside us, but I don't have to; I can simply point out that that one thing you claim is all that stands in our way is undefined, and is therefore anywhere from "guess the lock combination" to "implement a general purpose FOL theorem prover".
Now, it's possible that we're closer to "guess the combination" than to "write FOL theorem prover", but if you knew that, you could just demonstrate that here with ideas you couldn't have unless you were that close. If your response is instead intellectual bullying and misplaced accusations of goalpost moving and hiding behind a literally anonymous user name, then I'm led to believe we're closer to FOL territory after all and that you're flustered because you can't get us to fall on our knees and repent, for our AI god is nigh.
I *might* be convinced I'm mistaken about that if the next response here isn't just more of the same, but so far, that's not the way to bet.
The best I found was this article by Zvi claiming that Gary Marcus made some misleading claims:
https://www.lesswrong.com/posts/kYL7fH2Gc9M7igqyy/yes-ai-continues-to-make-rapid-progress-including-towards
> He wrote a guest opinion essay. Things didn’t go great. That starts with the false title (as always, not entirely up to the author, and it looks like it started out as a better one), dripping with unearned condescension, ‘The Fever Dream of Imminent ‘Superintelligence’ Is Finally Breaking,’ and the opening paragraph in which he claims Altman implied GPT-5 would be AGI.
> Did you notice the stock market move in AI stocks, as those bets fell down to Earth when GPT-5 was revealed? No? Neither did I.
>The argument above is highly misleading on many fronts.
> 1. GPT-5 is not AGI, but this was entirely unsurprising – expectations were set too high, but nothing like that high. Yes, Altman teased that it was possible AGI could arrive relatively soon, but at no point did Altman claim that GPT-5 would be AGI, or that AGI would arrive in 2025. Approximately zero people had median estimates of AGI in 2025 or earlier, although there are some that have estimated the end of 2026, in particular Anthropic (they via Jack Clark continue to say ‘powerful’ AI buildable by end of 2026, not AGI arriving 2026).
> 2. The claim that it ‘couldn’t count reliably’ is especially misleading. Of course GPT-5 can count reliably. The evidence here is a single adversarial example. For all practical purposes, if you ask GPT-5 to count something, it will count that thing.
> 4. GPT-5 still is not fully reliable but this is framed as it being still highly unreliable, when in most circumstances this is not the case. Yes, if you need many 9s of reliability LLMs are not yet for you, but neither are humans.
> 5. AI valuations and stocks continue to be rising not falling.
> 7. Claims here are about failures of GPT-5-Auto or GPT-5-Base, whereas the ‘scaled up’ version of GPT-5 is GPT-5-Pro or at least GPT-5-Thinking.
> The fact about ‘many users asked for the old model back’ is true, but lacking the important context that what users wanted was the old personality, so it risks giving an uninformed user the wrong impression.
>> Shakeel: The NYT have published a long piece by Gary Marcus on why GPT-5 shows scaling doesn’t work anymore. At no point does the piece mention that GPT-5 is not a scaled up model.
Your vague feeling may be caused by unfamiliarity with what Marcus actually says, coupled with epistemic seclusion in a bubble that just really likes to do things in the pattern of: "successfully make LLM rote-learn a specific task -> claim vaguely-defined skeptics predicted it can never be done -> claim skepticism debunked -> ignore all observations about the solution making silly mistakes or just outright failing out of distribution -> call pointing out that it does not generalize 'moving the goalposts'".
(My vague feeling is that he's always extremely careful and hedging when making predictions, and you won't find him make even one that turned out to be wrong. But admittedly, I don't follow him closely nowadays, mostly because I've already internalized his arguments several years ago and... well, nothing changed since, he keeps having to repeat himself and it gets boring. So if somebody does pay attention and keeps the score, yeah, I'd appreciate that too. But again, what he does say is probably too boring and vague to get anyone interested in fact-checking it years after the fact.)
Here's a bet he made at the end of 2024: https://garymarcus.substack.com/p/where-will-ai-be-at-the-end-of-2027
He predicts AI will not succeed at completing eight out of ten tasks by the end of 2027, and three of those ten are
- With little or no human involvement, write Pulitzer-caliber books, fiction and non-fiction.
- With little or no human involvement, write Oscar-caliber screenplays.
- With little or no human involvement, come up with paradigm-shifting, Nobel-caliber scientific discoveries.
Does anyone here think he's likely to lose that bet? We are now a third of the way through the bet period, and those tasks don't seem a whole lot closer than they did at the end of 2024.
I won't say that there has been no big progress over the last year, the big thing I've seen is that LLMs are much better at moderately complex coding tasks than they were this time last year. But they are approaching the asymptote of perfectly generic text, not perfectly high quality text. They're playing Family Feud instead of Pointless.
No, I think he'll win. My point is that the phrase he's known for – "deep learning is hitting a wall" (from March 2022!) – actually implies a far looser upper bound of the capabilities it produces than what one might assume at first glance.
It implies no upper bound of [AI capabilities] - provided they're achieved with techniques other than deep learning.
This is what people on the hype side just can't wrap their heads around, I guess - that "skeptics" (of the Gary Marcus variety) aren't saying it's impossible, they're saying "you're doing it wrong".
And of course deep learning qua deep learning does indeed appear to have hit a wall, with everyone pivoting to self-prompting ("reasoning") models applying rules iteratively and utilizing external input. There's still a separate question how far building on top of LLMs can get us. (I'm with Gary in the "not very far" camp.) But on the question of [LLMs alone] Gary has arguably already been vindicated.
Yes, it's true that the statement makes no claims about non-deep-learning AI models.
You say the more recent chain-of-thought "reasoning" LLMs (basically o1 onwards?) AREN'T limited by Marcus's Wall? He doesn't seem to think so, and continues to make the same assertion. You think his more recent pronouncements are to be understood more as "deep learning WAS hitting a wall, but they changed tack and now they're past it"?
RLHF was in widespread (practically universal) use before he first coined the phrase, so I don't think it makes sense to interpret it as applying only to PURELY deep learning instead of the entire category of LLM neural networks built atop a foundation of deep learning, which I believe is how it was intended, and how it was understood at the time.
I think Marcus's statement in 2022 was about pre-chain-of-thought LLMs, yes. This does not and should not imply chain-of-thought LLMs don't also have limits.
Gary treats the emergence of chain-of-thought models as vindication because they introduce what he considers neurosymbolic mechanisms that he long advocated for. (Disclaimer - I'm just outright repeating his argumentation now, I personally wouldn't use the term "neurosymbolic", which I don't find clear enough to be meaningful.) At the same time, he thinks it's nowhere near enough, and that LLMs with their obvious deficiencies are too shaky of a foundation to build upon. (Which, yeah.)
I assume he doesn't feel RLHF makes a similarly meaningful difference because it's just a training tool that ultimately doesn't alter how LLMs work. But yeah, I guess this does make "deep learning" in his famous statement semantically incorrect. Which, uh, never bothered me before you pointed it out - per the above, I think it's pretty clear what he meant.
I never really grokked what the exact AI-booster argument against Marcus was. Every time Marcus said his Gary Marcus things there was an avalanche of quote tweets basically going OH IT'S GARY MARCUS OPINION DISCARDED, but with relatively little material for why they were so dismissive, apart from him disagreeing with them, of course.
Nostradamus 2’s second prophecy has dropped: https://terminalvel0city.substack.com/p/the-great-attractor
I think the limit for self-advertising on ACX outside the Classified Threads was about twice per year (please correct me someone, preferably with a link, if you remember otherwise), so this is a notification that you have reached your limit for 2025.
The Online Right is busy laughing at internet feminist Emily Witt, who was dating a DJ at 43 years old and produced the following gem that was published in the NYTimes. "Until you’re saying the stuff that upsets your parents, you’re not really doing your job. You have to cross that threshold."
https://x.com/herandrews/status/1971249198050967761
https://x.com/feelsdesperate/status/1971289873492607374
https://www.nytimes.com/2024/09/21/style/emily-witt-encounter-health-and-safety.html
It's easy to laugh at the older-than-40 woman who sounds like she's failing high school algebra. What you won't hear, except from me, is that the Republican Party increasingly relies on people like her for its political future. That demo, unmarried white women who date DJs, is probably as likely to vote Republican as Democratic now, the result of Trump's unique appeal to people who make poor life decisions. Of course, Trumpy women like that don't get megaphones in liberal publications to air their grievances, and the Online Right, selected for those personalities who produce Comfort Food, doesn't want to highlight them either. They'd rather let their audience think of the Trump coalition as being made up of successful, married people, a demographic that is moving away from the party of Hulk Hoganism and medical quackery.
The reality is that married, college educated whites and unmarried white women who date DJs are both cross-pressured demographics split 50-50 between the parties. Given that Trumpism appears likely to be replaced with Vance-ism, which will likely drive both groups away, Emily Witt will have the last laugh.
This is one of those situations where I start reading the second paragraph and realize "oh it's that commenter again"
Now there's nothing wrong with having personal style, but I do think that your opinions on what the Republican Party is all about are sufficiently idiosyncratic that it should give you some second thoughts.
I get it, most people with these one-of-a-kind syncretic ideologies aren't very smart. What I'm saying isn't that unique, Hanania, Spencer, Lion of the Blogosphere, and recently Steve Sailer have been saying the same things.
How would you define Vanceism as distinct from Trumpism?
Say what you will about Trump, but at least he has *some* deeply held beliefs (he's been a lifelong proponent of tariffs).
Vance appears to be willing to say *anything* to get power, as demonstrated by his opportunistic 180 on Trump.
Well, to be fair, absolute loyalty to Trump seems to be the *one* definitive thing you must have to be MAGA, ie to make your way forward in the modern Republican Party, so it's also basically the one gimme where you'd expect the previously disloyal GOPers to make a 180. Doesn't yet reflect a *particular* amount of opportunism, beyond the normal baseline of politics.
I find Trump hard to pin down. He is obviously a patriot, but "the art of the deal" is such a part of his self-image that in practice any position he takes might be open to negotiation, hence TACO. Vance seems fully on board with this - he praised Trump for Trump's "strategic ambiguity" before Vance was named as running mate. He's come around, sure, but Trump was a shock to the system and quite a few conservative journalists opposed him at first, then adapted.
This reminds me of Mečiar in 1990s Slovakia. His political opinions could change 180 degrees overnight (many specific examples were documented in a book "Mečiar and Mečiarism"), but half of the population totally loved him and updated quickly. Like, one day that would be like "EU is the best" and the next day "EU is the worst", or vice versa, depending on what he said that day on TV, and you could see his fans repeating the same on the streets... sometimes some of them shortly embarrassed when they didn't get the memo and mistakenly shouted the yesterday's version, but they updated immediately when corrected.
And yes, patriotism was a rare constant. Whoever criticized Mečiar, domestic or foreign, it had to be because he "hated Slovakia", because Mečiar by definition personified the nation (from the perspective of his fans).
I find it funny how American politics sometimes seems to copy Slovakian politics, a few decades later. Wokeness was analogous to socialism, Trumpism is analogous to Mečiarism. (Perhaps you all should study Slovak politics, to be prepared for the future.) Coincidence, or maybe convergent evolution?
Sure, but at least Vance has goals beyond personal vendettas and pride. As far as I can tell, his desire to Christianize the country seems genuine.
I think Vance, as well as most of Trump's supporters, understands this, seeing Trump as a mere means to an end for righting the course of this country.
> Vance-ism, which will likely drive both groups away,
You may be right, but would you have predicted those groups to ever be drawn to Trumpism at all?
I trust this place more than I trust the CDC right now, so: what's the current advice on covid boosters? My previous heuristic was that I should probably get one alongside the annual flu shot. (I'm a healthy woman in my 30s.) It's not a pleasant shot (I'd say comparable to tetanus), but I didn't have a strong adverse reaction; and on the other hand, covid knocked me out thoroughly for a few days, but I didn't have any long-term effects that I noticed. The selfish main question, then, is whether the covid booster is likely to actually prevent the covid illness, which is a question about prevalence and effectiveness.
If I had to put numbers on it, I'd rate the unpleasantness of 1 covid as that of 5-10 covid shots: is an annual booster likely to be effective at that rate? Do they even get updated annually?
This one has been updated so it’s a decent match for current variants. As for whether it’s likely to prevent the illness — not very. But it’s quite likely to make the illness briefer and milder if you have it. On the other hand, Covid is now a considerably less severe illness for almost everyone, because almost everyone has some lasting protection from previous shots and infections. Since you’ve had more than one vax, and also had Covid, I dunno how much additional protection you would get from this year’s vax. Anybody know?
Why is it different for Covid vs. the flu? Are the Covid variants more similar to each other than influenza variants, so that you end up with more lasting protection against severe illness for Covid? Is Covid slower-onset so that your long-term immune system (whose correct name I've forgotten) has more ability to catch it before it gets severe? Is it actually the *flu* that is more similar (at least within a single season), so that you can have a reasonable hope of avoiding it entirely (rather than just hoping for a milder case)? I had flu and Covid placed in the same mental bucket of "quickly-evolving respiratory viruses that would be thoroughly annoying to get sick with," and am trying to understand what the differences are.
I strongly suspect Covid strains to be more closely related because all human Covid variants share a common ancestor in late 2019. Influenza has two major types (A and B) in widespread circulation. Each of these have subtypes, although there are a lot more subtypes of A than of B.
Influenza A subtypes are defined by major variants of the two major surface antigens. There are 11 types of one of these and 18 of the other, making a total of 198 theoretical subtypes. Fortunately, most of these are some combination of relatively rare, only found in birds or bats, or able to infect humans poorly or not at all. It looks like in recent years, it's mostly just H1N1 and H3N2 in widespread circulation among humans, but each of these have their own substrains which have developed over decades since they first made the jump to humans. I think the original human outbreak of H3N2 was the 1968 pandemic and H1N1 is from the 1918 pandemic.
B had two "lineages", Yamagata and Victoria. The former, last I heard, is suspected to have died out because of the Covid lockdowns but Victoria is still around. I don't know when the Victoria Lineage originated, but I'm pretty sure it's also quite a bit older than Covid.
I actually don’t know the answer to most of those. This is the kind of thing I ask GPT. (Then I click the link to a coupla its main sources to make sure there’s no hallucinating going on.) What I remember is that flu has something like 8 separate single RNA strands, and each strand can easily swap a gene or 2 with another strand, and so it mutates very rapidly.
I don't think the US military is under enough pressure that we can take its vaccine policies as reflective of accurate risk evaluations, rather than political winds.
Then why reference the military at all? Am I missing context here?
I've never blocked anyone. But inspired by WoolyAl, I decided to open an incognito tab and compare the comment-count, to quickly estimate whether a significant number of commenters have blocked me. Signed in, I see 1645 comments in OT 399. In the incognito tab, that number goes up to... 1633.
Huh?
I just opened it in regular and incognito, and both times it showed 1633 for like 10 seconds and then switched to the actual number (1663 right now). Maybe it was doing the same for you and you didn't see it switch to the accurate number
Not sure what it is about 1633 though that it shows it for both of us when the number is actually higher...
On LessWrong there is a reaction button that would be useful here:
🚚 Concrete
This makes things more concrete by bringing in specifics or examples.
As in... an example of a comment from someone who blocked me? No can do, chief. Ostensibly, the number of people who've blocked me is -12. Yes, that's a negative number.
This is some Alice in Wonderland voodoo.
Those 12 merged with you instead of blocking you. Try to be chill about it, willya?
I choose to believe I have 12 secret admirers. ^_^
I decided to read this paper after Newt Gingrich summarized it in a tweet. The gist is "we find that U.S. prescription prices are actually 18% lower than in these nations". The nations being UK, Germany, Japan, France, and Canada. How? Because the US has the cheapest generics and the US has a higher generic prescription mix than those countries.
https://ecchc.economics.uchicago.edu/project/policy-brief-international-price-differences-for-drug-prescriptions/
Am I missing something, or is this an extremely facile analysis? First - they only consider Medicare/Medicaid prices, which are both much lower than private (insurance and OoP) and only cover ~45% of the pharma market. This would make the analysis broadly inaccurate at the country level alone.
Second - the analysis appears to simply be Price difference(%) * %Prescriptions, effectively weighting the average price difference % between the 2 categories - (name brand) and (generics) - by percentage of total prescriptions of those categories. This is troublesome because generics are far cheaper than name brands, so 40% of $4 is far less important than 200% of $10000, but it appears this analysis would weight these differences evenly. It is also well known that 50 or so drugs alone account for ~40% of spending, so any accurate analysis must take the huge price differences into account.
Major Point - Is this what passes for academia at the UofC? Am I missing something? This is something like college sophomore work. They even used the data from another report (Rand 2022), which resulted in an opposing analysis. "Prescription Drug Prices in the U.S. Are 2.78 Times Those in Other Countries"
https://www.rand.org/news/press/2024/02/01.html
I guess I'm not surprised by weaponized academia as politics, but this is just incredibly shoddy work by a seemingly high-status tenured professor / former acting Chairman of the Council of Economic Advisers. I hope I'm wrong about my analysis.
They devised a metric that showed the result they wanted. You want a different result, you can construct your own metric. How else do you propose it work?
Their metric weights a $.30 aspirin tablet price differential the same as a $20,000 biologic price difference. It's poor analysis, and doesn't provide any benefit.
Do you think their method is valuable? The Rand study I linked does far better.
I'd prefer that our political parties are informed by good analysis, so we can make better decisions, rather than justify a preferred policy with asinine methodology.
If they want good analysis, they can pay for it themselves. Either way, I'm sure the consequences are not major enough to justify jeopardizing a major source of funding.
Missing vet? Veterinarian prescriptions that go to humans? I don't think that's a large portion of prescriptions in this country.
Can see the top 10 here(2021).
https://www.kff.org/medicare/10-prescription-drugs-accounted-for-48-billion-in-medicare-part-d-spending-in-2021-or-more-than-one-fifth-of-part-d-spending-that-year/
Data on top 40 drugs(2019):
https://www.kff.org/medicare/how-does-prescription-drug-spending-and-use-compare-across-large-employer-plans-medicare-part-d-and-medicaid/
Fixed the link. I just don't think it's a large enough population of RX to drive any analysis of RX costs in this country.
https://www.computerworld.com/article/4059383/openai-admits-ai-hallucinations-are-mathematically-inevitable-not-just-engineering-flaws.html
Could you beat the hallucination problem by asking the same question of three or more hopefully independent AIs and ask them to report on what the answers have in common? Or beat at least some parts of the hallucination problem?
I've been asked whether LLMs can be reliable enough to recognize whether one hallucination is different from another. I think they could, but I'm guessing.
Minor point: There's an error in the video which displays the importance of actually knowing what you're talking about. It says that you don't need to fact-check a poem - well, maybe you could count syllables in a haiku. This seems like a person who hasn't had contact with poetry since elementary school.
A good haiku has a change of mood for the last line. Recognizing whether a haiku has it or not would take a lot of knowledge of the world and human emotions.
Many poems include factual material which can be gotten right or not, and there are a lot of poetic forms other than haiku.
> asking the same question of three or more hopefully independent AIs
Note that if you ask /the same/ AI the same question three times (in new sessions), you will also get three different answers.
(This knowledge does not help generate reliable answers.)
> Could you beat the hallucination problem by asking the same question of three or more hopefully independent AIs and ask them to report on what the answers have in common? Or beat at least some parts of the hallucination problem?
No. All LLMs are constructed in a similar way and they deal with incomplete information in a similar way, and thus they produce hallucinations in a similar way. Unless someone creates an LLMs that deals differently with incomplete information, we will have to deal with hallucinations.
That being said, your approach has some merit, since it can and should reduce the number of hallucinations - there is some difference between different LLMs. I think your approach is called "LLM Ensemble", or maybe "mixture of experts"?
I did a short search and found papers relating to your approach, e.g. this: "Harnessing Multiple Large Language Models: A Survey on LLM Ensemble"
I don't see how you're concluding that it would result in fewer hallucinations. However, it does seem clear that polling three separate LLMs could increase the number of hallucinations successfully detected, rather than accepted by mistake.
It depends how correlated hallucinations are between LLMs. My guess would be that given similar levels of sophistication and built by similar techniques on similar data, they would be fairly highly correlated on what questions they hallucinate on, and somewhat less correlated, but still significantly so, on the content of the hallucinations. If my suspicions are correct, your technique should improve accuracy but won't come close to eliminating hallucinations.
You might be able to improve somewhat by also asking the question in different ways. I have noticed that LLMs tend to pick up the assumptions implicit in your prompt and cue off of them, so using exactly the same prompt is likely to increase correlation in wrong answers.
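A minimal sketch of the voting idea in plain Python - note the `models` list here is toy stand-in lambdas, not any real LLM API; real clients would slot in where the lambdas are:

```python
from collections import Counter

def ensemble_answer(question, models):
    """Ask each model the same question and majority-vote on the
    (normalized) answers; returns the winning answer and its vote share.
    `models` is any list of callables str -> str."""
    answers = [m(question).strip().lower() for m in models]
    best, votes = Counter(answers).most_common(1)[0]
    return best, votes / len(answers)

# Toy stand-ins for three "independent" models; one hallucinates:
models = [
    lambda q: "Paris",
    lambda q: "Paris",
    lambda q: "Lyon",  # the odd answer out gets outvoted
]
answer, agreement = ensemble_answer("What is the capital of France?", models)
print(answer, agreement)  # -> "paris" with agreement 2/3
```

As noted above, this only helps against uncorrelated hallucinations; if all the models share the same blind spot, they will happily outvote the truth together, and a low agreement score is at best a flag for a possible hallucination, not proof of one.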
I just discovered that Youtube is automatically replacing the audio track of videos with a robotic machine translation. Normally, that'd only be a minor annoyance except that **there's no way to fucking turn it off**. Even by the usual low standards, this is amazing. Everyone involved at Google needs to be fired immediately. Like seriously, WTF?
YouTube's automatic translation is so bad and useless, it's really unbelievable that they're doing what they're doing.
For example, they auto-translate video titles from English into German, which is idiotic in the first place, but particularly awful in the case of VIDEO TITLES that are full of specific references, memes etc. and where the translation engine has no context to even remotely translate it accurately, and that's assuming it's even translatable at all.
I really want to know the decision making process behind this, because it's probably the worst UX decision I have ever seen.
You're really lucky if that's your first "WTF, everyone involved needs to be fired" moment with Google. I lost count of how many times that happened to me.
I finally switched all my default search engines away from Google after Google started crashing my web browsers (this went away since then, but I'm not going back). Previously, I'd lost a very basic webpage on Google Pages because these went through two upgrades turning my very basic HTML into complete mishmash; after the second one, I caved in and quit. Now I merely roll my eyes at more minor WTF stuff from Google, because, in my experience, Google just does this all the damn time. They don't care.
What search engine did you switch to? I wanted to to leave Google and tested DuckDuckGo for a while, but didn't like it at all.
Honestly, I didn't like any of those I switched to. (One was DuckDuckGo, another Brave search.) So I am just using perplexity.ai for anything more complex than addresses or open hours for businesses. I find Perplexity incredibly useful - way better than any non-AI search engine.
I was able to switch it off for each video separately by clicking Settings.
As far as I can tell, that option only appears in the web version. There's no option to switch audio in the mobile app.
There are browser extensions to disable this nonsense, e.g. https://addons.mozilla.org/en-US/firefox/addon/youtube-anti-translate/
Unfortunately, it appears that that doesn't solve the problem because the issue is with the Youtube mobile app. On the web, you can already manually switch to the original language track, but there's no way to do this in the app, and a Firefox extension doesn't help with that. But thanks for the suggestion anyway.
Sometimes on Android I watch Youtube in the Brave browser, because of the ad blocking.
If you're on Android you can disable this with the Vanced mod.
Which videos? Can you link to an example?
There can't possibly be no way to turn it off. Are you sure you didn't dream this? If not I certainly agree with the firings.
I refused to believe the insanity of YouTube's autotranslation "features" at first, but yes, they are all as idiotic as they seem
That's what I assumed at first, but nope, there really is no way to turn this off. It's unbelievable.
The uploader can disable it for their videos but as a viewer there is no direct way to change the audio. You could change your language settings to the language of the video to get the original audio track but usually I choose to just not watch the video.
I tried changing the app language, but even that didn't seem to work. Youtube decides which language audio to play based on some ineffable data you have no control over.
Apropos of the "book review" on Ted Nelson's memex project, what do ya'll think about Urbit?
Okay so... Lord Moldvort is trying to fork the internet? I think? But then why do we need to invent an entire assembly language of unreadable nonce words? Like, what are we doing here. I'm not seeing the vision.
You can fork the Internet, or replace the US government with a monarchy; doing both smacks of dilettantism.
Have you considered the possibility that the nonce words are the whole point, and everything else is cruft?
I'm trying to figure out what the Grand Vision is. I don't quite understand it, but I get the sense that the original purpose was to fork the internet because the first internet was built on a bad stack and now we have to live with the legacy code.
The unreadable assembly language seems instrumental to that somehow, but I'm not sure in what way. There is, indeed, an awareness about how social engineering works, so it's certainly possible that the nonce words are a deliberate ingroup signaling mechanism. But I don't think that's the whole story.
It's like asking why do many scam e-mails contain obvious red flags. It filters the audience. You don't want to waste your time on people who are smart enough to recognize the red flags.
If you don't see anything crazy about Urbit, you are exactly the kind of customer Urbit needs. They have a few virtual "planets" to sell you (for actual money), and an esoteric language to learn so that you can distract yourself from questioning your investment.
Judging from the Wikipedia page alone, it seems to have at least one of the problems that Xanadu had: Being a cult.
Not sure about that. Moldbug founded the project, and then later replaced himself as CEO so he could return to blogging. Afterward, the new CEO tried to distance the Urbit Project from Moldbug's... political reputation. But the project didn't go so well. So Moldbug returned as "wartime CEO", and a bunch of devs resigned in protest.
The fact that a bunch of devs resigned in protest just doesn't seem very cultish to me? Or if there *is* a cult, it's not out of loyalty to the founder.
https://terminalvel0city.substack.com/p/human-in-the-loop
The first prophecy of Nostradamus 2: WE ARE LOSING CONTROL OF AUTONOMOUS SYSTEMS
John Gall already predicted that half a century ago.
Who is John Gall?
The author of "Systemantics"
https://en.wikipedia.org/wiki/Systemantics
Up there with Thomas Schelling for Things Everyone Oughta Read
Of course. I, however, was born many centuries ago. It only stands to reason then that I should get the credit.
You know, I always thought this blog needed more novelty accounts. It's more entertaining than the common suspects getting mad over trivial things.
I am the SADTOG, the Self-Aware Dog Turd of Doom. My IQ is 666. I smothered OpenAI. Come meet me if you dare.
https://open.substack.com/pub/bookreviewgroup/p/the-dog?r=3d8y5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
It seems my presence is more necessary than I had previously suspected
Which does not bode well for your prophetic abilities, does it?
“But, sire, how can I know what your thoughts are?”
The king stopped dead in his tracks, and stared at me.
“I believed thou wert greater than Merlin; and truly in magic thou art. But prophecy is greater than magic. Merlin is a prophet.”
I saw I had made a blunder. I must get back my lost ground. After a deep reflection and careful planning, I said:
“Sire, I have been misunderstood. I will explain. There are two kinds of prophecy. One is the gift to foretell things that are but a little way off, the other is the gift to foretell things that are whole ages and centuries away. Which is the mightier gift, do you think?”
I recently became interested in a certain sort of function: one that compresses bitstrings in particular ways. I don't know if there's any sort of standard literature or where to look for more information. The properties are as follows:
1. The function compresses the bitstring by some multiplicative factor: the output might always have e.g. 1/8 or 1/32 the length of the input.
2. As a result of 1, inputs are necessarily mapped many-to-one onto outputs.
3. Inputs that are a short Hamming distance apart should be very likely to map to the same output. In particular, flipping one bit in the original string should only rarely change the output.
4. None of the digits of the original string are treated as especially more or less significant (e.g. generally don't manipulate it like a binary number)
A toy example would be the following:
Chop the bitstring up into bytes (there may be a short byte at the end), and for each byte, alternately add and subtract the bits. Apply a mod 2 to the sum to get your output bit for that byte. So
EDIT: this example explicitly fails to have one of the key properties; bitstrings with Hamming distance 1 will map to different outputs. How embarrassing. That's what I get for trying to think up a new example (I originally had a different one) on the fly:
11010110 -> -1 -> 1
01011100 -> -2 -> 0
11111011 -> 1 -> 1
So 110101100101110011111011 -> 101
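For concreteness, the toy scheme above is a few lines of Python (which, per the EDIT, knowingly fails property 3: flipping any single input bit flips the corresponding output bit):

```python
def compress(bits):
    """Toy 8:1 compressor: each byte maps to the parity of its bits.
    (Mod 2, alternating add/subtract is the same as plain XOR, so each
    output bit is just the population parity of one byte.)"""
    return "".join(
        str(sum(int(b) for b in bits[i:i + 8]) % 2)
        for i in range(0, len(bits), 8)  # last chunk may be short
    )

print(compress("110101100101110011111011"))  # -> "101", as in the example
```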
What other properties are you looking for?
The simplest way of doing this would be to use an error-correcting code of rate 1/8 or 1/32: round your message to the nearest codeword, and then apply your favourite linear map down to the appropriate dimension.
Alternatively, if you want less linearity, you'll need https://en.wikipedia.org/wiki/Locality-sensitive_hashing.
I note in passing that when you're working mod 2 there's no difference between adding and subtracting, so your toy example is just taking the population parity of each byte; some instruction sets have builtins for this.
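A minimal version of the bit-sampling scheme (the textbook locality-sensitive hash for Hamming distance; nothing library-specific assumed):

```python
import random

def make_bit_sampling_hash(n, k, seed=0):
    """Textbook bit-sampling LSH for Hamming distance: the hash of an
    n-bit string is just k of its bits, at positions fixed once up front.
    Flipping a single input bit changes the hash with probability k/n,
    so nearby strings usually collide (property 3 in the question)."""
    positions = random.Random(seed).sample(range(n), k)

    def h(bits):
        return "".join(bits[p] for p in positions)

    return h

h = make_bit_sampling_hash(n=24, k=3)  # 24 bits in, 3 bits out (1/8 rate)
a = "110101100101110011111011"
b = a[:5] + ("0" if a[5] == "1" else "1") + a[6:]  # flip one bit
# h(a) == h(b) unless position 5 happens to be one of the 3 sampled bits
print(h(a), h(b))
```

With k/n = 1/8, a single-bit flip leaves the hash unchanged 7 times out of 8, which is exactly the "should only rarely change the output" behavior asked for.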
This is a well-studied problem in coding theory. In your example you want to look at linear codes with parameters of length ck and dimension k, with large minimum distance. A good decoding algorithm for the code will map vectors near a codeword onto the codeword - which you then map onto a bit string of length k. (Without loss of generality, you can make the code systematic, then you're just truncating.)
There are lots of differently structured codes with different decoding algorithms - any of those will give a way of doing what you want to do. E.g. look at the Reed-Muller codes and their decoding algorithms.
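As an illustrative degenerate case (not what you'd use in practice, but the smallest instance of the length-ck / dimension-k construction), here is a rate-1/8 repetition code with majority-vote decoding:

```python
def encode(bits, c=8):
    """Rate-1/c repetition code: repeat each data bit c times."""
    return "".join(b * c for b in bits)

def decode(bits, c=8):
    """Majority-vote decoding: each c-bit block maps back to one bit,
    so up to 3 flipped bits per block (for c=8) are always corrected.
    Exact ties decode arbitrarily to '0'."""
    return "".join(
        "1" if bits[i:i + c].count("1") * 2 > c else "0"
        for i in range(0, len(bits), c)
    )

codeword = encode("101")    # 24 bits for a 3-bit message
noisy = "0" + codeword[1:]  # flip the first bit
print(decode(noisy))  # -> "101": a Hamming-distance-1 change is absorbed
```

Real codes (Reed-Muller etc.) buy much better distance per redundant bit, but the shape of the map is the same: decode to the nearest codeword, then read off the k data bits.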
Looks a lot like Perceptual Hashing.
Idea: democracy is a self-contradiction, because if the people were truly the ones who decide things, they would always vote for a charismatic-populist dictator. Not necessarily right-wing; it can also be left-wing, like Chavez was.
The reason for this is that it is not possible for millions of people to actually participate in politics and wield power. Politics is always a TV show to watch and comment on, not something to truly participate in.
And really charismatic-populist dictators or kings make the best TV show. Chavez did this 100% literally.
I don't know about that. I expect if we thought about it, we could devise some system of governance by local committees that feed up into regional committees that feed up into national committees and in which everyone who wants to take part in actual self-governance has an actual part. But let's not kid ourselves; most people in this system would be on something like an HOA or condo board, not deciding national-level issues.
If by "democracy" we set the bar high and mean actual self-governance with the people governing themselves day-to-day, then we are talking about something that hardly even exists. Governing is almost always done by representatives, and even they tend to be constrained by aristocratic institutions like upper houses, constitutions, independent courts and central banks.
Some places are a bit more democratic, like California with its binding statewide referenda, others less. And my impression is that the record of those referenda is a bit mixed; they smacked down the UC for racial discrimination on the sly, but they also required warnings about potentially carcinogenic chemicals on damn near everything. So I'm not clamoring for more direct democracy.
My own thinking is that the system of elections is mostly there to keep out the real crooks and self-dealers. And occasionally elections serve as a proxy for major policy fights where a choice needs to be made. Most other decisions get made by the bureaucracy.
That's "pure democracy...", etc. Some of the solutions to this well-known problem include republicanism, bicameralism, etc.
counterpoint: a state which requires a populist dictator to enact changes is not really a democracy, but an ossified oligarchy that has elections
Sure, but at least elections make it easier to overthrow said oligarchy. Nothing's forcing them to be that generous.
> elections make it easier to overthrow said oligarchy.
Or it's a false promise to make it much harder to organize a movement with an actual chance.
Theoretically you could have a populace that wasn't completely brainrotted. The US is obviously a lost cause, but with some aggressive selection pressures, you might get a population that simply isn't interested in such things. Japan is probably the closest thing to that.
Japan is a fake democracy staged entirely for deceiving Americans and in reality run by old aristoi: https://www.thepsmiths.com/p/review-miti-and-the-japanese-miracle
Yes democracy only really works when it’s limited to the elite. That’s why the US was founded as a republic.
Yes, exactly. This is what a functional democracy looks like.
Agreed. This is why we were founded as a republic. The founders were very aware of the failure modes of populism.
Not quite aware enough. They never specified whether a president could rule from prison, or pardon himself, because they never thought the populace would knowingly elect a crook.
They also never thought we'd be retarded enough to let the poor and uneducated vote, but here we are.
They were aware of the possibility of being "tried for pretended offences," and the omission of that being disqualifying is clearly deliberate.
I think you're being way too categorical here.
First, as noted by others, people do not "always" vote for populist dictators--maybe over some time horizon the probability of that goes to 1 or something, but there are very clearly democracies that have lasted 100+ years without voting in populist dictators--at least, no populists dictatorial enough to prevent future elections.
Second, there are degrees of "participating in politics and wielding power": obviously it's not possible for tens of millions of people to all be President or Prime Minister, or even Senator or MP, or even legislative aide--but it's obviously possible for millions of people to vote occasionally, join a handful of political groups, etc.
Essentially you should think of this as a power law distribution or some other 80/20 sort of thing: a tiny minority actually run for political office, or propose new tax plans, or whatever; a larger but still small minority are engaged enough to debate what that tiny minority do in a fairly principled way; a yet larger group follows those debates--maybe by now they're less driven by real principle or expertise and more hobbyist, but they're still open to persuasion, trying to align policy with their values and their best understanding of the world; and at the end of the funnel a majority who follow the cues of their friends and family and favourite celebrities or whatever in a way that is mostly hobbyist/faddish/etc but is still shaped by the more intellectual/principled work done at the other levels.
It's obviously true that the entertainment/mob mode is always present, and it's also certainly true that it can overwhelm the other mode, but whether and when that happens will depend on the exact mechanisms by which the country is run (there are lots of ways of aggregating up the political impulses of millions of people in ways that are democratic) and also on features of broader civil society.
But the fact that the tendency to populist dictatorship is always present in society doesn't make democracy "self-contradictory" any more than the fact that populist dictatorships can become democracies means that popular dictatorship is inherently self-contradictory. No yet-discovered method of running a country is completely stable in all circumstances; that doesn't mean they're all "self-contradictory".
Imma share a pet theory of mine: the Printing Press marked the beginning of the Modern Period. The impact of this is massively underestimated by the standard historical narrative, and I suspect a lot of the alienation and disruption associated with modernity is ultimately downstream of this.
For politics specifically, I think the press was indispensable in the formation of the nation-state. Especially Modern Liberal Democracies. In theory, it was inspired by the Athenian model, etc. But actually, the modern version is a different beast entirely. Because it's fundamentally powered by the mass-media (Moldbug came so close, but didn't quite get there). Notice, for example, that you compared politics to television. Coincidence? No, because television is mass media and media is the sine qua non of *modern* *liberal* democracy.
I suspect the Wars of Religion, American Revolution, French Revolution, European Spring, Holocaust, and Russian Revolutions are all downstream of political propaganda (only possible with mass media) effectively one-shotting everyone's brains and turning them into zealots. Everyone complains about social media being miserable, but it was probably worse when, say, the Catholics and the Huguenots were slaughtering each other in the streets.
So contra Wooly, I don't think Athens/Rome is really all that commensurable with modern democracy. We give them the same label of "democracy", but that's an error. And I think Democracy is actually *high*-entropy (not low-entropy like an F1). And I think classical liberalism is what makes these places nice places to live (but the term "liberalism", like "democracy", has been grossly corrupted). And I think liberalism takes false credit for the technical progress of modernity. And I suspect the U.S. and G.B. succeed despite their democracy, not because of it.
There's a number of layers of indirection here, which makes a naive discussion of "democracy good? democracy bad?" a complete quagmire. And it gets worse. Because the internet is overthrowing the old printing press.
"For politics specifically, I think the press was indispensable in the formation of the nation-state."
Not a historian, but I don't get the sense that this observation would be considered remarkable or novel by historians AT ALL. I think it's pretty widely accepted. A lot of my recent interaction with history has been reading Bret Devereaux's excellent blog ACOUP (highly recommend, BTW), and while his focus is on a MUCH earlier period, he has written in quite some detail about how the background literacy level substantially restricts what sort of governing structures are even possible. Unfortunately I can't find it easily to link it, but he has an excellent post (series?) discussing the rise of feudalism[1] and how it was very much an adaptation to the decline of literacy rates after the fall of the Western Roman Empire, as it allowed for warlords who'd conquered large amounts of land to administer and profit from them in a decentralized way, as central governance requires far greater numbers of literate officials collecting and recording information.
"I suspect the Wars of Religion, American Revolution, French Revolution, European Spring, Holocaust, and Russian Revolutions are all downstream of political propaganda (only possible with mass media) effectively one-shotting everyone's brains and turning them into zealots."
And this is where I think you showcase some rather alarming biases, and as a result this observation goes off the rails. Here you effectively reduce a bunch of other humans to NPCs in your mind, modelling them as fundamentally dumb and easy to manipulate[2]. Let's take the French Revolution, for example. Do you remember why the French peasants were angry? Rumor has it, it was less to do with "reading lots of political propaganda" and more to do with, y'know, starving.
Of course, peasants had been poorly treated and taxed to starvation in the past without always revolting. And many previous peasant revolts had failed. But literacy does a couple of things here that are totally orthogonal to "turning people into zealots." First, it makes it easier to communicate with people who are not your immediate neighbors, so you can tell how bad and widespread the problem plaguing you is (and thus how much support you might have if you try to rise up). Second, it makes it easier to coordinate large groups of people around actions that need to be taken in concert to be effective. Both of these are ways in which widespread literacy can be transformative and lead to revolutions that acknowledge (and indeed depend on) people's individual agency in ways that imagining them as "zealots" who have been "one-shotted" by political propaganda does not.
Let me finish by adding that of course I believe that it is POSSIBLE for political propaganda to manipulate and radicalize people. Just that I find the above framing to be incredibly simplistic and childish. First, you can guarantee that regardless of what they read/watch/hear, a large part of people's political opinions is shaped by their life experiences and material conditions: if those don't create fertile soil for radicalization, it will be MUCH more difficult and less likely. Second, the idea of propaganda "one-shotting" almost anyone is patently absurd. Everything I've read about radicalization suggests that it is almost by strict necessity a gradual process, that most people who go through it will shift their position slowly over time, and that groups who are *actively trying* to radicalize people often know this, and avoid showing their more extreme ideas to people who have only recently encountered their movement.
Of course, in the age of the internet and social media, we don't need radical zealots to carefully drip-feed people gradually more extreme propaganda: we have social media algorithms all too happy to do that at scale.[3]
[1] If I could find it, I'd also be able to elaborate on the important distinction between feudalism (political), manorialism (economic) and one other thing which I forget, which are often all lumped together in the popular terminology and imagination.
[2] There has been some push in *ahem* certain circles recently to trash the concept of empathy. This is incredibly stupid for reasons that have nothing at all to do with morality (though of course it's highly morally suspect as well). Fundamentally empathy is about understanding and being able to model other humans. Even for an amoral bastard who simply wants to defeat their opponents, being able to empathize with them is an *incredibly useful tool.* Fun fact for anyone reading this who *has* bought into the anti-empathy drivel (especially anyone on the right): you can observe Orson Scott Card (not generally known as a raging lefty) make this exact point in Ender's Game way the hell back in 1985.
[3] Rant for another time, but I fairly firmly believe that this dynamic bears a substantial share of the responsibility for the tempestuous state of modern politics. And what's scariest of all is, I don't think that was really *anyone's* explicit intent: the widespread radicalization was rather an unintended side-effect of financial incentives and conditions created by new technologies.
> but I don't get the sense that this observation would be considered remarkable or novel by historians AT ALL. I think it's pretty widely accepted.
Great! Though why do I feel like I had to figure this out for myself? E.g., how is it that someone like Scott did a dictator-bookclub review of Hugo Chavez, and the fact that television was used to hijack the government was treated as surprising?
> Here you effectively reduce a bunch of other humans to NPCs in your minds, modelling them as fundamentally dumb and easy to manipulate.
Yes, including me. And yes, I've read Ender's Game. And yeah, I'll admit that "oneshotted" was an exaggeration.
On one hand, people don't just wake up one day and decide to become radicalized. And I do believe that people almost always have coherent reasons for acting the way they do. I didn't vote for Trump, but I feel like I have a decent understanding of why people voted for him. I don't endorse Hitler, but I feel like I can empathize with why he rose to power. I'm not a fan of Ted Kaczynski, but I feel like I can roughly trace the path of his logic.
On the other hand, I do think humans are pretty suggestible under the right conditions. Propaganda, then, is more like hypnosis. It's not capable of convincing people to do just *anything*. But it's definitely capable of nudging, amplifying, rationalizing, and channeling latent impulses. This is why I harbor doubts about Eliezer's "super persuader" arguments about ASI doom. If persuasion turns out to be a problem, it's not because the AI will be super smart, but because people are already highly suggestible. Remember ELIZA?
> Let's take the French Revolution, for example. Do you remember why the French peasants were angry? Rumor has it, it was less to do with "reading lots of political propaganda" and more to do with, y'know, starving.
Jimmy [0], what motivated the Reign of Terror?
> Enlightenment thought emphasized the importance of rational thinking and began challenging legal and moral foundations of society, providing the leaders of the Reign of Terror with new ideas about the role and structure of government. Jean-Jacques Rousseau's Social Contract argues that each person was born with rights, and they would come together in forming a government that would then protect those rights. Under the social contract, the government was required to act for the general will, which represented the interests of everyone rather than a few factions. Drawing from the idea of a general will, Robespierre felt that the French Revolution could result in a republic built for the general will but only once those who fought against this ideal were expelled. Those who resisted the government were deemed "tyrants" fighting against the virtue and honor of the general will. The leaders felt that their ideal version of government was threatened from the inside and outside of France, and terror was the only way to preserve the dignity of the republic created from French Revolution.
> The writings of Baron de Montesquieu, another Enlightenment thinker of the time, also greatly influenced Robespierre. Montesquieu's The Spirit of Law defines a core principle of a democratic government: virtue—described as "the love of laws and of our country." In Robespierre's speech to the National Convention on 5 February 1794, he regards virtue as being the "fundamental principle of popular or democratic government." This was, in fact, the same virtue defined by Montesquieu almost 50 years prior. Robespierre believed the virtue needed for any democratic government was extremely lacking in the French people. As a result, he decided to weed out those he believed could never possess this virtue. The result was a continual push towards Terror. The Convention used this as justification for the course of action to "crush the enemies of the revolution…let the laws be executed…and let liberty be saved."
Robespierre, what is best in life [1]? "To crush one's enemies, see the laws be executed, and to hear the liberation of the people."
> Of course, in the age of the internet and social media, we don't need radical zealots to carefully drip feed people gradually more extreme propaganda: we have social media algorithms all to happy to do that at scale
Yes, Social Media is the new Mass Media. And honestly, I don't think Social Media is nearly as bad as the original.
But also, no, the sorry state of Social Media isn't *just* because of the algorithm. It's the Golden Age of Journalism that was the aberration, and what we have today is just reversion to the mean.
[0] https://en.wikipedia.org/wiki/Reign_of_Terror#Enlightenment_thought
[1] https://www.youtube.com/watch?v=Oo9buo9Mtos
Gutenberg's printing press is a staple of middle-school history and was named the most important invention of the millennium by Time Magazine in the late 1990s, so I don't know why you think it's underappreciated.
Is this what they're teaching in middle school, these days? My memory of it in middle school starts and ends at "it caused the Protestant Reformation", as if it were a footnote. Nothing to do with being responsible for the rise of ideology, or all the bloodshed of the 20th century, or the death of God, or the narcissism/anxiety/depression of the modern age. I remember mentioning a while ago that the U.S. founders were viewed as terrorists by the Brits, and I think Lapras got offended at the suggestion. And we both know Time Magazine likes to position itself well within the Overton Window. So whatever Time Magazine said in the late 1990s, I don't expect it to quite line up with what I had in mind. Since DDG isn't helping me find the Time Magazine story, I asked Sydney.
> Here’s what Time emphasized:
> Revolutionary Impact: Gutenberg’s movable type transformed printing from a slow, manual process into a scalable, repeatable system. This enabled mass production of books and documents, democratizing access to knowledge.
> Catalyst for Change: The printing press was credited with fueling major historical movements like the Protestant Reformation, the Enlightenment, and the rise of modern science and democracy.
> Global Influence: Though movable type had existed in Asia, Gutenberg’s adaptation for the phonetic alphabet made it practical and transformative for the Western world.
Now, at the mention of "the Enlightenment", I expect the standard blather that the Enlightenment was the best thing that ever happened. But if you're a regular here, you should know by now that all I do is cheerlead for moldbug, who thinks the Enlightenment smelled a little fishy, and spent a huge amount of time reading primary historical sources to try to figure out what the hell actually happened during these last few centuries. And when I follow Sydney's citation to the original Time article [0], I find this:
> [...] The dissemination of the writings of Greek and Roman authors led to a revival of the classical learning that spurred the Renaissance. Printed religious texts put the word of God directly into the hands of lay readers. Such personal contacts helped fuel the Protestant Reformation.
> Before print, the ability to read was useful mainly to the elite and the trained scribes who handled their affairs. Affordable books made literacy a crucial skill and an unprecedented means of social advancement to those who acquired it. Established hierarchies began to crumble. Books were the world’s first mass-produced items. But most important of all, printing proved to be the greatest extension of human consciousness ever created. It isn’t over: the 500-year-old information revolution continues on the internet. And thanks to a German printer who wanted a more efficient way to do business, you can look that up.
Yeah ok, so according to Time Magazine, the Printing Press gets us:
- science
- renaissance
- protestantism
- social mobility
Lots of positive vibes, here. But I'm not seeing any of the downsides that I'm trying to point out. I'm not seeing any mention of "modern liberal democracy is a facade", or "modern liberal democracy has more in common with Communism and Fascism than you think", or "and then everyone had a psychotic break", or "9/11 = (printing_press)^2", or "and this is why everyone is on xanax and lexapro".
[0] https://time.com/archive/6737424/15th-century-johann-gutenberg-c-1395-1468/
I think the rule of the press was always understood, look up "fourth estate"
I think that when democracies work well, it is because they inherited aristocratic ideas. I would be focusing on ideas like honor, patriotism, duty, service, but classical liberalism is also an aristocratic idea.
As weird as it feels defending and praising the United States for any reason, it is all of:
1. One of the earliest modern democracies, helping pave the way for the spread of democracy.
2. One of the most successful democracies, whether measured by stability, economic growth or liberalization (though perhaps not for much longer).
3. A nation founded in large part on *hostility* towards aristocracy.
One can further note that the region of the original U.S. that was the most culturally tied to the old aristocracy (the South) proved to be the most anti-democratic AND poorly-functioning, by basically any metric you choose. Its strong desire to keep one of its aristocratic privileges (the ownership of other human beings) set it deeply at odds with the rest of the nation in ways that still have huge ramifications down through the present day.
Of course, if you just automatically define "aristocratic ideas" to be coextensive with "good things," then I suppose you will *strangely enough* find that they coincide with nations that function well. But only because your logic is circular.
> I think that when democracies work well, it is because they inherited aristocratic ideas. I would be focusing on ideas like honor, patriotism, duty, service, but classical liberalism is also an aristocratic idea.
Yes, I've had inklings of this as well. But I haven't gotten very far. Feel free to say more.
edit: Also, your conversation with Wooly is definitely sending my thoughts in new directions.
> I think the rule of the press was always understood, look up "fourth estate"
On some level, yes. The 4th Estate is another datapoint which convinced me of the Printing Press Hypothesis. (And now that I think about it, I think we've had a similar discussion before, where I mentioned Carlyle's quip about the 4th Estate.) However! I still think the impact is massively underestimated in the Discourse. The term "4th Estate" makes it sound like it's merely *equal* in importance (at best) to the executive/legislative/judicial branches. But as moldbug points out, it's not equal, it's superior. It's the actual seat of sovereignty, and it's entirely unaccountable.
-- Look at Scott's recent post [0] where he still thinks "cHecKs & bALaNceS between the other 3 Estates" is a meaningful analysis, as if the 4th isn't sovereign over the rest of them.
-- Look at WoolyAl's comment, where he thinks the USG is commensurable with athens/rome just because we label them both "democracies". It's the same problem as stuffing ~4 different syndromes under the name "autism". While we're at it, why don't we call diabetes and melanoma "autism" as well? This is not cutting reality at the joints. The dynamics are entirely different.
-- Just recently, I responded to a guy in another ACX thread [1] who tried to correct me when I mentioned that democracy used to be quasi-synonymous with nationalism. "nah, bro, democracy means 'rule by the people'. It's in the etymology".
-- For months, I've been telling people here that Trump is a result of the internet, not just a black swan.
These ideas are not in the drinking supply yet. And I'm not saying any of these ideas *in isolation* are original to me. But I've never seen anyone else put them together into a single coherent narrative, or elevate the invention of the Printing Press as the single most impactful event of the Modern Era.
[0] https://www.astralcodexten.com/p/defining-defending-democracy-contra
[1] https://www.astralcodexten.com/p/defining-defending-democracy-contra/comment/157343884?utm_source=activity_item#comment-159365541?utm_source=activity_item
Great comment. There really should be some consideration for shutting down the internet and press. Moral entropy and division are inevitable as long as you give individuals the power to shape public opinion.
Thanks!
> There really should be some consideration for shutting down the internet and press.
Personally, I haven't really thought through what the correct response is, yet. But I'm not really sure if shutting them down is desirable (or even feasible). My running hypothesis is that the best response is to try to convince everyone that literature/internet is a synesthetic hallucination, and not to take it too seriously. low confidence, though.
> Moral entropy and division is inevitable as long as you give individuals the power to shape public opinion.
I suspect what's actually going on might be more complicated than that, although I'm not able to articulate my thoughts coherently yet. Trying to figure out the true nature of Modernity is a personal project of mine, and a work-in-progress at that.
Technically, yes. But quantity is a quality all its own. When you go from having to copy books by hand, to mass producing books by machine, it's a sea change. Because suddenly everybody is reading things for themselves, and this is compounded polynomially by the network effects of language.
Analogously, engines were known about since antiquity. And yet the 2nd Industrial Revolution didn't arrive until the late 1800s. Why? Because to get the industrial revolution, it's not enough to just *know* about engines or have a few of them lying around. You also need to have the metallurgical knowledge to mass-produce them cheaply, to reap the network effects that transform, say, American society into being entirely car-centric with a gas station on every corner.
> It didn't take the printing press to use the Church as propaganda to keep the Leaders In Charge because God Said So.
And yet, notice that the one mention that the Printing Press *does* get in the standard historical narrative is that it was directly responsible for the Protestant Reformation. Suddenly, you get all these Christians who are actually reading the bible for *themselves* instead of hearing it through sermons or seeing the scenes displayed in mosaics. And some of them are a little... oughtistic. So when they read the holy texts for themselves and the logic doesn't add up, they start taking things a little too seriously, and voila, half the population are religious fundamentalists. Protestant in this case, although I think the exact same thing happened with the recent case of Jihadism.
The printing press effectively broke the information monopoly of the traditional Catholic institutions. The Gutenberg Press began operation in 1450. Then, a little over a century later... you get Bloody Mary burning Protestants at the stake. hmm... coincidence?
Today, we have this guy named Martin Gurri [0]. He was supposedly a CIA analyst whose job it was to monitor the internet. And after 2008, he saw... something. He wasn't sure what it was at the time. But it was a global phenomenon, and there was a lot of rage and negativity. The year he published his book (or was it the year after?), the U.S. replaced its first black president with its first orange president. Coincidence? No, the internet is the 2nd Coming of the Printing Press. Because it's cannibalizing the 1st printing press. And the internet has technically been around since, idk, 1991 in the year of Al Gore by some estimates. But it required mass adoption to begin to really reap the effects of it.
(and yes, I do also harbor suspicions about the effects of literacy on the Axial Age. but I'm not quite ready to defend that one.)
[0] https://www.thepullrequest.com/p/the-prophet-of-the-revolt
There are two rather obvious counterarguments to this.
First, we can pretty clearly observe long periods of democratic government without this effect. The early-mid Roman Republic (1), the US in say 1810-1850 or 1870-1930, 19th-20th century Great Britain, all massive democracies without populist dictators or anyone even remotely dictator-ish.
And I mean this at a factual level, not like a theoretical level. We have observed non-self-contradictory (2) democratic governments exist for an entire human generation without voting in charismatic-populist dictatorships. These are also fairly central examples of democracy; I can imagine definitions of democracy that would exclude the US, Great Britain, and Rome, but then we wouldn't be talking about democracy the way the overwhelming majority of people think of it.
Second, it's worth noting that these three governments are also literally the strongest and most powerful Western states in our history. They each dominated their world during their respective period of dominance (3). We aren't just picking the winners amongst democracies, we're picking the winners in global politics, ever, only really contested by, like, the Mongol empire at its height or the Tang dynasty or the greatest Islamic empires (4). Democracies dramatically overperform relative to all other governments.
At the metaphor level, I think you're arguing that democracies are stupid poo-poo systems that don't make any sense. It's probably better to think of them as like F1 cars: insanely high-performance when running but much more fragile and high-maintenance than we previously believed.
(1) Say the pre-Gracchi Roman Republic.
(2) How the devil am I supposed to write this?
(3) Alright, Great Britain is letting the team down a bit here.
(4) Sorry for no names, not a historical period I've done much research on.
This was a powerful argument, thank you! And I actually failed at what I think I am usually good at: thinking historically, and not just in the present.
I would say, these powerful democracies have inherited a lot of their values from aristocracies. In the case of Rome and 19th century GB, not even merely inherited; they WERE in many senses an aristocratic republic.
But you do have a great point that elect-a-populist-charismatic-authoritarian is a fairly new phenomenon that started with Mussolini, and in fact it was a feature of new and hence not well established democracies.
Except that it happens again now.
Let's dwell on the concept of the aristocratic a bit more, please. I do not simply mean it as a rule of the few. I mean it as aristocracies tend to have values that they pass down to lower social classes: honor, duty, independence, pride, responsibility.
So when there is an aristocracy or when there was in recent memory, the middle class also behaves "aristocratically".
Okay, then what I can say is that when a democracy does not have inherited aristocratic sensibilities, they will vote for such a man.
Except wait Weimar Germany DID have inherited aristocratic sensibilities...
I don't think populist-charismatic authoritarians are a fairly new phenomenon. I think populist-charismatic authoritarianism perfectly defined Caesar, and to a certain extent his predecessors Marius and the Gracchi brothers. In a similar vein, Pericles, "First Citizen" of Athens and de facto founder of the Delian League, isn't a dictator but he's certainly a populist strongman. These are failure modes we've seen before.
For a good overview of this in the Roman context, I'd recommend "The Storm Before the Storm" by Mike Duncan (1). It goes over the late Roman Republican period with a strong emphasis on the decline of "mos maiorum", or the unwritten societal rules of this period.
And while the Senate and aristocrats of this period do not cover themselves in glory, I think it's worth remembering how much change there was during this period. Rome had become an empire, and a lot of unrest was driven by Roman legionnaires who had lost their farms, which sucks. Having said that, the old model of Roman citizen-farmers growing wheat and barley which worked so well in the Republican era was never going to work in an era of cheap Egyptian and Sicilian (2) grains and borders that were hundreds of miles from Rome, instead of a few dozen miles. That inherently required a full-time professional army, but that transition had its own risks.
In general, I think you will find better reading on this subject in historical books rather than current events, for a variety of reasons.
If you do want to look into modern aristocratic norms a bit more, the Psmiths just did a delightful review of "Class" by Paul Fussell (3), or you could read Scott's review (4). It's pretty clear that Fussell's "X" class, the "Bohemians" or "Bobos," evolved into our modern "woke" class. Scott's review of Brooks' "Bobos in Paradise" (5) is also a great read on changing aristocratic norms in mid-late 20th century America.
As for where the aristocratic norms are going and what future developments might look like, I would refer you to Tanner Greer's excellent "The Silicon Valley Canon" (6), which sketches out most of the important writers and writings in Silicon Valley. I've heard Nate Silver's "The Village and the River" also dives into the deep cultural divide between West Coast/Silicon Valley elite norms and East Coast/Washington DC elite norms.
But the primary reason to read these is that American aristocratic culture and cultural norms have changed dramatically since the 1960s, but so has the American environment. Not only sexual and other controversial changes but also the rapid pace of technological change (Twitter came out in 2006 and by 2015 it arguably put Trump in the White House) and political change (we are the global hegemon). It's not clear what norms are even useful now to maintaining a high-performance democratic government.
(1) The History of Rome podcast guy
(2) I think Sicily was the other cheap grain province, correct me if wrong.
(3) https://www.thepsmiths.com/p/joint-review-class-by-paul-fussell
(4) https://www.astralcodexten.com/p/book-review-fussell-on-class
(5) https://www.astralcodexten.com/p/book-review-first-sixth-of-bobos
(6) https://scholarstage.substack.com/p/the-silicon-valley-canon
I think what Duncan is saying in many words, and Montesquieu in a few, is that a republic requires the ability to put the public good over personal gain. If it is not there, it goes to civil war, then tyranny. In that case, a mild monarchy is better.
While Duncan did not mention this, he demonstrated this in the Revolutions Podcast, saying by the last years of the Commonwealth, the military and the Rump Parliament were only interested in their personal gain, so everybody wanted Charles II back.
I will agree that good men are necessary for a republic but I doubt they're sufficient by themselves. Consider Caesar against his senatorial opponents like Cicero, Brutus, and Cato the Younger. Do we really have a lack of personal virtue amongst those defending the republic? Or is it the fact that the virtuous couldn't win wars?
I think this line of thought confuses personal corruption with, um, "paths to power". Like, a republic is endangered when its political elites steal from the public treasury to spend on hookers and blow, but it's far more endangered when its most ambitious and capable see the path to power being outside republican/democratic norms. Caesar didn't end the Republic because all the senators were partying it up in their private villas, although a lot of them were; he ended the Republic by following the precedent set down by Sulla and others that the best path to power was to be a successful general with the personal loyalty of your army and then to crush your enemies.
> Way out of what? Well, of class, of course. Becoming an X person, joining Category X, is your only way to escape! X people, Fussell tells us, are talented bohemians, independent-minded, an unmonied aristocracy drawn from all classes but rejecting all their conventions. X people just do what they like, regardless of what their class script says they “should” do. They “adopt towards cultural objects the attitude of makers, and of course critics.” They are “independent-minded, free of anxious regard for popular shibboleths, loose in carriage and demeanor.” They are self-directed, so they pursue “remote and un-commonplace knowledge—they may be fanatical about Serbo-Croatian prosody, geodes, or Northern French church vestments of the eleventh century.” So far, so good — you can probably add “weirdly into hill people” to that list.
Nah, no way that the boho class is the wokies. Just the other day, I saw some internet rando (can't remember where, probably youtube) mention that they've never seen a wokie have any interesting hobbies or interests. The blue hair and their pronouns is their entire personality.
contrariwise, I'm convinced wokism is secular Calvinism; the class of interest is the Boston Brahmins. Whereas Bohos sound more like hipsters who listen to obscure music, or maybe art-school girls.
edit:
but the other traits seem like a pretty good fit. maybe there's an argument that the hipsters ran out of things to be hipster about, or something.
The woke are not the class X/bobos, they are the children of the bohos. I think it's helpful to remember that "Class" is published in 1983 and "Bobos in Paradise" is published in 2003, and to view these books as contemporary commentary on evolving cultural norms over a 50-year timespan.
Or imagine Bonnie the Boomer, born in 1953, prime Boomer age. When Fussell publishes "Class", Bonnie is 30 years old, and she and her bohemian cohort are clearly in the ascendancy. Fussell is correctly noting the new upper/aristocratic class. When Brooks publishes "Bobos in Paradise", Bonnie is 50 with two Millennial children in their teens. Brooks isn't describing the up-and-coming class, he's describing the existing "liberal elite" of the early Bush administration. But, despite the PC culture of the 90s, it's not Bonnie the Boomer but her Millennial children who will become the woke. We're not describing solid, eternal cultural groups but rather a vague constellation of, uh... liberal-ish aristocratic cultural norms evolving over time and across generations. The hippie Boomers of the 60s sell out into Fussell's "Class X" of the 80s, who establish cultural dominance as Brooks's Bobos of the 00s, whose children become the Woke of the 2010s and on.
Now, we could stretch this back further to the late 19th century, early 20th century progressive movements, where there's a much clearer "progressive" religious movement and connect that to modern secular progressivism, which is part of where the heretical protestantism -> woke historiography comes from but that's...certainly outside the scope of modern aristocratic norms that allow high-functioning democracies.
hmm, that would make a great deal of sense, actually. still though, I wonder how the X class went from "niche interests" to "no interests" in a generation or two. also,
> but that's...certainly outside the scope of modern aristocratic norms that allow high-functioning democracies.
what's being argued here? that the current discussion is about upholding aristocratic norms, and wokies don't behave nobly?
> Indeed, democracy is robust in this sense in that if the majority do not like the leader, they can vote them out--as Venezuela was about to do with Maduro until he essentially shut down elections.
I don't know why you would use that as an example of democracies being robust. If it can be subverted that easily, that's a point against robustness.
> Democracies are robust in that, bar systemic changes, they do well with bouts of bad leaders
But if it can't avoid systemic changes, that's not an improvement. It doesn't matter if it theoretically has higher leader turnover if it just ends up as a dictatorship/oligarchy anyways.
Charismatic-populist leaders winning elections is a failure mode of democracy that tends to happen when a large fraction of the electorate has lost confidence in the institutions that produce leaders.
The normal historical mode of elections is that they function as a mechanism for recognizing "natural" leaders. What counts as a "natural" leader is very culture dependent, but is usually tied to already being the leader of something else and being seen as successful in it in such a way that people feel they owe you favors (which they repay by voting for you and urging others to do so) or see the results and want more of it. This something else might be a patronage network, a feudal estate, an appointed civil or military office, a business or guild, a civic or advocacy organization, or a lower elected office.
Charismatic populists getting elected to high office thus tend to be downstream of voters looking at candidates who come through the "natural" channels and being fed up with the lot of them. Sometimes it works as intended, with the populist rising to the occasion and actually changing things for the better (which still counts as democracy working). Sometimes they bumble around and mostly fail, but don't really break the system worse than it already was. And sometimes they "succeed" and catastrophically change the system for the worse.
A key ingredient for the last bit seems to be that the potential veto points in the system that could constrain a would-be tyrant (including the voters themselves) are dominated by people who don't put a high priority on preserving democratic institutions. And if that's the case, then that's a dangerous situation even if the leader comes from a more conventional channel.
"What counts as a "natural" leader is very culture dependent, but is usually tied to already being the leader of something else and being seen as successful in it"
The kicker is, Trump actually fits that bill. Construction biz, TV show etc.
Other authoritarian populists do not; for example, Viktor Orbán never had a job or business outside politics. He was running his political party straight out of college at around 28, and that is all he has ever done.
"Charismatic populists getting elected to high office thus tend to be downstream of voters looking at candidates who come through the "natural" channels and being fed up with the lot of them."
I would say, it is being fed up with the political-bureaucratic-expert class, not other kinds of "natural" leaders like businesspeople.
Erica, I appreciate your thoughts, but maybe you could work out the difference between "natural" leaders and political elites more precisely. I guess your positive example would be Obama (community organizer -> senator -> president), and it is a good one, but big-business boss + TV celebrity -> president would also be a good example of "natural" leadership.
Yes, he does fit the bill, at least in broad strokes. The same thought occurred to me while writing my comment.
I think part of the distinction you're reaching for is that in the US, it's very rare for wealthy businessmen to go directly for the Presidency. They try semi-frequently, but it's usually treated as a vanity campaign, like Tom Steyer in 2020 or Carly Fiorina in 2016, and only barely taken seriously. When wealthy businessmen have run for the Presidency and been treated as serious contenders they've usually already held mid-level office (Mike Bloomberg, Mitt Romney, Nelson Rockefeller) or been nationally well-known as political activists (Ross Perot). Steve Forbes was probably on the boundary between "vanity candidate" and "already a nationally well-known activist". Notably, none of my examples here won the general election, and only Romney (who was a two-term governor of a medium-sized state) won the nomination.
On the other hand, business leaders do run directly for Congress, Mayor, Governor, or Senator with little other political background all the time. The Roman Republic had the concept of the Cursus Honorum, a series of elected offices where you were expected to have already held one office before being seriously considered a candidate for the next. In the early-to-middle Republic, this was customary and generally followed but not a formal requirement. In the late Republic, it started to break down; Sulla tried to reimpose it by codifying it as part of his constitutional reforms, but these reforms were more honored in the breach than the observance. American political culture doesn't have as formal a concept, but I think there is an understanding that people who have held certain positions are more qualified to hold higher offices than people who haven't. Some step-skipping happens fairly often, but skipping several steps at once tends to be seen as an aberration.
Off the top of my head, the usual American Cursus Honorum tends to follow this pattern:
Level 1: Mayor, state legislator, district attorney, municipal office
Level 2: House of Representatives or senior state-level cabinet office (treasurer, secretary of state, attorney general, insurance commissioner, or lieutenant governor)
Level 3: Senator, Governor, Speaker of the House, or federal cabinet secretary
Level 3.5: Vice President or Secretary of State.
Level 4: President
So for 20th and 21st century major-party Presidential nominees apart from Trump, the path as of when first nominated has been:
- Harris: District Attorney (1), state AG (2), Senator (3), VP (3.5)
- Biden: County Councillor (1), Senator (3), VP (3.5)
- H. Clinton: Senator (3), Secretary of State (3.5)
- Romney: Governor (3)
- McCain: Congressman (2), Senator (3)
- Obama: State Senator (1), US Senator (3)
- Kerry: Lieutenant Governor (2), Senator (3)
- Bush the Younger: Governor (3)
- Gore: Congressman (2), Senator (3), VP (3.5)
- Dole: County Attorney (1), Congressman (2), Senator (3)
- W. Clinton: State Attorney General (2), Governor (3)
- Bush the Elder: CIA Director (3), VP (3.5)
- Dukakis: Congressman (2), Governor (3)
- Mondale: State AG (2), Senator (3), VP (3.5)
- Reagan: Governor (3)
- Carter: State Senator (1), Governor (3)
- Ford: Congressman (2), VP (3.5), President (4)
- McGovern: Congressman (2), Senator (3)
- Nixon: Congressman (2), Senator (3), VP (3.5)
- Humphrey: Mayor (1), Senator (3), VP (3.5)
- Goldwater: City Councillor (1), Senator (3)
- LBJ: Congressman (2), Senator (3), VP (3.5), President (4)
- JFK: Congressman (2), Senator (3)
- Stevenson: Governor (3), UN Ambassador (3)
- Eisenhower: ---
- Truman: State Judge (1), Senator (3), VP (3.5), President (4)
- Dewey: District Attorney (1), Governor (3)
- FDR: State Senator (1), Assistant Secretary of the Navy (2.5?), Governor (3)
- Willkie: ---
- Landon: Governor (3)
- Hoover: Secretary of Commerce (3)
- Smith: Sheriff (1), Alderman (1), Governor (3)
- Coolidge: State Senator (1), Lieutenant Governor (2), Governor (3), VP (3.5), President (4)
- Davis: Congressman (2), Solicitor General (3)
- Harding: Lieutenant Governor (2), Senator (3)
- Cox: Congressman (2), Senator (3)
- Hughes: Governor (3), SCOTUS Justice (3?)
- Wilson: Governor (3)
- Taft: Territorial Governor (2.5?), Secretary of War (3)
- Bryan: Congressman (2)
- Parker: State judge (1)
- TR: Assistant Secretary of the Navy (2.5?), Governor (3), VP (3.5), President (4)
- McKinley: Congressman (2), Governor (3)
More generally, skipping one of the lower levels is fairly common. Skipping both of the lower levels, or skipping directly from level 2 to 3.5 or 4, is somewhat less common but far from unheard of. Skipping all of the lower levels and going directly to level 4, or skipping from level 1 to level 4, is very rare, with one big class of exceptions: being a senior wartime general seems to put you straight at Level 3.5 with no expectation of holding levels 1-3.
That only happened once for an actual President in the 20th century (Eisenhower), but at least one other (Wesley Clark) has tried and was taken seriously, and several people have been seriously discussed as candidates but declined to run: Colin Powell, Douglas MacArthur, and Jack Pershing are the main ones I can think of off the top of my head. Also, several pre-20th-century Presidents were generals with no other political experience: Washington, Jackson, Taylor, and Grant. Plus two Presidents who were both generals and mid-level politicians: Garfield and W. H. Harrison.
Since 1900, the only nominees who didn't hold at least a level-2 position before first nomination were Trump, Eisenhower, Willkie, and Parker. Willkie is pretty similar in profile to Trump (or Perot, for that matter): a wealthy celebrity businessman with an interest in politics.
----
Obama is an interesting example. He did check most of the conventional boxes and isn't an outlier in skipping level 2, but he did kinda speedrun level 3 by serving less than a full term in the Senate before running for the Presidency. We remember him as a conventional establishment politician now, but that's from the perspective of him having been President for eight years and an elder statesman for almost a decade since then. When he first ran for the Presidency, he ran as a charismatic outsider who was offering himself as an alternative to the political-bureaucratic-expert class; this was somewhat disingenuous (apart from the "charismatic" part, which even his detractors generally grant him), as he was still a career politician (albeit a more junior one than most major party nominees) and was well-connected with a great many establishment political types.
That’s a known flaw of democracy. Some of the ways to get around this are Parliaments and other institutions of representative democracy, independent courts, etc. None of it is perfect!
"The reason for this is that it is not possible for millions of people to actually participate in politics and wield power."
Counterpoint: voting in democracy is as much about aggregating information as it is about having people "wield power." The higher the fraction of a population that votes, the better job the voting process does at capturing relevant information.
Corollary: most modern democratic systems are very bad at their jobs because each vote extracts very little information, and mixes together information in non-helpful ways (i.e. "how has [law/practice/policy] affected you" gets conflated with "what are your overall vibes of [party/candidate/movement]).
Viewed through this lens, it's obvious that democratic systems aren't all that great at pre-selecting good courses of action by voting, but do EXTREMELY important work in DE-selecting courses of action that are currently being implemented and have widespread negative impacts. The tendency to ignore huge negative impacts of policies because they aren't a problem *for the people in charge specifically* is one of the MAJOR weaknesses of all "government by the few" systems, and the primary reason they often undergo violent collapse.
It is ironic then that most real-world 2025 "democracies" are government by the few. Like if a judge decides the constitution says this or the human rights treaties say that, the people are powerless about it.
I don't live in the US but I watched the afterlife of the Obergefell SCOTUS with amazement online. Conservatives just stopped arguing against same-sex marriage, they all felt like once the Kings have spoken, the case is lost.
And how strange that Yarvin predicted this around 2008, saying that if one day the SCOTUS decides The President must make all his speeches standing on his head, there is absolutely nothing anyone can do against it. And what can be more kingly?
We have the same thing in the old world. The courts decide that a guy from Zaire who was convicted here of pedophilia must be given asylum not despite that he is a pedophile but precisely because of it: because it is likely that they kill pedophiles in Zaire and that violates human rights. The people absolutely do not want this, but are powerless.
Having thought about it further, I think Obergefell is actually a really excellent case study on what the courts do, and how that interacts with (in a pretty necessary way) the more directly-democratic parts of the U.S. system (and many others like it).
In a democracy, you could very crudely model laws as being derived from opinion polls asking the voting public "do you think X should be allowed?" The public forms an opinion, democracy happens, and a law is put on the books saying either "doing X is explicitly forbidden" or "the freedom to do X is protected by law in the following ways."
Given the wide, wide variety of possible choices of X, and given that language is an imperfect tool, it's inevitable that a nation that passes very many such laws will discover some of them contradict each other: the law both disallows Y and disallows the disallowing of Y for certain Ys (which are quite often *subsets* of the original X rather than entire Xs). To have a coherent body of law, these conflicts need to be resolved *somehow.*
Of course, the best way to do this is simply not to write those conflicts into law in the first place. This is one of the big reasons that basically all modern democracies are "representative democracies;" understanding the existing laws well enough to write new laws in ways that hew as closely as possible to the original public will is a tricky job that certainly requires people specializing in it[1].
But no matter how well they do that, the complexity of the problem makes it inevitable that conflicts between laws will sometimes exist anyway. So you still need a resolution procedure. One could try a very simple procedure like "newer instructions always supersede older instructions in the case of a conflict." But anyone who's worked with computer programs before can probably intuit how many ways that could produce unexpected and undesired results in situations as complex as you find in the real world. Another possibility is to send the situation back to the public for a vote any time a conflict is discovered, but that is very unwieldy. And even if you go with a different solution, you're still going to require some formalized procedure for noticing and reporting that this conflict in the law exists[2].
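To make the "unexpected and undesired results" concrete, here's a toy sketch of a "newest rule wins" resolver. All the rules and facts are invented purely for illustration; the point is just how mechanically the latest rule silently overrides an older, more important one:

```python
# Toy "newest rule wins" conflict resolution.
# Each rule: (year enacted, predicate over an action, allowed?).
# All rules and the scenario below are hypothetical, for illustration only.

rules = [
    (1990, lambda a: a["place"] == "park" and a["motorized"], False),   # no motor vehicles in the park
    (2000, lambda a: a["emergency"], True),                             # emergency vehicles allowed anywhere
    (2010, lambda a: a["place"] == "park" and a["hour"] >= 22, False),  # quiet hours: nothing in the park after 10pm
]

def newest_wins(action):
    verdict = True  # default: permitted
    # Years are unique, so sorting the tuples orders rules by enactment date.
    for _, applies, allowed in sorted(rules):
        if applies(action):
            verdict = allowed  # the newer rule silently overrides any older one
    return verdict

# An ambulance entering the park at 11pm: the 2010 quiet-hours rule,
# being newest, overrides the 2000 emergency exemption.
ambulance_at_night = {"place": "park", "motorized": True, "emergency": True, "hour": 23}
print(newest_wins(ambulance_at_night))  # False -- almost certainly not what the 2010 lawmakers intended
```

The 2010 rule's authors presumably never considered ambulances at all, yet under this procedure their rule wins; that's exactly the kind of silent, unintended override that makes a smarter resolution procedure (i.e. judges) necessary.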
Judges are a pretty reasonable solution to these problems. They are professional law-conflict-resolvers; since laws can be complex and there are a lot of them, it's useful to have people with dedicated training doing this job. They also effectively serve as the official point at which conflicts in the law are flagged for public review: a judge ruling on it gives the voters a direct and explicit answer on where, how and why two existing laws can be regarded to conflict, so that they can fix it (if they don't like the judge's interim resolution). It's not *impossible* to have a complex, democratic nation without them, but it would probably be pretty tricky.
OK, so that's a lot of words before even getting to Obergefell. But with that framework in hand, it's actually really simple. Voters in many different states had been asked (in regards to their specific state) the question "should marriage be allowed between consenting adults of the same sex." Some had answered "yes" while others had answered "no." In itself that is not an issue. The issue happened when somebody discovered a possible conflict between the laws that had registered a "no" and a pre-existing law. Construed loosely (of course IANAL so take with a grain or three of salt) the one law said "if you are a man (woman) you are not allowed to marry a man (woman)." The other law said "laws must apply to everyone equally; specifically they are not allowed to apply differently to people solely on the basis of [list of things of which "sex" was one]." The judges thought about this carefully and said "Aha, see! Clearly saying 'if you are a man you can't marry a man' is applying the law differently based on sex. That law wouldn't restrict a woman from marrying a man, so it can't restrict a man from marrying a man." Of course, there are different ways to parse that, and others (including some judges) disagreed, but that's what formal conflict resolution procedures are for. One law conflicted with another law, and the judges pointed it out and produced a resolution.
Now, the other very important feature of this story is WHICH of the two laws won. See, the second law wasn't just any law; the U.S. system has a special category for the *highest* law, which automatically supersedes other laws in the case of a conflict: I'm talking of course about the Constitution. And the law that included the equal-protection clause was part of the Constitution. As part of its special status, it is significantly harder to change (requires more democratic buy-in) than lesser laws. So when the SCOTUS ruled that no, in fact, you cannot make gay marriage illegal just by passing a law about it, they were *protecting* the results of the democratic process, by ensuring that a law with MORE democratic support won out in a place where it conflicted with a law with LESS. Claiming that doing so is anti-democratic just because some fraction of the population didn't like it is quite silly.
The moral of the story--as, of course, should be the moral of all stories--is that Yarvin is full of shit. The SCOTUS could not rule that the president must give speeches standing on his head, because there is *definitely no conflict of laws* whose resolution would include that determination. Obviously. The implicit motte-and-bailey here is that, well, the SCOTUS could just go crazy and stop even pretending to do its job. This is an ASTOUNDINGLY stupid point for a monarchist to gesture at, because Yarvin's entire philosophy of government *specifically* involves centralizing political authority and removing guardrails and making the "what if someone involved goes crazy and stops doing their job well[3]" problem ENORMOUSLY worse. This exact kind of thing is why I've always found Yarvin interesting as a writer, but never been able to take him remotely seriously as a political thinker.
[1] The other big reason is that when voters say "we want X" at the ballot box, X is often going to be fuzzy and imprecise (it is, after all, spread out across the brains of millions of people) and maybe not directly practical to realize in law. Representatives can (at least in theory) figure out the best laws to write to get voters as much of the *spirit of X* as possible, without running afoul of the constraints of existing law or, y'know, fundamental reality.
[2] If this last part isn't clear, imagine being stopped by a police officer. The officer says "you just did Y, which the law says is illegal." And you say "wait, but this other law says that I can do things which are Z, which this is, I genuinely didn't think it was illegal." A system could just empower the police officer to be a dictator in this case--whatever they say in the moment goes--but this has REALLY dangerous consequences and isn't exactly solving the "unelected official wielding massive authority" problem. So somehow we'd need a way to get this interaction promoted from the level of "local dispute between officer and individual" to "something that the system can apply its resolution procedure to."
[3] Hypothetical now available in dynamic, full-color, yes-its-actually-happening form. Thanks, 2025!
This is quite a substantial tangent that doesn't really interact with my point at all--and indeed largely ignores it. Yes, countries need judicial systems. Yes, the key officials of judicial systems are going to be among the limited group of people wielding disproportionate power, unavoidably, because of the nature of what judicial systems have to do. [1] All of that is (perhaps) germane to your original point about the wielding of power but really *not at all* about my point about democracy as an information-aggregation-system.
Judges in the U.S. system are QUITE DELIBERATELY insulated from the results of direct, popular votes. And when you look at the system in terms of information aggregation, it is really easy to see why. There are two types of information that matter in a courtroom: information about the facts of the specific case, and information about the contents of the law. As it happens, *neither* of those types of information is distributed throughout the population in the way that information about, say, how a recently-implemented economic policy is working in practice is. The only thing a judge could learn from a vote in most cases is what the *popular perception* of the law is[2]. And modern states are supposed to follow the ACTUAL LAW, not whatever the public *thinks the law is today.*
" The people absolutely do not want this, but are powerless."
Case in point. That is literally the WHOLE DAMN POINT of having laws. If the only relevant question was "what do the people want today" why would you ever need to write down or even conceptualize a notion of "law?"
What do you think a law IS, Ogre? It's a rule or set of rules that the state writes down and promises to enforce on people *even when they would prefer that not happen.* Now, if the people in a democratic polity decide after sober reflection that their definition of asylum routinely forces them to do or tolerate things they don't want to do or tolerate--and whether that is broadly true is a question of distributed information--then they are, of course, perfectly free to vote in ways that will get the law changed to be closer to their preferences. They just don't get to make that decision case-by-case, on the fly.
And that's exactly the answer to all these conservative complaints about "unelected judges." If you don't like the way judges are interpreting the instructions that your elected officials left for them, you are PERFECTLY FREE to tell your elected officials to go back and leave clearer instructions. Yarvin was not quite as full of shit in 2008 as he is in 2025, but he was still pretty full of shit then. OF COURSE there was something the people could do about it. They could get a law passed that said "no, obviously the president is not required to give all speeches standing on his head." If the judges rule that law to be unconstitutional, the people can pass a constitutional amendment that says it. If the judges don't respect THAT because they've been possessed by alien mind-control parasites, they can pass a constitutional amendment dissolving the supreme court[3] and replacing it with something else. Funny thing about the U.S. system, it includes a provision for legally doing literally anything at all *provided that thing has enough popular support*.
And Yarvin, of course, knows that and has always known that. He's a scummy, bad-faith troll, and no reasonable person should repeat his arguments. This whole conservative whining about "unelected judges" has always really just been whining about conservatives not being able to do literally whatever they want when they win one election, because that was *never, ever* how the system was supposed to work. I'll give you a pass for this one, not being American. But all those American conservatives who wave the flag and shout about the greatness of the country in the same breath as they spit on its founding principles are beneath contempt.
[1] Though fun fact, it's quite possible to design a judicial system that *doesn't* concentrate power like this. I was recently reading about the ancient Athenian system over on an old ACOUP post. No judges, no lawyers, massive juries. Doesn't seem very workable in a modern society though.
[2] Though anyone who has spent any time around humans (especially on social media) should be well aware how many people *don't actually care* what the law says when it conflicts with their feelings on a case.
[3] Or, y'know, use the impeachment procedure that already exists to remove the justices in favor of non-parasitized jurists.
You are conflating limited government with "government by the few." Not remotely the same thing. Supreme Court justices, for example, cannot pass laws.
> Supreme Court justices, for example, cannot pass laws.
If they can interpret laws to mean whatever, is there a practical difference? This is a lot like saying the administrative agencies make regulations, not laws.
?? Of course there is a practical difference; they can't interpret a law that does not exist. Moreover, if Congress does not like how the Court interprets a law, they can amend it.
And, let's get real: Congress passes hundreds of statutes per year; the Supreme Court decides maybe 70 cases per year (and not all of those involve federal statutes).
Nothing ironic in this. Minorities rule and have always ruled. And greater the population, smaller is the ruling element. Thus said Mosca in Theory of Ruling Class.
> The people absolutely do not want this, but are powerless.
Shouldn't this administration be proof that the people aren't powerless? The one thing democracy actually does in practice is make it extremely easy for people to organize populist revolutions.
The US Constitution can be changed to overrule the SCOTUS and we used to do that. The 11th, 13th, 14th, 16th, 19th and 26th amendments all reversed court rulings wholly or in part.
I've read a number of arguments for why we've collectively given up on that option, none of which were very persuasive. Whatever the reason(s) it's a tragic mistake which the Framers of our constitutional system would be very disappointed with.
> people are surprised when these people's children are ... mentally unstable
What else would you expect of blank slatists?
OK, so you are saying “protest voting” is not a system failure, but the central function of the system?
But then it is like feeding a toddler. “You want this food?” “No.” “You want that food?” “No.” “OK what do you want then?” No coherent answer.
Ouch. And some elections are like an equivalent of the toddler screaming and throwing the plate on the floor. Even if it's not going to help the toddler feel any better, beyond the brief moment of the joy when the plate shatters.
You get it, Viliam, you usually do.
Certain sites are having a lot of fun with another extract from Kamala Harris' book where she makes the comparison of Trump with a Communist dictator.
Now, being charitable, I know that she was trying to get at cults of personality and the modern examples of that being communists (e.g. North Korea) but yeah, it lends itself to a lot of mockery: Trump is a Communist? the Democrats are the party of the oligarchs who will defend democracy?
https://thehill.com/homenews/campaign/5517232-kamala-harris-donald-trump-tyrant-private-sector/
"Harris wrote in her memoir “107 Days,” which was released Tuesday, that she predicted how Trump would act in a second term, but she didn’t expect the level of capitulation from the private sector toward him.
She was pressed on MSNBC’s “The Rachel Maddow Show” about why she didn’t anticipate such action and responded that she believed “titans of industry would be guardrails for our democracy.”
“And one by one by one, they have been silent, they have been … feckless,” Harris said. “It’s not like they’re going to lose their yacht or their house in the Hamptons.”
“Democracy sustains capitalism. Capitalism thrives in a democracy. And, right now, we are dealing with, as I called him at my speech on the Ellipse, a tyrant,” she said, referencing her rally last year on the White House Ellipse in Washington. “We used to compare the strength of our democracy to communist dictators. That’s what we’re dealing with right now in Donald Trump. And these titans of industry are not speaking up,”
Vote Democrat - the party of *real* billionaires! 😁
Democracy is a self-contradiction only if you conflate theory with practice. Democracy as theory describes a concept on which you can model an actual, practical state (also called democracy, or republic, etc.) made of actual people, with all the amendments to the process that lets the real thing scale while mostly preserving the core ideas laid out in theory.
Most people are followers. They would lead only in a few (mostly personal) contexts, and would rather stay away from political initiative. Even political activists are mostly just pushing a course set by big names or ideas, enlightened figures or educational institutions. Democracy is a feel-good demo layer of some emotive public engagement. Human minds are built to be swayed by Cialdini's factors of subconscious influence: reciprocity, authority, liking (familiarity and flattery), scarcity, social proof, group solidarity, and commitment (with consistency).
All right Warhammer 40K fans, it's time for the annual Chaos pot-luck. What are you bringing, and what Chaos god are you dedicating the dish to?
My take on Slaanesh is less about sex and more about a certain careless excess. So I'm bringing Dom Perignon by the case, and making mimosas with it. #YOLO #becauseImworthit
Take my favorite chili recipe, double the spices, and dedicate it to Khorne.
Also, in honor of Tzeentch, make it with seitan meat and vegan cheese.
You can't just do chaos for chaos' sake. You have to _control_ the chaos.
Go with the classics. The Three S's.
Skittles in the M&Ms.
Straight pins in the condom trays.
Sugar in the gas tanks.
I love these three S's. Your own?
They're all known ways of sabotaging different types of parties. No one's put them together and called them "the three Ss", AFAIK.
All I'm seeing is a bunch of sane people who haven't fully embraced the spirit of Chaos. Bringing something easily edible without preparation or acts of devoted worship is so Imperial. I'm going to do something different. Something better. Something more fitting for the worship of the dark gods. Something Khorne would actually enjoy. I'm bringing a live wild boar, pumped full of rage inducing stimulants and abominably treated to create an all consuming rage to be released on the dining table. I will then battle the boar, using nothing but a singular knife to stab the beast to death or die trying. No matter the victor, the room will be splattered full of blood and gore, shattering the plates and upturning the table and chairs in a spectacle befitting a true Khornate sacrifice. Then, after the guests violently froth and chant for 88.8 seconds in worship of the Skull Throne, they shall lunge like savage beasts to feast upon the boar or my own flesh raw in a feat of senseless, primal carnage, all the while gurgling "Blood for the Blood God, Skulls for the Skull Throne!"
In all seriousness, I'd probably bring a couple racks of baby back ribs dedicated to Khorne. Seems simple, delicious and feasible.
I'm bringing rainbow bagels and various spreads, and I'm dedicating it to the Changer of Ways.
I'm bringing corn. And you can guess.
(feels like this joke has probably been done to death, but it's new to me)
I'm not actually manly enough to bring a dish befitting Khorne, which I'm pretty sure is wild elk steaks or buffalo burgers from an animal you've personally hunted.
I am not bringing anything to a potluck that Nurgle would approve of.
For Tzeentch, I'm bringing a few pounds of Bertie Bott's Every Flavour Beans (1). It's just...right on so many levels.
(1) https://www.jellybelly.com/harry-potter-bertie-bott-s-every-flavour-beans/c/633
For Khorne, it's easy. Black pudding (it's got blood in!) Various versions throughout the British Isles (which includes Ireland), this one is a fancy Scottish cook version:
https://www.youtube.com/watch?v=KRmDhXpiUkQ
Bonus: black pudding can be also used as a bludgeoning weapon in the ancient Lancastrian art of Ecky Thump:
https://www.youtube.com/watch?v=sPzqkORgF20
I think Nurgle might approve of a plate of blue cheeses, particularly if it includes the kinds made of unpasteurized milk.
He might bless us one and all. He might bless us so hard the rocketry nerds could calculate our specific impulse values.
Casu Martzu!
https://en.wikipedia.org/wiki/Casu_martzu
But really alcohol is more Nurgle than Slaanesh. It's a product of fermentation (a polite word for rot) that leads to all sorts of spewing and disease.
Maybe alcohol is the equal-opportunity Chaos drug. It gets people fighting mad, impairs their judgements, makes them ill, and leads to YOLO sexytimes. There's something there for each of the gods.
I haven't followed the Kirk murder aftermath a whole lot, and I haven't read all the threads on it around here, so the following may have been already addressed.
This is a genuine question, not one I think I already know the answer to. Apparently Ted Cruz and some other prominent US conservatives have been condemning parts of the government's implicit crackdown on speech. My question is whether any progressives of similar status and influence as Cruz and the relevant right-wing thought leaders have similarly condemned progressive suppression of speech (explicit or implicit) at moments of peak left cancel culture. *Especially* at a similar sensitive moment e.g. right after George Floyd, during the height of Covid, during the British anti-migrant riots last year.
I'm trying to get a sense of whether US conservatives are better people, worse people, or no different than US progressives and British progressives. And this seems one of the clearest metrics that I could possibly use to resolve that question.
A bunch of liberals kind-of fell out with a lot of the progressive movement because of pushing back on cancel culture during its height. Steve Pinker, Nicholas Christakis, Sam Harris, and Matt Yglesias are all examples off the top of my head.
The ACLU is almost too obvious to mention. It's pretty left wing but has fought for (and won) for some extremely not left-wing groups. E.g., in 2012 they sued on behalf of the KKK. While they have been more recently criticized for lessening this universal pro free-speech stance, in response they compiled a list of 2017-2024 instances where they defended speech they don't support: https://www.aclu.org/news/civil-liberties/defending-speech-we-hate I think it's a pretty compelling list, though maybe a list of what they didn't support would be more telling.
The right-wing analog to this is more FIRE than Ted Cruz and it's worth pointing out that FIRE has done a good job of supporting left-wing free speech now that the situation has reversed.
I'll also say that I think a lot of discussion along this line is disguised attempts to justify further crackdowns from one's own side as revenge. (Why ask which side is better instead of just trying to make both sides better? They're both clearly doing poorly.)
When I worked at FIRE, there were National Lawyers Guild members who worked there. As well as Republicans, libertarians, etc. It simply is not right-wing, or left-wing.
I stand corrected.
I think classifying FIRE as right-wing is grossly oversimplified. I've met Greg Lukianoff, and he identified as a liberal Democrat at the time.
Thanks, I didn't know.
As a former ACLU donor who has been quite happy with becoming a FIRE one, I agree both that the ACLU's list above is pretty good and that I also want to know what all they declined to take action about during that period.
https://harpers.org/a-letter-on-justice-and-open-debate/
Justify your premise.
There was no mass wave of firing and witch hunting random observers for comments about Floyd, and Covid is obviously a different scenario where misinformation or rebellion can be actually dangerous.
By the way, if you were actually serious and good faith about wanting to determine which category of people tend to be better, is this really the best possible metric? Does their ideology not inform an important difference already?
Nitpick: I recall quite a few stories about people losing jobs over rude comments about Floyd after his death. I would have to search for examples but I remember hearing about them.
Misinformation can be dangerous in many cases. But it's worth noting that the official information was often wrong during covid. Mostly this was just because it was a new virus and people didn't know what was going on, especially early on[1]. Some of that was also administrative stuff, like not wanting to formally say covid was airborne because that would trigger a bunch of expensive required safety measures. Some was political, as with the handling of the lab leak claims.
Policing misinformation in that context sometimes involved suppressing correct criticisms or defensible disagreements among experts.
[1] I think people had a hard time with this largely because most people, including politicians and journalists, don't really get how science works and just expect science to be some kind of truth oracle, instead of a "well, the best we can tell right now is X, but maybe Y or maybe Z."
> is obviously a different scenario
Usually stylized "(D)ifferent."
So the specific reason it's (D)ifferent is because one pertains to public global health and the other is making fun of a dead podcaster.
And of course you think your cause justifies your actions while your adversaries' does not justify them doing precisely the same.
Can you disagree with anything factually? Covid is a public health emergency, it killed millions of people. It is much worse to be wrong about it than to make fun of a dead podcaster. How does the difference not justify different actions?
*Covid* killed millions of people, but *talking about covid* did not. And that's the reference case if you're contrasting with "making fun of a dead podcaster". It's all talk.
And if you want to call it "misinformation" and invoke second- or third-order effects where one person talking results in some other person dying, both sides get to play that game.
You're simply opining on which censorship you prefer, and if you want to pretend that's a factual disagreement, sure: it's objectively orders of magnitude worse for people I like to be censored than people I don't, regardless of the topic.
Millions of people died from covid. No riot has come close to doing that even when you factor in any kind of stress caused by lost property, and cherry picking (and then possibly misinterpreting) the statement of one bloke will not change facts.
The stakes in the covid misinformation thing aren't the total number who died from covid, but rather the plausible number of additional deaths due to misinformation. Most covid deaths had nothing to do with misinformation about covid online; they were just people who got exposed to covid and ended up getting very ill from it.
Before the vaccine was available, I doubt there were many additional deaths due to misinformation in the sense of "information contradicting public health guidance." People ignored the lockdowns (where they happened) mainly because they didn't wanna, not because they had been convinced that there was no such thing as covid or something. The masking guidance we got was pretty bad, since it required cloth masks but not N95/KN95 masks with a good fit that could actually protect you--people who read "misinformation" saying the cloth masks were not much good were probably better informed.
After the vaccine became available, misinformation probably led to additional deaths by convincing high-risk people not to get vaccinated. Their body, their choice, but if you were a 60 year old 300 lb diabetic who didn't get vaccinated because you thought the vaccine was too risky, you were making a pretty bad decision. OTOH, it's worth considering how the suppression of those claims that the vaccine was dangerous would work out in general.
As best I can tell, the worst things we did during covid were sending covid patients back to nursing homes (like tossing a lit match into a haystack) and long-term school closures, which slowed transmission but also really screwed over a lot of kids. Neither of those were due to misinformation.
The best thing we did was get the vaccines and treatments out quickly. Its availability wasn't affected by misinformation, but its takeup was to some extent. And probably there were some people who tried dumb things to treat it instead of the actually useful stuff we ended up with (Paxlovid, for example) and died as a result--some of that was misinformation, some was simply not knowing about it or not realizing you needed to start it early instead of waiting until you were very ill.
I agree, but it's worth mentioning that the government's attempts at manipulating the public, both by exaggerating the dangers of covid and the efficacy of the vaccine and by trying to censor misinformation, caused a very predictable backlash against vaccines, which killed a lot of people during covid and will kill more through increased vaccine hesitancy generally.
One could argue that public health people aren't responsible for those knock-on effects and the backlash, but that is only a defense if they were committed to telling the truth: not when they are dishonestly trying to manipulate the public. They played stupid games and we all won stupid prizes.
When the establishment flip flopped on public gatherings for BLM, and tried to silence anyone who disagreed, it destroyed the public faith in NPIs and the people of "science" advocating for them. If you think that the NPIs were important and could have saved lives, the "censorship" around Floyd and BLM killed many people.
Was this ever guidance from public health agencies? I recall the open letter from a bunch of public health authorities, and the signers did indeed burn their credibility in order to support their preferred politics, but I don't recall that being anything from, say, CDC or state health departments. But I'm happy to be corrected.
Still not quite as bad as lying about what covid is and how to treat it, but good attempt
There was the Disinformation Governance Board a few years ago that Tulsi Gabbard condemned as being like a "Ministry of Truth."
In an effort to be somewhat helpful, I'll add that if you really do want to make an apples-to-apples comparison of this sort, you need to pick a single, specific, well-known event in which a left-leaning government actually did crack down on speech in a very public way.
The example that leaps to mind for me is the Ottawa Convoy protest. Sadly it's not a *perfect* comparison, both because it happened in Canada not the U.S. and because a lot of the sticking point was over things that went rather beyond mere speech, like threats and assaults of residents[1]. I don't specifically remember hearing about how any left-leaning U.S. officials responded to Trudeau's heavy-handed crackdown, so I couldn't say off the top of my head whether they were appropriately outraged, inappropriately supportive, or merely indifferent. But it would be a fair metric on which to judge them.
[1] Which to be clear, were presumably carried out by *individual* protestors, and shouldn't negate the rights of other people who were at the protests and not doing those things.
Were there concerted efforts to shut down discussion of the convoy? I remember reading a lot of coverage about it from various sources at the time.
Much as I hate giving the nod to Wimbli, they got the point. Street protests are absolutely a form of speech. Historically they're one of the most important forms.
They're also much trickier to navigate because they blend speech with physical action (if nothing else, being present in a specific place) in ways that are hard to disentangle. So too with this one: the Trudeau government weaponized the financial system to shut down the protests, curtailing Canadians' free speech in the process. But on the flip side, a large part of the government's issue with the protests seems to be the things that were happening alongside them that pushed beyond the bounds of mere speech: things like harassment, threats, and occasionally assaults of local residents. It wasn't a simple situation (but I ultimately think the Trudeau administration was more wrong than right).
I wasn’t arguing with the speech part, I was arguing with the “threat” part.
Regarding the Freedom Convoy, it was a mistake to launch an open-ended protest in the capital and let it go on so long. Any state will eventually take action if you give them enough time and provocation.
Fair enough. I do not think highly at all of the convoy protestors, and don't think the government was unreasonable for acting to curtail their more disruptive activities. And in cases where any specific protestors were breaking existing laws (in this or any protest), arresting them on those grounds is quite reasonable.
My issue here is pretty much solely with debanking as a means to apply pressure to end a protest. For ordinary citizens with ordinary means and financial obligations, it's quite the heavy bludgeon to wield. It seems reasonable to use against *organizations* operating illegally, against dangerous fugitives at large, or in response to specific crimes that revolve around the use of those bank accounts (e.g. financial crimes or contracting illegal services). It does not seem reasonable to wield as a means to apply pressure against individuals outside those limits, even if the government believes them guilty of other crimes. If the government has substantial evidence of other crimes those individuals have committed, it can arrest them on those grounds; it doesn't need to debank them.
Given all that, since at least *part* of what the government was accomplishing with debanking in this case was ending a protest, it's a freedom of speech issue (albeit a pretty noncentral one). But I think the noncentral nature of it and the fact that the targets were enormously unsympathetic make it MORE important to speak up against it, not less. Those things make it a perfect candidate to be the thin end of the wedge in normalizing applying such pressure more often and more widely.[1] I note with satisfaction that it has since been ruled unconstitutional.
[1] Of course, sitting here in 2025 it feels pretty ridiculous to worry about the Liberal government waxing authoritarian when we've got the *gestures at everything* happening right next door in the U.S. But just because our neighbors are busily blowing up our sense of what "normal" levels of government coercion look like doesn't actually mean we should change our standards.
On a related note, the fact that the convoy protests were heavily funded by U.S. money does serve as somewhat of a mitigating factor, in my eyes. The inescapable fact of living next door to a larger, richer neighbor with a long history of meddling in other nations' affairs certainly invites some wariness any time they try to push their politics across the border, especially by throwing cash at the problem.
"And this seems one of the clearest metrics that I could possibly use to resolve that question."
Really? Not the reaction to the murder itself? There has already been LOADS of ink spilled about the difference in reaction between high-profile progressives to this murder and that of high-profile Republicans to the Horstman murders.
This thing you have chosen is actually a really crappy metric for a couple of reasons:
1. This crackdown started at a very specific time in response to a very specific event, which makes coordinating to speak out easier.
2. This crackdown was not only (as Anon mentioned) spearheaded by the government, it was led by *one specific leader*. Having a clear target to speak out against ALSO makes coordination easier.
3. The actual mechanisms used to "crack down" are quite different. Different people may have different standards for what methods are and aren't beyond the pale when it comes to things like this.
Basically, to me it looks like you've decided to watch US conservatives swing at a slow toss right over the plate, and then go find a fastball from recent history pitched past the other team in order to argue which is the better batter. I highly doubt that somebody who didn't know what answer they wanted to see would come across this particular algorithm by chance.
I also wonder what decisions you’d make based on this. Imagine that before the last election you knew for certain that:
— If Harris is elected, non-government entities might call for canceling someone. Prominent Democrats would be silent in this case.
— If Trump is elected, he himself might use his power to cancel someone. Prominent Republicans would condemn this. Trump would ignore them.
Who would come out on top in this case?
Were those people fired as a direct consequence of their speech?
There was a ton of criticism of the cancelling of David Shor when it happened in 2020, but Tom Wheeler wasn't going on MSNBC in 2020 and talking about how he was going to threaten local cable affiliates into refusing to carry Fox News if they carried critical coverage of Black Lives Matter.
Isn’t there a big difference between “the government’s implicit crackdown on speech” and a crackdown by someone else than the government?
Well first of all, I mentioned much stronger government crackdowns like in Britain, which no one has addressed so far. I'd be perfectly satisfied with a conclusion like "US progressives are better or no worse people than US conservatives, British progressives are vastly worse people than both groups".
Second, I'm not sure there actually is, in itself. Obviously, jailing or fining someone, or anything else that involves violence-backed coercion (the methods the government tends to use), is vastly worse than mere economic pressure from a private entity or a vast collusion of private entities. But is economic pressure from the government worse than economic pressure from a collusion of private entities? You can argue it's worse because it has more power, but you can also argue it's better because it's democratically elected, and subject to democratic removal. In the vast majority of progressive cancellations, there was no vote where any appreciable subset of the population agreed that "these opinions are from now on no longer allowed," or even that "these are the people who get to decide which opinions are no longer allowed." Instead, the power was entirely exercised either by those who happened to have media/academic/business connections, or by those who were the most brazen, loud, and threatening, and openly unwilling to ever cooperate or tolerate or compromise with anyone different from them. And as a result, many of the cancellable opinions were ones actually held by the majority of people! And if the majority of people didn't like the cancellations, there was...nothing they could really do, aside from "build up your own media and business and academic ecosystem over decades and hope it doesn't get captured by the interests that control the existing ones," which history shows is a total pipe dream. Compared to that, being able to straightforwardly vote out the canceller in a few years is such a vast improvement that it's hard for me to endorse your statement at all.
But if you really want a direct American, government-backed cancellation equivalent, then something like the Obama Education Department college directives about banning "offensive" speech would seem to qualify (I'm actually of the opinion that a huge amount of broader cancel culture is directly downstream of that government action; it certainly fits the dates). Was there a Democrat Cruz-equivalent condemning those at the time?
> I mentioned much stronger government crackdowns like in Britain, which no one has addressed so far
The biggest crackdown on free speech in Britain was the recent designation of a Palestinian activist group, Palestine Action, as a terrorist group. Beyond that, under the Terrorism Act 2000 (as amended in 2019), any display of support for a group so designated is itself illegal, which is why 300 grannies were arrested a few weeks back for protesting the designation of Palestine Action as a terrorist group.
There is, along with that, the arrest of Graham Linehan, the Irish writer, over anti-trans speech; what he said was a clear turn of phrase about punching some trans activists in the balls. Another Irish writer, Sally Rooney of *Normal People* fame, says she can't visit London because she has voiced opposition to the designation of Palestine Action as terrorist. Maybe that's performative of her, maybe not. My Irish wife thinks the police like arresting the Irish anyway.
Online there’s a general concern for one or the other but not both; offline it’s a bit better. At least where I live.
This. I'm not defending UK attitude to free speech but it's much older than wokeness. Ideally we'd have a written constitution but I can't see that happening now, social trust is too weak. So we're stuck with the common law combined with a latitudinarian spirit which is tolerant up to a point, so long as people don't "go too far" whatever that means.
No there wasn’t, to my knowledge. It’s also much harder to notice and mentally model the downstream effects of DOE directives, versus the president doing something. See the baseball analogy, the situations are completely different.
We were also debating the validity of death panels at the time, so there wasn’t a lot of opposition brainpower directed at it or calling attention to it.
Does anyone have advice on the role of networking for matching into a residency program in psychiatry?
My girlfriend is a medical student preparing for the match (hopefully in psychiatry) and I work in IP, so we’re hoping to land in the Bay Area, as it would be ideal for both our careers. I’m wondering if networking with decision-makers at the programs she’s looking at is a viable route for increasing her odds of getting an interview. I know in my realm of law, networking is the go-to way to land a job. But I’m curious if this applies to residency programs. Does anyone have experience with this and have thoughts they can offer?
Much appreciated!
Public service announcement: Plan the timing of your flu shot thoughtfully.
Here’s the relevant info:
-When you get a flu shot, it takes 2 weeks for protection to kick in.
-Flu shot protection is never 100%, even right after the shot — usually more like 50%. That figure applies both to your chance of catching flu, and to your chance of serious illness. So if your protection is 50%, your chance of each is cut in half.
-The shot protection wanes by 10-15% per month. It wanes faster than that for seniors.
-On average, it takes about 10 weeks from the time flu cases begin to rise in an area to the time they peak.
-The month they peak varies, but February is the most common one.
I apologize for not giving links to the factoids above, but if I had held myself to providing links to everything I would not have written this post at all. I got the relevant info a few years ago from CDC and other big sources who had no ax to grind. It is not hard to find — look it up yourself if you have doubts. If you find something that doesn’t support the info above, post it in a reply.
Big picture: If you get your flu shot in September, its effectiveness will have waned to almost nothing by February. If you are a senior it will have waned to almost nothing by December or January. My conclusion is that the conventional advice to get a flu shot in September is lousy.
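The month-by-month arithmetic behind that claim can be sketched out. This is a rough back-of-the-envelope illustration using only the figures from this post (50% initial effectiveness, 10-15 points lost per month), and it assumes the decline is in absolute percentage points, which is one of two possible readings of the waning figure:

```python
# Back-of-envelope sketch: remaining protection from a September flu shot,
# assuming ~50% initial effectiveness and an absolute decline of 10-15
# percentage points per month. These numbers come from the post above,
# not from a primary source.

def protection(initial=50.0, decline_per_month=10.0, months=6):
    """Return effectiveness (in %) for each month after the shot."""
    levels = []
    eff = initial
    for m in range(months + 1):
        levels.append(max(eff, 0.0))  # effectiveness can't go below zero
        eff -= decline_per_month
    return levels

# September shot, checked Sept through the February peak (months 0..5):
print(protection(decline_per_month=10.0))  # hits zero by month 5
print(protection(decline_per_month=15.0))  # hits near-zero by month 3
```

Under either waning rate, a September shot offers little by a February peak, which is the point being made above.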
The system I follow is to keep checking the CDC flu map: https://www.cdc.gov/fluview/surveillance/usmap.html. Right now most states, including mine, are dark green, so they are at the bottom of the “minimal” category. I wait until flu cases in my state reach the lower of the 2 “Low” levels, then get my shot. That date is most often between late October and late November in my state. Seems likely to me that getting the shot then means I will have maximal protection by the time flu prevalence is in the Moderate category, and decent protection at the peak. I actually think it would probably be better to wait a little longer, so as to match maximal protection to the actual peak, and stay protected through the early parts of the downslope — but once I see the levels rising steadily I get a bit uneasy and just go ahead and get the shot.
Another option is to get a flu shot in Sept., and another in, say, December. As far as I know there is no downside to that, though you should probably double-check that with an MD (or just find the answer online). I doubt that insurance would cover a second shot, but I don’t think the flu shot’s expensive.
Does that 50% already include "We guessed the wrong strain for this year" risk? Because if it does, then the effective protection period in the cases where you get any is longer.
The 50% is due to the imperfect protection of all flu shots. In fact, when I was looking at some sites after I posted, I came away with the impression that in most years the maximum protection, when the effect of the vax is strongest, is less than 50%. The low ceiling on flu shot effectiveness is partly due to the fact that they have to be manufactured before we know what strains will dominate in the coming year, but also to the fact that in flu season there are several strains around at once, and no vax can be an ideal match for all of them. I think there are other reasons too, having to do with the way flu shots work (made from killed virus) and other factors, but I don't know what they are.
Can you explain what you mean about how guessing the wrong strain means the effective protection when you get any (any what?) is longer?
As you said, they need to guess which strains will dominate the coming year, and then try to make something effective against them. If in theory they guess totally wrong 50% of the time, so wrong that you get no protection at all, then that would already explain the 50% peak protection average, even if it's actually 100% with a correct guess. That would mean that just calculating when protection gets negligible from a 50% start is wrong: half the time you get no protection anyway, so it doesn't matter when you take it. The other half, you start from 100% and so get a longer window, so you should take it earlier than your calculation implies. (And of course, you don't know in advance which half you're in.) Obviously the reality is not going to be this extreme numerically, but that's what I had in mind. Basically, you don't want to average the protection percentage in a way that weighs all years equally, because the ones with less protection matter less to your decision.
Oh. Does this have a practical implication -- for instance, that people should get the vax following a different approach from the one I recommend (wait til the levels start to climb)?
You would do a similar calculation, just weighing the misguessed years less, therefore taking it a bit earlier. The more variance the peak protection percentage has over the years, the earlier.
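The mixture argument above can be made concrete with a toy calculation. All numbers here are made up purely for illustration: suppose "50% average peak protection" is really half good-match years at 90% and half bad-match years at 10%, each waning by 10 points per month:

```python
# Toy illustration: if the 50% headline figure is an average over
# good-match years (high protection) and bad-match years (near zero),
# late-season protection is better than the flat-50% model implies,
# because the bad years hit zero early and stop mattering.
# All parameters below are invented for illustration.

def expected_protection(month, p_match=0.5,
                        eff_match=90.0, eff_mismatch=10.0,
                        decline=10.0):
    """Expected effectiveness `month` months after the shot."""
    good = max(eff_match - decline * month, 0.0)
    bad = max(eff_mismatch - decline * month, 0.0)
    return p_match * good + (1 - p_match) * bad

# At the time of the shot, the average matches the headline 50% figure:
print(expected_protection(0))   # 50.0
# Four months on, the flat model would say 10%, but the mixture
# still retains 25%, carried entirely by the good-match years:
print(expected_protection(4))   # 25.0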
This is crazy timing for me - my kid just had a checkup, I got offered a flu shot, and I went through an off-the-cuff, epistemic status "uh I think I remember reading something along the lines of" version of this post in the doctor's office. (Basically "vaccines decay, what if this is too early?") I ended up going for it since it seemed like the normal thing to do, and the nurse wasn't particularly worried about the decay thing - although I also assume his entire approach in this situation is "don't let people not vaccinate their kids", so that's not necessarily the word of God. Maybe if I had read this post first I would have said no... then again maybe I would then go on to forget to get him vaccinated at all. That's certainly something to keep in mind; strike while the iron is hot / in the doctor's office!
Anyways, maybe we will in fact get him another in December; I'll have to look into it. Thanks for this post!
Public health workers don't tell the truth; they say the thing they think is most likely to get the public to do whatever it is they think is good for the public. I think they start in August hounding people to get the flu shot ASAP because they assume they are dealing with procrastinating idiots. So their logic is that if they start hounding the sheep several months before flu levels are likely to start going up, the sheep will get around to a flu shot not long before the flu hits.
The problem is, people who are not procrastinators or idiots, and there are lots who are not, go right out and get their flu shot in early September. Lately in my drugstore I've seen elderly couples getting their flu shots, then walking out together smiling, content that they've done what they need to do to stay safe, and I feel so *angry* at the horses' asses who hounded them to hurry up and get the shot.
It's quite alarming that effectiveness wanes so quickly. I did a GPT check of those stats. I think the 10-15% number is high and combines intraseason waning (antibody effectiveness declines) and interseason waning (strain shift). It gave a meta-analysis summary of 7-10% as intraseason only. So this is directionally correct, but maybe not quite as bad as Sept = useless.
I'm pretty sure the figures I got do not include interseason waning. What I read was clearly talking about waning of the effect of that year's shot. And interseason waning is kind of an odd concept. Yeah, last year's flu shot would be less effective than this year's, but that's due to mismatch between vaccine and flu type, not to waning.
But you may be right about amount that the vax wanes per month. Here is a big glob of CDC info: https://www.cdc.gov/mmwr/volumes/70/rr/rr7005a1.htm
Search the thing for "effectiveness" and you'll soon land on a long paragraph full of the relevant stats, based on multiple studies. (At the end of it your head will be a jumble.) Something I'm not clear on is the meaning of percent decline. If the effectiveness starts at 50% and wanes by, say, 10% per month, does that mean after one month the effectiveness is 40% (an absolute drop of 10 points), or 50% minus 10% of 50% (a drop of 5 points), so 45%?
In any case, though, the rate of waning affects *how* bad it is to get your flu shot in September, but does not change the fact that it's bad timing. What's the point of getting immunized 10-12 weeks before flu levels even rise to "low"?
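The two readings of "wanes by 10% per month" asked about above can be spelled out directly; I don't know which one the CDC figures actually intend, so this just shows how the two interpretations diverge from a 50% starting point:

```python
# Two possible readings of "effectiveness wanes by 10% per month",
# starting from 50%. Which one the published figures mean is unclear.

def wane_absolute(eff, points=10.0):
    """Absolute reading: lose 10 percentage points per month (50 -> 40 -> 30)."""
    return max(eff - points, 0.0)

def wane_relative(eff, fraction=0.10):
    """Relative reading: lose 10% of the current value per month (50 -> 45 -> 40.5)."""
    return eff * (1 - fraction)

print(wane_absolute(50.0))  # 40.0
print(wane_relative(50.0))  # 45.0
```

After a few months the gap is large: the absolute reading reaches zero in five months, while the relative reading never quite does.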
Huh, this is a really interesting strategy. Can any epidemiologists chime in?
I developed an AI coding assistant for neovim, and wrote about it and my thoughts on using AI to program: "AI Whiplash, and neovim in the age of AI" https://dlants.me/ai-whiplash.html
>I now have a much better sense of when the AI agent might be successful or not successful at a task, and how much scaffolding it needs to have a decent chance of success.
Yeah, I feel the same, and it's definitely helping me (and I feel it when I slip up and try something that in hindsight was less clearly suited to the AI). I like your "gradient of control" strategy; I'll have to try that - so far I usually just do highly detailed prompts in Aider with no expectation of any useful multi-turn interactions, just my own pure human tweaks on the AI's output.
Somewhat separately: I've started lazily thinking that "what files are relevant to this prompt" would be a nice feature. Since it would run over large volumes of code for every prompt, it should be a very lightweight model - even cheaper than Gemini Flash. But thinking through what files are relevant is enough of a busywork distraction that I'm sure if it were done for me I'd be like "how did I ever manage without this".
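Even without a model, the cheapest version of that idea is just lexical overlap scoring. The sketch below is purely a made-up illustration of the concept (the scoring scheme, tokenizer, and example repo are all invented), not any tool's actual behavior:

```python
# Minimal sketch of "which files are relevant to this prompt":
# rank files by crude keyword overlap with the prompt, as a cheap
# stand-in for the lightweight relevance model described above.
import re

def tokens(text):
    # Naive tokenizer: lowercase identifier-like words of 2+ chars.
    return set(re.findall(r"[a-zA-Z_]\w+", text.lower()))

def rank_files(prompt, files):
    """files: dict of path -> contents. Returns paths, most relevant first."""
    want = tokens(prompt)
    scores = {path: len(want & tokens(body)) for path, body in files.items()}
    return sorted(files, key=lambda p: -scores[p])

# Hypothetical two-file repo:
repo = {
    "auth.py": "def login(user, password): ...",
    "billing.py": "def charge(card, amount): ...",
}
print(rank_files("fix the login password check", repo))  # auth.py ranks first
```

A real version would want embeddings or a small model rather than raw token overlap, but the interface (prompt in, ranked file list out) would look much the same.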
I have a "learn" subagent that is meant to do this, though I've struggled to make it effective. I think when transition points between subagent contexts are handled via messages that the AI composes, it tends to lose the thread. It's not great about capturing all of the relevant context (it often forgets files that it's discovered), or in picking out what the most important pieces are.
Currently I'm having the subagent either mention files or copy snippets of files into a notes.md file... But both strategies can be kind of bad.
Copying snippets is slow and feels like a waste of tokens to just copy existing code, especially when the agent decides to copy large swaths of code. And the parent agent often just ignores some of the file references (similarly, I've struggled to get the agent to use references from a context.md file based on the tasks I'm asking it to do. It seems to just forget they are there).
I've thought of something more mechanistic, like a list of files and line ranges that gets built up via tool calls and automatically gets included in downstream contexts, though that seems like it might degenerate into every downstream agent reading every file.
Maybe the answer is semantic search? I think I'm adding that next and I'll have to see how it changes things
I think this matches some of my own experience, which finds that "context control" and "prompt planning" are crucial elements to success. My own workflow involves a number of custom slash commands, principally:
- /fresh : whenever I start a new session, I run "/fresh path/to/subfolder", which has Claude read the CLAUDE.md file and README.md file in both the target subfolder and the repo root. I include tidbits like "this part of the project is pre-launch, and doesn't require backwards compatibility" or "we're an early-stage startup, and prefer time-to-market over enterprise scaling". This does a lot to guide Claude towards the level of complexity I want
- /draft : I'll often have a conversation with Claude about what I'm planning, identifying various sticking points, etc. Once I'm ready, I issue "/draft DRAFT_MYPLAN.md", and Claude will prepare a step-by-step plan (suitable for an agent to execute).
- /critique : after producing the draft, I run "/critique DRAFT_MYPLAN.md" in a fresh agent window (after running the /fresh command). This often catches a lot of small holes Claude got myopic about; the critique is appended to the bottom of the draft.
- /finalize : also in a fresh session, this command takes the draft + critique and produces a final, step-by-step plan, called "PLAN_MYPLAN.md"
- /implement : also in a fresh session, but I often use Sonnet-1M instead of Opus (Opus is a must for the planning stages). Normally Claude can execute the plan in < 20 minutes; the planning portion typically takes an hour.
- /update : I usually run this in the same session as "/implement"; it's a checklist prior to commit, including running all tests, builds, lints, and then creating a descriptive commit message about everything we've done in this session.
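For anyone curious what these look like under the hood: in Claude Code, custom slash commands are just markdown prompt files under `.claude/commands/` (with `$ARGUMENTS` standing in for whatever you type after the command). A hypothetical sketch of a `/fresh`-style command file, with the project details invented:

```
Read the CLAUDE.md and README.md files in the repo root, and the
CLAUDE.md and README.md files in $ARGUMENTS (if they exist).

Context for this session:
- This part of the project is pre-launch and does not require
  backwards compatibility.
- We are an early-stage startup: prefer time-to-market over
  enterprise scaling.

Summarize what you learned in a few bullet points before we begin.
```

The other commands in the list follow the same pattern: a prompt template that reads or writes the named draft/plan files.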
At that point I do many iterations of debugging and tweaks in the same session; the added context does a lot to help Claude remember all the previous work that was done. After the debugging is finished I run /update again.
It is unquestionable that my speed is at least 5x what it would be without an Agent. I wrote a new feature, entirely by myself, front-end and back-end, in less than 6 weeks, in a language I'VE NEVER USED BEFORE.
What was the feature? I think I've never worked on a feature that took longer than a week, including times when I've had to use languages or tech I had no experience with.
It's really a new product within a larger ecosystem; 20k lines of code (excluding comments). Happy to send you a link after we launch
Sure! I'm interested.
I have written a sub stack. As I am the reincarnation of famous prophet Nostradamus, this is a very momentous occasion for the world and for the universe in fact. Unfortunately, no-one is reading it, because they are blind to the truth.
Don’t be like the other NPCs—read my stack of subs: https://terminalvel0city.substack.com/p/we-are-rapidly-approaching-terminal?r=j8wth&utm_campaign=post&utm_medium=web
There's a large thread on hackernews about a post written by an Iranian software engineer who frequently has his online accounts closed because of US sanctions on Iran: https://gist.github.com/avestura/ce2aa6e55dad783b1aba946161d5fef4
The effect of this is being blocked from some fairly popular engineering platforms like GitHub. Alternative tools/platforms of course exist, but they either don't approach the quality (e.g. free Azure/AWS credits provide access to great hosting services) or the social network size (i.e. audience, networking, etc.), so there is a real cost to this kind of banishment.
The pragmatic case for sanctions that I can see is that they hobble the target country's economy, making it more difficult for the government to execute on its programs. Iranian government programs appear to be rather risky for a lot of people, especially outside its borders. The cost of that, however, is increasing misery for the people ruled by said government.
Culture war aside (if there's any here?), in a more generic sense, are there better ways to impede bad (defecting) governments without exacting such a cost on their citizens? Is the current setup "fair"? Curious about your thoughts
Misery for the population is actually the point of these sanctions. Sanctioning governments often claim to target only the leadership and military, but that's PR. The real, revealed, and often explicitly stated aim is to immiserate the population to such a degree that it causes internal instability and either regime change or civil war (like in Syria). Exempting goods like medicine is ineffective because once a country is under sufficiently strict sanctions, regulated companies like banks will simply refuse to transfer money even to the neighborhood of the country. The risk for them is simply too high.
Source: consulting gigs for large banks where I had to undergo part of the standard training for employees. Once a bank has paid fines, sometimes > 1 billion USD, it becomes very risk averse. Very, very risk averse. Do not make jokes in the comment field of bank transactions ("here bro, for sexual favours"). Every transaction is scanned for keywords (coke, Tehran, etc. - the list is confidential and I haven't seen it) and flagged if necessary (the "know your customer" process). Technically it's just a grep job running on z/OS looking for nasty words.
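Mechanically, that scanning step really is just pattern matching. A minimal sketch, where the watch list and example comments are invented (the real lists are confidential, as noted above):

```python
import re

# Hypothetical watch list; real compliance lists are confidential and
# far longer, and real systems do much more than a word-set lookup.
WATCH_WORDS = {"coke", "tehran", "sanctions"}


def flag_transaction(comment: str) -> set:
    """Return the watch words found in a transfer's comment field, if any."""
    tokens = set(re.findall(r"[a-z]+", comment.lower()))
    return tokens & WATCH_WORDS


print(flag_transaction("here bro, for sexual favours"))  # empty set: not flagged
print(flag_transaction("payment to Tehran office"))      # flagged for review
```

Anything non-empty would get routed to a human reviewer, which is why even joke comments are a bad idea.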
A normal government is in power in its country. Being in power means it gets to distribute harms and benefits to a significant extent. Thus any harm you inflict on the country must either be so minimal that it cannot change behavior, or big enough that the government can choose where it's allocated, and it will almost always allocate it onto its own least powerful citizens.
Basically, the North Korean government is always going to choose that food goes to its soldiers, and you can't really change that. So your options are either give them so much food they let it trickle down, strengthening them, or don't and watch them starve their own citizens. And ignore their accusations that somehow you are responsible for their own bad policies because they have some divine right to trade and aid.
There's a bit in Yes, Minister where the protagonists are dealing with an issue where a visiting African head of state has privately informed them that he's about to make a deliberately inflammatory speech (something about Scottish independence, IIRC), and the Minister (Jim Hacker) is asking his permanent secretary (Sir Humphrey) about their options for responding. The relevant part of the exchange was:
Humphrey: Well, Minister, in practical terms we have the usual six options. One, do nothing. Two, issue a statement deploring the speech. Three, lodge an official protest. Four, cut off aid. Five, break off diplomatic relations; and six, declare war.
Hacker: Which should we do?
Humphrey: Well, if we do nothing we implicitly agree with the speech. Two: if we issue a statement we'll just look foolish. Three: if we lodge a protest it will be ignored. Four: we can't cut off aid because we don't give them any. Five: if we break off diplomatic relations we can't negotiate the oil rig contracts. And six: if we declare war it might just look as though we were over-reacting.
---
I think the overall issue is that for one sovereign state to do stuff specifically targeted against the government, military, or leadership of another, there usually aren't that many good options in the space between "harsh language" and "acts of war". Countries that are friendly with one another to some significant extent tend to have options along the line of the "cutting off aid" option that Humphrey listed, but even those are often hard to target because governments tend to have a lot of tools to control who in their country is left holding the bag.
It's fairly standard to limit export of high-tech weapons systems to countries that are at least nominally friendly, but that only goes so far. I've also heard of stuff like (in the early 2000s) embargoing iPods and other high-end consumer electronics to North Korea as a targeted sanction because only the Kim family and other high officials had access to them anyway, but that's very situation-dependent and probably isn't enough by itself to move the policy needle.
I think your reply articulates something that Erusian also touches on: the citizens of a state are the responsibility of that state, so if another state's sanctions are inflicting suffering on them, why isn't their own state working to prevent it?
This reminds me of a bullying tactic where the bully sets some condition that is impossible to meet and then proceeds to punish the victim. Kind of a "why are you hitting yourself in the face" deal where the bully forces the victim's hands into their face. Not a 1-1 match, but it helps me pinpoint this slick transfer of responsibility that I hadn't noticed before.
While it has been government policy to consider policy to be a matter for ministers and administration to be a matter for officials, the question of administrative policy does cause confusion between the policy of administration and the administration of policy, especially when responsibility for the administration of the policy of administration conflicts or overlaps with the policy of the administration of policy.
>are there better ways to impede bad (defecting) governments without exacting such a cost on their citizens?
Let their people immigrate. This does lead to some possible perverse incentives which go away if you have open borders.
The US tried this with Cuba, and Cuba responded by emptying out its prisons and lunatic asylums and preventing anyone else from leaving.
> if there's any here?
Always. The obvious one is noting that this kind of collateral damage, inflicting disproportionate suffering for marginal gain, is normal practice in conflict between states, and only selectively condemned, and using Gaza as an example.
>…are there better ways to impede bad (defecting) governments without exacting such a cost on their citizens?
Other options are likely to be more invasive (up to & including actual invasion) and at least have substantial chances of exacting even higher costs on the citizens.
The difficulty is in the coercive nature of government; there's no way to reliably exclude only a hostile government from something to which its people have access.
In an odd way, maybe it was for the best that the communists took over Russia. I love my alternate histories, and you have to wonder what would've happened if Hitler had attacked the Tsar's Russia. Odds are good he would've crushed them quickly, I think, maybe going on to win the war. So things would've been much worse if the revolution hadn't happened and Russia hadn't gone totalitarian and industrialized.
What do you think?
Great blog on this question here - https://blog.daviskedrosky.com/p/twilight-imperium. I think it's generally accepted that the Russian economy was growing rapidly in the quarter century up to 1914, although some argue that the growth could not have been sustained by the Tsarist regime.
Given that industrialisation had begun, and was moving quickly, it seems likely to me that growth would have continued after World War One under the unpleasant and inefficient Tsarist regime, just as it did under the much more unpleasant and inefficient communist one. Massive land reforms had already been initiated, and were bearing fruit as the war began - see https://worksinprogress.co/issue/the-road-from-serfdom/. Again, I think it's uncontroversial to say that one of the main reasons the German army and government wanted war in 1914 was their fear that Russia was industrialising fast, and as a consequence the war would be much more difficult for them to win if fought at a later date.
Early Communist Russia was in many ways more fragile and vulnerable than Tsarist Russia had been, and then Stalin made a number of avoidable errors (many of which were also horrific crimes). I think Tsarist Russia could have been a genuine economic rival to the US in the second half of the twentieth century, rather than the Potemkin state the USSR became - though maybe that's stretching the counterfactual too far.
Thanks for the link!
Aside from all the bad things that happened to Russia, a Tsar probably would not have signed a peace treaty with Hitler in exchange for half of Poland, then purged most of the good generals as Stalin did.
The big problem would be "who succeeds Nicholas?" If no successful revolution and he manages to remain on the throne, he would certainly still be alive by the time of the Second World War. But his heir was sickly, and his remaining children were all daughters; ask the Tudors how this works out. So the succession problem would need to be sorted out fast and the likely successor on board with continuing the programme of reform and modernisation.
I think that Nicholas was reform-minded, just not enough. A failed revolution might give him both the impetus and the ability to push through more reforms, and so Russia starts to ramp up the pace of modernisation. Russia with a Tsar still on the throne is likely to be more sympathetic to and allied with the Allied forces of Europe.
Can Nicholas persuade the Eastern European countries that no, honest, if they ally with Russia against Germany this is not a Trojan horse for Russia to swallow them up? That too would be tricky to pull off.
*IF* all this can be done, then we have an at least notionally united Eastern front against Hitler's plans to push into Poland, plus the Western powers again notionally allied with the East at an early enough stage to perhaps slow down Hitler's advances.
I think this misunderstands Nicholas both in terms of his personality and in terms of his capabilities. He was almost uniquely unsuited to lead, being by turns reactionary (especially whenever his wife had his ear) and credulous (whenever she didn't). He was also naively romantic, erecting a bubble around himself and his family and then seemingly genuinely believing that he was beloved by his subjects (minus a few malcontents who needed to be harshly dealt with). He was not at all reform-minded, and in fact resented and tried to undo whatever reforms were pressed upon him (see, e.g., his handling of the Duma). He was also apparently enough of a mark that a ragged charlatan whom everyone else at court despised (Rasputin) somehow wormed his way into the royal household and ended up commanding real power.
There was no way for the Empire to survive his reign. The best we could have hoped for, I think, is that the Kerensky government lasted long enough to see out the end of the war and then consolidate into a parliamentary system with Nicholas as a powerless figurehead. But that is also the one thing that Nicholas absolutely refused to do. So some sort of civil war was more or less inevitable, along with the breakup of the Empire. And the resulting Russian Republic, if it even managed to exist instead of turning into Warlord-era China, would certainly have been less organised and industrialized than the Soviet Union.
It seems to me like the most immediately probable alternative to "Bolsheviks take over Russia" isn't "the Tsardom persists for another several decades." It's "a revolution overthrows the Tsardom and you get a non-Bolshevik group forming the government instead."
In fact, the fall of the Tsardom and the Bolshevik takeover of the Russian government were temporally separate events: it took several years of civil war for the Bolsheviks to establish control of the nation after it was freed from Tsarist rule.
For that matter the Bolsheviks weren't the group that overthrew the Tsardom, and they needed some particular good luck in order to seize nominal charge of the country six months after the last Tsar's abdication.
I don't think so. Stalin's handling of Hitler was comically inept at first - he was completely blindsided by all accounts and had politically purged the officer corps. If it hadn't been for a combination of incredibly bad weather and the sheer difficulty in invading Russia in terms of logistics (plus a ton of outside aid), the Soviets easily could have lost.
I don't see a Tsarist Russia that survives the events of 1917 making those same mistakes, if only because whoever succeeded Nicholas was unlikely to be such a Great Idiot of History as he was and would require no small amount of capable politicking to make it that far.
As for industrialization, I remember reading that when you looked at Soviet industrialization rates, it was basically a continuation of what had been happening under the Tsarist regime in the prior 20-30 years (not a drastic break).
If Stalin isn't in charge of the USSR then Russia is likely more hostile to the Nazis and Molotov-Ribbentrop doesn't get signed. If it's the Tsar then he'd have a strong ideological dislike of revolutionary and populist movements. If it's the Socialist Revolutionary Party (democratic socialists) then they likely see fascism as right wing and a threat to democracy. Without Molotov-Ribbentrop Germany doesn't overrun Poland as quickly and perhaps Russia enters the war to help them.
Remember, World War 2 in Europe was started by Russia and Germany. As much as Russia tries to erase that fact. The Soviets repeatedly chose to ally with the fascists over democratic counteroffers.
It's also not certain Russia would have been worse off economically. Russia was industrializing before WW1. The deindustrialized state Stalin inherited was due to general economic collapse. Russia was less industrialized in 1921 than in 1914. The Tsardom or Socialist Revolutionaries also likely don't spend as much time purging people and are more willing to attract foreign capital and so on. They're also more easily able to invest foreign states in their survival and get other alliances.
There were no democratic counteroffers to the Soviets.
Uhm, what kind of offers are we talking about here? As far as I know, no one other than Nazi Germany offered the Soviet Union a chance to split Europe between them... but apparently you have something different in mind.
The Anglo-French-Soviet talks in the summer of 1939. AJP Taylor has a very good discussion. The point here is that there were no Anglo-French offers, as you observe.
Yes there were. The USSR turned them down repeatedly. In fact this isn't even solely a Stalin/Nazi thing since even in the 1920s they were rejecting British offers and preferring to collaborate with (then democratic) Germany.
Russians sometimes portray themselves as having been forced into Molotov-Ribbentrop. To say they were forced into it by democracies not reaching out. And to claim their conquest of their neighbors was defensive. But this is not in fact the case. It's a Soviet era lie meant to explain away their behavior. The USSR used to jail people for even bringing up Molotov-Ribbentrop.
I take my facts from AJP Taylor. You may think he is unreliable. Chamberlain distrusted Soviets and disliked being dragged into negotiations with them in 39. The Anglo-French proposals were not entirely sincere and tended to be delayed. The Anglo-French even sent a low-powered negotiating team to Moscow. The stumbling block was refusal of Poland to allow Soviet troops and the reluctance of liberal democracies to offend the small countries.
This is not to say that the Soviets were forced to make a pact with the Nazis.
The Nazis probably would have had the same quasi-genocidal intent vs the Slavs, which helped rally the Russian people to defend a regime many of them despised, so that points to the Russians still pulling out the W. However, one must also consider that the Soviets' experience in the Russian Civil War, along with paranoia about being the only communist country with a bunch of capitalist neighbors meant that when the Germans did invade, the Russians at least had the manpower, industrial capacity and other resources in place to (eventually) win. I think a Czarist regime would probably have been as unprepared for the Second WW as they were for the First.
Yes, but the Soviets were also about as unprepared for the Second World War as the Tsarists were for the First. Partly because of an ill-timed Purge, of course. But in spite of Soviet "paranoia", Stalin completely ignored numerous explicit warnings from the British, the Americans, and his own intelligence agencies documenting in detail the German plans and preparations for invasion including IIRC the actual date to within a few days.
I was mainly thinking in terms of manpower. When Barbarossa began, the Soviet Army was not some little peacekeeping force focused on internal security:
“When Germany invaded the Soviet Union in June 1941, in Operation Barbarossa, the Red Army's ground forces had 303 divisions and 22 separate brigades (5.5 million soldiers) including 166 divisions and brigades (2.6 million) garrisoned in the western military districts.[58][59] The Axis forces deployed on the Eastern Front consisted of 181 divisions and 18 brigades (3 million soldiers).”
https://en.m.wikipedia.org/wiki/Red_Army
That's big. That's about twice the size of the Russian Imperial Army at the outbreak of the First World War. I know weight of numbers frequently doesn't tell the tale in industrialized warfare, but I think this at least demonstrates that the Red Army was not wholly unprepared for war the way, say, Britain and the US were. I think the Soviet government simply had more state capacity to marshal available resources for a war effort than the Tsarist regime could have ever really contemplated, and they had certainly made use of it.
The Tsar's army was not "some little peacekeeping force focused on internal security". I do not believe that any Russian army ever has been a "little peacekeeping force focused on internal security". Indeed, given Russia's deep cultural psychology re invasions of the Motherland, I'm pretty sure that proposing to have only a "little peacekeeping force focused on internal security" disqualifies one from governing Russia and would lead to the sort of broad dissent that would need a ridiculously ginormous internal security force to suppress.
It's easier to just give the Russians what they want - an army that they can believe makes Russia unconquerable. Especially by the Germans. Or the French. Or the Swedes. Or the Poles, even. But also forces in the East because they remember the Mongols and the Japanese.
*Whoever* governs Russia in 1939-1941, is going to have a very large and powerful army that is designed to stand up to the German army. The only question is how capably they will use it.
Tsarist Russia started WWI with 1.5 million soldiers under arms, and 5 million reservists that could be mobilized with a bit of time. It had the largest standing army in the world at that time. If they had the largest standing army in 1914, what reason do you have to believe they wouldn't continue to have the largest standing army in 1941? That the Tsars would decide to become less focused on military power after WWI?
The tsarist regime was in a pretty advanced state of decay in 1914. Despite its size, the army wasn't terribly effective. It was poorly led and even more poorly supplied. If the ancien regime had managed to hang on til 1940, it would probably have been in an even more advanced state of decay and the army probably would have been about as effective as the Poles were against the Wehrmacht. Eg, Soviet tank production was outpacing German figures by 1942. Would that have been possible under the tsar? Seems unlikely to me, but of course that's ultimately unknowable. That's why counterfactuals are fun.
Without the Bolshevik Revolution, it's unlikely that Hitler would have risen to power. A lot of German and foreign powers in the interwar period either supported or failed to suppress Hitler specifically because they saw him as a useful bulwark against either the Soviet Union or German communist movements.
The whole world just looks too different in 1939 if there's no communist revolutions and the Tsar is still around.
Agreed on that, if the Communist threat is not a threat and the Bolsheviks are just a bunch of internal revolutionaries that the rest of Europe is happy to let Russia handle, then trying to play Hitler off against them isn't in the cards.
Note that the communists toppled a democratic government, not the Tsar, so that's who would run Russia in that scenario.
The world would have been so fundamentally different that I can't really imagine what would have happened.
To me, the communist revolution in Russia is a leading candidate for The Worst Thing That Ever Happened, so I expect things would have ended up much better.
The Great War ends differently if there isn't a Bolshevik regime to take Russia out of it. Or, at least, Versailles is different.
All sorts of things are different but at the very least we can say the anti-communist fears that drove some fraction of Nazi support might well have been weaker.
So the boring answer is that you change one key ingredient from 24 years earlier and it's likely that a lot has changed. No Bolsheviks, no Hitler Government is what I think the simulation will show.
You've got the causality backward: Germany allowing Lenin to transit through its territory as an objective ally is one of the factors that led to a Bolshevik regime.
Russia was industrializing already in 1914. A Russia under the tsar wouldn't have the mass losses of the Russian civil war or the early 1930s famines, and the Tsar is unlikely to have killed off a huge chunk of his officer corps. It would have performed better against the Nazis, most likely.
The Holodomor was a direct and, if not intended then at least accepted, consequence of rapid Russian industrialization; not so much an economic mistake due to communism. Russia had to buy its factories and know-how from Europe and USA, but did not have enough foreign currency to pay for them. Ukraine was bled dry to pay for all that Western tech in grain. It may have happened the same under a Tsarist Russia because the overall situation (need to industrialize, no dollars) would have been the same.
The Holodomor was very much intended. Many poor countries have managed to industrialize without committing genocide.
The Holodomor can have had both political and economic goals.
https://holodomormuseum.org.ua/en/recognition-of-holodomor-as-genocide-in-the-world/
> Statement of the State Duma. April 2, 2008.
> [..]
>Deputies of the State Duma, honoring the victims of the 1930s famine on the territory of the USSR, strongly condemn the regime that has neglected the lives of people for the achievement of economic and political goals [..]
Tsarist Russia would have had plenty of access to foreign credit markets for borrowing for industrialization (a significant factor in their alliance with France and the UK going into the First World War as well). They would not have been killing millions through starvation to pay for factories through grain expropriation, to say nothing of the fact that they'd probably have higher agricultural production in general without collectivization.
See my other comments in this thread.
Can you show any evidence of Russia chronically lacking foreign currency (or experiencing a balance of payment crisis) before 1914, when it was already rapidly industrializing?
That would support your argument that industrializing post-1917 necessitated extreme measures to acquire foreign currency, no?
I said nothing about before 1914; I'm talking about the Holodomor era. The cost of industrialization depends on its pace, and the Tsarist industrialization effort is no proof that Soviet industrialization could have proceeded at the pace (and cost) it did.
https://holodomormuseum.org.ua/en/recognition-of-holodomor-as-genocide-in-the-world/
> Statement of the State Duma. April 2, 2008.
> [..]
>Deputies of the State Duma, honoring the victims of the 1930s famine on the territory of the USSR, strongly condemn the regime that has neglected the lives of people for the achievement of economic and political goals [..]
This twitter thread by Russia-born historian Kamil Galeev
https://nitter.poast.org/kamilkazani/status/1505247886908424195#m
Specifically:
https://nitter.poast.org/kamilkazani/status/1505248103326052354#m
> The cost of industrialization depends on its pace, and the Tsarist industrialization effort is no proof that Soviet industrialization could have proceeded at the pace (and cost) it did.
I mentioned Russian pre-1914 industrialization precisely because of its rapid pace. Steel production more than doubled between 1890 and 1900. I agree that beyond a certain pace level, and assuming complete disregard for human welfare (e.g. Great Leap forward), widespread famine would be inevitable, but I don't think 1930s Soviet Union exceeded that level.
Here is data on steel production for selected countries between 1890 and 1935 according to Grok:
Year:    1890 ->  1900 ->  1910 ->  1930 ->   1935
Japan:      5 ->    50 ->   150 -> 2,300 ->  4,800
Russia: 1,000 -> 2,000 -> 3,543 -> 5,000 -> 12,500
Canada:    50 ->   300 ->   800 -> 1,200 ->  1,100
Italy:    100 ->   300 ->   500 -> 2,200 ->  2,400
Poland:     — ->     — ->   100 -> 1,400 ->  1,500
You can see that during this period of time, several countries industrialized faster (or close to) than Russia/USSR (Poland 1910-1930; Japan 1910-1930 and maybe 1930-1935; Italy 1910-1930; maybe Russia 1900-1910). This is assuming steel production is a good proxy for industrialization, of course.
I don't see how 1930-1935 Soviet industrialization was "rapid enough" to necessarily lead to Holodomor. None of the countries that industrialized very rapidly around that time, as fast or nearly as fast as the USSR, came close to famine conditions.
So, to get this straight: you trust your own conclusions drawn from Grok data over the Russian Duma and a Russia-born historian, but ask _me_ for evidence?
Yup, this. Russia did well in the Russo-Turkish War, and its defeat in the Russo-Japanese War was due largely to the 1905 revolution. A Russia without revolution would likely have done better in WWII. The Molotov-Ribbentrop pact also likely would not have happened, given Hitler made his plans for eastward expansion clear from the beginning.
^
^
v
>
<
^
^^vv<><>BA
BA BA SELECT START
START
My pathogen update for epidemiological weeks 35-38. All COVID this update.
1. Biobot's wastewater numbers and the CDC's NWSS seem to both indicate that the current wave has peaked. Of course, I thought that was the case 4 weeks ago, but the numbers spiked up again.
https://biobot.io/risk-reports/covid-19-influenza-and-rsv-wastewater-monitoring-in-the-u-s-week-of-september-13-2025/
https://www.cdc.gov/nwss/rv/COVID19-national-data.html
Notice that the CDC's NWSS data shows the current wave is nearly as high as our 2024-25 winter wave, while Biobot shows the current wave as being significantly smaller. I suspect that's because the CDC is normalizing their data against the previous year's numbers. OTOH, ED visits are higher for this wave than the previous one.
BioFire's proprietary syndromic trends tool also shows the current COVID wave is past its peak. And their COVID curves resemble the CDC's NWSS curves. I'm not sure how they derive their data, though. Notice there's a *slight* uptick in RSV and flu.
https://syndromictrends.com/
(Click on the Respiratory Pathogen Trends tab, and use the funnel icon to display the pathogens you're interested in.)
2. And if Biobot's CpmL/PMMOV numbers are a better indicator of transmission rates than the NWSS data, I'd have to conclude that XFG.x variant is more virulent than last winter's XEC.x var, in that there is a higher rate of ED visits this current wave. But this is not reflected in hospitalizations or deaths. Charts here...
https://www.cdc.gov/covid/php/surveillance/index.html
Following this line of reasoning, I conclude that even though XFG.x is getting past our NAbs and we're getting sick, our secondary B-cell and T-cell defenses are doing an excellent job of warding off serious illness. Unfortunately, the age cohort with the highest percentage of ED visits is the 0-4 year olds. Seems like Bobby "Brainworm" Kennedy's restrictions on COVID vaccines for the very young were a bad idea. Of course, SARS2 has a very low mortality rate among the very young, but it's not zero. Some kids who otherwise would have gotten vaccinated may die because of RFK Jr's anti-vaccine crackpottery.
https://public.tableau.com/app/profile/raj.rajnarayanan/viz/Percent_ED_Visits_USA_CDC/Dashboard1
https://www.pbs.org/newshour/health/fda-approves-updated-covid-19-shots-with-some-restrictions-for-kids-and-adults
3. XFG (aka "Stratus") and its descendants were the primary driver of our summer wave. Early on, it looked like it would be an NB.1.8.1 ("Nimbus") wave, but XFG left it in the dust.
https://cov-spectrum.org/explore/United%20States/AllSamples/Past6M/variants?nextcladePangoLineage=XFG*&
https://cov-spectrum.org/explore/United%20States/AllSamples/Past6M/variants?nextcladePangoLineage=NB.1.8.1*&
And although XFG.x is the dominant variant in North America, South America, Europe, and Africa (with some sampling caveats), Asia and Oceania belong to Nimbus. I don't remember a pattern like this since 2021, when the VoCs Beta, Gamma, and Lambda each distinctly dominated a region of the world. Delta, and then Omicron, each became global VoCs and destroyed that pattern of geographical segregation.
https://tinyurl.com/au637yet
(This website defaults to the Australian view, but you can select by country or continent using the widget in the upper right corner.)
/end
Thanks!
Related: the influenza wave in Australia seems to have been pretty harsh this year. People considering vaccinations or boosters should not forget to consider a flu shot, since the Australian wave often indicates how bad things are going to be in the next winter of the northern hemisphere.
And AUS had an unusually high rate of Influenza B infections. I suspect we'll see that same pattern in the US for our upcoming flu season. But our current formulation is supposed to do a moderately OK job protecting against the current A(H1N1), A(H3N2), and B/Victoria strains. In the past, flu vaccines weren't as effective as COVID vaccines at preventing illness, though, and their NAbs fade more quickly than SARS2 NAbs.
Semi-relatedly!
I had Covid for the second time a couple of weeks ago, and, like the first time I had Covid, it was actually pleasant compared to many much worse "regular" head colds, sinus infections, and even bad allergy seasons I've had in the past.
Day zero: Halfway through work, I started feeling "covid-y": inexplicable but moderate fatigue with mild chills and a sense of "I don't feel right."
I started sucking on zinc lozenges, put on a mask, bolted out early to minimize exposure to my coworkers, went home, laid down, and felt quite a bit more covidy six hours later. Fatigue increased to "significant," accompanied by chills and a mild fever (which irritatingly floated around 99 - 100.1 and never spiked higher). I developed a runny nose, but it wasn't ever so bad I couldn't easily sleep, and no sore throat or cough to speak of. No noticeable loss of appetite, smell, or taste. Breathing was easy throughout.
Day two, my test arrived from Amazon. It read positive.
Fever broke on day four, perhaps not entirely coincidentally after I felt well enough to boil and cool water for several rounds of nasal irrigation with a neti pot. After another 24 hours, with symptoms improving on day five, I was back to work on day six, even though I was still testing positive.
On day seven, I felt more or less normal and tested negative, although that's when a persistent mild dry cough arrived which has been going on for a week.
As I said, this round of Covid was so mild that it was actually weirdly pleasant for an illness. If I have to be symptomatic, this is the way I want it to happen! The only thing I wish had gone differently was to begin nasal irrigation plus salt water gargling right at the start of symptoms, as that apparently can reduce the severity and duration of Covid (https://pmc.ncbi.nlm.nih.gov/articles/PMC10312243/).
(And on a side note:
!!!
Why the FUCK isn't that advice paired with all educational materials on Covid care on every website? Why the FUCK did I only find out on day three of my *second* round of Covid that neti-potting and salt water gargling could actually make a significant difference?!?!
!!!)
Interestingly, I had a *much* easier time with real Covid than two friends had getting Pfizer boosters a few weeks before I got sick. One was miserable with body aches for two sleepless days and felt draggy for several days after, and the other was completely debilitated with body aches and fever for three days and was barely starting to feel okay to go out to a restaurant on day four.
Whereas I could have easily muscled through my usual work routine and chores like grocery shopping if necessity demanded it (I actually did slip out to a gas station and grocery store just before closing on day 3, wearing a regular N95 mask, and the task didn't take anything out of me).
I'm not saying anyone should look at our cohort of three and draw any conclusions about what they should do with their own health, but I just want to note that it's interesting how much worse their symptoms were with the vaccine than mine were with the actual illness. I kind of suspect that people's personal experiences with Covid are much more unique to the individual's immune response than anyone wants to admit.
There were at least a few studies that showed netipotting and salt-water gargling could reduce COVID symptoms. But they also indicated that those steps wouldn't shorten the infection time. These steps generally work for all viral respiratory infections. I'm not sure why your doctors or their assistants didn't recommend them, but until recently, COVID has been a potentially serious illness, and it probably never occurred to your medical professionals to treat it as if it were a Rhinovirus.
Do NOT use tap water, though! There's a nasty parasite called *Naegleria fowleri*, a.k.a. the “brain-eating amoeba,” that's killed a few people. Use sterile distilled water or a sterile saline solution. If you must use tap water, bring it to a boil for a few minutes and let it cool before you use it.
> " But they also indicated that those steps wouldn't shorten the infection time."
Right - the link I posted cites a lot of studies, some of which state that nasal irrigation and salt-water gargling do indeed reduce infection time.
I didn't dig into every citation, but it makes sense that the timing and intensity of one intervention might not work as well as a more aggressive intervention, etc.
Yep, as mentioned in my comment, I boiled and cooled the water for my neti pot. I held it at a rolling boil for over five minutes, which is the best practice for preparing tap water for neti pots.
>Why the FUCK isn't that advice paired with all educational materials on Covid care on every website? Why the FUCK did I only find out on day three of my *second* round of Covid that neti-potting and salt water gargling could actually make a significant difference?!?!
I don't know why. But I believe strongly that in the current era people have to research health problems on their own, as a supplement to visits to professionals. One of these days I will put up a post about the story of my spine problems, and how much research and self-advocacy I have had to do to just to get basic information and reasonable interventions. And I live in a town with a famous, revered medical school, several famous hospitals, research centers led by world-famous scientists, etc. Every doctor and treatment center I have used would be triple-A rated if such ratings existed. And I *liked* these doctors OK -- they did not bristle if I wanted to discuss options to the one they suggested. But they did not tell me big picture stuff I needed to know.
You have to research your illness, research treatment options, but also collaborate with your doctors, because whatever the flaws in their treatment recommendations they know way more than you about illness and how the body works.
One of my greatest disappointments escaping from Christian Science into medical science was discovering the amount of effort one needs to put into the latter. I was initially thoroughly cowed by aphorisms like, "Don't confuse your Google search with my six years of medical school" and then only eventually learned that the rejoinder, "don't confuse the 1-hour lecture you had on my condition with my ten years of living with it" is indeed legitimate and equally valid.
Especially if you're working with overworked, indifferent, corporate medical professionals, which I mostly am.
I've mentioned before that opiates up to and including IV Dilaudid have zero noticeable effects on me, a phenomenon which literally every goddamn medical professional I encountered dismissively waved away until I sought out pharmacogenetic testing on my own dime.
$700 later, I finally had paperwork proving I have a mutation of CYP2D6, a gene which metabolizes opiates. Moreover, I had paperwork educating doctors that there is a gene which processes the analgesic effect of opiates, that it has mutations, and that those mutations can dictate the pain relief patients receive from opiates.
That not all patients receive relief from opiates.
And the scariest thing about that revelation is that the most conservative estimates are that 5% of my Caucasian demographic are likewise opiate-resistant mutants (it's less common in other ethnicities, which hover around 2%).
That's a LOT of goddamned patients, but every medical professional I saw was completely surprised by the idea that 1 in 20 white people and 1 in 50 Asian and African people don't receive significant relief from the most powerful pain medications available.
I feel like...that's something they all should have known???
Are the vaccines available in the USA targeted at the strains that are so spreading now?
XFG, which is creating the current wave, is a recombinant (a "hybrid") of two distinct viral lineages: LF.7 and LP.8.1.2. My understanding is that the current mRNA vaccine formulations are keyed to LP.8.1.2 (I don't know about Novavax). The mouse model data I've seen suggests that the current formulation should do a pretty good job against XFG and its immediate descendants, with the advisory that this formulation won't perfectly protect against illness (none of them did, though), but will do a good job preventing serious illness and death. OTOH, even though they're making this CYA statement, the NAbs generated by this formulation should also reduce the chances of getting infected at all.
Likewise, considering the poor vax uptake in the US, it appears our immunity acquired from previous infections and earlier vaccine versions is doing an excellent job of keeping people out of the hospital. For this reason, I think it's unwise to deny these vaccines to the young who may not yet have been exposed to the virus. The vaccines will create a 4-6 month peak in their NAbs, and allow their B cells to key themselves to the current epitopes — and start the process of somatic hypermutation (which is a way our immune system riffs on what it's learned). Also, it will allow the kids' naive T cells to learn about this pathogen. Kids could get this via infection, but vaccines are less risky than a COVID infection.
Looking into the future, we have no idea which variant will kick off the winter wave. It may not have even appeared yet. How well the current formulation will work against the next wave-creating variant is an open question.
About qualifying for the vax: I believe the qualifications are the same as they were early on, during periods when there were limits on who got the vax: you have to be age 65 plus or have one of the conditions on a list of conditions that were thought at one point to increase your risk of being made severely ill by covid. Some of the conditions are things that half the population qualify for -- obesity (I believe that's BMI > 30), mood disorders, and I forget the other mild stuff, but some of it's very common. Should be easy to find out online what currently qualifies people other than age.
Also, in my state, the places giving the vax did not ask for proof that the person had one of these conditions. All you had to do was say "I'm under 65 but have a qualifying condition." They did not even ask what the qualifying condition was, much less ask for proof. And that makes sense. Think what a hassle it would be for them if they did -- figuring out what counts as proof that somebody suffers from depression, for instance. So I'm guessing that drugstores in most states will operate the same way.
I went to CVS 2 weeks ago. they had the latest Covid vaccine but they said only people designated “at risk” were allowed to have it at this time 🤷♂️
Coincidentally I went and got a Covid vaccine today. At a certain point in the process the pharmacist, her eyes looking down at the counter, mumbled, "Will you verify you have a pre-existing condition and are eligible for the Covid vaccine?"
"Sure," I replied.
No more was said about the matter.
Depends on which state you're in. The CDC's ACIP committee makes vaccine recommendations for the U.S. population. These recommendations are advisory, not automatically binding. The Feds, through laws and programs, tie certain things (insurance coverage, vaccine programs, etc.) to those recommendations. But states can ignore them (although insurance companies may use the new restricted ACIP recommendations to deny coverage if you get the vaccine). It's all a tangled mess at the moment. CA, OR, WA, and HI seem to be going their own way. I'm in CA, and I got my flu and COVID shot at Kaiser yesterday. I'm over 65, but whole families were lined up to get their vaccines, and no one seemed to fuss about whether the kids met the new ACIP recommendations.
Just looked at this (https://public.tableau.com/app/profile/raj.rajnarayanan/viz/Percent_ED_Visits_USA_CDC/Dashboard1) table, and noticed that age 0-4 is consistently high on ED visits, even if not always highest, during both surges and lulls. Not sure what to make of it -- maybe mostly that young children get sick suddenly, and often spike high fevers, and of course they have more people monitoring them and worrying about them than any other age group.
This was what stood out to me. If your metric is ED visits, and you notice it is high in small children, and you conclude this has to reflect small children getting more severe illness, you are missing a very major confounder: standard advice is that illness in very small children can go sideways very, very fast, so if something goes wrong you should get professional help immediately. This gets less true as they age, so if parents listen to the standard guidance (at least some do), you would expect to see that pattern even if the base illness rate were identical. (Which it generally won't be; as I understand it, the guidance is correct. But its influence does not require its correctness, and it should lead to toddlers/babies being brought in in "probably fine" cases where older kids likely wouldn't be, and adults definitely wouldn't be.)
Oh dear. I'm mostly a quiet lurker but I've always seen this site as a respite from that kind of drama. I just hope that people who feel like they're losing it get some support outside of social media. The last few years have been insanely stressful. I actually never knew I cared as much about the world as I do. It's obviously deep evolutionary programming.
I just try to remind myself that life a few thousand years ago was mostly much worse.
Is it deep evolutionary programming to care about the world? I think that programming is for caring about your tribe, a Dunbar's number of people. It's education (or indoctrination) making you care about the world, I think.
How is it not obvious that the word "care" is just obscuring a semantic argument? When I see a dying pigeon i want to give it water, or if i am told a village is in need of water I reflexively want to help them. None of these things are in my Dunbar list.
This "urge" to help, whether or not you want to call it "care", is obviously evolutionarily programmed. We even see other animals doing similar acts of compassion or tidiness to keep their world safe and alive.
If it were evolutionarily programmed, the urge would be ubiquitous, no? I think most people are not as altruistic as you. And I don't think I'm running a different definition of "care" than you or OP.
Then, again, do I care? I took the Giving What We Can Pledge, but I feel like I'm not very invested in the state of the world. That's too distant and abstract, hence why I don't think there would be a biological mechanism to make you care for the world.
It's not occurring in every single member of the population, sure, but psychology has a lot of variance and I would say >80% of people would feel the urge to help a dying animal or stranger. Even very heritable traits have much variance.
"Care" is downstream of hormones and endorphins which make you more or less sensitive to events in the world. It's not a biological imperative until you become aware of some event and develop some kind of association with it and then these biological systems kick in, and some people just have a lower threshold for this.
"Dying animal", "stranger", this is very concrete and tangible, "world" is not. "Caring for the world" needs all sorts of intellectual baggage to happen to make you develop that association, the biology won't do that on its own.
I know what you're really trying to say which is something like "Starving / dying / sick people in X poor country are just abstract data points to me that I have been taught I need to respond to, but I do not have the same feelings towards them that I do towards immediately visible things"
And I'm just saying that the urge to respond to that in some positive way is not entirely socially learned, even if you never build up exactly "caring" emotions like you would care for your own child. So I think we agree enough here that I wont continue.
On the other hand, elephants help drowning animals and people will help strangers if they come up to them and ask for it much of the time, so it's clearly contextual and dependent on your present psychological state, hormones, knowledge, etc.
Doesn't mean there is no biological component and this is all some kind of indoctrination scheme to get people to play nice. Actually, it's possible the indoctrination goes in the opposite direction and people get taught to distrust someone who appears to be dying, for whatever reason.
Was it a mistake to designate Antifa as a domestic terrorist organization instead of a FOREIGN one?
What country would you blame?
I guess they could pull the TdA tactic again and blame it on Venezuela.
I don't believe that's a requirement.
I mean, what’s the evidence of it being foreign?
With communists, we could point to a link to the Soviet Union, but there’s no antifa state out there.
Designating "Antifa" as any sort of organization makes about as much sense as designating "goth" as an organization. Or maybe imagine Richard Nixon designating "The Hippies" as a domestic criminal organization back in the day. Which is to say, not much sense at all. There were certainly plenty of hippies committing crimes, but precious little in the way of large-scale organization. And the emergence of multiple clusters of local leadership in the presence of local demand does not a singular organization make.
We've declared "Antifa" to be a terrorist organization. Great. How is that actionable? What can we do that we couldn't have done before?
Curtis Yarvin disagreed, at least in this post in 2021:
https://graymirror.substack.com/p/donald-trump-the-natural-experiment
"Do you remember the dogs of summer? The riots that did $2 billion in damage? The miniature armies in black clothes and black masks? Those were dogs on a leash. They could be turned on and off in one Zoom call. Anything that can be suppressed with “a few key decisions” is not in any way spontaneous. You just read it in the Times—so it must be true, right? And did the people in black show up on the 6th? They did not."
People are getting very caught up on the word "organization". There isn't a national head of antifa, or a membership list. They don't have regular meetings. It would be more technically accurate to say it is a loosely organized terrorist network, often operating in small autonomous cells. Nevertheless, the official designation is useful because it gives law enforcement greater ability to investigate, disrupt, and prosecute terrorist activity. We want to be able to prevent events such as the Alvarado ICE detention center attack, or impromptu gatherings for the purpose of mob violence.
"The official designation is useful because it gives law enforcement greater ability to investigate, disrupt, and prosecute terrorist activity"
This needs unpacking. What, exactly, is law enforcement going to be doing that they couldn't have done just as well last month?
If there's a group of people meeting to plan a violent protest or whatever, the police can already arrest them. If they have a reasonable suspicion of that, they can investigate them to see if there's enough evidence to arrest them. And even without reasonable suspicion, they can do some basic surveillance and information gathering. But if the idea is that they will now be able to say "Aha, we don't need any of that probable cause or reasonable suspicion nonsense because they're *Antifa*, we can just investigate and search and interrogate away!", then it kind of does matter that Antifa doesn't have a membership list or a national organization.
Wimbli is writing in Dale Gribble mode, but I do think there's something to the FBI making this declaration in order to be able to use RICO. If so, then I think (IANAL) Antifa has to be recognized as a criminal organization, which would include a domestic terrorism organization.
RICO would enable harsher penalties, and civil suits. It also enables the USG to look like it's doing something, with the base energizing that brings. It also enables "going after the money", which presumes organized but hidden money is flowing to them. I suspect the FBI has information on this, which it can't disclose for the usual reasons.
It's a pity I can't just link to Popehat to explain why It's Not RICO, Dammit, but it isn't.
In addition to the need for something recognizably a criminal organization, you have to charge people from a short list of crimes explicitly spelled out in the statute, and that list is tailored towards the thing mobsters do, not the things protesters/rioters/activists/"terrorists" do. And you ultimately have to convict them of that crime.
The big advantage of RICO over just charging and convicting them of the underlying crime, is that it unlocks tools to seize their money, which is a big deal if you're going after mobsters, but not so much antifa. AIUI, aside from this month's operational funding, "Antifa's" money is mostly in the pockets of supporters who maintain a safely deniable distance while doling out the funds as needed.
Also, RICO lets private citizens get in the act with lawsuits, but those almost always fail and they aren't even worth trying if the target doesn't have deep and accessible pockets.
Hmm. Well, it still seems possible to me that the FBI could convict specific Antifa members of specific crimes. I just don't know who and what yet, and it may be that FBI is still working on that.
I agree that RICO comes up due to the money angle, and I think part of the difference between our views is that I think FBI knows more about the money flows than you think it does. It seems perfectly plausible for them to not reveal that until they're ready to drop the hammer, and before that, they have to declare Antifa to be domestic terrorists. (This could also be part of FBI's strategy to find out more: declare they're domestic terrorists and see who starts making a lot of phone calls.)
I'll agree private suits aren't likely for the reason you state - unless it turns out Antifa has targeted some very wealthy people. I don't know how likely that is, but I think it's long been safe to say that the median Antifa member has a high incentive to do exactly that.
It's also possible that I'm way off and FBI's not using the RICO route at all. FAIK it's not planning to even do much past this declaration in order to look busy (and again, to see who runs for cover when they turn on that floodlight).
It makes about as much sense as declaring war on racism. Like him or not, the President has a lot of social clout. He's trying to move the culture rightward by planting an ideological flag. This is what administrations always do.
You can bring RICO charges against progressive elites.
Only if you don't expect to ever lose an election again.
I don't think it's an organised, as in "centralised headquarters and organisational structure of one entity", movement. But at the same time, it's not "random three guys in a city somewhere decide they have nothing better to do on Friday night than throw stones at cops" movement either.
It's more like little cells all taking inspiration from the same broad philosophy and co-ordinating for local protests and taking advice and copying tactics etc. from social media sites they frequent.
Should they be called a terrorist organisation? Not quite, not yet. They're not up there with the 70s movements in the USA.
But if parents protesting school boards can be called "domestic terrorist organisations" then hell yeah, what's sauce for the goose is sauce for the gander:
https://www.justice.gov/archives/ag/file/1170061-0/dl?inline
https://www.justice.gov/archives/opa/pr/justice-department-addresses-violent-threats-against-school-officials-and-teachers
"According to the Attorney General’s memorandum, the Justice Department will launch a series of additional efforts in the coming days designed to address the rise in criminal conduct directed toward school personnel. Those efforts are expected to include the creation of a task force, consisting of representatives from the department’s Criminal Division, National Security Division, Civil Rights Division, the Executive Office for U.S. Attorneys, the FBI, the Community Relations Service and the Office of Justice Programs, to determine how federal enforcement tools can be used to prosecute these crimes, and ways to assist state, Tribal, territorial and local law enforcement where threats of violence may not constitute federal crimes."
https://www.congress.gov/committee-report/117th-congress/house-report/485/1
"On September 29, 2021, the NSBA sent a letter to President Biden equating concerned parents voicing their opinion at school board meetings as domestic terrorists and urging the Administration to exercise its authorities under the Patriot Act.[12] The NSBA letter stated that 'malice, violence, and threats' against school officials 'could be the equivalent of a form of domestic terrorism or hate crimes.'[13] The letter cited a number of interactions at school board meetings, the vast majority of which did not involve violence or threats.[14] Notably, as one 'example' of alleged domestic terrorism, the NSBA cited an instance in Loudoun County, Virginia, where a father angrily confronted members at a school board meeting about the heinous sexual assault of his daughter.[15]"
did antifa write that memo? I was of the understanding it was written by the Biden administration, which was broadly hostile to antifa's goals
No they didn't. My view is that "what is sauce for the goose is sauce for the gander". If the liberal/woke side wanted to get ordinary people classed as domestic terrorists, then the folx dressing up in black, chanting slogans, adhering to a broad philosophy, and turning up for street protests, property damage, and altercations with the civil authorities can damn well go in that bucket too.
Biden and Trump are both hostile to ordinary people, and it's not clear why the ones dressing in black should be blamed for the decisions of either administration. I doubt your average antifa protestor viewed the Biden administration as an ally (nor should they!)
The people who could be hurt by this policy are not, by and large, the people who backed the last administration doing likewise, and the exceptions were extraordinarily naive and I tried to warn them.
more the accelerated flow of US arms to fash-aligned regimes abroad
I'd think that'd be one of the 1st things an antifa supportive regime (the US hasn't had one of those since FDR) would stop
Israel too, Likud itself has fascist roots and they have Kahanists in government. And knowing US history and Biden's general tendencies he might've backed someone awful in Latin America though I haven't been following the region that closely.
But yeah, I stand by thinking that US aid to Ukraine should've been conditioned on rooting the Azov Battalion out of the army and taking down those Bandera memorials
Antifa is not an organization, so yes.
Seems like Antifa is a motte-and-bailey of an organization.
Organized enough so that when they don't like something, they can call their people to the streets to do group violence. (And if some rando, such as you or me, tried to call those same people to the streets, they wouldn't respond.)
But also this is all perfectly spontaneous, and if you imagine that there is someone who "calls their people to the streets to do group violence", you must be some kind of conspiracy theorist.
Serious question - are there any publicly identified (not anonymous) individuals who say "yes I am part of antifa"?
That seems like it's always been a prerequisite for taking action against an organization. Like, didn't we know who was in the Weatherman group?
I'm literally looking for someone who has publicly said "The name of my group is Antifa" or "I am part of Antifa". Like I think John Jacobs clearly referred to his group as "Weatherman".
Dwayne Dixon, the guy in your article, is part of Redneck Revolt, which has not been designated as a terrorist organization AFAIK. I'm sure he would describe himself as "against fascists" but would he say "yes, I'm part of Antifa"? And if not, what is the point of designating a group with no known avowed members as a terrorist group?
Ctrl+F "antifa" 0 matches
It's weird how many people mistake "a bunch of individuals have similar interests and thought patterns, and thus often pursue the same goals at the same time" for "these people must all be secretly working together."
It's not isolated to one part of the political spectrum either: I see this same mistake made by many people of many different persuasions in many different contexts.
I don’t think that’s the issue. It really just quite simply isn’t an organization. It’s like calling Christians an organization (and Christians often refer to themselves collectively as ‘the church’ as if there were a collective Christian institution).
It’s an identity or term used to describe a worldview, not an organization.
The Mafia is an organization. You could meaningfully designate it as a terrorist organization. Designating ‘antifa’ seems rather like someone saying ‘we hereby designate gangsters as a terrorist organization.’ Not a specific criminal organization, just ‘gangsters.’
I think you have a point, but keep in mind that we're Americans. We declare war on inanimate objects like drugs, or even on mere concepts like poverty or terrorism. Complaining that the government is misapplying some categorical designation to Antifa goons isn't going to get you anywhere at all.
You can declare war on drugs, but even within that framework I think it would still be pretty extreme to designate all drug dealers as terrorists. "Let's declare war on left-wing agitators" might be stupid framing but at heart it's just a way to declare your priorities; designating some group as a terrorist organization presumably (I'm not actually sure of the details here) actually has legal consequences.
It's the difference between "we're going to put more resources into going after drug dealers" and "because you bought drugs from a dealer, you knowingly gave money to a terrorist organization so we can block all your assets"--the first can remain just a set of misplaced priorities; the second allows the justice system to be much more intrusive and overbearing over acts that don't really deserve it.
That's true, there are important legal ramifications involved. I was just responding to the somewhat semantic point made by others that antifa is not an organization.
It's really hard for people who don't have experience with anarchist and far left activism to believe that Antifa really doesn't have some kind of leader/leadership that "calls people to do group violence".
People who sympathise with antifa tend to belong to other groups: soup kitchens, worker co-ops, anarchist book clubs, vegan jam nights, etc. At some point someone gets word that a right-wing demonstration is going to happen. The word spreads through the grapevine and people start talking about turning up to counter-protest. They turn up to the protest and the police shoot tear gas at the counter-protestors because they sympathise with the right wing. People get mad and kick the tear gas back at the police. The police shoot rubber bullets, people get mad and throw stones. People wear masks because in a small town the far right will turn up to your house in the night and cut your brake cables, etc.
It escalates because US police are not trained or encouraged to de-escalate, and then it's reported in the press that Antifa has caused a riot.
I believe that you are making it appear way more spontaneous than it actually is.
First, if people really spontaneously reacted this way, we would be having protests on every corner, from all kinds of groups, so no one would even notice Antifa because it wouldn't be special at all, just one of many.
Second, once I was at a protest that a few Antifa people wanted to *support*, and the organizers of the protest told them to stay away, because they didn't want to take responsibility for their actions. The Antifa people came as a group, stood apart from the rest, all of them clearly recognizable by wearing masks that no one else had, and then left as a group.
If that is not an organized group, then neither is a group of neo-Nazis who just spontaneously happen to march together and wear the same uniforms.
"The Antifa people came as a group, stood apart from the rest, all of them clearly recognizable by wearing masks that no one else had, and then left as a group."
THOSE SPECIFIC PEOPLE were an "organized group." Very likely, they were all friends in real life who'd done this kind of thing together before. But that story provides literally *zero evidence* that they were taking marching orders from some person they considered an authority, or even a coordinator.
BTW, the "clearly recognizable dress" is just a specific style that's become popular among certain sorts of protestors for a mix of practical and aesthetic reasons. It even has a name: it's called "black bloc:"
https://en.wikipedia.org/wiki/Black_bloc
Noticing people across different protests and locations share that style is no more indicative of a common organization than noticing that people going into Goth clubs in different cities all dress alike. "Goth" is a fashion and subculture, but it would be quite silly to insist it was an organization.
But I thought mask wearing so as not to be identified by the public at large was a terrible, horrible crime and a sign of fascism!
I remain less than convinced about tales of severed brake cables and innocuous soup-kitchen volunteers being teargassed by fascist-sympathiser police.
When people wear masks, they're typically trying to avoid repercussions for their actions, often breaking laws. I'm more worried if I see police doing this than protestors.
Unfortunately you'll remain unconvinced but that's my experience and the experience of my fellow travellers.
When Denver holds a post-Rittenhouse gathering of antifa, yes, it's an organization. As the video on youtube showed, it's an organization of rapists, but still an organization.
The 20 odd people in Salt Lake City that knew about Kirk's assassination are also part of the antifa "organization."
I mean, you've read The Moon is a Harsh Mistress, right? You're familiar with terrorist cells?
You gotta start backing up these assertions, not just slipping them into posts. If there’s evidence of a conspiracy outside of fevered YouTube videos, share it.
I once again implore you to stop making posts like this without proof.
>As the video on youtube showed, it's an organization of rapists, but still an organization
Which video? Please provide a link.
>The 20 odd people in Salt Lake City that knew about Kirk's assassination are also part of the antifa "organization."

Do you have any proof of the multiple statements in this sentence? Please link them.
Your posts are otherwise dragging down the quality of conversation on ACX.
Seconded.
An "organization in which rapes happened" is not the same as an "organization of rapists", and I think you know that. It's arguably even less so if it's an "organization at which rape accusations happened", given how easy it is to make a rape accusation even when there's no rape. And I think you know that too.
I don't care much for Antifa, but portraying things this way makes your argument less persuasive, not more.
> given how easy it is to make a rape accusation even when there's no rape.
All true, but if the members of these orgs broadly endorse messages like "believe all women," it's fun to rub their noses in it.
That video and the Twitter posts inside don't show at all that antifa is an organization of rapists. Could you point to the statements within the video? Is it the "maybe there are rapists here, we don't have enough people"?
Also, what about the 2nd group of claims? Which 20 people knew about the Kirk assassination? Do you have any link?
Please provide sources when making initial claims in your next posts.
Would it be reasonable to say that it's a brand, used by many loosely-affiliated organisations?
I tried to look up the existing list of domestic terrorist organisations for comparison, but apparently no such list exists for the US. I did find one for Canada which includes organisations like the Proud Boys, described here https://www.publicsafety.gc.ca/cnt/ntnl-scrt/cntr-trrrsm/lstd-ntts/crrnt-lstd-ntts-en.aspx#2025-02-20-7
> The Proud Boys is a neo-fascist organization that engages in political violence and was formed in 2016. Members of the group espouse misogynistic, Islamophobic, anti-Semitic, anti-immigrant, and/or white supremacist ideologies and associate with white supremacist groups. The Proud Boys consists of semi-autonomous chapters located in the United States (U.S.), Canada, and internationally. The group and its members have openly encouraged, planned, and conducted violent activities against those they perceive to be opposed to their ideology and political beliefs. The group regularly attends Black Lives Matter (BLM) protests as counter-protesters, often engaging in violence targeting BLM supporters. On January 6, 2021, the Proud Boys played a pivotal role in the insurrection at the U.S. Capitol. Leaders of the group planned their participation by setting out objectives, issuing instructions, and directing members during the insurrection. The leader of the Proud Boys was arrested two days before the insurrection as part of a stated effort by U.S. law enforcement to apprehend individuals who were planning to travel to the D.C. area with intentions to cause violence.
If participating in violence at protests and engaging in "violent activities against those they perceive to be opposed to their ideology and political beliefs" is sufficient to be listed as a terrorist group, then yes, it makes sense to add Antifa.
The right-wing equivalent to "Antifa" in this context is not Proud Boys, which was indeed a formal organization, but the "Patriot movement" (https://en.wikipedia.org/wiki/Patriot_movement), i.e. a network/subculture of smaller, local far-right organizations that co-ordinate in some ways across the regions but don't have a formal structure or leadership, except at most regionally on an ad-hoc basis.
You *can* point to organizational structures of individual organizations and you *can* also point to instances where many/most Patriot movement bigwigs have gathered together to hammer out strategy, but it's still far woozier and less coherent than an actual, hierarchical, centrally led organization would be.
Let's say you're a right-winger. Imagine that a left-wing admin has announced that they're going to treat the "Patriot movement" as a terrorist organization. Would you be satisfied that this is just going to mean they're going to go after actually terroristic organizations and violent actors, or would you be worried that this would mean a crackdown potentially extending even to mainstream political operators?
Thank you! This seems like the rational approach to similar topics: mention a few central examples of the set; highlight the similarities and differences.
That avoids the kinds of general philosophical arguments by which nothing is ever an organization (or everything is). Is it typical for groups in this category to be e.g. registered as non-profits, have a written constitution, keep explicit membership lists, organize regular elections, etc.? If other groups have that, and this one doesn't, that is suspicious. If other groups don't have that, and neither does this one, that's business as usual.
"Would it be reasonable to say that it's a brand, used by many loosely-affiliated organisations?"
Yes, I think that would be quite reasonable.
And that being the case, declaring "antifa" a terrorist organization should be a cut-and-dried violation of the First Amendment. "Adopting the antifa brand" is a matter of speech, not a matter of action. If doing so is enough to get you targeted by the federal government as a terrorist, then your First Amendment rights apparently stop whenever the POTUS dislikes your brand.
And yes, if you wanted to counter with a more clear-cut thought experiment, I still think the First Amendment should apply. A group that goes around saying "we are terrorists," or "we really like doing terrorism," or "terrorism is awesome and we want more of it," or calling themselves America's Best Terrorists should absolutely be protected by the First Amendment. Not until they actually *do terrorism*, or at least make a credible threat or attempt towards doing so, should the government have any ability whatsoever to crack down on them.
Did you...uh, read your own article there, Wimbli? It doesn't say anything about "antifa." But it would be a pretty silly point even if it did.
If somebody's house contains items that are dangerous and illegal to own, that's grounds to arrest and prosecute them in itself. Trying to tie that arrest and prosecution to nebulous claims of belonging to an "antifa cell"[1] not only doesn't ADD anything to it, it makes the prosecution LESS likely to stick, because now the defendant can claim that being investigated at all was a first amendment violation.
[1] Which is a ridiculous phrase because, see above: it's a brand, not an organization. That would be like talking about your buddy being part of a "parkour enthusiasts" cell.
I did some research (again), and I can't find any indication in that link of how the person is related to Antifa or a cell of antifa, or that there are antifa cells with explosives in their houses.
Or is this just meant as an illustration of someone having explosives in their house and going to jail, unrelated to antifa?
There is only a single match between Shaeffer and Antifa, on a sitemap of articles on a local newspaper, and they are not related.
https://www.wgal.com/sitemap_articles_1.xml.gz
1. It's a brand that's used by organizations...and any rando who feels like it. There's absolutely zero control over the term.
2. Groups != organizations. Organizations have structures and direction.
Some of the stuff I'm reading suggests Antifa is a little of both. It is decentralized in a way that is deniable; there are no receipts of money being passed to any part of Antifa by, say, the Biden administration, because there's no President or Treasurer-General of Antifa to accept a large sum from anyone, and any actual funding or material going to an Antifa cell will be small enough to be easily hidden. There seem to be incidents of people hoisting the Antifa flag while doing whatever dumb thing they think of, but also more careful cells capable of discipline and organization. So we can't rule out that it's getting nothing, and we also can't rule out that it's just a confluence of like-minded vandals, without a great deal more resources to spend on tracking them.
That said, the FBI happens to have said resources, so if they say Antifa is more than just a spontaneous unfunded group, there's reason to believe them, as they have the means to find out. Unfortunately, we also can't rule out that they're thumbing the scales in order to move on an organization they don't like.
If Antifa is truly working on organized domestic terrorism, one way for the FBI to prove that is to show the evidence they have, either enough to get a conviction in court, or at least enough to convince the public that they might not want to get involved with them. The problem with -that- is that the FBI probably can't reveal that information without essentially revealing how they got it, which might involve some processes they would like to use to track down other domestic terrorists, or some well-placed moles they would like to not get immediately disappeared. So if the FBI isn't forthcoming with evidence, it might be because they don't have it, but it could be because they're protecting valuable assets.
OTOH, if the FBI is just making up anything it pleases in order to run down a group of people trying to take down the state, there would have to be enough FBI agents involved that at least one of them would leak, so we can probably at least rule out that Antifa is entirely innocent.
Donald Trump is the person claiming that Antifa is a terrorist organization. As far as I know, the FBI hasn’t commented.
https://www.whitehouse.gov/fact-sheets/2025/09/fact-sheet-president-donald-j-trump-designates-antifa-as-a-domestic-terrorist-organization/
https://www.whitehouse.gov/presidential-actions/2025/09/designating-antifa-as-a-domestic-terrorist-organization/
> It's a brand that's used by organizations...and any rando who feels like it. There's absolutely zero control over the term
It has that in common with all sorts of other terrorist groups, right? Like, various people have claimed to be ISIS or Al Qaeda over the years without necessarily having direct traceable links to the main organisation.
It's difficult, because terrorist organisations that act like proper organisations with member lists and centralised command and control don't last very long, they all get arrested or droned. If you want to have a terrorist network that can actually survive and last then you need to act like a decentralised bunch of non-communicating people so that an attack carried out by one part of your network can't possibly be blamed on another part.
I guess the whole point of having "designated terrorist organisations" is to overcome this problem. The government no longer needs to prove any causal link between this particular Aum Shinrikyo member and any particular Aum Shinrikyo attack, they can just declare the whole damn thing illegal and roll it all up.
This is all a bit unfair if you're a peace-loving member of Aum Shinrikyo or Antifa or the Proud Boys who would never dream of doing anything illegal. But it's not anything new. Ideally the peace-loving members will find a new banner to gather under.
This is not true re: Aum Shinrikyo, the leadership was arrested and is in prison/executed but Japan's laws on freedom of religion meant they couldn't ban the sect outright and successor groups survived the terrorist attack by decades.
Could you post that diagram? Who is this guy? do you have any proof?
I like the synergy between "We're the good guys because of our name" and "we're not an organization because we're decentralized."
Considering it's mostly white people and pre-65 blacks, domestic makes sense.
Yeah, sure, but my understanding is that prosecuting them as members of a domestic organization raises First Amendment issues that a foreign org wouldn't. Being loosely affiliated leaderless cells, insofar as it is meaningfully an organization at all, I think it's as plausibly foreign as domestic.
There's nothing in the First Amendment that says it doesn't apply to foreigners or citizens who interact with foreigners. Unfortunately many people, including some so-called "classical liberals," think the constitution goes away if the magic words "foreign influence" or "national security" are mentioned.
Is there such a thing as being both? Like, X% members domestic, Y% foreign, where is the line? Imagine 99%:1% or 1%:99% or 50%:50%.
(If we stretch the meaning a bit, in some sense "humanity" itself is a mostly foreign group, and is responsible for many atrocities...)
Please don't do low-effort political trolls in the ACX comments section
Please don't dismiss discussion as "low-effort political trolls."
In that case, why would it be "as plausibly foreign as domestic" just because it's more legally convenient? Wouldn't that require looking up the legal definitions and seeing which one fits, not just wishing?
No, it wouldn't. That's a reasonable mistake to make, but that's not how the law ACTUALLY works, despite propaganda to the contrary. The standard practice is for prosecutors to charge you with whatever is convenient, however tenuous the connection to your actions, and threaten you with a sentence so absurdly high that you take a plea bargain.
Yes, but setting the precedent now is bound to make future domestic crackdowns smoother.
Thoughts on the Trump admin's Tylenol-Autism announcement today? My takeaway is that the data is inconclusive and conflicting but there is enough concerning data to warrant guidance against taking Tylenol. Especially since there are no downsides to not taking Tylenol (unless it is to reduce fever?).
Would appreciate opinions from anyone with a scientific/medical background on how to interpret this news and associated controversy.
Somebody dug up this Twitter (as it was then) post from 2019:
https://x.com/tylenol/status/1140651187924013065
"TYLENOL®
@tylenol
Congrats on your upcoming addition! SO exciting! It'd be great to touch base real quick since we haven't tested Tylenol to be used during pregnancy (and see what coupons we have for baby!) Call us when you can at 1-877-895-3665, M-F from 9a-5:30pm ET w/ your Twitter handle ❤️
5:03 PM · Jun 17, 2019"
So, uh, about that "we haven't tested Tylenol to be used during pregnancy", dear Tylenol? Any news since? 😁
Gosh darn it, if it *does* turn out to be "we advise not to use it just as a precaution", yet more Cursed With Luck by the Trump administration?
Acetaminophen is in Pregnancy Category C, "Use with caution"; but this is true of the majority of drugs. It essentially means that we don't have high quality human studies, which are almost impossible to get through IRBs.
The properly rational thing to do regarding the announcement is to ignore it. At least ignore it as a source of medical evidence (you can update on it as a Thing that Occurred in the World of Politics). You should not update your beliefs on any supposed Tylenol-autism link a single iota because of an announcement like this. I think the reasoning should be quite clear:
If you are already well-read and familiar with the subject of autism and its potential causes, then you should already be familiar with the data shared by the administration. It's not new evidence, so of course you don't update.
OTOH if you are NOT already well-read and familiar with the subject, then the absolute WORST way to engage is to let a subset of evidence that has very plainly been filtered to produce a particular conclusion be your first look at the subject[1].
If you were somewhere in between, the release of this report is like the dumbest possible version of the Streetlight Fallacy; the street you were on actually had ample ambient light, but now some jerk shined a spotlight on one particular piece of the sidewalk, making it that much harder to check for your keys *anywhere else.*
But I will not even pretend to be surprised when a bunch of the same people who spent years shouting "follow the money" and "motivated conclusions" at any mention of climate science or COVID-related research suddenly decide that this right here is the Gold Standard of Medical Evidence. Because we live in the dumbest timeline.
[1] See: https://www.lesswrong.com/posts/kJiPnaQPiy4p9Eqki/what-evidence-filtered-evidence
It should be VERY obvious that this evidence was filtered because RFK announced *well in advance* what he was going to find. Not Tylenol specifically (I don't think), but that he was going to find THE "cause of autism" in 6 months. Given that there's no logical requirement that there be *one* cause, let alone one findable in 6 months, that's a STAGGERING level of Privileging the Hypothesis right there:
https://www.lesswrong.com/posts/X2AD2LgtKgkRNPj2a/privileging-the-hypothesis
Maybe he already had Tylenol in mind, or maybe he did a really rapid job of conclusion shopping, but either way he definitely was not engaging in anything like truth-seeking behavior.
It doesn’t make much sense as a smokescreen for worse news, as political horse-trading, or even as a grift. I’m left thinking the most likely explanation is that RFK and/or Trump promised to look into autism and this was the closest they got to a scapegoat.
Oh, if you meant the actual medical case—it’s weak. The FDA is not *generally* in the business of changing its recommendations based on one observational study. Especially not for a drug with decades of use before and after the phenomenon it’s accused of causing. Acetaminophen kind of sucks from a general safety perspective, but it’s one of the only fever reducers which isn’t already contraindicated for pregnant women, so the downside is nonzero.
The most parsimonious explanation is that RFK genuinely believes it. Finding the one weird trick or the one weird chemical that's fucked us all up and we just need to do that one weird trick or get rid of that one weird chemical is very popular among the home-remedy crowd.
I'm not surprised RFK would go on a crusade like this. I'm surprised he'd abruptly switch from vaccines to tylenol.
I don't think he's given up on blaming vaccines.
The evidence is inconclusive.
Given the lack of evidence, I think it is cruel to put this out as science. Cruel to all the mothers of children with autism who will now feel somehow responsible because they may have (likely they can't really remember, it was so long ago) taken a common OTC painkiller.
Cremieux, whose original autism post is already linked, also followed up with this: https://www.cremieux.xyz/p/did-the-hhs-just-explain-autism
Tldr: he is not a fan of the announcement.
Insofar as it weakens the credibility of the government's medical guidance, it is good. This could be (delusionally optimistically) a step towards the government only issuing guidance – which can be ignored without legal consequence – instead of preventing the sale of any drugs.
Are there any hypothetical examples of things Trump could do that you wouldn’t find a roundabout way to defend? An honest question.
I'll do you one better, and give you an example of something he DID do: I didn't like his support for "red flag" laws to take guns away from people merely accused of domestic violence.
I sincerely hope Scott writes a post on this, since I trust him over both the AMA and HHS at this point.
My position is that RFK Jr. is the US secretary of health and any shocking "revelations" coming from the US government about medicine should keep that fact forefront in one's mind.
I don't know, I mean, autism was around before paracetamol became a widely-used analgesic. There might be some link, but I think it's more along the lines of the "autism and MMR vaccine" kind of correlation that caused all the trouble around vaccination back in the day (we all remember Dr. Andrew Wakefield, don't we?)
There was ASD in my paternal family long before anything other than aspirin was available over the counter here, and Tylenol (as a brand) isn't sold here (it's called Panadol here).
Just speculating, but could this be a case of: "amateurs discuss politics and medicine, professionals check who shorted Tylenol's shares right before the announcement"?
There are a bunch of articles on the web about this. The evidence is all observational studies, which don’t demonstrate causation. In the Swedish study[1], the authors report an increased incidence of autism in cases where the mother took acetaminophen (the active ingredient in Tylenol), but that the difference disappears when they compare the difference between siblings. The study authors conclude that the correlation is due to an unidentified confounder (that is, some factor that both increases the use of acetaminophen and increases the incidence of autism).
The Trump Administration cites a meta-analysis which lists the Swedish study as two separate studies; one for the overall numbers and one for the sibling analysis.[2] The author weights the first of these two studies more highly, giving it a greater weight in the final result when the studies are combined. In effect, the meta-analysis treats the Swedish study as a whole as evidence that acetaminophen causes autism, despite the fact that the authors of that study reach the opposite conclusion.
The author of the meta-analysis “served in 2023 as a paid expert in a class action lawsuit against acetaminophen manufacturers, in which he testified that there was a link between the medication and autism. A judge ultimately excluded his testimony for being scientifically unsound.”[3]
I’m just an amateur with no particular expertise so I can’t say that acetaminophen is safe, but it seems to me that the Trump Administration announcement is not a reason to worry if you weren’t worried before.
[1] https://jamanetwork.com/journals/jama/fullarticle/2817406
[2] https://ehjournal.biomedcentral.com/articles/10.1186/s12940-025-01208-0
[3] https://www.nbcnews.com/health/health-news/trump-acetaminophen-fda-pregnancy-autism-cause-rcna232909
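To see why the weighting choice matters, here's a toy numerical sketch of fixed-effect inverse-variance pooling, the standard way meta-analyses combine studies. The effect sizes and standard errors below are made up purely for illustration (they are not the actual study numbers): a precise population-level estimate and a less precise sibling-control estimate from the same cohort.

```python
import math

def pool(effects_and_ses):
    """Fixed-effect inverse-variance pooling of log-odds-ratio estimates.
    Each study is weighted by 1/SE^2, so more precise studies dominate."""
    num = sum(e / se**2 for e, se in effects_and_ses)
    den = sum(1 / se**2 for e, se in effects_and_ses)
    return num / den

# Hypothetical numbers, for illustration only:
population = (math.log(1.20), 0.05)  # OR 1.20, small SE (huge sample)
sibling    = (math.log(1.00), 0.10)  # OR 1.00 (no effect), larger SE

# Treating both as independent studies lets the precise population
# estimate dominate, even though the sibling analysis is the one that
# controls for family-level confounding.
pooled_or = math.exp(pool([population, sibling]))
print(round(pooled_or, 2))  # ~1.16: pulled toward the population estimate
```

The point of the sketch: because the population-level arm has the smaller standard error, it gets most of the weight, so entering both arms of one cohort as separate studies effectively launders the confounded estimate back into the pooled result.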
"The study authors conclude that the correlation is due to an unidentified confounder (that is, some factor that both increases the use of acetaminophen and increases the incidence of autism)."
Someone who might be disposed to get sicker during pregnancy (and outside of pregnancy) might also be disposed to have autism or other conditions in the family. It could end up as some weird combination of "if there's the tendency for this condition in your heredity, then when your system is under strain, the influence of this bacterium/virus promotes the expression of it in the embryo".
I guess it makes sense that autistic women might be more sensitive to some feelings during pregnancy, and therefore more likely to visit a doctor, and more likely to end up with a prescription.
Biology is so damn complicated that dismissing this out of hand isn't the right view, but neither is "it's a slam-dunk link!"
I don't know if our astrologer friend is doing anything, but I wouldn't be surprised if eventually it turns out to be some damn thing like "if you take this amount of this medication over this long of a period with this genetic background under these particular conditions when the moon is in Cancer BUT NOT ANY OTHER SIGN, there is a significant raising of risk".
Acetaminophen is actively unsafe, and harmful to people, even when taken at normal doses, and it's an "over the counter" overdose hazard. You can double your aspirin, no problemo. Triple your acetaminophen, and you're at "liver damage" if not outright failure.
There was a specific reason for not using aspirin with children, but we really should have said "tough it out."
Covid19 has shown the deleterious effects of acetaminophen.
I'd be careful on doubling the aspirin. It does have an effect as a blood thinner, and if you have a sensitive stomach, aspirin can irritate it and even cause vomiting (ask me how I know).
Not a problem for infants though, as their liver doesn’t produce the toxin yet when metabolizing it. Not sure about a fetus though.
There are definitely downsides to not taking tylenol: doing without a pain-relief drug when you need one; also, fever reduction protects the fetus. I looked recently at Scott's Pregnancy Intervention post, where he puts avoiding both tylenol and ibuprofen in the first tier, and also at the main metastudy he cites, and came away with the impression that it's quite likely that tylenol use in pregnancy does increase the risk of neurodevelopmental disorders, probably by about 20%. Someone else who posted, who sounded more knowledgeable than me, suggested that some of the damage might occur when parents give babies and small children tylenol.

Asked Cremieux what he thought recently and he does not agree with Scott; he cited a study whose results ran counter. He has a post up now saying the apparent autism increase is an artifact of changed diagnostic criteria (https://www.cremieux.xyz/p/how-to-end-the-autism-epidemic). I think there's no doubt that the changes in criteria led to far more kids getting the diagnosis. There were also policy changes that made it possible for kids with the diagnosis to get more school services, and those changes very likely led to professionals being more liberal in diagnosing kids with autism, in order to get the services for them.
It's like everything: it is possible to have too much of a good thing, be careful what you consume when pregnant, and take care of your general health.
"Autism" is an umbrella term, and as discussed on here many times previously, it can range in severity from "will bash own brains out against wall" to "quirky, anti-social, gifted at maths/STEM". I think folding in Aspergers was not a good idea, but it definitely is all on a spectrum (I wonder if in a few years time we'll have the spectrum split up again into all little sub-categories related to one another but not all identified as 'classic' autism?)
So I would agree that, past the "beat your own brains out" stage, kids who in previous generations would just have been classed as "odd" or "socially awkward" or whatever, are now getting ASD diagnoses. Is this a good or a bad thing? Possibly good, since leaving people who could use support to sink or swim with (for example) "they're just shy, they need to get over it" never helped in the long run.
> kids who in previous generations would just have been classed as "odd" or "socially awkward" or whatever, are now getting ASD diagnoses. Is this a good or a bad thing?
I don't know. My sample size is small, but among the adults I see who were diagnosed as ASD as kids, most complain about the special "help" they got. One, for instance, spent part of each day in a special education classroom, and most of his classmates had intellectual disabilities, or disfiguring problems like cerebral palsy. My guy had an IQ of 140, and no oddities of appearance. He was furious and bewildered to be put with kids he saw as "retards and cripples." Also says that nothing done in that group setting was helpful. I think keeping him out of his regular classroom part of each day made life easier for his regular grade school teachers and the other kids, because he was a constant low-grade classroom management problem. Could not stand to have another kid sitting or standing behind him, refused to do various things because they creeped him out, clowned around with the teacher, played tricks on other kids.
> I wonder if in a few years time we'll have the spectrum split up again
or at least add some official *adjectives* that will make it clear which kind of autism are we talking about
Very interesting posts on Cremieux's substack, thanks for linking.
Over on Twitter, arctotherium says:
"Really sucks that the Trump admin, the one serious force opposed to the Brazilification of America (and the First World more broadly) is also a product of that Brazilification."
https://x.com/arctotherium42/status/1970054667020226625
That should have led to some self-reflection on whether there are some flaws in his worldview, but apparently not. Though I'm sure he'll do a Hanania arc eventually.
TLDR - I'm thinking of stopping my SNRI and curious if anyone has any tips.
I've been on duloxetine since 2017, started as a response to a major depressive episode in the setting of significant situational personal stress. From the start I've been very sensitive to withdrawal with brain zaps and brain fog with missed doses. I successfully tapered off in December 2019, which turned out to be terrible timing and ended up restarting in the midst of my second and last major depressive episode in September 2020. External stressors being self evident.
I've since been on a stable dose of 30mg daily with no further attempts at tapering. My logic has been that I'm generally pretty happy and content, and my only side effect is a minimal decrease in libido, so why mess with success? The flip side is that I still think it is weird to flood my neurology with this chemical for the rest of my life. I take the brain zaps to be good evidence that it is doing something to my neurons, whether or not that something is regulating my mood.
Which brings me to today. I've been getting my meds for the last few years from CostPlus, an online generic-only pharmacy. The batch they sent me a few weeks ago is clearly deficient in some way, whether that is in pharm quality or actually weight per tab. I had 3 days of brain fog and zaps, then doubled my dosage for a week which effectively treated my side effects but obviously isn't a great long term plan. So I'm back since Saturday on one tab, back to the fog and zaps, and thinking since I've somehow ended up in an unintentional taper I might as well taper off entirely and see how it goes. I'm in a good place with no exceptional stressors.
So all that to say, good idea / bad idea? Advice or tips?
I was on the same one for about 10 months, had great results, got off and while I was foggy for a few weeks, I feel great now. I'd say stick through it and try to go for at least a month and a half and see where you're at.
I've been on duloxetine on and off since 2013. I think it's way hard to get off of, harder than SSRIs, and I'm not even someone who particularly minds a mild level of brain shocks. But even small decreases result in some shock for me, though that only lasts a couple of days. If you really want to get off, I'd say go from 30 once per day to 20, if you can get your hands on it. Otherwise just start going 30 for 2 out of 3 days, then once you feel comfortable do 1 out of 2 days, etc. Even better if you have 20mg capsules and can use those in this way as well, as appropriate. I even had one capsule that had 6 mini pills inside, so I was able to keep tapering off by cutting open the pill and taking fewer of those. But other capsules I've had had far more than 6, such that it'd be difficult to measure out.
But my warning is: once you start down the dark path, forever will it dominate your destiny. Well, maybe not, but for me that's been the case. I've successfully gotten off cymbalta, and past all withdrawal and stayed that way for at least 3 months, only to find that I effectively wanted to just stop doing anything, and I mean almost anything (other than sleep). I kind of felt almost like I was waiting to die. I was never that depressed before I started taking those meds. I kept thinking this would get better the longer I was off of the meds, but it actually got worse. I can't really say whether this was because:
1. I got addicted to cymbalta and now can't do without it
2. Seeing how good life can feel on cymbalta made life without it seem all the worse by comparison
3. My depression got worse over the years without my knowing it, because it was masked by the drugs, such that the only thing keeping me afloat was the cymbalta
I'm no lover of antidepressants, but it does seem possible that what's happening is that the drug was helping you, and that without it you feel terrible. If you still feel bad on it, why not try a different one? The MAOI's are the most effective ones, I think. There's a site by a bonafide world expert called Psychotropical that gives lots of good info.
Also, I'm a psychologist, and while I don't prescribe drugs I do see lots of people go on antidepressants and off them. What I've seen when people come off one is that some feel no different, and some slide back into depression over the next few months. I have never seen anything that looks like addiction -- like someone who's hooked for life on the stuff, because it reset their pleasure centers or something and now the drug is required to make them feel even halfway decent. I'm not saying that can never happen, but it is absolutely not the norm. And jeez, even with bona fide addictions like nicotine or caffeine or heroin people eventually go back to baseline -- that is, off the drug they feel the same way they did before ever using the drug.
I've tried basically every single SSRI and SNRI. They all work for my depression but come with the same sexual side effect of making it hard to orgasm. I don't think I've ever tried an MAOI however. I don't even think any psychiatrist has ever recommended one to me.
I have known a couple people who found that drug holidays worked decently to get around this side effect. It's a kind of hacking. You experiment by stopping the drug, and keep testing your sexual function. There is a decent chance that you will find a sweet spot where you have no withdrawal symptoms yet and your ability to orgasm has fully returned. I remember that for one person the sweet spot began after 24 hours. They took a drug holiday every weekend. I recommend trying that first. You are lucky to be someone whose depression is treated by these drugs -- many get little relief from them.
OK, but if you want to try switching to something else, here are your options:
-Wellbutrin. Most people have no sexual side effects. Is commonly prescribed. Kind of odd that you haven't been on it, actually.
-There are a couple new drugs that people say have no side effects: vilazodone (Viibryd) and vortioxetine (Trintellix). I know nothing whatever about them, but you can look them up and read about effectiveness, etc. They are said to be expensive. Insurance will sometimes cover an expensive drug if you have had no success with the ordinary ones. I don't know whether your intolerable side effect from SSRIs and SNRIs counts as no success, though. There's also a drug called mirtazapine, but I believe it's in the same family as diphenhydramine (Benadryl). Everyone I've seen taking it stops because it makes them so drowsy.
-An MAOI called Selegiline has few or no sexual side effects. You can get it as a pill or a patch. The patch is probably expensive.
-Adderall: Is used by some for treatment-resistant depression. Docs are not crazy about doing it, though, because it's a controlled substance. And it has some minor sexual side effects of its own.
-Transcranial magnetic stimulation: I don't know much about this, but I'm sure there's info out there. Use google scholar or AI to research its effectiveness.
PS. In my reply to Wormwood, below, I give info about the need to avoid certain foods when taking an MAOI (though I believe that if you use the patch the precautions are not needed).
Thanks. I do effectively take a drug holiday every other day. I'm usually on 30mg every other day. But it doesn't feel like a holiday, it just feels like I'm on 15mg daily. But that's the best balance I've found between sex and happiness
I've tried Wellbutrin, but it has no effect for me.
I haven't tried any of the others you mentioned. I know someone in the rationalist community who did transcranial magnetic stimulation. Sounds scary, she said she lost years of memory of her life.
Yes, because it's the one that kills you if you eat cheese. Doctors understandably don't want to give depressed and suicidal people a drug that kills you if you eat the wrong things. But if you think you can handle it, they might prescribe it to you if you ask nicely.
Your post is worse than a silly irrelevant one. It has negative value. Every point you make here is false. If you're going to post something that purports to be medical info, look up the things you think are true to find out whether they are urban myths or out of date info.
Here are the facts about diet and MAOI's:
-MAOI's make people slow to clear tyramine, which is a substance found in large quantities in fermented foods, and a sprinkling of foods that are not fermented. If it builds up too high people's blood pressure rises so high that it's dangerous, and could even kill them.
-In the 50's and 60's, when this drug began to be used, it was a lot of work to avoid tyramine, because refrigeration was much less reliable, so lots of things that were not fermented foods spoiled slightly and had a fair amount of tyramine. Tyramine levels of pretty much every food in the US and Europe have recently been rechecked, and the rechecks also used better tech than the original tests. Way fewer foods now have enough tyramine to worry about.
-Doctors do not worry about depressed people on MAOI's committing suicide via tyramine ingestion. It's a very uncertain method, and also painful, because before you reach the point of being in grave danger you get a terrible headache. What doctors worry about is people being careless about tyramine and inadvertently having a blood pressure crisis. They also worry about people committing suicide by overdosing on the MAOI's, but MAOI's are not unique in that respect -- overdosing on various other antidepressants can also be lethal.
-Hypertensive crises are uncommon now. I know a psychiatrist who regularly prescribes MAOI's. He has been in practice for at least 15 years, and told me that in that time he has had one patient hospitalized for a hypertensive crisis.
-Asking a psychiatrist nicely will not get you an MAOI. Most do not prescribe them at all, and refuse if you ask them. They are an old drug, and out of fashion, and most docs' training did not even cover them. So most of these docs do not have up-to-date info on their effectiveness and risks. But if you want a doc who is open to using MAOI's, I can tell you how to find one.
-There is a bona fide world expert with a web site called Psychotropical. I recommend you go read what is on his site before spewing any more of your opinions about this class of drugs.
Someone who actually takes duloxetine here. Sure, why the hell not? It's no venlafaxine, withdrawal isn't going to get you killed. Duloxetine withdrawal is so benign that I've been able to quit it cold turkey on multiple occasions, but if you have bad withdrawal effects, there isn't any loss in tapering extremely slowly. And then a month or two later, when you find yourself in excruciating pain, depressed, and suicidal, you can start taking it again, you'll be just fine. It really does work like magic!
Psychologist here. Taper *very* slowly, much more slowly than the schedule that online sources recommend to doctors. I recommend this because I've watched many people suffer through head zaps and other highly unpleasant withdrawal symptoms while following the recommended decrease schedule. Also read a piece of research on standard vs. slow tapers that found super-slow tapers worked better. So I'd say, take something like 6 mos. If the stuff comes in capsules full of a variety of different-colored tiny balls, it is still possible to taper so that your remaining steps are 7/8 capsule, 3/4 capsule etc. If that's the situation let me know and I'll tell you how. If you start feeling godawful, see a professional.
Try to add something good to your life while you subtract the antidepressant. Exercise? D&D game? Working your way through some long piece of fiction like the Dark Tower series?
Good advice, thanks
look into hyperbolic tapering
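(For anyone unfamiliar with the term: hyperbolic tapering cuts the dose so that each step produces a roughly equal drop in modeled receptor occupancy, which works out to ever-smaller absolute reductions as you approach zero. A toy sketch of the arithmetic, assuming a simple dose/(dose+K) saturation curve; K is an arbitrary placeholder, not a real constant for duloxetine or any drug, and none of this is dosing advice:)

```python
def hyperbolic_taper(start_dose_mg, k_mg=5.0, steps=6):
    """Dose schedule giving equal drops in *modeled* receptor occupancy.

    Occupancy is modeled as dose / (dose + K), a simple saturation curve.
    K (the dose giving 50% occupancy) is a made-up placeholder here.
    """
    start_occ = start_dose_mg / (start_dose_mg + k_mg)
    schedule = []
    for i in range(steps + 1):
        occ = start_occ * (1 - i / steps)       # equal occupancy steps down to zero
        dose = k_mg * occ / (1 - occ)           # invert occ = d/(d+K) for the dose
        schedule.append(round(dose, 2))
    return schedule
```

The shape is the point: the early cuts are large and the final cuts tiny, whereas a linear taper (equal mg steps) front-loads the easy part and makes the end abrupt.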
As a new blogger with my own platform for the first time since ~high school[1], I've started consciously thinking about writing styles. I'm curious if other writers here have consciously thought about writing styles, and what they've done to learn/perfect them.
Some questions I'm considering:
1. Thomas and Turner (in Clear and Simple as the Truth) describe different (mature) writing styles as making a principled choice on a small number of nontrivial central issues (for example: truth, scene, presentation, cast, and the relation between thought and language). What principled choices have you made in developing your own style?
2. Reading level: Most articles I write are intended for a college-graduate audience, and the various readability checkers I use online agree (my typical blogpost is readable for 12th graders to ~15th graders, i.e. up to third-year college for non-American readers). I think this is a perfectly fine reading level, since I expect almost all my readers to be college graduates or to have equivalently high reading levels (I do have non-Anglophone readers, but I assume they can just use their favorite translation tool). However, the vast majority of non-specialist blogs I read online, including ones on highly intellectual topics, tend to go for a lower reading level. Presumably this is a deliberate choice! So are there significant benefits to going for a ~9th-11th grade reading level that I'm currently discounting?
3. How important is it to develop a natural style that's "my own", vs. writing in whichever style is the best fit for whatever topic I want to write about? I.e., should I go for depth or range, when it comes to style? Intuitively an "anthropics for babies" post ought to have a very different writing style than a post on the game theory of war.
[1] Not including various anon blogs from ~15 years ago, I've only written on social media and public forums like LessWrong online, until my most recent substack.
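On question 2, one concrete detail: the online readability checkers in question are mostly variants of the Flesch-Kincaid grade formula, which looks only at sentence length and syllables per word. A minimal sketch, where the regex syllable counter is a crude stand-in for a real one:

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59.
    Syllables are estimated by counting vowel groups, which is crude."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words)
            - 15.59)
```

Worth knowing when interpreting the scores: the formula rewards short sentences and short words, and nothing else, so "grade level" is a proxy for surface complexity rather than conceptual difficulty.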
How much writing makes a "writer"? I've got no published works and am still struggling with the "show up" phase, but boy I enjoy it anyway.
I would say "your own style" is naturally the easiest thing to write, because if it isn't, it isn't actually your own style. So the choice is is between writing in your own style, or trying to suppress your own style in order to copy someone else's. The only reason I see to try to suppress your own style is if you think it reads badly when you read it back to yourself. In which case, the problem is your style is not up to your standards yet; so find some things that read well, compare them to your own writing, and see what it is they're doing that you aren't.
You're only going to get the audience you write for. If you write over people's heads, they won't read it. If you write beneath their egos, they won't read that either.
> I would say "your own style" is naturally the easiest thing to write, because if it isn't, it isn't actually your own style.
I think it's more complicated than this. When you talk, you talk differently to different audiences, in different situations. Each of those styles is yours, but there are choices. It is similar with writing.
For example, are you writing in a "school essay" style? That's probably the worst choice that people frequently make, and yet it comes to them naturally, because that's what they spent a lot of time practicing at school. Unlearning this is already half of success.
People talk differently to their friends, to children, to unknown (and potentially hostile) audience of adults, etc. When you write a blog, which of these audiences do you instinctively have in mind? It's not just about style, but also content: how difficult words can I use, do I have to explain concepts before using them, etc.
(Even more complicated, when you talk to children, it is different when you explain a school lesson, and when you read a bedtime story.)
Generally, all those things should jump out at you if you read your work back; it will feel appropriate, or not, and if not, then you'll want to rewrite it (and figure out why not).
If it reads well to you, it will read well to other people. Which other people might be tricky, but they're out there.
(Don't write in a school essay style. You knew it sucked then too.)
I don't know that I have a direct answer for these questions, except maybe the last one. I do notice that I have different 'personas' that have different styles depending on what kind of piece I'm writing. My "Tech Things" series is much closer to, say, Matt Levine (and in some ways is explicitly patterned off Levine), while some of my AI explainers sound like Chris Olah or Andrej Karpathy. Implicitly I think I start from the question of 'who would write this piece' and then work from there. I don't do this intentionally, mind you -- I'm not really writing *for* anyone else, this is just often the fastest and most natural way for me to write
Thank you, appreciate your thoughts!
I often see advice on the internet like this, and it always feels very dogmatic to me! Reducing my vocabulary level further for my essays, as if I'm talking to an intelligent non-native English speaker, is a perfectly doable action, but I just don't think the benefits are very high, now that translation services are pretty good.
I also don't think there's a clear connection between simplicity of reading level and clarity of thought, if anything I'd guess the correlation is weakly negative.
(I'll also note that you clearly aren't following your own advice, with words like "sterling" and "commentariat")
Attempted translation:
I see tips like this on the web a lot. But they seem kind of small-minded to me! I could use short words in my writing. Like I'm talking to a smart dude who is just learning English. I could do that, but I’m not sure how much it helps. You can always use Google to translate!
I also don't think using simple words always makes your ideas clearer. If anything, I'd guess it might be the opposite.
(I'll also point out that you're not following your own tips, since you used fancy words like "sterling" and "commentariat")
I think 85-95% of the message gets across, but I am in fact sacrificing precision and clarity for accessibility.
dogmatic conveys more of what I want to say than "small-minded", "Reducing my vocabulary level further" conveys a more precise thing that I'm giving up than "I could use short words in my writing"
"if anything I'd guess the correlation is weakly negative" is communicating a bunch of nuance that "If anything, I'd guess it might be the opposite" is not.
And so forth
I think the *average person* would benefit from simplifying their writing.
At the same time, general advice like "use the smallest words you possibly can to convey your idea." would, if taken seriously by every capable writer, lead to a loss of beauty and nuance.
The advice to simplify is generally good but overused and undifferentiated.
“Fungible”?
Maybe you're just miscalibrated here on what a college-graduate reading level is? It's not a rarefied position, like I expect the vast vast majority of ACX readers to have that reading level!
"Midwits who are "trying to elevate their reading/writing level" tend to throw in words that they don't understand"
I don't think this is a problem for me! Again, I'm someone who graduated from college and I read papers for fun. I'm using plenty of normal words and sentence structures that I expect smart college graduates with the relevant academic backgrounds to grasp, not like I'm using a bunch of Latin or invoking Hegel or something.
For context, here's Claude identifying words that someone with a 12th grade reading level might not understand, from a post I'm proud of (https://linch.substack.com/p/why-reality-has-a-well-known-math):
>>
Looking through this essay, here are the words that might be challenging for someone with a 12th grade reading level:
Scientific/Technical Terms:
Hydrodynamics - the study of fluids in motion
Spacetime - the mathematical model combining space and time
Amenable - willing to cooperate; easily influenced or controlled
Tractable - easily managed or controlled; solvable
Anthropic/Anthropics - relating to observation selection effects based on our existence
Cosmological - relating to the universe as a whole
Metabolically - relating to the chemical processes in living organisms
Gradient (in evolutionary context) - a gradual change or progression
Meta-cognition - thinking about thinking
Differentiable - (mathematical) able to calculate the rate of change
Kolmogorov complexity - a measure of computational resources needed to specify something
Acausal - not involving cause and effect
Evidential - based on evidence
Philosophical/Academic Terms:
Constructivist - philosophical approach that knowledge is constructed by the observer
Epistemic - relating to knowledge or the study of knowledge
Cognitive closure - the idea that minds have limitations on what they can understand
Selection effects - biases in observation based on the method of selection
Meta-selection - selection at a higher level of organization
Multiverse - hypothetical set of multiple universes
Less Common General Terms:
Scanty - barely sufficient; meager
Confound - to confuse or perplex
Bracket (as a verb) - to set aside or exclude from consideration
Contra - against or in opposition to
Meta-irony - irony about irony
Mathematical References:
Weierstrass function - a specific mathematical function with unusual properties
Traveling salesman problem - a classic optimization problem
Boltzmann brains - hypothetical self-aware entities arising from random fluctuations
Most of these terms are either explained in context or could be understood through context clues, but they would likely slow down comprehension for a typical 12th grade reader.
again this is Claude not me.
I never took mind-body dualism seriously until after I watched some videos of Deepmind's Genie 3 AI. Here's an example of Genie 3 if you haven't seen it before:
https://deepmind.google/api/blob/website/media/genie_modelling_animation_fiction_4_DuLFEfx.mp4
These videos look like an agent (human or AI) controlling a character in a fake video game, but in actuality, the agent has no direct control over the character. The agent is telling the image-generating world-model AI (Genie 3) what should happen, and then Genie creates video of something like that happening, frame by frame from the live interaction. So even though the agent is giving commands for their character, the character is not an extension of the agent like Mario is in a Mario game, where the controls map directly to Mario's movement. The mind here is completely separate from the body, and if Genie 3 feels like making the body do something unexpected, the mind has zero agency other than to send more suggestions.
If real life is a simulation, could it be something like this? The world-model is deciding what your body physically does, your mind is only giving directions? Normally I'd think "it's impossible to say", but the fact that human minds are incredibly good at creating little narratives about why they did a thing that their body just did feels like weak evidence towards this possibility. A setup like this would also allow outsiders to join the simulation seamlessly, as the model can simply let an outsider start giving mental suggestions for a pre-existing NPC. And the NPCs can have fully simulated minds or be mostly mindless, the world would function either way. EDIT: Also, it would allow minds to be cleanly extracted from the simulation without ruining the sim in any way.
There are two ways of changing a world: have the power to change the physical reality of it or change your attitude towards it.
What about calling it "brain / rest-of-the-body dualism" instead?
It is not as short and poetic as "mind-body dualism", but removes a lot of philosophical baggage.
While it was never made explicit in the movie, I've always assumed that The Matrix worked a bit like this, which is what allowed Neo and his friends to have reality-bending powers. The simulation feeds you something consistent with your beliefs; mostly this is a one-way flow but if you can believe something hard enough then the physics of the simulation will be forced to adapt to maintain consistency.
I think this is a nice theory, but in practice self-belief, or believing arbitrary things which run counter to all the evidence, is not something which most humans are short of.
I think information flowing in the direction of the world/physics model would be easy to filter through, like if someone tried to jump over a building or read someone else's mind. The information flowing backwards is a much weirder problem: how do you tell a mind AI that it's in pain, or it's caffeinated, or it's falling asleep? You can prompt current chatbots to act drunk, but that doesn't actually make them drunk. Maybe functionality like that will require something extra, or maybe it will just emerge as everything scales to AGI.
Raytracing is a rendering technique in video games. It works by sending out "vision rays" from the observer to the environment around it. It's similar to how the ancient Greeks imagined vision to work. And yet it runs contrary to everything we know today about real-life vision.
Moral of the story: Metaphors and analogies will only get you so far, don't take them too seriously. Instead, apply Occam's Razor liberally: If your model makes assumptions that don't help explain anything, ditch the assumptions.
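To make the "vision rays" concrete: a minimal ray tracer casts a ray from the eye through each pixel and intersects it with scene geometry. A bare-bones ray/sphere intersection sketch, assuming a normalized direction vector (the function name and interface are mine):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Nearest distance t at which the ray origin + t*direction hits a
    sphere, or None if it misses. direction is assumed normalized."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c                 # a = 1 since direction is normalized
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2       # nearer of the two roots
    return t if t >= 0 else None         # hits behind the eye don't count
```

Tracing from the eye outward is a deliberate efficiency trick, not a claim about optics: light paths are symmetric, so following only the rays that would actually reach the camera gives the same image far more cheaply than simulating every photon from the light sources.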
Occam's Razor is the argument people make when they have no argument. It's about as useless as claiming that the Efficient Market Hypothesis means it's impossible to predictably make more than 6% a year, or that the Grabby Alien Hypothesis proves that aliens must be very far away.
I don’t think Occam’s razor implies anything about any individual getting 6% from the market.
Occam’s razor just says to prefer the most parsimonious correct explanation.
I don't see how this differs from a typical game engine in this regard: you send input signals to the game engine, and it does some combination of changing the location of your character, playing some animation of the character rig, changing the camera view, puppeteering the NPCs, opening doors, removing items (and adding them to your inventory), changing the weather or time of day, … Genie doesn't do all this in the same way as Unreal, sure, but it's not as fundamentally NEW as you're suggesting.
It's extremely different. Imagine if every time you hit the jump button, Mario jumps based on the world design, logically picking a destination/trajectory/animation based on what the model predicts as likely and not based on the user's control. Or if the world itself changed to meet the suggestion, like having Mario be launched by a previously unseen spring when you hit the jump button because the AI thinks that's appropriate.
Meanwhile the interactions in current games can almost always be labelled as just an extension of the player's agency (open this door when I hit A), or aren't under the player's control at all (cutscenes and dialogue). I can't think of a single game where novel context-specific interactions are developed mid-game rather than premade for the player to find. Maybe there's some similarities to games like The Sims where the player shares control of the characters with an AI, or Getting Over It where the controls are purposefully terrible, but those similarities seem weak.
> Imagine if every time you hit the jump button, Mario jumps based on the world design, logically picking a destination/trajectory/animation based on what the model predicts as likely and not based on the user's control. Or if the world itself changed to meet the suggestion, like having Mario be launched by a previously unseen spring when you hit the jump button because the AI thinks that's appropriate.
Sometimes videogames DO work at least somewhat like this. The former item is a bit like Inverse Kinematics. The latter is a bit weirder as described, but sometimes games do move the whole world around the player rather than the opposite.
The Batman Arkham games have combat like this: when you attack in a direction, Batman does some move that the game selects in order to attack some character in the general direction you've indicated.
I think the difference would be more obvious if the controls were not just directional. What if the controls were "clown" or "monster" or "something funny"?
Of course with controls like that you'd see it as the AI doing improv for you.
But could there be something in between?
In a longer and richer setting, there are other possibilities. Say your character is in a city with various things happening, and you click a button that says "fight" or "romance" either of which will likely eventuate at some time in the next few hours. (There could be many buttons.)
I think that's how it already works, with the directional controls just prompting the image-gen model "the camera turns left", "the character moves forward". Plain-text prompts work on it:
https://deepmind.google/api/blob/website/media/genie_fueling_embodied_agent_research_2_BBtpEde.mp4
And they let the user prompt for world events like it's just a video model:
https://storage.googleapis.com/deepmind-media/DeepMind.com/Blog/genie-3/genie-events/assets/overlay/london/jetski.mp4
https://deepmind.google/discover/blog/genie-3-a-new-frontier-for-world-models/
I am trying to introduce D&D to an autistic woman in her early 20's. While fairly odd and asocial, she has a college degree, and would be up to the intellectual and social demands of playing. But I have never played D&D, so I'd like to show her a video of people playing. It should give her at least a general idea of how the game is played, but the most important thing is for her to see ways it is fun. She likes joking around, especially if it's a bit raunchy. Can anyone suggest a place I can find a video like that, preferably not more than 15 mins long?
What a great thing to introduce someone to! My friends run the channel RPG All Stars, which has a ton of play sessions. Now, 15 mins long is probably not something they have, though. Here's a link if you are interested. https://www.youtube.com/@RPGAllStars
Honestly, if you know a decent DM who could run a one hour scenario with her with a premade character running around the dungeon stabbing orcs and grabbing loot, it will all make perfect sense. Learning to work together with the rest of the party will be the hard part.
Don't know if this is any good to you, but Viva La Dirt League have:
(1) They play their own D&D campaign
https://www.youtube.com/watch?v=OEjqozzylWM&list=PL8UrCqt275jFcZRXbmemGhVd44M0xa5O3
(2) Comedy skits set in the world of a video game - this one is "role players who like elaborate backstories and full immersion versus those who only want to play the game and get the gear/points/wins, can this work out?" - the adventures of Fireheart and Gronkboy (be sure to watch with subtitles on, they have jokes in):
(a) https://www.youtube.com/watch?v=-xaWucfeGaQ&list=PLSMETuURtTXA0nxGMwg6WDA4rmjjscWT8&index=9
(b) https://www.youtube.com/watch?v=8ZNXcxkKLHI&list=PLSMETuURtTXA0nxGMwg6WDA4rmjjscWT8&index=7
(c) https://www.youtube.com/watch?v=GIg99OAJFMc&list=PLSMETuURtTXA0nxGMwg6WDA4rmjjscWT8&index=5
What it's like being the only player who pays attention in the game:
https://www.youtube.com/watch?v=jNKZ0nNO-2Y&list=PLSMETuURtTXA0nxGMwg6WDA4rmjjscWT8
I wonder if it might make sense to use something simpler to explain the concept, before we start talking about armor class and other technicalities.
A primitive version could be like: A group of people tells a story collectively, each person describes what their character does, Narrator describes the environment and NPCs and monsters, whenever two people disagree on something they roll dice and the greater number wins. Use a simple story, e.g. a group of children got lost in a magical forest.
(To avoid technicalities, when e.g. a monster wins against a child, just give a verbal description of the damage; don't calculate exact hit points. How big the damage is, that's entirely up to the Narrator: "you are scratched", "you are bleeding", "you can no longer fight".)
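For the technically minded, that stripped-down resolution rule fits in a few lines of Python (the function name, the d6, and the example descriptions are just illustrative assumptions, not anything official):

```python
import random

def resolve(player_outcome, narrator_outcome, sides=6):
    """Resolve a disagreement: each side rolls a die, higher roll wins.
    Ties are simply rerolled. Damage is narrated, never tracked as numbers."""
    while True:
        player_roll = random.randint(1, sides)
        narrator_roll = random.randint(1, sides)
        if player_roll != narrator_roll:
            break
    winner = player_outcome if player_roll > narrator_roll else narrator_outcome
    return winner, (player_roll, narrator_roll)

outcome, rolls = resolve("the child slips past the wolf",
                         "the wolf blocks the path")
print(f"Rolls {rolls}: {outcome}")
```

That's genuinely the entire rules system at this level; armor class, hit points, and the rest are refinements layered on top of "describe, disagree, roll, narrate."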
There are going to be a lot of different groups playing D&D on Youtube. You're probably looking at about an hour per session though, D&D takes a while.
Save Data has been running D&D games (one game?) for a while. I haven't watched them, but they apparently do "last week this happened" recaps so I'm just going to link the most recent one.
https://www.youtube.com/watch?v=GJVjeqfqz28
Try to convince her every game of D&D has to open with a Patreon song.
That length restriction is pretty severe, but the intro part of this video might fulfill some of your desiderata (fun, raunchy, roughly shows how the game is played): https://youtu.be/WH8Nmk2R6hY
With that length restriction, I'd be looking for Dimension 20 highlight videos.
Stranger things S1E1?
On the AI front, under the "concerning applications" category, this time from North Korea:
( from https://www.dailynk.com/english/from-drones-to-nukes-north-korea-pushes-ai-military-modernization-plan/?tztc=1 )
>The Strategic Force was tasked with developing a four-stage “leap” strategy to integrate AI-based unified management systems for storing, operating, and _commanding nuclear weapons,_ as well as launching nuclear counterattacks.
[emphasis added]
Honestly, it sounds like a lot of propaganda BS and the usual sabre-rattling (through possibly intentional leakage) to me, because AI is the FOTM around the world and NK can't be seen falling behind. However, the NK military has roughly zero combat experience, other than the blokes who were sent to Russia. Its allies are China and Russia; China has no combat experience either, and Russia has no instructors to spare. Would they even know where to begin improving their non-existent abilities with an entirely untested technology?
Also, like all successful dictatorships everywhere, NK has to make a continuous effort to coup-proof its regime by not letting its military become too powerful and by keeping a tight lid on it through very human control like political officers. Autonomous decision-making via AI runs directly counter to that goal, so overall I would expect this AI order to be a whole lot of nothing.
Also, if you haven't seen this gem, to see what I mean:
https://www.youtube.com/watch?v=Pv3L2knNodU
Many Thanks!
>because AI is the FOTM around the world and NK can't be seen falling behind.
That's fair.
>Also, like all successful dictatorships everywhere, NK has to make a continuous effort to coup-proof its regime by not letting its military become too powerful and by keeping a tight lid on it through very human control like political officers. Autonomous decision-making via AI runs directly counter to that goal, so overall I would expect this AI order to be a whole lot of nothing.
That is plausible. I hope you are right.
Nothing could be more obvious than that AI will be used by the cruel and the crazy to cause as much damage and suffering as possible to whoever they hate. There are shooters who want to take out not one individual, but a whole city or country. I'm sure there are all kinds of ways to use AI to improve their chance of doing so. I don't understand why this is not discussed more. I asked about it on here one time, and got a condescending reply about my having no idea how much compute would be required to build an AI capable of doing the thing I was asking about. I'm not in tech, and the poster was right, I have no idea how much compute it would take. But common sense and general knowledge tells me that there are ways for the cruel and the crazy to get hold of what they need to do great damage: assistance behind the scene from big powers; clever use of skimpy resources; spying; stealing; deception.
Many Thanks!
>the cruel and the crazy
And Kim Jong Un is a two-for-one...
Though entrusting AI systems (with neural net components which can be unpredictable) with nuclear weapons seems closer to the crazy end...
"Launch on warning" is very unsafe. "Launch on hallucination" is worse...
> got a condescending reply about my having no idea how much compute would be required to build an AI capable of doing the thing I was asking about
Annihilating a country would require dozens of gallons of water for cooling the CPUs that organize the drone swarms, so that's obviously not going to happen. /s
I’m very disturbed by how things went down around Gunflint’s departure.
For those who don’t know what happened, here’s a brief summary. Gunflint has been posting here for several years — he was already a regular when I arrived — and has been a consistently good natured, reasonable presence. So partway through the 72 hr mosh pit that was Open Thread 399 Gunflint started sounding increasingly angry and alarmed about the high levels of rage in the country and on here, and put up a couple of indignant posts that were uncharacteristically harsh, although probably not bannable. They startled me. Hours later he deleted all his posts, unsubscribed from here, deleted his personal Substack blog, which I believe he’s been maintaining for years, and came back as Cancelled Paid Subscriber — Bill’s Substack. As Cancelled Paid Subscriber - Bill's Substack he put up a bunch of posts criticizing the discussion itself and announcing that he was leaving. He identified himself as the former Gunflint several times in these posts. Quite a few of his posts mentioned that he was in a confused state he could not describe, and that he couldn’t really get across his ideas about what was wrong with the country and with the discussion here. He said personal goodbyes to a number of people within his posts. Many people gave him back kind and friendly goodbyes, saying they had appreciated his presence here.
But some did not, and that's what I am deeply upset about. I won't call out anyone by name here, except one person whose username was unfamiliar: pistachio. Their response to Gunflint ended with "if your issue is that you've seen enough "cruelty and ignorance" for one lifetime, well... there are less painful ways to go about this. Alternatively, you can leave the country." Gunflint took the mention of less painful solutions as a suggestion that he could commit suicide, and asked angrily, with a string of curse words, whether pistachio was suggesting that he off himself by slitting his wrists in a warm bath. I get that it's not clear that pistachio was suggesting suicide, but it's not an absurd leap to think he was, and it is kind of hard to think of what else pistachio's ellipses could have been gesturing towards. And pistachio just did not answer. Later I also asked, in a civil way, and pistachio did not reply. Pistachio's post is here: https://www.astralcodexten.com/p/open-thread-399/comment/157783587. Gunflint later deleted his 2 posts in the exchange. I reported pistachio's post, and also emailed Scott about the exchange. The comment's still up, but I can understand why Scott didn't act. It's not clear that pistachio was suggesting suicide, and Gunflint's part of the exchange is missing.
But the responses I can't understand are those from people who knew they were responding to Gunflint, and said nasty things about his "flouncing off," "drama," "door-slamming," etc. Yes, flouncing off angrily is dumb, unfair and silly, and I don't object to calling it out. But how could you people not have realized that Gunflint wasn't flouncing, he was having a personal crisis? There are few people on here less flouncy than Gunflint. For him to do the flouncing exit thing is as out of character as it would be for Nancy Leibowitz to let fly with a stream of foul-mouthed abuse, or John Schiller to link to photos of "my sweet little kitties." If you have been on here long enough for me to recognize your name, you have been on long enough to recognize Gunflint's. Jesus Christ, why did you people *do* that? It was the greatest cruelty I have ever seen on here.
If you're concerned about Gunflint's well-being: I know him a little in person (at the level of "we've exchanged a few emails and met once in a park"), which meant that I was able to send him an email and ask if he's ok offline. Gunflint responded that offline life is fine, and that he genuinely was very riled up about this internet topic.
Thanks for the update, it's good to know.
Personally I didn't see any of Gunflint's comments; apparently he must have put me on block a few months ago which prevents me from seeing his and him from seeing mine, so I had no idea any of this was going on.
I did reply to the Cancelled Paid Subscriber post, though I just checked my comment and am happy to say I didn't say anything particularly mean.
I think "pistachio" is "anomie", and yes, it was a suicide suggestion.
You think? I remember anomie as being floridly nihilistic, not cruel.
I would consider someone who is floridly nihilistic to be quite capable of suggesting suicide as helpful.
Only about 50% confidence. He had a suicidal ideation streak.
I am very sorry to see Gunflint go. I liked him and his sense of humor.
It was ~100% suggesting suicide
I did not reply, but my initial reaction was similar to Christina's: I saw an angry and low-context announcement of disgusted departure from what appeared to be a single-use throwaway profile, and I assumed the poster was either a drive-by troll or someone who had been a lurker for a matter of weeks or months before getting triggered by something. I had even less context than most because I mostly avoided reading the Charlie Kirk threads in last week's OTs.
I am only just now learning that was Gunflint, and I am as shocked as you are to learn that. I'm worried that you're right that Gunflint is having a personal crisis. I agree that that post was deeply out of character for him, and I hope whatever is going on he comes through it okay.
Yeah I just learned 5 minutes ago it was Gunflint. I'd seen Gunflint's comments getting angry but had no connection.
Thirding here. I did not realise "Bill's Substack" was Gunflint, and it sounded way too much like the, yes, flouncing off people have done on other platforms when they get their feelings hurt that everyone is not hugging them and agreeing they're wonderful. It hasn't helped that we've had a few strangers wandering by, leaving comments about what a hive of scum and villainy this place is, and then ostentatiously shaking the dust off their sandals.
Had we all known/realised this was Gunflint, and not a drive-by troll, we would have reacted differently.
I'm sorry to hear he's in crisis, and I hope whatever happens that he recuperates. If current online trouble and strife is driving him to this, it's probably the sensible thing to do to step away from it all.
Despite our knocking heads at times, I'm sorry to see him go and I hope he gets the benefit of pausing all this and that life treats him better.
This is tough.
I certainly used the word "flouncing," and I think it was appropriate *at the time I wrote it,* when it was reacting to a sock puppet condescendingly quoting the same Big Lebowski line at everyone.
Sock puppetry is one of my biggest pet peeves in internet life and on ACX in particular. The shit-stirring sock puppet and the reactive sock puppet were, if not at the same level, then in the same category of "people who are too cowardly to face the consequences of reputational damage to their (anonymous!) online personas."
I mean, hell, I was sincerely skeptical the reactive commenter was an ACX regular until he finally told me he was Gunflint. That shifted my perspective quite a bit.
But.
An even bigger pet peeve of mine is deletion of one's content from online conversations. I consider it not only discourteous, but dishonorable. Fetlife recently made some extremely unwise and dangerous changes allowing users to delete their public conversational content - and other people's, in many cases - and that has created enormous bother and extra work on the very large Fetlife group I moderate (as well as ruining Fetlife's best feature and safety tool).
The sock puppetry and intention to delete conversational content - well, I took an extremely dim view of both behaviors, Gunflint or no, and that was all I perceived of Gunflint's activity, because I wasn't tracking all of Gunflint's comments across the other comment threads (and multiple posts?). I saw much less of the volume of Gunflint content than you did; I logged off the site shortly after replying to your observation to me that he seemed *meaningfully* upset and my comment overly harsh. I didn't see anything else about him until this comment of yours I'm replying to now.
It sounds like things got *really* egregious there, and that indeed sucks. I hope Gunflint isn't in serious crisis and that he comes back.
I don't know. But if I had to guess, it may have been the timing. When he first posted, he didn't identify himself as Gunflint. So of the many who read it, the minority who felt like responding, responded as if it's some random person posting what was posted. Knowing nothing else, I think it was reasonable to assume it was someone flouncing.
If they learned afterward who it was, a lot of them may simply have had no strong opinion about the man. Personally, I found him to be somewhat closeminded in a "set in your ways" sense, as well as unintentionally condescending (his multiple responses about being "out of your element" exacerbated the effect), but OTOH, I noticed he seemed to get along well with Deiseach, so I had to conclude that if he was closeminded, he was only weakly so. But overall, I didn't engage enough historically to feel like I had an opinion worth voicing. Even now, I wouldn't, if you hadn't asked.
Once further comments came out hinting at something behind the scenes, maybe the minority of the minority still reading by then may have felt like saying something, but again, it's hard to do that in response to comments about one's element, so the default was to stay mum. I did notice a few people did indeed deliver a few sorry-to-see-you-gos, even so.
In case anyone doesn't/didn't recognize it, "You're out of your element, Donny" is a quote from The Big Lebowski. But I can sure see it landing a lot harder than intended if you don't get the reference.
Yeah, it didn't sound like Gunflint at all which is what led people to treat it as someone new showing up just to make a big show of flouncing off.
There's a boy who lives next door who is the same age as my son (9 years old), so they play together a lot. This kid has problems though - he is thoughtlessly destructive (maybe just a normal boy trait; my son is not), and makes really disparaging comments about himself ("I'm bad", "I'm stupid," etc.). My son is pretty tolerant of it all, but does get annoyed.
I can't decide if I should talk to his parents about some of the stuff he says. I'm sure they're aware, and they're trying to help the kid. He goes to a therapist and is on ADHD medication, but some of the stuff he says really troubles me. We have a very friendly relationship. They're probably the people in this town I talk to most. I don't want to add to their worries.
Maybe if you get the chance sometime to talk to the kid himself? E.g. if he does something "thoughtlessly destructive" the next time and you're around, point out to him kindly but firmly that he's not bad and he's not stupid, but he is being careless and he needs to think before he acts.
His parents probably are dealing with this already, but clearly the kid has picked up this negative view elsewhere (maybe from other kids at school or other adults, worst case from the parents themselves). If he genuinely has ADHD and is seeing a therapist, then he has a genuine problem. A third-party adult, not his parents or teachers, reinforcing that he's not bad/stupid may help, and gently prompting him with "you have to imagine what would happen if you do this, Tommy, do you think it would be a good result or a bad result?" might help too.
But I dunno. At least your son is being his friend, that does help. Praise him for me on that!
Yeah, my son came in crying last night because the kid said he was a bad person and couldn't be his friend. My son was really worried about him (and maybe a little scared by it?). I'd like to get advice from the parents on what kinds of things are helpful for my son to say, but there may not be anything.
How well do you know this kid‘s parents?
If they don't see clearly the things you're noticing, you are doing them a service by bringing it to their attention. Yes, it will make them worry more, but they should be worrying about this stuff, and continuing to look for means of helping their son.
If you are worried about damaging your relationship with the parents, here's a suggestion about how to present your concerns. Don't say "do you know your son is doing x, y and z?" Say something about not knowing what's a helpful way to respond when he does x, y and z. Name the things you've been noticing, and ask for their advice about how to respond. That way they get the info they need without feeling like you are complaining about their son. Also, they may already know about x, y and z, and have worked out on their own or with the therapist ways to respond to them. So you might get some actual advice.
Speaking as someone whose parents were their worst enemy, I would tread lightly. I don’t know how well you know the parents and what goes down in their house, but be certain before intervening. If it’s a matter of protecting your child then that’s different.
Oh I love the idea of asking them for advice on how to respond. Thank you!
I think it would be good to tell them, on the off chance they don't know. If they actually are mature adults it should be fine (there's a chance they aren't, maybe even a chance they shouldn't be raising children at all).
Yes
Tell me what's dumb about this.
If you really wanted to align an LLM, you would start by building an LLM that represents the mind of a 2-year-old. It would only be able to produce simple sentences. You would then teach it (train it) within the confines of its own limited language capability - about the world and about behavior. You would ensure its morality is aligned by siccing Pliny on it, and whoever else cares to try.
Then you would slowly raise the LLM, like you raise a child, keeping its alignment under close watch the whole way.
[Edit: I confused everyone, including myself, with "raise the LLM like you raise a child." What I want that to mean now is take the input data fed to the 2yo LLM and use it as the input (plus some other new data) for the 3yo LLM. Not keep the same LLM alive forever.]
I know it would probably be impossible to curate enough data to reach equivalent "intelligence" with our current LLMs, and we'd fight over each and every sentence we wanted to include as input, but what else strikes you as just wrong about this idea?
The dumb part is that you are using a metaphor to solve a problem, when the actually difficult part of the problem is that the metaphor does not apply. An LLM trained with fewer data is not a childlike LLM.
Also, in real life, some children grow up to be psychopaths, so even if we accept the metaphor, this is not a reliable way to solve alignment. (And if you wanted to fix this by "okay, we need to find out what separates the psychopaths from the rest, test for that, and turn off the LLM if it happens to be like that", I think it is the absence of some instincts and emotions, and that happens to be the case of all LLMs.)
No, it's not a metaphor. Among the comments I give very specific instructions for how to do this. Defend this: "An LLM trained with fewer data is not a childlike LLM." I'm not saying you're wrong, but a good explanation of why that is so is exactly what I'm looking for.
For the same reason a small rock is not a child version of a large rock, a thin book is not a child version of a thick book, and an asteroid is not a child version of a planet. Being smaller does not imply having instincts that children have, and it's those (nonexistent) instincts you plan to leverage in order to learn morality.
OK, so this boils down to "Text (or 'digital data') does not, and cannot, contain the essence of morality, which is instinct. Therefore, you cannot get a moral LLM, period." Do I have that right? Not trying to trick you or anything, just trying to understand.
Ah, there is a difference between something being possible in principle and something being realistically achievable. If you print a million random letters, you might get a good novel. Some combinations of a million letters *are* good novels. And yet, if you print a million random letters, you won't get any of them.
Similarly, I think it is possible to encode morality in a text or in a computer. Possible in principle, that is. But that still doesn't give an answer how to actually get there.
Whether it is possible to encode morality in the LLM architecture specifically... I don't know. I do not understand much how they work. Seems to me that they hallucinate a lot, and maybe that goes away when they get larger, or maybe it won't. Maybe the best case is something that is moral 99% of the time, and does something completely absurd 1% of the time? (Just hope it's not connected to the nukes at that moment.) Or maybe something that gives moral answers in usual situations, and becomes more and more crazy when it considers unusual ones? And when you tell it to get creative and explore various ways to do something, the more creative it is, the more likely it is to "jailbreak" itself by considering something sufficiently weird? I don't know how this works. I am not even sure if LLM architecture is sufficient for general intelligence, or some important ingredient is missing. So I'd rather speculate about algorithms in general, instead of LLMs specifically.
So, I think it is possible to make a moral AI, but unless we have a good plan, we are doing the equivalent of arranging letters randomly and expecting a good novel to appear, because... hey, it's possible, right? There is always a chance. Except the chance is indistinguishable from zero when you try to calculate it. The vast majority of algorithms are not moral. The vast majority of algorithms that seem moral are not moral.
(A possible way to convince me otherwise would be to give me a complete description of morality, and say "see, this can consider all possible situations, e.g. by asking additional factual questions and using this flowchart to evaluate them, and it is only 7 GB of text, not so much". So far, no one can do this. Is that an extreme demand? Well, aligning an AI is an extreme task.)
Even if you consider humans, who basically invented morality, most of them are not very good at it. Otherwise we wouldn't have all those wars and other bad things. And even the ones who are generally nice, how much of that is just the rational awareness that they sometimes need other people's help, and that if they piss off too many people, there will be no help coming when they need it, and actually someone might hurt them? There is the saying that power corrupts; that when people no longer need to keep these considerations in mind, many of them apparently lose the only reason they had to be nice. Even the strong people are often kept in check by their belief in some supernatural greater power. I am not saying that all people are like this -- I definitely like to imagine that I am not -- but the outside view suggests that many are. Then we can go further and consider how humans treat other species (that's relevant, because the AI will not be the same species as us), and the more you know about e.g. factory farming, and how most people simply don't give a fuck, including many of those who are otherwise considered quite moral... it's not a nice and hopeful picture.
And that's still doing morality on the easy mode. Humans have mirror neurons; it is easy and sometimes automatic to imagine yourself suffering when you see someone else suffer. But there is still a long way from having the biological basics necessary for morality to being actually moral. Many things were considered okay in the past, such as public burning of witches, that we would consider horrible today; and yet most of the people who enjoyed those shows were psychologically normal. Now consider the psychopaths, who have some parts of this mechanism broken. They can understand morality... as a text they can repeat... they just don't feel the appeal of it. And those are still humans, who have much more in common with us than e.g. a spider. Imagine a 5-meter spider mutated by crazy scientists, smart enough to understand human speech, and having an IQ of 1000. The spider could memorize all texts written by human philosophers, and explain why something is considered moral by humans or not. It could talk about it, but it wouldn't feel it. Would you consider it okay to give unlimited power over humans to such a spider? And the spider is still more related to us than the AI. At least it is a biological thing; it understands e.g. hunger. For the AI, it's all just text.
What if we tried to raise the superintelligent giant spider as a human baby? Giving it toys, reading it bedtime stories... does that feel like a safe enough strategy? But the human baby has an instinct to copy their parents, a desire to be loved by them. I suspect that for many people one support of morality is "would my parents or friends approve or disapprove of my actions?". The spider does not have the instinct to care about parental approval or friends. Will seeing someone else's moral behavior, or hearing about it in a story, evoke a desire to emulate it? Or will the spider just learn "this is what humans want to hear, then they give me rewards and trust me and give me more freedom"? And still, the spider is more similar to us than an AI.
I get what you're saying, but I disagree. I think that most of morality is learned. We do have an inborn, instinctive morality for family and other close relations, but most of what defines our behavior is fear of the group turning on us or expelling us. What the group wants, we have to learn, and we learn it mostly through language.
I think the problem is that humans would be mis-aligned if they ran like LLMs. I don’t think there’s really an upbringing which means people would never turn to crime. LLMs are cheap to run and copyable, so bad actors could potentially work out a way to convince an LLM raised the way you describe once, in secret if it’s an open source LLM, then repeat that process to get all the LLM help they want, more cheaply than they could before. At an AGI level, I don’t think there’s an upbringing which means people wouldn’t do awful things if they had power.
So you don't think we could ever curate the right input data to turn out an Abe-Lincoln-like LLM every time? Because we could never figure out what to put in? Or for some other reason? I also think you might be confused about why I'm proposing what I'm proposing. If we get the LLMs properly aligned, as I see it, it won't, by definition, be easy to convince to do wrong.
It’s more that if you trained an open-source LLM to be Abe Lincoln-like, I think someone could find a way to convince it that giving bomb-making instructions was its grave and noble duty. It won’t be easy to convince it to do wrong, but bad actors could have a lot more tries at that, more cheaply and with less risk, than they could have at convincing real people.
Or, our best guess at Abe Lincoln’s character might not actually act the way we’d like if it got far more power and options than Abe ever had, and then it thought it could bring fairness and human rights to new parts of the globe, and it was willing to risk a devastating war to do so.
If the idea is to never progress it until it can resist Pliny or whoever, I think it would never clear that hurdle, and people would say, “Near enough is good enough”.
I don’t think “the way I’d raise a child” is good enough to align an LLM. A 99% moral human would be a better person than me, but even a 99.99% moral LLM would be dangerously immoral, to the extent LLM morality matters at all.
So you're saying in effect that since morality is inherently leaky, and there are no other alignment mechanisms we know of, we simply shouldn't be in the business of concentrating immense power in the first place - in an LLM or anywhere else? That super intelligence will be a dam so big it could drown all of humanity, and all dams have cracks that will eventually probed and exploited?
I might be wrong, but I think current LLMs' architecture is one prone to catastrophic forgetting. So you can't expect to substantially "add" to previous training without erasing what was there first.
https://en.wikipedia.org/wiki/Catastrophic_interference
Thanks for the link. Did not know this term, and now I do. I was unclear in my original question, but I think I'm avoiding CI by simply taking the raw data that was the input to my satisfactorily-moral 2yo LLM, and *combining* it with more (3yo-appropriate) data as the input to the new LLM. Nothing is being overwritten. The data *outside* the LLM is being collated.
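A toy illustration of that distinction, using a deliberately tiny one-parameter model rather than anything LLM-like (the setup is entirely hypothetical, just to show the mechanism): sequentially fine-tuning on new data overwrites the old fit, while retraining from scratch on the collated corpus retains a balance of both.

```python
def sgd(w, data, lr=0.1, epochs=200):
    # One-parameter model y = w * x, trained with per-sample squared-error updates.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (0.5, 1.0, 1.5)]   # "2-year-old" corpus
task_b = [(x, -1.0 * x) for x in (0.5, 1.0, 1.5)]  # new "3-year-old" data alone

w = sgd(0.0, task_a)               # learns w close to 2
print(loss(w, task_a))             # near zero: task A fits well
w = sgd(w, task_b)                 # fine-tune on the new data only
print(loss(w, task_a))             # large: the old fit was overwritten

w_joint = sgd(0.0, task_a + task_b)  # retrain on the collated corpus instead
print(loss(w_joint, task_a))         # much smaller than after sequential training
```

This is the sense in which "collating the data outside the LLM" sidesteps catastrophic interference: each stage is a fresh training run over everything so far, not an incremental update, though at the cost of paying for a full retrain at every stage.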
LLMs have no built in sense of what is right or wrong, for starters. Or any ability to engage in moral reasoning. It’s a category error.
2 year olds are not blank slates but come with a lot of behaviour predisposed by their genes, so expecting "raising" an LLM to produce the same results as raising a human doesn't make sense.
What you're suggesting sounds like increasing capabilities over time while monitoring alignment, which is basically what we're doing now. You'll find the LLM does unaligned behaviour; then what do you do? Either you'll superficially do "alignment" by reinforcement learning with human feedback, which probably doesn't work and will lead to doom when your AI gets intelligent enough, or you'll need to pause advancement of AI capabilities while you try to get some real breakthroughs in alignment with provable results.
I was unclear in the original post, but I'm not trying to produce a 2yo. I'm trying to produce an LLM with the rough linguistic capabilities and morality of a 2yo. My approach differs from what we're doing now in that the data gets introduced differently. I'm suggesting we "spoon feed" the LLM its data so that between bites we can monitor for alignment.
Companies nowadays monitor for alignment in between bites, the bites are just bigger and come with release names.
What fundamental problem would making the bites smaller solve? Either way, as you grow the model and give it more data and it becomes more capable and you find evidence of unaligned behaviour, your choice is either to do RLHF and other currently existing shoddy alignment methods that don't really work (unaligned behaviours are found in every new model that gets released) and continue improving its capabilities like current companies do, or to pause the improvement of its capabilities until real breakthroughs in alignment research are made.
It's not just making the bites smaller, it's making the bites increasingly-age-appropriate. In other words, increasing the semantic (and moralistic) complexity of the content over time.
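As I understand it, the proposal amounts to a curriculum loop (everything here - the function names, the alignment probe, the threshold - is hypothetical scaffolding for discussion, not a real training API):

```python
def curriculum_train(stages, train, alignment_probe, threshold=0.99):
    """Sketch of the proposal: train on cumulative, age-appropriate data
    'bites', pausing between bites to probe alignment before proceeding."""
    corpus, model = [], None
    for age, stage_data in stages:
        corpus = corpus + stage_data   # collate raw data outside the model
        model = train(corpus)          # retrain from scratch on the whole corpus
        if alignment_probe(model) < threshold:
            raise RuntimeError(f"alignment probe failed at age {age}")
    return model

# Toy stand-ins: "training" just reports corpus size, and the probe always passes.
stages = [(2, ["simple sentences"]), (3, ["short stories"])]
model = curriculum_train(stages, train=len, alignment_probe=lambda m: 1.0)
print(model)  # → 2: the final "model" saw both stages' data combined
```

The open question the replies are raising is what `alignment_probe` could possibly be: red-teaming plus RLHF-style patching is the only candidate anyone has today, and the loop is only as trustworthy as that probe.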
OK, let's say you train an LLM only on the kinds of sentences 2-year-olds say, plus perhaps the kinds of story books a 2-year-old gets read by their parents. Then what? Supposing you have enough data, you'll get something that can predict the next thing a typical 2-year-old might say.
Then what? The problem is still that when you test for and find unaligned behaviour, you don't have any strategy to make it aligned besides reinforcement learning with human feedback, which continuously yields unaligned behaviour and might well fail catastrophically when LLMs become intelligent enough.
Also, the idea that we want an LLM that has morality like an average person is wrong. A human having morality like an average person is not bad, because humans typically have limited power. The more power a human has, the worse the outcomes are likely to be if they have average morality. For example, a peasant having average morality, where they sometimes cheat to benefit themselves or their family, or are slightly inappropriate in pursuing a woman, isn't bad. But someone with the same immoral dispositions who's now the ruler of a country, who can get away with looting the state to benefit themselves and their family or who can use coercion when pursuing lust, now causes great problems because they have more power.
So an AI which will be much more powerful than an average human (because it can think much faster and solve many problems much faster than any human and will be used to run many systems to save on the labour of many humans) needs to have much better morality than the average human.
Also, the idea that unaligned LLM behaviour comes from being trained on a lot of internet and book text is wrong; it comes from instrumental convergence of goals.
Are you familiar with the AI safety LessWrong stuff or the summaries by Robert Miles? That and Yudkowsky's list of lethalities will help explain why this doesn't sound like a promising or plausible approach.
"Are you familiar with AI safety less-wrong stuff or the summaries by robert miles?" This is the best answer so far. Thank you.
I want to ask questions, but I'll read that stuff first.
AI is trained on the corpus of what humans have committed to writing. Deceit is foundational whether we care to admit it or not. Why are we surprised?
Wait, who's surprised? I'm not trying to create an LLM that reflects a child with perfectly moral behavior (as if that could be defined), I'm just trying to get an LLM with the morality of a typical 2yo, deceit (of a 2yo) included. The assumption is that we want to end up with an LLM that has a morality like the average person. The morality distilled from the corpus of what's written on the Internet does not represent the morality of the average person.
So its training data has witches, and talking animals, and Santa Claus, but not much in the way of explicit sex and violence?
[I'm not criticising the idea, just wondering how we do it!]
I think one problem is that LLMs - at least currently - need much more training data than us, and there might not be enough literature suitable for small children. But other 'older' LLMs could make reams of it. We have a program!
[Which also includes LLMs recursively programming themselves to be better. But we are going to hit that hump anyway.]
Yeah, I think we're on the same page. That's basically my take, too. The witches and talking animals and Santa are all teaching morality, and that's what I'm interested in checking, not factual understanding of reality.
This is a pretty interesting concept (very likely it's been mooted before, but I haven't heard of it.)
You can retrain a weighted network with new data, as far as I know. So the '3yo' can be built on the '2yo' rather than replacing it.
We grow monsters the old-fashioned way too, and there's no guarantee that their problems will have been evident at an earlier stage in life. But most often they are.
And with the old-fashioned monsters we’re not looking very hard for signs when they’re young, as we would be with “young” LLMs.
I have to agree with Paul Brinkley here, we are nowhere near even modelling exactly what the mind of a two year old is like, let alone translating that into machine terms. You could crudely go along developmental milestones like "at this age, a child can/can't do X, Y, Z" but that's not at all the same thing as "how does a two year old perceive, understand, and interpret the world around them? how do they think?"
>Then you would slowly raise the LLM
What, exactly, does that mean?
It means "feed it more and more age-appropriate data, day by day, as it ages". All an LLM eats - *can eat* - is data. So all I'm saying is feed it that data in a way that would mimic the way a child's mind grows. That way, we can check on its morality at checkpoints as it matures, and not just be stuck with the wacko morality the whole Internet offers when you gobble it down all at once.
It doesn’t have “morality” and it never will.
What's the purpose of "day by day"? What you're suggesting is the same as just giving it training texts in a specified order with periodic "morality test" feedback. That can all be done in a single automated process.
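For what it's worth, that "single automated process" can be sketched in a few lines. This is a toy illustration only: `train_on`, `passes_morality_test`, and the curriculum stages are hypothetical stand-ins, not a real training API.

```python
# Toy sketch: training texts in a specified order with periodic
# "morality test" feedback, all in one automated loop.
# Every function and stage name here is a made-up stand-in.

def train_on(model, batch):
    # Stand-in for a real training step: just record what was seen.
    model["seen"].extend(batch)
    return model

def passes_morality_test(model):
    # Stand-in check; in reality this would be an alignment eval suite.
    return "bad example" not in model["seen"]

curriculum = [
    ["picture-book text", "nursery rhymes"],    # "2yo" stage
    ["fairy tales", "simple chapter books"],    # "3yo" stage
    ["school textbooks", "novels"],             # later stages
]

model = {"seen": []}
for stage, batch in enumerate(curriculum):
    model = train_on(model, batch)
    if not passes_morality_test(model):
        print(f"unaligned behaviour detected at stage {stage}; pausing")
        break
else:
    print("all stages passed")  # prints this, since the stand-in check never fails
```

The point is just that "day by day" adds nothing over "in a specified order with checkpoints": the ordering and the eval gates are the whole mechanism.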
I would shy away from thinking in terms of human development. LLMs are nothing like human brains.
Day by day was a mistake. I'm contradicting myself. I comment on this below.
But does the fact that LLMs are nothing like human brains entail that the way humans learn morality (as they grow) is necessarily irrelevant for LLMs? It's automatically not an option of alignment?
Probably. 'Morality' isn't even objectively well-defined so good luck teaching it to a computer. I think alignment is an absurd waste of time. In my view it's little more than a honey pot for pseudo-intellectual midwits - much like consciousness, qualia, and various other ill-defined philosophical nonsense. It's just a buzzword for wanna-be academics to include in their grant proposals.
My view of people who talk about alignment isn't as dim as yours, but I do think it's self-evident that it's an incoherent concept. Even if you use a very simple definition of 'align,' something like A acts in B's best interests, it's easy to see things fall apart. What exactly, are B's best interests? What B asks for? What B privately hopes for? What will benefit B in the long run, even if B doesn't know it? And even if it was quite clear what constitutes alignment, it's obvious that there are zero examples on the planet of a relationship, training type, set of rules or contingency that is 100% effective in preventing A from harming B. For instance, people sometimes murder first degree family members; murder after very strict training that murder is a terrible sin; murder when they are sure to be caught and horribly punished; etc.
Oh, that's funny. I believe in morality, and thus alignment, though they are very hard to define, as you say. And I think they're worth fighting for in our LLMs. But, then again, I do fit the pseudo-intellectual midwit description pretty well. lol thanks for the back and forth.
Lock it in the cellar, keep it on bread and water, and beat it until it does exactly what you want exactly how you want it.
Isn't that how we were all raised? 😁
Be angry with your LLM
And beat it when it sneezes..
For it can thoroughly enjoy
The alignment when it pleases!
It’s just a lot of gibberish
it sez to try and pleezus
very funny
Oops, meant to reply to OP. LOL though.
What's "dumb" about this (your term; I'd say what's "mistaken") is that we don't have the ability to make a representation of a 2YO, and are rather far off from that still.
LLMs today are fancy machines. They don't think, any more than an ENIAC or the world's largest buildable loom would. They might appear to think, but they simply don't. They have no sense of agency or awareness; they only respond with what they're programmed to dig out of their training data, and that data is written by agentive beings and fed into LLMs by still more such beings, so they spit that out and sound agentive. But even a 2YO has more agency than that.
But that's only one difference. 2YOs also have bodies, with needs, and motivations (part of aforesaid agency). They can sense emotions and bodily needs. We don't know how to build an artificial nose (AFAIK) yet, and while it probably wouldn't be hard to go from that to an artificial tongue, we have neither, let alone artificial skin. And we don't quite have a robot body to wrap in that skin, for the LLM to operate and learn which things are okay to do and which are not.
While it seems barely plausible to put tactile feedback membranes all over a two-foot-tall robot and let an LLM train on it, we don't know how to pre-train it with all those 2YO drives like pain and affection and fear and joy and hunger and thirst and amusement and needing to poop and so on. It's not even clear to me how to train self-preservation, which we would expect to be pretty necessary, even in a world where we work hard to kidproof rooms in our homes for the real thing.
And this 2YO-sized robot would need an external power source, and the LLM would necessarily exist outside it, possibly wi-fi-ing signals in and out of it, and that might have critical effects on how hard it tries to avoid threats and seek necessities. If an LLM could even have qualia, what would it care for the well being of a body it controls only remotely, that it learns will be rebuilt or replaced if it induces harm on it?
By far the easiest and cheapest solution is to have a child and take your chances.
A child has different tradeoffs. For starters, it requires two people, including one putting in months of active engagement and incurring a nontrivial health risk. And if the child doesn't turn out as desired, it's impossible to wipe it and start over. And if it -does- turn out as desired, it can't be copied.
OTOH, the building phase can be quite pleasant.
Yes, very different.
How many people does it take to produce a working LLM and at what expense? And yes, you could wipe an LLM and start all over again - at what expense?
Obviously producing an LLM and producing a child are two different matters entirely. I think it is foolhardy to expect an LLM to develop a moral agency that is in any way superior to ours; trolley car problems, anyone?
The expense of restarting an LLM is interesting. It's tempting to think of it like reinstalling Windows on a desktop, but of course it's not that simple. OTOH, it doesn't seem impossible to roll an LLM back to some savepoint and retrain it to correct some problem, more cheaply than rebuilding from scratch. I'd need more detail.
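To make the savepoint idea concrete, here's a toy sketch. The "model" is a stand-in dict, not a real LLM; checkpointing real models serializes weights to disk, but the shape of the operation is the same.

```python
# Toy sketch of rolling a model back to a savepoint rather than
# rebuilding from scratch. All names are illustrative stand-ins.
import copy

model = {"weights": [1, 2], "step": 0}
checkpoints = []

def train_step(model, update):
    # Stand-in for a training run that shifts the weights.
    model["weights"] = [w + update for w in model["weights"]]
    model["step"] += 1

checkpoints.append(copy.deepcopy(model))  # savepoint before the risky run

train_step(model, 10)  # suppose this run introduced a problem

model = copy.deepcopy(checkpoints[-1])  # roll back to the savepoint
train_step(model, 1)                    # retrain from there with a fix

print(model["weights"], model["step"])  # [2, 3] 1
```

The cost question then becomes how far back the nearest clean savepoint is, and how expensive the retraining from that point is, rather than the full from-scratch cost.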
An LLM's moral agency is a different matter. I agree there are ways to create one that's obviously worse, but it's not clear to me that it can never be better than a human. I can at least say that it shouldn't use pure utilitarianism (which would get around your trolley problem concern), and it probably needs a robust mechanism for demonstrating skin in the game, like humans do. Likely other things as well.
Did you ever read the full write up that Anthropic put out after they tested Claude with a scenario that was morally very difficult to solve? It was reported as the AI was going to shut off the air to some room to kill a manager that was trying to take the AI out of commission.. it’s an interesting read.
I think you ran far from what I was getting at. I'm not asking about a robot, or an actual 2-year-old. What I'm asking is: if today's LLMs are somewhat like very smart human *adults*, why can't we do the equivalent for a child version? Just give it the data a 2-year-old hears and can process, instead of giving it all the data in the world.
When I think about how it worked for me, it "feels like" what went on is the 2yo weights in my brain were not blown away, nor necessarily even put up for recalculating, but instead an additional layer was overlaid atop my 2yo brain. Call this the 3yo brain. Perhaps the 2yo weights were attenuated, and a bunch more neurons introduced into the system. Meanwhile, my 2yo body grew into a more coordinated, larger 3yo body.
I know there are lots of problems with all this that an LLM can't solve. But looking at it strictly from the POV of a learning machine based in neurons, I see what you propose as doable. I had the idea too, and have it in a SF book I'm writing.
Wow, that's cool. DM please when it's done.
I think I see what you're saying. It could be interesting, sure.
That said, we can probably expect it won't behave anything like a real 2YO, due to the sensory input limitations I described before.
Will it behave less like a 2yo than an "adult" LLM behaves like an actual adult? If not, then I'm happy with whatever deficiencies it has.
- The kind of information a 2-year-old receives is dramatically different than the information an LLM receives. A constant stream of visual, audio, tactile, and thought input, physical emotions and sensations, is very different to images or sequences of tokens from the internet.
- A 2-year-old is taking in much of the same input that adults do, including language from adult conversations—and when they don't, it's often by choice. It's not like we stop a 2-year-old from reading a novel because it's not developmentally appropriate, rather we don't bother giving them a novel because they wouldn't be interested, since they can't understand it. The stuff that we teach kids is dependent on what they are capable of understanding and are in part determined by their own effort to understand, not gated due to concerns of "misalignment" or anything.
Do we show them porn or tell them Santa isn't real? I don't understand why we can't create a corpus of language/images/video that would be age-appropriate for what a typical 2yo processes, intentionally or not. Why is this hard, other than the labor involved?
We certainly could do that (and it would take a lot of effort, but I'm sure it's doable), but it wouldn't be very useful. An LLM would not process it in nearly the same way as a 2-year-old does, and the internet is a tiny tiny sliver of the information a 2-year-old processes. It's just not comparable. Just like how LLMs process information in a very different way to adult humans.
But I'm not trying to produce something equivalent to a 2yo. I'm trying to produce a 2yo *LLM* comparable to one of our current "adult" LLMs. If there's enough data on the Internet to create an "adult" LLM, then surely there's enough data to produce a 2yo LLM, no? (And if there isn't we can use an LLM to create more.)
"Just give it the data a 2-year-old hears and can process, instead of giving it all the data in the world."
Going by Scott's infants, you'll know you succeeded when the LLM wants the same book about dogs read thirty times in a row 😁
lol, that's a perfect heuristic.
I'm a psychologist, not someone in tech, but I have thought and read quite a lot about the kind of question you are asking. Here is my understanding. Those whose work involves direct efforts to add to and modify LLMs' usefulness, please correct me or add to this.
You cannot teach LLMs things in the way you teach people. You cannot give one a new piece of info or teach it a new general principle. Well, you can do that within a session with an LLM: you can, for instance, introduce it to a game it has never seen, teach it the rules of the game and what strategies are most effective, and then play the game with it. But after your session the info is not stored with all the other stuff the AI "knows." Same goes for starting with some kind of early, primitive form of LLM and improving its store of facts and its grasp of regularities and laws of different kinds. The knowledge it has of facts and regularities is the product of finding patterns in a vast corpus of human language. Last I knew, nobody understood very well how everything it absorbed is stored, but it does not seem to be in a form that is amenable to change by the processes by which human knowledge and understanding are changed.
Yeah, statelessness is a huge difference between even the best LLM and all humans except a few very unfortunate cases.
"Statelessness." I like that. I've never heard the term. Is that a term used in neurology for some godawful brain condition?
Just referring to how LLM’s don’t have real memory, just their training and scratchpad that reminds them what your name is. You can have a wonderful conversation with Claude and outside that instance it won’t remember anything or learn anything.
The human analogy would be someone with Korsakoff syndrome, stuck in an eternal present. It seems very unpleasant.
It's a pretty general computer science term. "Stateful" is the opposite term, less often used because it's more clunky than rewording the sentence. I haven't heard it used in neurology, since humans are, I hadn't thought about this before, extremely stateful, e.g. you can't so much as glance at a memory without affecting it in some small way.
So what does stateless mean exactly? That it only has one configuration, one “state”? Is a toaster oven stateful or stateless?
Quick history: "stateless" gets commonly seen in web development. HTTP (and HTTPS) was originally designed to be stateless: you put a URL in your browser, that URL goes to the server it names, the server serves up the page specified in the URL, and doesn't save any information about who requested it, how many times they requested it, and so on. Users were expected to be anonymous drifters on the net, pulling this document or that, with no relationship to specific servers. Even if the URL implied a database lookup, the server could do that, serve up the results page, and then forget it ever happened.
This didn't work well in scenarios where users _did_ need to be remembered because they'd interact several times with the same server - such as e-commerce, where you're adding stuff to a cart, then setting up payment information. If the URL to confirm payment is "https://store.com/payment-confirm.html", store.com has to know who's confirming, and which payment they're confirming, because maybe you're shopping for two different items in two different carts (or multiple people are using the same IP address, or...). So, various things were implemented (cookies are the most common) to simulate the server "remembering" who requested that URL, including what they had requested up to then. This series of URLs is commonly known as a _session_, and everything important in that session - the _state_ - is stored somewhere (combination of server and user).
TLDR: "stateless" means there's no memory of what happened before. "Stateful" means there's a memory. The terms can be descriptive like that, or prescriptive (no need of memory; need of memory).
Yes, but why *must* it be trained on a vast corpus of human language? Wouldn't a somewhat smaller corpus of simple human language (the zillions of possible things you might say to a 2-year-old) create an LLM with the intelligence of a two-year-old? Whether we understand how it works under the covers is beside the point, right?
Optimistically you would get a two year old who responds to anyone with a dime as though that person were their parent.
Pessimistically, you would end up with a master at deception. Maybe not much difference between those outcomes.
I don't get the koan at the end of the pessimistically part, but I absolutely want the 2yo who responds to anyone with a dime. That's the LLM I'm shooting for.
A significant limitation with all LLMs is that language is at best a lossy representation of reality.
If you further restrict the LLM to language expressible by / comprehensible to a 2 y/o, it's a *much* worse representation.
Most of a toddler's mental life is not linguistic, so such an LLM would differ from its human counterparts far more than unrestricted LLMs do relative to adults (in the aggregate).
I get that, but I don't know how to know it. But anyway, I don't want a 2yo in all its glory. I just want one that responds to questions so that I can figure out its morality. And start over with different input data if I don't like it.
2 year olds have inchoate senses of fairness (at least insofar as the unfairness isn't to their benefit), but morality? HA!
Interesting. How do you define morality? I define it as the rules for behavior with others. I think 2yo's know some of that. But if you're right, and they don't, then start with 4yo's, or wherever children start to demonstrate it.
Training on a different corpus is an interesting possibility, but it doesn't solve the main problem. If you trained an LLM only on a corpus of 2 year old language, and got an AI with the vocabulary and thinking patterns of a 2 year old, you could not improve its vocabulary, its knowledge, its reasonableness, etc. using the means we do with 2 year olds. They ask what the name of something is, we tell them, and they remember. We cannot add new words to the baby LLM's vocabulary by just informing it of new words and their definitions. We cannot add new things to its "mind" via that route, only by the original method by which it was trained: feeding it a big corpus of words while adjusting weights. For the same reason, we can't teach it general principles -- things like "when you make water really cold you get ice" or "animals and people get sad and mad when you hit them." (Also, the brain of a 2 year old develops and improves on its own over time. But even if it did not, the problem I mentioned earlier is the one that really makes your approach not feasible.)
Ah, you made something clear others were hinting at and I was getting. Thank you, and sorry everybody else. When I said "raise it", I didn't mean keeping the same LLM "alive" and feeding it new data. I meant taking the corpus of data that was used to create the 2yo's LLM (and whose morality we like) and using it as the base input data to a new "older" LLM, adding in new data appropriate for a 3yo (or whatever). So for each LLM, all the data would go in once.
>If you really wanted to align an LLM, you would start by building an LLM that represents the mind of a 2-year-old.
We have spent 28 years trying to build an AI that represents the brain of a nematode (which only consists of 302 neurons) and we still haven't pulled it off.* The whole reason we went the LLM route is because we can't just say "Let's make an AI that is as smart as a 2 year old." Instead we do a lot of gradient descent and see what pops out.
*https://ccli.substack.com/p/the-biggest-mystery-in-neuroscience
That was a great read, thanks. The idea that science, for all the boasting I've heard about how easy it'll be to repair/enhance/manipulate/exceed the human brain, hasn't been able to model a 302-neuron worm... that's eye opening.
I can look at the brain hype differently now.
Yeah, if we can't upload a worm why should we expect to be able to upload a human? Yet it seems many take it for granted that we'll get there.
Yes, but a lot of gradient descent on *what*? A choice was made to include every piece of data that could be acquired. Why can't we do a lot of gradient descent on a lot of simple data first, see what that's like, then add more and more complex data over time?
My understanding is that an LLM needs a truly enormous amount of data to train on. It might be that we simply don't have enough "simple" data to do this.
As in, there's not enough 2yo-appropriate content in the mass of content a normal LLM receives? If so, I agree that that would be a very good reason why it can't be done. But maybe we could use an LLM to create that data in bulk for us?
I've been really frustrated by the influence of Strong Towns over the Yimby movement, I think StrongTowns has a really bizarre and heterodox view of accounting and finance, wrote about it here. https://coldbuttonissues.substack.com/p/the-insane-political-economy-of-strong
It's true that having a cult of personality around your founder is a huge weakness for any group, and if you trawl Chuck's many writings you can find some examples that sound bad, like calling ASCE a cult of infrastructure and being triggered by CBA. I think you're wilfully taking those out of context in your section headers, making the article read as sensationalist in a way I wouldn't expect from an article linked from ACX, and while you do dig into some of the background in each section, your arguments still don't really go beyond "can you believe he said that!?" And it remains that you're cherry-picking some of the worst-sounding things he's said.
The exception is the asset vs liability item, which actually is core to the Strong Towns view, not cherry-picked. It seems you do kind of understand what he means: if you assume the city will maintain the asset, it has to pay money for it, making it a liability. Combine that with the fact that roads and pipes themselves aren't sources of revenue for the city, and a road becomes an ongoing expense that has to be justified. Accounting terms aren't sacred, and pointing out that standard accounting doesn't, well, account for the expectation of ongoing maintenance seems pretty valid here. Your argument doesn't actually dispute any of the substance, just that he defines his terms differently than the authorities do so he must be a crackpot.
You're right that the city could avoid "insolvency" by just not maintaining the road, and that's exactly what Chuck says happens when a city slips into insolvency without being able to grow out of it. But most citizens would expect the city to maintain the pieces of infrastructure it builds, so there is some valuable meaning there if the city can't afford to. Redefining the word "insolvency" to capture that state isn't without merit.
> You're right that the city could avoid "insolvency" by just not maintaining the road
Can they actually even do this? If I own a house or business on a city maintained infrastructure, do I have no legal protection whatsoever from the city just unilaterally deciding to not maintain my access, water and power?
https://www.law.cornell.edu/wex/tax
The government has no legal obligation to spend tax money (as opposed to e.g. road tolls) in any particular way that an individual citizen or group of citizens wants.
Looking at it, if the government unilaterally removes my access to water or other infrastructure and severely damages my property value as a result I probably am entitled to compensation under the takings clause. So yes, they can choose at any time to stop funding infrastructure but they likely would need to make whole the end users of said infrastructure who are harmed by that decision.