In getting smarter, we have accidentally upgraded our fears to existential ones, and we are paying for it in chronic anxiety and cultural despair. How deeply does fear shape our world? Why can’t rationalism protect us from panic? And what would life feel like without fear?
This essay argues that the next step in human evolution isn’t a faster brain or a deathless body — it’s learning to live, plan, and care without being afraid.
https://kaiteorn.substack.com/p/on-fear
Anyone know what exactly the deal is with Amazon Web Services going down? Seems like a big deal. Also, now I'm at work and can't do anything because of the outage.
Is everything out? I just woke up.
I guess there was an outage and a ton of stuff went out. They said they mitigated whatever the issue was a few hours ago, but the program I use at work (Autodesk Revit) is still not working at my office or the other offices we collaborate with. I don't know how it all works, so I'm not sure why some things would still be down and others wouldn't.
I think it's ethically justified to suggest:
Why don't AI safety researchers use the Tyler Cowen "Poisoned Strawberry" strategy to wage war on frontier LLMs and slow down the pace of AI research?
[Update: this whole suggestion only makes sense in light of new research. Apparently it took very few documents to hijack a model, and the amount required didn't grow with model size; it was, again apparently, a fixed amount. https://thezvi.substack.com/p/ai-138-part-2-watch-out-for-documents?open=false#%C2%A7the-dose-makes-the-poison]
https://x.com/tylercosg/status/1977458185368932626
> the plan? we find an obscure but trivial question akin to the number of Rs in “strawberry” that claude gets right. then, we plant hundreds of documents across the internet that will activate when our competitors’ models are asked the question. our documents will cause those models not only to get the answer wrong, but to spend thousands of reasoning tokens in doing so. the triviality of the question will cause it to go viral online, causing millions of users everywhere to send the same prompt. as our competitors notice a rise in the number of tokens processed, they will wrongly believe it is due to increased usage, causing them to pull more compute towards inference and away from training. this, along with constant dunks on the timeline about the model failing our easy question, will annoy their top researchers and cause them to leave. and which lab will they join? us of course, the only company whose model doesn’t make such stupid mistakes. their lack of top researchers will mean their next model will be somewhat lacking, leading to questions about whether their valuation is really justified. but all this vc money has to go somewhere, so we raise another round, using our question as evidence of our model’s superior intellect. this allows us to spend more time crafting sleeper agent documents that will further embarrass our competitors, until finally the entire internet is just a facade for the underbelly of our data war. every prompt to a competitor’s model has the stench of our poison, and yet they have no way to trace it back to us. even if they did, there is nothing they could do. all is finished. we have won.
(and if companies eventually *do* discover countermeasures, that's a win-win for everyone, right?)
From _The 70 Maxims of Maximally Effective Mercenaries_: "Don't expect the enemy to cooperate in the creation of your dream engagement."
Tyler's hypothetical requires the general public to be excited enough about the novelty of the strawberry question to virally repeat the query, but not so quick to tire of novelty that they get bored with asking it after a few days. It requires the targeted AI companies to be alert enough to changing trends to redirect compute, but not alert enough to recognize that the rise in compute demands is due to a flaw in their product gone viral. It requires top researchers to be smart enough to be irreplaceable at a company, but not smart enough to recognize a strawberry attack that is now part of the published literature.
As dubious says, the hypothetical is at best a fiction plot, not a business strategy.
This seems silly and ineffective, in a poorly-written-movie-plot style. "Plant hundreds of documents": will mere hundreds of documents make a difference? Why are they saying "planted" as opposed to "posted on a bunch of sites we control", other than to sound sneaky? How will these "activate" when _the models are asked the question_, and why would that change how the model is trained?!
It really goes downhill from there. If it "went viral" _and_ users everywhere asked the same question, this _would_ be increased usage. But what does "causing them to pull more compute towards inference and away from training" even mean? Why would they not notice and account for meme problems much as other companies have? Why would this "annoy their top researchers and cause them to leave," when this is exactly the kind of problem top researchers are likely to be nerd-sniped by?
None of this makes any sense. I think no one serious would bother.
Please see my updated post.
I'm looking for epistemic help understanding what is actually happening on the ground with ICE and Border Patrol, and believe this commentariat is both thoughtful and varied enough to help get to truth here.
Broadly, any increase in immigration enforcement necessarily involves tradeoffs. More enforcement means more risk of false positives, and more risk of acts of cruelty or violence.
I'm mostly experiencing left-leaning media as characterizing what's happening as (a) there's tons of cruelty and violence and it's actively encouraged at all levels, and (b) nobody at any level actually cares about the false positives at all, they're just grabbing anyone they can get away with.
I'm mostly experiencing right-leaning media as characterizing what's happening as "yes, there have been a small number of cases where agents were unusually violent, but they're facing a lot of pressure, this isn't intentional but just the necessary consequence of those tradeoffs shifting, and by the way we should talk more about the violence AGAINST these members of law enforcement, that's the real problem."
Both sides seem to largely agree that masked, unidentified individuals who are probably (but not necessarily) actual members of law enforcement are a bad thing, and that ideally the masks should come off, but right-leaning coverage is much more sympathetic to the concerns of individual agents on the ground that could be motivating the belief that they have no choice in order to protect themselves and their families.
Questions I'm trying to answer:
Q0: Am I doing something reasonable, for the purposes of answering these questions, by munging "ICE" and "Border Patrol" and possibly others into what I think of as one group of "immigration enforcement"? Or are there important-to-these-questions differences I should be thinking about?
Q1: What are the actual statistics regarding incidents of unusual cruelty or violence perpetrated/initiated/exacerbated by immigration enforcement officials? How do these compare to what would be expected a priori given the priority that's been placed on deporting illegals?
Q2: To what degree does the average immigration enforcement agent believe their orders to be to intentionally be cruel or violent beyond what's strictly necessary to arrest a target?
Q3: To what degree does the average immigration enforcement agent believe their orders to be to intentionally attempt to deport, or at least harm in some way, individuals who are not here illegally, but who plausibly appear to be or are clearly sympathizing with those who are?
Q4: I understand that law enforcement culture makes it genuinely hard to question orders, but IF the orders are believed to be *incentivizing* the behaviors in Q2 or Q3 (which I think most individuals would say are not morally desirable), to what degree are there indications that some immigration enforcement agents are pushing back? How does that manifest?
Q5: To the degree there's any of the orders described in Q2 or Q3, should we understand these mandates to be getting *amplified* by others within the chain (e.g. a supervisor adds even more pressure to the on-the-ground agents to behave this way, because the supervisor wholeheartedly endorses the orders), vs getting *dampened* (e.g. the supervisor disagrees with the orders so attempts to limit the degree to which they follow them while still being able to plausibly claim to be giving it a shot)? Obviously there will be individual variability here, but I'm trying to understand, in general, what level the "intent" is coming from. There are worlds where someone jumps on an excuse provided by those above them to act as they always dreamed, and alternate worlds where someone feels duty-bound to execute on orders that are somewhat contrary to their own beliefs as an individual, and I think this is important for anyone trying to understand how likely these behaviors are to last/get-worse/get-better or where to focus their efforts to change outcomes.
Q6: To what degree, if at all, should it be taken seriously that these specific agencies might be used as a "secret police" to target other kinds of people the administration dislikes, completely unrelated to immigration? E.g. should the average American citizen worry about an ICE agent being more likely to do such things, compared to e.g. an FBI or Secret Service agent?
I'm really trying hard to stay focused on factual understanding here and avoid the risk of this question turning into a culture-war discussion; I'm not interested in debates around what the tradeoffs/policies should be, where the moral line is drawn, etc. I just want to understand what's actually happening within these agencies right now, so that we can all form our own (informed) opinions about it instead of instinctively responding to the people shouting loudest.
With regard to Q0: CBP (Customs and Border Protection) mostly operates at the border. ICE (Immigration and Customs Enforcement) is divided into ERO (Enforcement and Removal Operations) and HSI (Homeland Security Investigations). HSI deals with trans-national crime. So your questions mostly concern ERO.
Your other questions are good questions, but I don’t know the answers. Supposedly “the agency tracks every time officers use force” [1], but I doubt that information is available publicly.
[1] https://www.nbcnewyork.com/news/politics/ice-agents-recruiting-hiring-budget/6374886/
Why have nearly all Peruvian presidents in the last 35 years ended up impeached, arrested, or both?
I wrote about the question here: https://meirbrooks.wordpress.com/2025/10/17/perus-blues/
I’ve wondered about this for some time, but the immediate trigger is the ousting of President Boluarte last week: https://apnews.com/article/peru-president-boluarte-impeachment-crime-shooting-concert-d3bc438bdec704c0c2bd21ef981f95eb
Some factors seem plausible: a unicameral legislature; a fractured Congress that means little partisan support for the president; the idiosyncrasy of a leading opposition figure who barely missed being president for three straight elections; plus the Odebrecht scandal that rocked all of Latin America. But these didn’t feel sufficient to explain the sheer frequency of these incidents relative to neighboring countries. The one Peruvian guy I know said he’s long had the same question but no good answers. Would be glad to hear any thoughts!
South America does have a reputation in general for corruption, revolution, and unstable governments. Why Peru in particular is a good question. Perhaps the influence of the Shining Path in the '80s predisposed the country to the kind of 'vote for a strongman' reaction that produces corrupt leaders, unpopular with the other parties, who then push for impeachment, as we see elsewhere?
Plenty of strong men in other Latin American countries (and beyond) in recent history; nothing similar in terms of impeachments as far as I know.
Makes me immediately think of South Korea, where pretty much the same thing happened. Is there anything in common between the two countries?
They're both presidential systems!
Nearly all countries in Latin America have presidential systems. One way Peru is different from neighboring countries is (apparently) the presence of a Prime Minister in addition to the President, as in the French system: https://en.wikipedia.org/wiki/Semi-presidential_republic
But SK doesn't have that feature and I'm not sure why that would matter, as the PM seems to have very little power.
Wiki lists 4 successful impeachments for SK (successful in that the impeachment vote passed, not that the president was ousted, i.e. by this count Trump was successfully impeached twice). Twice a president was removed.
Equivalent numbers for Peru are 7 and 4, which is something of an undercount because one president resigned to avoid impeachment, etc.
You need two-thirds of parliament (it's a unicameral legislature of 130 members, so 87 votes) to impeach. This threshold has been frequently surpassed in recent history: the most recent impeachment was unanimous, 122-0 (out of 130)! Getting two-thirds majorities of legislative bodies after multiple elections in a democratic country seems bonkers to me.
It seems to me that Peruvians have come to see an impeachment vote in their system as little more than the vote of no confidence found in most parliamentary systems. That is: it's not an affirmative claim that 2/3 of the country thinks the president is unqualified, but rather a sign that fewer than 1/3 of legislators are happy with the current coalition agreement.
(Of course, this doesn't explain why other South American countries don't impeach the same way. But public perceptions of constitutional procedures are often quite historically determined.)
For the conference/unconference: I want to commend and endorse 100% your insistence on using the term "conference" rather than "unconference" to describe something that, based on current common usage of the term in English, is a conference.
I can just about conceptualize the outlines of what an unconference might be, but it would certainly not involve people with an interest in a particular topic gathering at a pre-advertised time and location, to hold discussions on that topic.
Nothing highlights the unseriousness of a movement or a cause more than inventing a new term for a common concept, and a tone-deaf insistence on using it, #actually.
AI "will kill everyone", but it's important that we now say "unconference".
Reading the linked page it sounds a bit hippy-dippy (I'm sure there are more current terms to describe this mindset, but I am old, Father William):
"OK then, what is an unconference?
We start together, build the agenda together, then split into parallel sessions.
We have only one law, that I adore:
the Law of Two Feet: If you’re not learning or contributing, move to another session (or take a break, or do what nourishes you). Guilt-free.
And 4 (and some) principles:
Whoever comes are the right people.
Whatever happens is the only thing that could have.
Whenever it starts is the right time.
When it’s over, it’s over.
(Take responsibility for what you care about.)
Format (in this event): Opening circle → marketplace (we propose sessions) → four session blocks → group synthesis → social"
I notice that there doesn't seem to be any mention of tea breaks, so presumably you have to bring your own or go out and buy something to eat/drink for the marked break time instead of getting a free cuppa tae. Tsk! Not very sociable! Maybe they *will* provide same but just haven't mentioned it explicitly, but the only benefit of going to these things is you at least get a cuppa tea and a bikkie on the house.
So it's a conference with no pre-planned agenda? (Or rather, it sounds like the agenda is Step 1: Find out what people want to talk about, Step 2: Breakout sessions to talk about that.)
Like, I don't know how productive that is, but that sounds different enough to be worth giving a different name.
NY State Assemblyman Alex Bores has announced he's running for Congress. Bores was the sponsor of NY's RAISE Act, which passed before California's SB 53 and is similarly aimed at addressing catastrophic and existential risk from AI. He seems to be making AI safety a priority, and I've heard from several people in the field that they'd be really excited to see him in Congress.
As far as I know one other person is officially in the race (Micah Lasher, who's probably the favorite). The seat is currently held by Jerry Nadler, who isn't running for re-election, but Lasher is seen as the "heir apparent."
This is my neighborhood - Lasher is heavily favored to win. My sense is that advancing the cause of AI safety might be better served by converting Lasher than trying to get Bores to win outright. I'm not super familiar with Lasher but he seems persuadable on issues where he thinks he can make a difference.
Going back to the Fatima article (which I read in its entirety but didn't have time to see all the comments, so I apologize if this already got covered):
If I understood correctly, Scott's materialist explanation of the miracle is a coming together of three circumstances:
1) a specific type of (relatively common) cloud cover through which the sun can still be seen and which allows people to stare at it for a long time without damaging their eyes
2) this staring can cause people to see strange colours and a sensation that the sun is pulsating, rotating etc
3) the people staring are also psychologically primed to see a miracle
He addresses the objection that other observers in villages quite far away did not fulfill these three conditions by providing evidence that they, too, were psychologically primed (because if they were not expecting the miracle at all, the materialist explanation falls apart).
However, even if they were expecting to see a miracle, by all the testimonies they saw the miracle in the direction of Fatima and not in the direction of the Sun, which would have been completely different from their perspective. So in order for this to work, it would mean that they had to be outside at just the right time (presumably because they were expecting it), then they had to stare in the direction of the partially cloud-covered Sun for a while until they started seeing weird stuff, then they had to turn in a completely different direction to look at Fatima, where they then witnessed a Sun miracle in that completely different direction.
I was skimming a history of the Abbasid caliphate and ran across a reference to the Zanj Rebellion, an event I had never heard of. I read more into it, and got the gist that there was a large African slave population in modern-day Iraq, performing hard and dangerous labor like building and maintaining dikes and canals. A charismatic slave led a multi-year uprising that turned into one of the biggest internal challenges the Abbasids faced.
My immediate thought was that this Abbasid slavery practice sounded like chattel slavery in the American South. But I went through college assuming the talking point that the US essentially invented chattel slavery was true.
I'm confused now. What is chattel slavery and what isn't? Were the helots in Sparta chattel slaves? Were enslaved Africans in the Abbasid caliphate? Or is chattel slavery more narrowly just a 17th-19thC-ish western thing?
As Korakys says, chattel slavery refers to a system in which slaves have identifiable owners. This is more or less necessary wherever "slave" is a legal status, because a slave's relationship to the legal system runs through his owner.
The practice is presumably older than history is.
Note that on objective metrics like "health" and "surviving children", American slaves were among the best-off anywhere in the world.
(Chinese imperial eunuchs might be considered better off, but obviously they didn't manage to have children. The Mamluks were definitely better off. At some point you have to question whether you want to include them in the category "slaves". They were formally slaves, but they were also formally in charge of the rest of the society.)
A point worth noting: even under Jim Crow, supposed to be cruel and harsh, there was no emigration of American blacks that I've heard of; there were no refugees fleeing oppression.
I've been reading the Bible for the first time and it has quite a few rules in there on what to do in various situations regarding slaves.
Chattel is being used to differentiate it from milder forms of servitude that people sometimes claim are slavery but aren't what most people first think of, like serfdom or "wage slavery". It means slavery where you can buy and sell people freely, rather than the slaves being bound to the land or something like that.
Needless to say, many, many societies practised it prior to America, most notably the Romans. I think this is a case of some Americans not really internalising the fact that the rest of the world exists.
Doesn't chattel slavery mean slavery for purely economic reasons? As in, the slaves weren't your enemies who you captured in battle, nor were they dudes who owed you a bunch of money they needed to pay off. I think in Ancient or Medieval times, these were commonly (almost always?) what slaves were, no?
Thanks, by that definition it doesn't seem like the US slavery situation was exceptional at all.
However, I do think the fact that the slaves were neither prisoners of war, nor people who owed you a debt, but were, rather, just some random easy-to-catch guys who would allow you to run your business much more cheaply makes that type of slavery exceptional (and to my mind, much more abhorrent).
> However, I do think the fact that the slaves were neither prisoners of war, nor people who owed you a debt, but were, rather, just some random easy-to-catch guys
That's not really right either. Originally they *were* prisoners of war. Their captors then sold them to Europeans to be sold into American (North or South) markets. The Europeans weren't going around capturing random Africans; they bought them, already enslaved, from more victorious Africans. That's generally also what the Arabs did, although I'm pretty sure I've heard that sub-Saharans who made the trip to Mecca might be enslaved for their trouble.
(As the trade developed, there almost certainly was a growing element of "we caught these guys because the market is good".)
Because American (USA) slaves had such an impressive rate of natural increase, this came to be a much less significant source of slaves there. Instead, slaves were mostly born into slavery as the child of local parents.
There's nothing historically unusual about slavery being an inherited status. But the American (USA) situation is unusual in that the population of slaves was able to sustain itself this way. Further south, they constantly needed newly imported slaves.
---
It isn't really clear to me why slaves captured for the purpose are more offensive to you than slaves captured in war. As far as I can see, those aren't different situations. If you're taking slaves from another group, you are at war with that group. And the main purpose of war is to confiscate resources belonging to the other side. Slaves are one of those resources.
Because in a war it quite often might have been the case that the "other" side was the aggressor and when my side happened to be victorious I used the captive bastards who had been attacking me for decades as slaves to compensate all the suffering and loss they had caused me.
In the African slave trade, as far as I have read, the "wars" going on were specifically and primarily to go out and capture slaves for sale. I agree that in some sense they were just another "resource", but it seems somehow uglier that one would make war on another group only to enslave them, rather than just to steal their riches or their territory.
But I can understand not seeing that as a big ethical difference.
A review of a Bob Dylan song made me want to try to write something in the style of Dylan. I remembered a song of his, but couldn't remember any of the words, so I was trying to go through titles that might possibly have been the song, and re-stumbled across The Hurricane:
Been a while since I listened to the fully political ones. Man, he pulled no punches at all. That... would suck to try to copy. The worst-case scenario would be managing to pull it off.
And then I found the one I was originally looking for: It's Alright Ma
Time to update the Milei scorecard? Begging Trump for a $40 billion bailout hardly seems like the kind of thing that counts as successfully turning the economy around.
It's a complicated situation. My understanding is that the anti-Milei party won a local election, the market panicked that Milei would lose, they sold off pesos, and the US intervened to prevent a currency crash. So it's not really a clear-cut "Milei good" or "Milei bad" narrative, just more of Argentina being a basket case.
Scott Bessent has said he wants $20 billion from other sources to complement the $20 billion from the U.S. government. I’d suspect the odds of that happening are fairly low. Trump has suggested more money will not be forthcoming if Milei’s party does badly in the Oct. 25 election. If the party does well, the Administration may figure that the promise has accomplished its objective, and see no reason to actually follow through.
Why aren't therapy books used more in the psychiatric system? Scott has mentioned in some posts that if you're seeking help for a specific problem/illness (he mentioned depression, anxiety, OCD, maybe borderline?) then reading a therapy self-help book seems to have as good results as therapy.
Why isn't it utilized more? I can imagine for many patients it also feels less scary than speaking to a human. Not to mention it's a billion times more cost-effective and would take pressure off the (in my country with socialized healthcare, at least) completely overloaded psychiatric system with its insane wait times, etc.
I can read self-help books till my ears fall off, but I'm not going to implement any of the changes unless there's a real person standing over me with a whip to force me to do so.
I don't know how responsive that is; the claimed result is "reading a therapy self help book seems to have as good results as therapy".
That's different from "implementing the recommendations in a self help book seems to have as good results as therapy". Medicine is riddled with this problem; most notably, taking a placebo faithfully leads to much better outcomes than taking the same placebo irregularly.
One of Scott's covid articles made the observation that masks-for-the-public research is more about the intervention of telling people to wear masks than about the intervention of having people wear masks, because you can't do the second thing. You can tell people they should wear masks and see what results you get, so that's what the research measures.
But in this case, that's 'good'; the research is also measuring reading the book more than it's measuring "correct implementation of the book".
I can only speak from experience regarding depression, but if you ask a depressed person to do x or y simple thing (exercise, reading self-help, eating better, etc.) that is proven to help alleviate symptoms, they usually will not do it. Metaphorically, they are standing at the bottom of a cliff, looking up. You might be shouting down to them "There's a path up!" but all they hear is "You can just leap to the top!" or "You can just scale the cliff!"
Seeing a therapist is kind of like sending someone down to guide them to the path. It might be arduous, but it's easier with a sherpa.
Your comment reminds me of one of my favorite ACX essays of late: https://www.astralcodexten.com/p/singing-the-blues I also agree with Deiseach. I'm currently in physical therapy for a back issue, and the willpower needed to exercise on my own is gargantuan compared to the willpower needed to spend an hour with my physical therapist. Humans are an odd lot.
An important factor is likely that many people just don't really read books anymore, so assigning reading to people who are already more likely to have attentional issues would probably be fruitless for a large percentage of them.
It also doesn't help that patients can differ significantly in their reading proficiency, so you would probably have to take care to assign books on an individual basis to match patients' abilities.
Scott, I don't know if you've seen the concerns raised about cheating in the review contest. But if you have, and you just ignore them and don't address them head-on and take an explicit position on what you consider fair behaviour and why...I'm seriously considering cancelling my subscription.
I feel like it doesn’t really matter? It’s an extremely low-stakes review run by one guy. There isn’t even really an easy solution for influential people using that influence to encourage others to vote. Unless there’s a serious attempt at mass manipulation (which I believe there was in the past that was found out) it seems like things have worked pretty well so far.
I do think people should be more aware that there is a genre of rationalist-adjacent media that boils down to, “here is a thinly-veiled ad for my startup/advocacy group”. I’m not totally against this, since a lot of the startups/advocacy groups are legitimately interesting and working on important problems, but it does lead to some conflicts-of-interest.
Do you mean the person who copy-pasted a Gemini response that the Alpha School entry contained signs of "marketing techniques"? I haven't tried it, but I predict that if you ask Gemini a leading question about any other ACX review for a product (maybe the mashed potatoes one) it will say the same thing.
I think he's referring to another comment claiming (with a working link) that there was a vote-soliciting pinned tweet, from a high-follower-count account, asking people to vote for the Alpha School review during the voting phase.
I'm not sure what the rule here is - it doesn't seem feasible to say that nobody may ever recommend a review. Liemandt has 7000 Twitter followers and his recommendation doesn't seem outside the range of normal things that don't shift the vote too much.
I re-ran the analysis to compare votes before and after Liemandt's tweet. For first-choice votes only (too annoying to rerun the whole thing), the numbers are:
Total voting period: Alpha 15.7%, vs. Ukraine 11.6%
Pre-10/5: Alpha 14.4%, vs. Ukraine 12.6%
Hopefully this will satisfy people that Liemandt's tweet did not change the ranking. I have no solution for the general case except to limit the voting to ACX subscribers, which has some technical disadvantages and doesn't avert the fact that people could just subscribe and then vote. Limiting it to paid subscribers might work, but this would change the nature of the contest and I'm not willing to do it without some issue bigger than a 7,000 follower Twitter account praising one of the entries.
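For anyone curious what a re-run like that involves, here is a minimal sketch of the before/after first-choice comparison, assuming a hypothetical export of the voting form. The file name and column names ("votes.csv", "timestamp", "first_choice") and the review labels are illustrative assumptions, not the actual contest data or script.

```python
# Minimal sketch: compare first-choice vote shares over the whole voting
# period vs. only before a cutoff date. Data layout is assumed, not real.
import pandas as pd

votes = pd.read_csv("votes.csv", parse_dates=["timestamp"])
cutoff = pd.Timestamp("2025-10-05")  # date of the tweet in question

def first_choice_share(df: pd.DataFrame, review: str) -> float:
    """Fraction of ballots whose first choice is the given review."""
    return (df["first_choice"] == review).mean()

for label, subset in [("Total voting period", votes),
                      ("Pre-10/5", votes[votes["timestamp"] < cutoff])]:
    alpha = first_choice_share(subset, "Alpha School")
    ukraine = first_choice_share(subset, "Ukraine")
    print(f"{label}: Alpha {alpha:.1%}, vs. Ukraine {ukraine:.1%}")
```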
Please consider the implications of billionaire investors co-opting your personal blog’s review contest into advertising their venture capital equity investments on a level deeper than a person “ever recommending a review”.
Thank you. That's what I consider an investigation and response. I do think much more discussion is needed: you focus on the number of followers (7000), Eremolalos focuses on the 35,000 views.
Mainly, I think the lack of a clear rule on this isn't fair. It encourages the more brazen, less scrupulous types of people to push and test the boundaries, to see what they can get away with. The rules are ambiguous enough that they can go very far and claim to have never violated an explicit rule. But the rules are also ambiguous enough that the more scrupulous and anxious types of people can worry that they are breaking rules and fear punishment for even slightly debatable things. Is this really the dynamic you want in your contests? Do you really want to reward those brazen risk-taking types and punish and disadvantage the scrupulous types? I don't understand why you of all people would want that, and why it's so difficult to make a clear rule on what level of outside engagement is acceptable. Multiple people have asked for such a ruling in the past.
I also think it's extremely relevant whether or not the review author knew of this solicitation. I don't know whether or not you think it is.
But that said, I appreciate your response and I'm sorry for my aggressive comment. I am, I confess, suspicious that you would have responded had I or another said "do you think you could maybe please look into this Scott?" but only you know the answer to that and you can't possibly prove it to me either way.
What rule do you recommend? There is no way for authors to prevent third parties from signal-boosting their work, and if someone does, I will never be able to prove that the author wasn't involved.
It would be a bit presumptuous of me to say what *the* rule should be; I'm not running the contest, I didn't expect to have that power, and I'd have to think about it for a while.
My argument is that there should be *a* rule, something to work with, so we're all on the same page, a common knowledge coordination point, etc etc.
To that effect, I'd consider any of the following to be sufficient:
"No promoting your review outside this blog, no bringing in friends or other outsiders to vote for you. I may not be able to always prove this right away, but if I do find evidence I will disqualify, even retroactively if necessary."
"If someone else promotes your review, I will hold you responsible, so you'd better do all you can you discourage it."
"You can tell a few friends and/or mention it on other platforms, but if there end up being enough outside votes to substantially shift the result, you've crossed the line."
"Outside promotion is pretty much fine. If you are able to get a massive audience to brigade votes from, you've earned your success."
"Basically, if I can prove you solicited votes, you're disqualified. Either keep it a low enough level that I won't notice, or be really good at hiding it."
"The only rule is don't piss me off. I'll decide what I'll allow and what I won't, after it's happened. So yes you need to make an educated guess at reading my mind."
You might say you're already applying the last one. I submit there's a universe of difference between applying a rule like that explicitly, with common knowledge that that *is* the rule, and applying it implicitly, where it's just an approximation of the practical result of the rules being unclear. In the first case someone who crosses a line can expect a response like "well, you tried to read the judge's mind like we all were and you failed, better luck next time"; in the second case they can expect something more like "you disgusting cheat, you should have known what the rules were". Compare a game like poker, where bluffing and reading secret information is part of the game, vs a game like (I think?) tournament bridge, where it's considered cheating. Compare a sport where some action is explicitly allowed as long as the referee doesn't see it vs a sport where it's clearly against the rules, but lots of people secretly do it anyway. Compare a dating context where it's explicitly understood that guessing what will cause discomfort or not is the entire aim vs one where that which causes discomfort was "obviously" just objectively wrong and there's something wrong with you if you didn't realise that.
I have very strong feelings about this whole category of fairness and social honesty, which is my answer to anyone who asks the (incomprehensible to me but amazingly common even around here) question of "why do you care so much?"
Who do you think cheated in the not a book review contest and in what way? I tried looking through comments but couldn’t find the accusation, so it may have been buried
Claims that the founder or co-founder or guy associated with Alpha School linked to the review and urged everyone to go vote for it. That is expanding beyond the pool of ACX readers and the complaint, so far as I understand it, is that a lot of outside votes would come in and artificially inflate the result; ballot box stuffing, if you will.
Thanks, that is concerning. I didn’t even vote this year because I only had time to read half the reviews and thought that would be unfair. It’s likely these voters didn’t read any of the other reviews.
Yeah, that's what is making people mad. The outside votes just came in (if they did come in), voted for Alpha School, and that was it. It wasn't about the review contest, it was boosting and publicity for the school.
I think Scott is right that you can't really stop third parties from doing this, but you can (maybe) make it a rule that reviewers can't shill for votes in that naked a way. If they link to their review, the people coming in should vote for other reviews as well.
This is kind of the last straw for me. Many comments have been left over several years of book review contests politely arguing that certain things can be improved or are unfair in some way, or asking for you to clarify what is acceptable and what isn't when ethical intuitions diverge. My impression is that almost all of these have been ignored. Not even a one line response saying you've taken note of the concern and are giving it serious consideration.
I also think you've been very unwilling to say anything critical of EA/rationalism for a long time. I've found that very hypocritical when the first essay of yours I read and the one that made me love your writing was I Can Tolerate Anything Except the Outgroup with its emphasis on the importance of criticising one's own "tribe".
So these are two different issues I have with you...not engaging with your own comment section's concerns and objections, and being unwilling to criticise your own tribe. This issue seems to combine them. Several people are angry about the apparent vote stacking and think it's blatantly disqualifying and want a clear response from you. And such a response involves taking a potentially critical stance on a possibly rationalist-adjacent institution (maybe the Alpha School isn't rationalist adjacent, I don't know, but it seems to have overlap in ideology and according to some info provided in a comment on the Review Winners post it has shady connections with a number of other actors).
So even if each of those former concerns is inaccurate or can be excused...this one I think absolutely demands a response.
This comment is aggressive but my experience is that nicer ones get reliably ignored.
For what it's worth, if it were up to me I would have specifically ignored this comment because it was too aggressive, but enough people brought it to my attention that I am responding to it just to satisfy them. I have asked them not to bring aggressive comments to my attention in the future.
That's fair and I don't want to encourage aggressive comments and I am upset about the way they tend to more reliably generate engagement.
But I'm also upset when I write a long and thought-out comment politely objecting to something, full of disclaimers and reassurances that I don't mean to insult and including clarifying questions instead of accusations and so on... and just get ignored. I feel like this has happened many times in several years of book review threads, as well as in other contexts, but I admit my feeling could be wrong.
> I also think you've been very unwilling to say anything critical of EA/rationalism for a long time.
Very different perception here. For example, there was recently a review of If Anyone Builds It Everyone Dies which very explicitly criticised Yudkowsky et al's approach to AI doomerism. If this isn't an example of criticising your own tribe, then what is? Surely you don't expect someone who is part of a movement/group to periodically criticise _all_ of said movement/group just to show they can?
Very different perception here, to borrow a phrase. The review was way too deferential toward the authors, and even upon a brief realization that maybe Y. isn't interested in addressing even mildly challenging questions* Scott keeps treating him as some kind of a sage**.
And then swiftly bans people for mildly irritated comments toward Y. in this thread***, while completely ignoring a deteriorating clownshow that open threads tend to descend to with little help of shitposting Wimblies.
It's mostly about not criticising elements of the lifestyle and subculture and ideology. It's merely my impression and I could be misremembering or wrong. I also could be holding Scott to an unreasonably high standard. And in this context I was trying to be confrontational to make sure the main complaint couldn't be easily disregarded.
But I came to follow Scott primarily because he seemed to be one of the only people who valued truth intrinsically and who was willing to openly criticise things like wokeness and cancel culture at its peak while being left-leaning himself. He seemed to be one of the only people who would criticise his own side. So I'm very sensitive to signs he may be actually not living up to this principle when it comes to his "real" tribe.
Your link is for subscribers only so can't comment on the specifics. But it's sounding from the outside less about unwillingness to criticise his tribe than about unwillingness to criticise specific aspects of his tribe with which you personally disagree.
All right I'll paste it here. Scott has replied to my original grievance, so I don't want to attack him any further. But since I'm being challenged on this point, and Scott also challenged me on my current tone being too aggressive, I feel the need to defend myself.
Maybe the following is also aggressive, but I thought it was polite but direct. I'm raising it only to clarify part of the general perspective I have, rather than as a challenge or attack or demand for engagement:
"Scott...I mean the following as politely as possible. But I think you've got an extreme bias, a blatant double standard, that you are, dare I say, a hypocrite here. And I haven't ever seen you address this. (Of course, that doesn't mean you didn't.)
Two parts to my problem...
1. I think that's a good essay. By which I mean, it makes some criticisms in a striking way that I *really want to see rationalists address*. And that, I feel, I have very rarely seen them address in any kind of direct way. The prima facie arrogance of taking utilitarianism as somehow obvious when ethical theory is a notoriously difficult problem and only a distinct minority of professional philosophers think it's logically defensible. The obsession with using rationality to increase one's status and "win" instead of advancing truth for its own sake. The simultaneous treatment of AI as an existential risk and also as a path to utopia, both positions routinely held by the same people. The open contempt for centuries of philosophy and the unironic belief that it can all be ignored because one guy in the late 2000s figured out all important truths from scratch in a fanfic and some blog posts. The creepy conflation of "people who want to think more accurately" and "people who live in the Bay Area and love orgies" with very little apparent embarrassment at this widespread conflation.
I accept that not all of these are fair descriptions. In fact, my prior is very high that some (maybe even all) of them are inaccurate or irrational accusations. But I don't actually really know what the response to them is supposed to be, because very rarely do I see any rationalists actually confronting the criticisms head-on and making a full-throated defence of their movement. You've sort-of done it a few times (e.g. the consequentialism FAQ, the tower of assumptions) but not nearly as often as I think ought to be done by someone who mostly treats all the above things as presumptively reasonable and good, despite their plethora of obvious serious objections, and who is otherwise very careful at pointing out the problems and blind spots with other movements and ideologies.
And mostly when these things are brought up in the comments I see a lot of vague deflection, a lot of detailed nitpicking, a lot of lazy linking to The Sequences, and a lot of irritation. The vibe I get (which I could of course be getting completely wrong, I totally accept that) is that there's very little interest among most rationalists in seriously relitigating the foundations of their movement and ideology. Which, if actually true and unless I'm missing something, seems to almost utterly invalidate the entire movement on its own terms.
(And even if I'm wrong and there's an enthusiastic willingness to keep debating the foundations, which there should be for any movement that cares primarily about truth, there are just a lot of important facets that are routinely ignored. Kriss is the first one in ages who I've seen, in discussions of utilitarianism and EA, bring up the really basic point of the Phil Papers survey results, for example.)
2. Despite all that, I'd completely understand why you might see that Kriss essay as a bad-faith, overly mocking hit piece. It lacks charity, it's too snarky, it makes broad sweeping claims, it condemns a belief system without doing enough to understand and engage with its real claims, it sounds very arrogant. This is all objectionable and I'd completely agree with your "don't feed the trolls" dismissal except that...
Eliezer Yudkowsky writes in the *exact same way*. Over and over. The immensely arrogant tone. The wholesale vaguely-mocking or outright-mocking dismissal of whole categories of thought (theism, Cartesian rationalism, dualism, non-consequentialism, to name a few) with little substantive engagement with their claims and arguments. The near-total lack of certain basic virtues (he's one of the only people I've ever seen unironically call themselves a genius; even most of the vilest partisans and fanatics have more humility than him and know that decent people don't talk like that). He writes in this style all the time.
*You* don't! I can never imagine you writing something with the level of uncharity and mocking arrogance that both Kriss and Yudkowsky display. But I've never seen you once say a word against the latter. Even when you've been asked point-blank by multiple people, including me, whether you agree that he's unacceptably arrogant (among other things), it's always silence. Every time.
And yet you're happy to call Kriss a troll in a one-line putdown.
So I don't see how there's any explanation other than that you are living by a principle of "different rules for my friends and other people". Loyalty above all else, truth and fairness be damned.
All in all, one of the following is true: (a) you *have* repudiated Yudkowsky and I've missed it, or (b) you need to reread your own Outgroup essay and take the part at the end about your own tribe more seriously.
Sorry for the pointed criticism, I really love your writing and find you overwhelmingly truthful and charitable. But I think you have a major blind spot. And the former is precisely the reason the latter is so jarring."
...
As for this: "But it's sounding from the outside less about unwillingness to criticise his tribe than about unwillingness to criticise specific aspects of his tribe with which you personally disagree."
Obviously I don't think that's the case. I think it's about unwillingness to do the same kind of internal criticism that he wanted and demanded from the woke left back in the SSC days, and which I applauded him so much for demanding.
I think disagreeing gently about a factual matter is the sort of thing even tribalists can do. From my perspective I want the same person who rightly wanted reasonable feminists to clearly and publically say "you are a cruel and hateful bully who has no interest in truth or dialogue with those different from you; stop it!" to the likes of Amanda Marcotte to himself clearly and publically say "you speak like an obnoxious and egotistical asshole and wannabe cult leader who has no interest in charity towards those you disagree with; stop it!" to the likes of Yudkowsky.
Eh, criticising the lifestyle is tough; that veers into moral judgement, and moral judginess isn't part of EA (apart from "Drowning Child, you must agree or be a moral monster" type judgements. And shrimp. We can't be mean to the liddle shrimpies!)
"bunch of weirdoes should not be weird" is going to be a tough message.
A finalist was disqualified last year due to vote manipulation. But that case was relatively easy to spot, because the reviewer was from a geographical area where not a lot of ACX readers live, and the initial voting form responses giving the review a 10/10 were all from this same geographical area. (Meanwhile, the other votes from elsewhere rated it much lower.) This review was disqualified before it was posted to ACX as a public-facing finalist.
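That kind of geographic anomaly is also fairly easy to screen for mechanically. Here is a rough sketch of what such a check might look like; the data layout ("votes.csv" with review, region, and score columns) and the 80% threshold are illustrative assumptions, not the method actually used for the contest.

```python
# Rough sketch: flag reviews whose near-perfect scores come overwhelmingly
# from a single geographic area while other voters rate them much lower.
import pandas as pd

votes = pd.read_csv("votes.csv")  # assumed columns: review, region, score (1-10)

for review, grp in votes.groupby("review"):
    top = grp[grp["score"] >= 9]              # near-perfect ratings
    if len(top) < 10:
        continue                              # too few top ratings to judge
    dominant_share = top["region"].value_counts(normalize=True).iloc[0]
    others_mean = grp.loc[grp["score"] < 9, "score"].mean()
    if dominant_share > 0.8:                  # one region supplies >80% of the top scores
        print(f"{review}: {dominant_share:.0%} of top scores from one region; "
              f"other votes average {others_mean:.1f}")
```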
Error 404: Blog not found (substack.com was down with error page for me earlier today though its own status page brazenly denied it, possibly related to the general AWS outage?)
Strikes me that if Alexander signs up for the (un)conference, he could find some guinea pigs (I mean volunteers) for the flashing lights experiment there!
Since it's being hosted in a place called Newspeak House, which makes me wonder *very* strongly what the heck was going through the minds of the founders of "the London College of Political Technology" (why are you naming your headquarters out of "1984" by Orwell?), it seems like the perfect place for "we just want to see if we can alter your brain, trust me it's harmless" experiments.
In getting smarter, we have accidentally upgraded our fears to existential, at a huge cost to our wellbeing: we’re paying for it in chronic anxiety and cultural despair. How deeply does fear shape our world? Why can’t rationalism protect us from panic? And what would life feel like without fear?
This essay argues that the next step in human evolution isn’t a faster brain or a deathless body — it’s learning to live, plan, and care without being afraid.
https://kaiteorn.substack.com/p/on-fear
Anyone know what exactly the deal is with Amazon web services going down? Seems like a big deal. Also now I'm at work and can't do anything because of the outage.
Is everything out I just woke up
I guess there was an outage and a ton of stuff went out. They said they mitigated whatever the issue was a few hours ago, but the program I use at work (Autodesk Revit) is still not working at my office or the other offices we collaborate with. I don't know how it all works, so I'm not sure why some things would still be down and others wouldn't.
I apologize I created all of this yeah I know what a girl NOO well here’s the thing if it’s going down and out it had to understand?
Perhaps you should get a little more sleep
I’m fine thanks I slept good last night
I think it's ethically justified to suggest:
Why don't AI Safety Researchers use the Tyler Cowen "Poisoned Strawberry" strategy to wage war on frontier LLM's and slow down the pace of AI research?
[update: this whole suggestion only makes sense in light of new research. Apparently it took very few documents to hijack a model. And the amount required didn't grow with model size. It was, again, apparently, a fixed amount. https://thezvi.substack.com/p/ai-138-part-2-watch-out-for-documents?open=false#%C2%A7the-dose-makes-the-poison]
https://x.com/tylercosg/status/1977458185368932626
> the plan? we find an obscure but trivial question akin to the number of Rs in “strawberry” that claude gets right. then, we plant hundreds of documents across the internet that will activate when our competitors’ models are asked the question. our documents will cause those models not only to get the answer wrong, but to spend thousands of reasoning tokens in doing so. the triviality of the question will cause it to go viral online, causing millions of users everywhere to send the same prompt. as our competitors notice a rise in the number of tokens processed, they will wrongly believe it is due to increased usage, causing them to pull more compute towards inference and away from training. this, along with constant dunks on the timeline about the model failing our easy question, will annoy their top researchers and cause them to leave. and which lab will they join? us of course, the only company whose model doesn’t make such stupid mistakes. their lack of top researchers will mean their next model will be somewhat lacking, leading to questions about whether their valuation is really justified. but all this vc money has to go somewhere, so we raise another round, using our question as evidence of our model’s superior intellect. this allows us to spend more time crafting sleeper agent documents that will further embarrass our competitors, until finally the entire internet is just a facade for the underbelly of our data war. every prompt to a competitor’s model has the stench of our poison, and yet they have no way to trace it back to us. even if they did, there is nothing they could do. all is finished. we have won.
(and if companies eventually *do* discover countermeasures, that's a win-win for everyone, right?)
From _The 70 Maxims of Maximally Effective Mercenaries_: "Don't expect the enemy to cooperate in the creation of your dream engagement."
Tyler's hypothetical requires the general public to be excited about the novelty of the strawberry question enough to virally repeat the query, but not so excited about novelty to get bored with asking the question after a few days. It requires the targeted AI companies to be alert enough to changing trends to redirect compute, but not alert enough to recognize that the rise in compute demands is due to a flaw in their product gone viral. It requires top researchers to be smart enough to be irreplaceable at a company, but not smart enough to recognize a strawberry attack that is now part of the published literature.
As dubious says, the hypothetical is at best a fiction plot, not a business strategy.
This seems silly and ineffective, poorly-written-movie-plot style. "Plant hundreds of documents;" will mere hundreds of documents make a difference? Why are they saying "planted" as opposed to "posted on a bunch of sites we control" other than to sound sneaky? How will these "activate" when _the models are asked the question_ and why would that change how the model is trained?!
It really goes downhill. If it "went viral" _and_ users everywhere asked the same question, this _would_ be increased usage. But what does "causing them to pull more compute towards inference and away from training" even mean? Why would they not notice and account for meme problems much as other companies have? Why would this "annoy their top researchers and cause them to leave," as this is the kind of problem top researches are likely to be nerd sniped by?
None of this makes any sense. I think no one serious would bother.
Why be don’t you just ask me I created it if you want clarity just ask because it’s a lot
Please see my updated post.
I'm looking for epistemic help understanding what is actually happening on the ground with ICE and Border Patrol, and believe this commentariat is both thoughtful and varied enough to help get to truth here.
Broadly, any increase in immigration enforcement necessarily involves tradeoffs. More enforcement means more risk of false positives, and more risk of acts of cruelty or violence.
I'm mostly experiencing left-leaning media as characterizing what's happening as (a) there's tons of cruelty and violence and it's actively encouraged at all levels, and (b) nobody at any level actually cares about the false positives at all, they're just grabbing anyone they can get away with.
I'm mostly experiencing right-leaning media as characterizing what's happening as "yes, there have been a small number of cases where agents were unusually violent, but they're facing a lot of pressure, this isn't intentional but just the necessary consequence of those tradeoffs shifting, and by the way we should talk more about the violence AGAINST these members of law enforcement, that's the real problem."
Both sides seem to largely agree that masked + unidentified individuals who are probably but not necessarily actually members of law enforcement is bad, and ideally the masks should come off, but right-leaning coverage is much more sympathetic to the individual concerns of agents on the ground that could be motivating the individual belief that they have no choice in order to protect themselves and their families.
Questions I'm trying to answer:
Q0: Am I doing something reasonable, for the purposes of answering these questions, by munging "ICE" and "Border Patrol" and possibly others into what I think of as one group of "immigration enforcement"? Or are there important-to-these-questions differences I should be thinking about?
Q1: What are the actual statistics regarding incidents of unusual cruelty or violence perpetrated / initiated/ exacerbated by the immigration enforcement officials? How do these compare to what would be expected a priori given the priority that's been played on deporting illegals?
Q2: To what degree does the average immigration enforcement agent believe their orders to be to intentionally be cruel or violent beyond what's strictly necessary to arrest a target?
Q3: To what degree does the average immigration enforcement agent believe their orders to be to intentionally attempt to deport, or at least harm in some way, individuals who are not here illegally, but who plausibly appear to be or are clearly sympathizing with those who are?
Q4: I understand that law enforcement culture makes it genuinely hard to question orders, but IF the orders are believed to be *incentivizing* the behaviors in Q2 or Q3 (which I think most individuals would say are not morally desirable), to what degree are there indications that some immigration enforcement agents are pushing back? How does that manifest?
Q5: To the degree there's any of the orders described in Q2 or Q3, should we understand these mandates to be getting *amplified* by others within the chain (e.g. a supervisor adds even more pressure to the on-the-ground agents to behave this way, because the supervisor wholeheartedly endorses the orders), vs getting *dampened* (e.g. the supervisor disagrees with the orders so attempts to limit the degree to which they follow them while still being able to plausibly claim to be giving it a shot)? Obviously there will be individual variability here, but I'm trying to understand, in general, what level the "intent" is coming from. There are worlds where someone jumps on an excuse provided by those above them to act as they always dreamed, and alternate worlds where someone feels duty-bound to execute on orders that are somewhat contrary to their own beliefs as an individual, and I think this is important for anyone trying to understand how likely these behaviors are to last/get-worse/get-better or where to focus their efforts to change outcomes.
Q6: To what degree, if at all, should it be taken seriously that these specific agencies might be used as a "secret police" to target other kinds of people the administration dislikes, completely unrelated to immigration? E.g. should the average American citizen worry about an ICE agent being more likely to do such things, compared to e.g. an FBI or Secret Service agent?
I'm really trying hard to stay focused on factual understanding here and avoid the risk of this question turning into a culture-war discussion; I'm not interested in debates around what the tradeoffs/policies should be, where the moral line is drawn, etc. I just want to understand what's actually happening within these agencies right now, so that we can all form our own (informed) opinions about it instead of instinctively responding to the people shouting loudest.
With regard to Q0: CBP (Customs and Border Protection) mostly operates at the border. ICE (Immigration and Customs Enforcement) is divided into ERO (Enforcement and Removal Operations) and HSI (Homeland Security Investigations). HSI deals with transnational crime. So your questions mostly concern ERO.
Your other questions are good questions but I don’t know the answers. Supposedly “the agency tracks every time officers use force” [1], but I doubt that information is available publicly.
[1] https://www.nbcnewyork.com/news/politics/ice-agents-recruiting-hiring-budget/6374886/
Why have nearly all Peruvian presidents in the last 35 years ended up impeached, arrested, or both?
I wrote about the question here: https://meirbrooks.wordpress.com/2025/10/17/perus-blues/
I’ve wondered about this for some time, but the immediate trigger is the ousting of President Boluarte last week: https://apnews.com/article/peru-president-boluarte-impeachment-crime-shooting-concert-d3bc438bdec704c0c2bd21ef981f95eb
Some factors seem plausible— a unicameral legislature; a fractured Congress that means little partisan support for the president; the idiosyncrasy of a leading opposition figure who barely missed being president for three straight elections; plus the Odebrecht scandal that rocked all of Latin America. But these didn’t feel sufficient to explain the sheer frequency of these incidents relative to neighboring countries. The one Peruvian guy I know said he’s long had the same question but no good answers. Would be glad to hear any thoughts!
South America does have a reputation in general for corruption, revolution, and unstable governments. Why Peru in particular is a good question; perhaps the influence of the Shining Path in the 80s predisposed the country to the kind of 'vote for a strong man' reaction we see elsewhere, which would lead to a lot of leaders who are corrupt and unpopular with the other parties, who then push for impeachment?
Plenty of strong men in other Latin American countries (and beyond) in recent history, nothing similar in terms of impeachments as far as I know.
Makes me immediately think of South Korea, where pretty much the same thing happened. Is there anything in common between the two countries?
They're both presidential systems!
Nearly all countries in Latin America have presidential systems. One way Peru is different from neighboring countries is (apparently) the presence of a Prime Minister in addition to the President, as in the French system: https://en.wikipedia.org/wiki/Semi-presidential_republic
But SK doesn't have that feature and I'm not sure why that would matter, as the PM seems to have very little power.
Wiki lists 4 successful impeachments for SK (successful in that the impeachment vote passed, not that the president was ousted, i.e. by this count Trump was successfully impeached twice). Twice a president was removed.
Equivalent numbers for Peru are 7 and 4, which is something of an undercount because one president resigned to avoid impeachment, etc.
https://en.wikipedia.org/wiki/List_of_impeachments_of_heads_of_state
Brazil, the epicenter of the Odebrecht corruption case that caused so many presidents of Peru to be arrested, saw one impeachment since 1955.
What should be stopping them from getting impeached and/or arrested? It sounds like that's what it's like in Peru.
You need two-thirds of parliament (it's a unicameral legislature) to impeach. This threshold has been frequently surpassed in recent history-- the most recent impeachment was unanimous, 122-0 (out of 130)! Getting two-thirds majorities of legislative bodies after multiple elections in a democratic country seems bonkers to me.
It seems to me that Peruvians have come to see an impeachment vote in their system as little more than the vote of no confidence found in most parliamentary systems. That is: it's not an affirmative claim that 2/3 of the country thinks the president is unqualified, but rather a sign that fewer than 1/3 of legislators are happy with the current coalition agreement.
(Of course, this doesn't explain why other South American countries don't impeach the same way. But public perceptions of constitutional procedures are often quite historically determined.)
For the conference/unconference: I want to commend and endorse 100% your insistence on using the term "conference" rather than "unconference" to describe something that, based on current common usage of the term in English, is a conference.
I can just about conceptualize the outlines of what an unconference might be, but it would certainly not involve people with an interest in a particular topic gathering at a pre-advertised time and location, to hold discussions on that topic.
Nothing highlights the unseriousness of a movement or a cause more than inventing a new term for a common concept, and a tone-deaf insistence on using it, #actually.
AI "will kill everyone", but it's important that we now say "unconference".
And these people want us to bomb the datacenters.
Reading the linked page it sounds a bit hippy-dippy (I'm sure there are more current terms to describe this mindset, but I am old, Father William):
"OK then, what is an unconference?
We start together, build the agenda together, then split into parallel sessions.
We have only one law, that I adore:
the Law of Two Feet: If you’re not learning or contributing, move to another session (or take a break, or do what nourishes you). Guilt-free.
And 4 (and some) principles:
Whoever comes are the right people.
Whatever happens is the only thing that could have.
Whenever it starts is the right time.
When it’s over, it’s over.
(Take responsibility for what you care about.)
Format (in this event): Opening circle → marketplace (we propose sessions) → four session blocks → group synthesis → social"
I notice that there doesn't seem to be any mention of tea breaks, so presumably you have to bring your own or go out and buy something to eat/drink for the marked break time instead of getting a free cuppa tae. Tsk! Not very sociable! Maybe they *will* provide same but just haven't mentioned it explicitly, but the only benefit of going to these things is you at least get a cuppa tea and a bikkie on the house.
So it's a conference with no pre-planned agenda? (Or rather, it sounds like the agenda is Step 1: Find out what people want to talk about, Step 2: Breakout sessions to talk about that.)
Like, I don't know how productive that is, but that sounds different enough to be worth giving a different name.
NY State Assemblyman Alex Bores has announced he's running for Congress. Bores was the sponsor of NY's RAISE act, which passed before California's SB 53 and is similarly aimed at addressing catastrophic and existential risk from AI. He seems to be making AI safety a priority, and I've heard from several people in the field that they'd be really excited to see him in congress.
As far as I know one other person is officially in the race (Micah Lasher, who's probably the favorite). The seat is currently held by Jerry Nadler, who isn't running for re-election, but Lasher is seen as the "heir apparent."
https://www.nytimes.com/2025/10/20/nyregion/alex-bores-ny-congress-primary.html
This is my neighborhood - Lasher is heavily favored to win. My sense is that advancing the cause of AI safety might be better served by converting Lasher than trying to get Bores to win outright. I'm not super familiar with Lasher but he seems persuadable on issues where he thinks he can make a difference.
Going back to the Fatima article (which I read in its entirety but didn't have time to see all the comments, so I apologize if this already got covered):
If I understood correctly, Scott's materialist explanation of the miracle is a coming together of three circumstances:
1) a specific type of (relatively common) cloud cover through which the sun can still be seen and which allows people to stare at it for a long time without damaging their eyes
2) this staring can cause people to see strange colours and a sensation that the sun is pulsating, rotating etc
3) the people staring are also psychologically primed to see a miracle
He addresses the objection that other observers in villages quite far away did not fulfill these three conditions by providing evidence that they were also psychologically primed (because if they were not expecting the miracle at all, the materialist explanation falls apart).
However, even if they were expecting to see a miracle, by all the testimonies, they saw the miracle in the direction of Fatima and not in the direction of the Sun which would have been completely different from their perspective. So in order for this to work, it would mean that they had to be outside at just the right time (presumably because they were expecting it), then they had to stare in the direction of the partially cloud-covered Sun for a while until they started seeing weird stuff, then they had to turn in a completely different direction to look at Fatima, where they then witnessed a Sun miracle in that completely different direction.
That sounds pretty far-fetched.
I was skimming a history of the Abbasid caliphate and ran across a reference to the Zanj Rebellion, an event I had never heard of. I read more into it, and got the gist that there was a large African slave population in what is modern-day Iraq, performing hard and dangerous labor like building and maintaining dikes and canals. A charismatic slave led a multi-year uprising that turned into one of the biggest internal challenges the Abbasids faced.
My immediate thought was that this Abbasid slavery practice sounded like chattel slavery in the American South. But I went through college assuming the talking point that the US essentially invented chattel slavery was true.
I'm confused now. What is chattel slavery and what isn't? Were the helots in Sparta chattel slaves? Were enslaved Africans in the Abbasid caliphate? Or is chattel slavery more narrowly just a 17th-19thC-ish western thing?
As Korakys says, chattel slavery refers to a system in which slaves have identifiable owners. This is more or less necessary wherever "slave" is a legal status, because a slave's relationship to the legal system runs through his owner.
The practice is presumably older than history is.
Note that on objective metrics like "health" and "surviving children", American slaves were among the best-off anywhere in the world.
(Chinese imperial eunuchs might be considered better off, but obviously they didn't manage to have children. The Mamluks were definitely better off. At some point you have to question whether you want to include them in the category "slaves". They were formally slaves, but they were also formally in charge of the rest of the society.)
A point worth noting is that even under Jim Crow, supposedly cruel and harsh, there was no emigration of American blacks that I heard of; there were no refugees fleeing oppression.
https://en.wikipedia.org/wiki/Great_Migration_(African_American)
How do you think Detroit and Chicago acquired their massive black populations?
I've been reading the bible for the first time and it has quite a few rules in there on what to do in various situations regarding slaves.
Chattel is being used to differentiate it from milder forms of servitude that people sometimes claim are slavery but aren't what most people first think of, like serfdom or "wage slavery". It means slavery where you can buy and sell people freely, rather than the slaves being bound to the land or something like that.
Needless to say, many, many societies practised it prior to America, most notably the Romans. I think this is a case of some Americans not really internalising the fact that the rest of the world exists.
Doesn't chattel slavery mean slavery for purely economic reasons? As in, the slaves weren't your enemies who you captured in battle, nor were they dudes who owed you a bunch of money they needed to pay off. I think in Ancient or Medieval times, these were commonly (almost always?) what slaves were, no?
> Doesn't chattel slavery mean slavery for purely economic reasons? As in, the slaves weren't your enemies who you captured in battle
No. It's about the structure of their slavery, not how they got there.
https://www.merriam-webster.com/dictionary/chattel%20slavery
> slavery in which a person is owned as a chattel (see chattel sense 2)
[chattel]
> 1: an item of tangible movable or immovable property except real estate and things (such as buildings) connected with real property
> 2: an enslaved person held as the legal property of another
To be clear, the term "chattel slavery" derives from sense 1, and then sense 2 derives from "chattel slavery".
Thanks, by that definition it doesn't seem like the US slavery situation was exceptional at all.
However, I do think the fact that the slaves were neither prisoners of war, nor people who owed you a debt, but were, rather, just some random easy-to-catch guys who would allow you to run your business much more cheaply makes that type of slavery exceptional (and to my mind, much more abhorrent).
> However, I do think the fact that the slaves were neither prisoners of war, nor people who owed you a debt, but were, rather, just some random easy-to-catch guys
That's not really right either. Originally they *were* prisoners of war. Their captors then sold them to Europeans to be sold into American (North or South) markets. The Europeans weren't going around capturing random Africans; they bought them, already enslaved, from more victorious Africans. That's generally also what the Arabs did, although I'm pretty sure I've heard that sub-Saharans who made the trip to Mecca might be enslaved for their trouble.
(As the trade developed, there almost certainly was a growing element of "we caught these guys because the market is good".)
Because American (USA) slaves had such an impressive rate of natural increase, this came to be a much less significant source of slaves there. Instead, slaves were mostly born into slavery as the child of local parents.
There's nothing historically unusual about slavery being an inherited status. But the American (USA) situation is unusual in that the population of slaves was able to sustain itself this way. Further south, they constantly needed newly imported slaves.
---
It isn't really clear to me why slaves captured for the purpose are more offensive to you than slaves captured in war. As far as I can see, those aren't different situations. If you're taking slaves from another group, you are at war with that group. And the main purpose of war is to confiscate resources belonging to the other side. Slaves are one of those resources.
Because in a war it quite often might have been the case that the "other" side was the aggressor, and when my side happened to be victorious I used the captive bastards who had been attacking me for decades as slaves, to compensate for all the suffering and loss they had caused me.
In the African slave trade, as far as I have read, the "wars" going on were specifically and primarily to go out and capture slaves for sale. I agree that in some sense they were just another "resource", but it seems somehow uglier that one would make war on another group only to enslave them, rather than just to steal their riches or their territory.
But I can understand not seeing that as a big ethical difference.
A review of a Bob Dylan song made me want to try to write something in the style of Dylan. I remembered a song of his, but couldn't remember any of the words, so I was trying to go through titles that might possibly have been the song, and re-stumbled across Hurricane:
https://www.youtube.com/watch?v=bpZvg_FjL3Q
Been a while since I listened to the fully political ones. Man, he pulled no punches at all. That... would suck to try to copy. The worst-case scenario would be managing to pull it off.
And then I found the one I was originally looking for: It's Alright Ma
https://www.youtube.com/watch?v=_CJHbfkROow
So for the soundalike project, I understand and know too soon that there's no sense in trying.
...so uh... does anyone ELSE want to try it?
Time to update the Milei scorecard? Begging Trump for a $40 billion bailout hardly seems like the kind of thing that counts as successfully turning the economy around.
It's a complicated situation. My understanding is that the anti-Milei party won a local election, the market panicked that Milei would lose, they sold off pesos, and the US intervened to prevent a currency crash. So it's not really a clear-cut "Milei good" or "Milei bad" narrative, just more of Argentina being a basket case.
Also, Trump has been as explicit as possible about the loan being about political influence:
https://www.youtube.com/watch?v=V_wlkkiU-6c
I've heard of the $20b loan recently, has that doubled already?
Scott Bessent has said he wants $20 billion from other sources to complement the $20 billion from the U.S. government. I’d suspect the odds of that happening are fairly low. Trump has suggested more money will not be forthcoming if Milei’s party does badly in the Oct. 25 election. If the party does well, the Administration may figure that the promise has accomplished its objective, and see no reason to actually follow through.
https://www.nbcnewyork.com/news/politics/us-trump-administration-working-doubling-aid-argentina-40-billion/6405604/
Why aren't therapy books used more in the psychiatric system? Scott has mentioned in some posts that if you're seeking help for a specific problem/illness (he mentioned depression, anxiety, OCD, maybe borderline?) then reading a therapy self-help book seems to have as good results as therapy.
Why isn't it utilized more? I can imagine that for many patients it also feels less scary than speaking to a human. Not to mention it's a billion times more cost-effective, would take pressure off the (in my country with socialized healthcare, at least) completely overloaded psychiatric system with insane wait times, etc.
I can read self-help books till my ears fall off, but I'm not going to implement any of the changes unless there's a real person standing over me with a whip to force me to do so.
I don't know how responsive that is; the claimed result is "reading a therapy self help book seems to have as good results as therapy".
That's different from "implementing the recommendations in a self help book seems to have as good results as therapy". Medicine is riddled with this problem; most notably, taking a placebo faithfully leads to much better outcomes than taking the same placebo irregularly.
One of Scott's articles for covid made the observation that masks-for-the-public research is more about the intervention of telling people to wear masks than about the intervention of having people wear masks, because you can't do the second thing. You can tell people they should wear masks and see what results you get, so that's what the research measures.
But in this case, that's 'good'; the research is also measuring reading the book more than it's measuring "correct implementation of the book".
I can only speak from experience regarding depression, but if you ask a depressed person to do x or y simple thing (exercise, reading self-help, eating better, etc.) that is proven to help alleviate symptoms, they usually will not do it. Metaphorically, they are standing and looking at a cliff. You might be shouting down to them "There's a path up!" but all they hear is "You can just leap to the top!" or "You can just scale the cliff!"
Seeing a therapist is kind of like sending someone down to guide them to the path. It might be arduous, but it's easier with a sherpa.
Your comment reminds me of one of my favorite ACX essays of late: https://www.astralcodexten.com/p/singing-the-blues I also agree with Deiseach. I'm currently in physical therapy for a back issue, and the willpower needed to exercise on my own is gargantuan compared to the willpower needed to spend an hour with my physical therapist. Humans are an odd lot.
An important factor is likely that many people just don't really read books anymore, so setting people who are already more likely to have attentional issues reading would probably be fruitless for a large percentage of people.
It also doesn't help that patients can differ significantly in their reading proficiency, so you would probably have to take care to assign books on an individual basis to match patients' abilities.
Re: ACX EEG Entrainment grant. The current sign-up link is broken, the correct link: https://forms.gle/X37zyTV3KhbSb3Ze9
Looks like I messed it up in my update to Scott, sorry. Ah, the irony of messing up a link on the thread 404.
Should be fixed now, sorry.
Thank you so much! And it's definitely my fault — it was wrong in my email.
Scott, I don't know if you've seen the concerns raised about cheating in the review contest. But if you have, and you just ignore them and don't address them head-on and take an explicit position on what you consider fair behaviour and why...I'm seriously considering cancelling my subscription.
I feel like it doesn’t really matter? It’s an extremely low-stakes review run by one guy. There isn’t even really an easy solution for influential people using that influence to encourage others to vote. Unless there’s a serious attempt at mass manipulation (which I believe there was in the past that was found out) it seems like things have worked pretty well so far.
I do think people should be more aware that there is a genre of rationalist-adjacent media that boils down to, “here is a thinly-veiled ad for my startup/advocacy group”. I’m not totally against this, since a lot of the startups/advocacy groups are legitimately interesting and working on important problems, but it does lead to some conflicts-of-interest.
Do you mean the person who copy-pasted a Gemini response that the Alpha School entry contained signs of "marketing techniques"? I haven't tried it, but I predict that if you ask Gemini a leading question about any other ACX review for a product (maybe the mashed potatoes one) it will say the same thing.
I think he refers to another comment claiming (with a working link) that there was a vote-soliciting pinned tweet from a high-follower-count account asking people to vote for the Alpha School review during the voting phase.
No, I mean that an Alpha School founder tweeted for people to go and vote for the review. Eremolalos elaborates in several comments, e.g.
https://www.astralcodexten.com/p/non-book-review-contest-2025-winners/comment/167386427 and
https://www.astralcodexten.com/p/non-book-review-contest-2025-winners/comment/167540180.
T.T also raises a few related concerns about deception and deliberate marketing, but I only intended to reference the vote stacking allegation.
I'm not sure what the rule here is - it doesn't seem feasible to say that nobody may ever recommend a review. Liemandt has 7000 Twitter followers and his recommendation doesn't seem outside the range of normal things that don't shift the vote too much.
I re-ran the analysis to compare votes before and after Liemandt's tweet. For first-choice votes only (too annoying to rerun the whole thing), the numbers are:
Total voting period: Alpha 15.7%, vs. Ukraine 11.6%
Pre-10/5: Alpha 14.4%, vs. Ukraine 12.6%
Hopefully this will satisfy people that Liemandt's tweet did not change the ranking. I have no solution for the general case except to limit the voting to ACX subscribers, which has some technical disadvantages and doesn't avoid the fact that people could just subscribe and then vote. Limiting it to paid subscribers might work, but this would change the nature of the contest and I'm not willing to do it without some issue bigger than a 7,000 follower Twitter account praising one of the entries.
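(For anyone curious what that re-run amounts to: it's just splitting the form responses at the date of the tweet and recomputing the share of first-choice votes for each review. A minimal sketch in Python, where the file name, column names, and the 10/5 cutoff handling are illustrative assumptions, not the actual pipeline:

import pandas as pd

# Hypothetical export of the voting form: one row per response, with a
# submission timestamp and the voter's first-choice review.
votes = pd.read_csv("votes.csv", parse_dates=["timestamp"])
cutoff = pd.Timestamp("2025-10-05")  # date of the tweet in question

def first_choice_shares(df):
    # Percentage of first-choice votes going to each review.
    return (df["first_choice"].value_counts(normalize=True) * 100).round(1)

print("Total voting period:")
print(first_choice_shares(votes))
print("Pre-10/5 only:")
print(first_choice_shares(votes[votes["timestamp"] < cutoff]))

Comparing the two printed tables is the before/after check described above.)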
Please consider the implications of billionaire investors co-opting your personal blog’s review contest into advertising their venture capital equity investments on a level deeper than a person “ever recommending a review”.
Thank you. That's what I consider an investigation and response. I do think much more discussion is needed: you focus on the number of followers (7000), Eremolalos focuses on the 35,000 views.
Mainly, I think the lack of a clear rule on this isn't fair. It encourages the more brazen types of people, the less scrupulous ones, to push and test the boundaries, see what they can get away with. The rules are ambiguous enough that they can go very far and claim to have never violated an explicit rule. But the rules are also ambiguous enough that the more scrupulous and anxious types of people can worry that they are breaking rules and fear punishment for even slightly debatable things. Is this really the dynamic you want in your contests? Do you really want to reward those brazen risk-taking types and punish and disadvantage the scrupulous types? I don't understand why you of all people would want that, and why it's so difficult to make a clear rule on what level of outside engagement is acceptable. Multiple people have asked for such a ruling in the past.
I also think it's extremely relevant whether or not the review author knew of this solicitation. I don't know whether or not you think it is.
But that said, I appreciate your response and I'm sorry for my aggressive comment. I am, I confess, suspicious that you would have responded had I or another said "do you think you could maybe please look into this Scott?" but only you know the answer to that and you can't possibly prove it to me either way.
What rule do you recommend? There is no way for authors to prevent third parties from signal-boosting their work, and if someone does, I will never be able to prove that the author wasn't involved.
It would be a bit presumptuous of me to say what *the* rule should be; I'm not running the contest, I didn't expect to have that power, and I'd have to think about it for a while.
My argument is that there should be *a* rule, something to work with, so we're all on the same page, a common knowledge coordination point, etc etc.
To that effect, I'd consider any of the following to be sufficient:
"No promoting your review outside this blog, no bringing in friends or other outsiders to vote for you. I may not be able to always prove this right away, but if I do find evidence I will disqualify, even retroactively if necessary."
"If someone else promotes your review, I will hold you responsible, so you'd better do all you can you discourage it."
"You can tell a few friends and/or mention it on other platforms, but if there end up being enough outside votes to substantially shift the result, you've crossed the line."
"Outside promotion is pretty much fine. If you are able to get a massive audience to brigade votes from, you've earned your success."
"Basically, if I can prove you solicited votes, you're disqualified. Either keep it a low enough level that I won't notice, or be really good at hiding it."
"The only rule is don't piss me off. I'll decide what I'll allow and what I won't, after it's happened. So yes you need to make an educated guess at reading my mind."
You might say you're already applying the last one. I submit there's a universe of difference between applying a rule like that explicitly, with common knowledge that that *is* the rule, and applying it implicitly, where it's just an approximation of the practical result of the rules being unclear. In the first case someone who crosses a line can expect a response like "well you tried to read the judge's mind like we all were and you failed, better luck next time"; in the second case they can expect something more like "you disgusting cheat, you should have known what the rules were". Compare a game like Poker where bluffing and reading secret information is part of the game vs a game like (I think?) tournament Bridge, where it's considered cheating. Compare a sport where some action is explicitly allowed as long as the referee doesn't see it vs a sport where it's clearly against the rules, but lots of people secretly do it anyway. Compare a dating context where it's explicitly understood that guessing what will cause discomfort or not is the entire aim vs one where that which causes discomfort was "obviously" just objectively wrong and there's something wrong with you if you didn't realise that.
I have very strong feelings about this whole category of fairness and social honesty, which is my answer to anyone who asks the (incomprehensible to me but amazingly common even around here) question of "why do you care so much?"
Who do you think cheated in the not a book review contest and in what way? I tried looking through comments but couldn’t find the accusation, so it may have been buried
Claims that the founder or co-founder or guy associated with Alpha School linked to the review and urged everyone to go vote for it. That is expanding beyond the pool of ACX readers and the complaint, so far as I understand it, is that a lot of outside votes would come in and artificially inflate the result; ballot box stuffing, if you will.
Thanks, that is concerning. I didn’t even vote this year because I only had time to read half the reviews and thought that would be unfair. It’s likely these voters didn’t read any of the other reviews.
Yeah, that's what is making people mad. The outside votes just came in (if they did come in), voted for Alpha School, and that was it. It wasn't about the review contest, it was boosting and publicity for the school.
I think Scott is right that you can't really stop third parties from doing this, but you can (maybe) make it a rule that reviewers can't shill for votes in that naked a way. If they link to their review, the people coming in should vote for other reviews as well.
It was fairly well commented on in the thread.
This is kind of the last straw for me. Many comments have been left over several years of book review contests politely arguing that certain things can be improved or are unfair in some way, or asking for you to clarify what is acceptable and what isn't when ethical intuitions diverge. My impression is that almost all of these have been ignored. Not even a one line response saying you've taken note of the concern and are giving it serious consideration.
I also think you've been very unwilling to say anything critical of EA/rationalism for a long time. I've found that very hypocritical when the first essay of yours I read and the one that made me love your writing was I Can Tolerate Anything Except the Outgroup with its emphasis on the importance of criticising one's own "tribe".
So these are two different issues I have with you...not engaging with your own comment section's concerns and objections, and being unwilling to criticise your own tribe. This issue seems to combine them. Several people are angry about the apparent vote stacking and think it's blatantly disqualifying and want a clear response from you. And such a response involves taking a potentially critical stance on a possibly rationalist-adjacent institution (maybe the Alpha School isn't rationalist adjacent, I don't know, but it seems to have overlap in ideology and according to some info provided in a comment on the Review Winners post it has shady connections with a number of other actors).
So even if each of those former concerns is inaccurate or can be excused...this one I think absolutely demands a response.
This comment is aggressive but my experience is that nicer ones get reliably ignored.
For what it's worth, if it were up to me I would have specifically ignored this comment because it was too aggressive, but enough people brought it to my attention that I am responding to it just to satisfy them. I have asked them not to bring aggressive comments to my attention in the future.
That's fair and I don't want to encourage aggressive comments and I am upset about the way they tend to more reliably generate engagement.
But I'm also upset when I write a long and thought-out comment politely objecting to something, full of disclaimers and reassurances that I don't mean to insult and including clarifying questions instead of accusations and so on...and just get ignored. I feel like this has happened many times in several years of book review threads, as well as in other contexts, but I admit my feeling could be wrong.
> I also think you've been very unwilling to say anything critical of EA/rationalism for a long time.
Very different perception here. For example, there was recently a review of If Anyone Builds It Everyone Dies which very explicitly criticised Yudkowsky et al's approach to AI doomerism. If this isn't an example of criticising your own tribe, then what is? Surely you don't expect someone who is part of a movement/group to periodically criticise _all_ of said movement/group just to show they can?
Very different perception here, to borrow a phrase. The review was way too deferential toward the authors, and even upon a brief realization that maybe Y. isn't interested in addressing even mildly challenging questions* Scott keeps treating him as some kind of a sage**.
And then swiftly bans people for mildly irritated comments toward Y. in this thread***, while completely ignoring the deteriorating clownshow that open threads tend to descend into, with a little help from shitposting Wimblies.
* https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/155029071
** https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/155114676
*** https://www.astralcodexten.com/p/book-review-if-anyone-builds-it-everyone/comment/154935672
It's mostly about not criticising elements of the lifestyle and subculture and ideology. It's merely my impression and I could be misremembering or wrong. I also could be holding Scott to an unreasonably high standard. And in this context I was trying to be confrontational to make sure the main complaint couldn't be easily disregarded.
But I came to follow Scott primarily because he seemed to be one of the only people who valued truth intrinsically and who was willing to openly criticise things like wokeness and cancel culture at its peak while being left-leaning himself. He seemed to be one of the only people who would criticise his own side. So I'm very sensitive to signs he may be actually not living up to this principle when it comes to his "real" tribe.
For one example, I talked a bit about it here https://www.astralcodexten.com/p/hidden-open-thread-3925/comment/141041693 which was about a double standard to epistemic arrogance and the principle of charity.
Your link is for subscribers only so I can't comment on the specifics. But it's sounding from the outside less about unwillingness to criticise his tribe than about unwillingness to criticise specific aspects of his tribe with which you personally disagree.
All right I'll paste it here. Scott has replied to my original grievance, so I don't want to attack him any further. But since I'm being challenged on this point, and Scott also challenged me on my current tone being too aggressive, I feel the need to defend myself.
Maybe the following is also aggressive, but I thought it was polite but direct. I'm raising it only to clarify part of the general perspective I have, rather than as a challenge or attack or demand for engagement:
"Scott...I mean the following as politely as possible. But I think you've got an extreme bias, a blatant double standard, that you are, dare I say, a hypocrite here. And I haven't ever seen you address this. (Of course, that doesn't mean you didn't.)
Two parts to my problem...
1. I think that's a good essay. By which I mean, it makes some criticisms in a striking way that I *really want to see rationalists address*. And that, I feel, I have very rarely seen them address in any kind of direct way. The prima facie arrogance of taking utilitarianism as somehow obvious when ethical theory is a notoriously difficult problem and only a distinct minority of professional philosophers think it's logically defensible. The obsession with using rationality to increase one's status and "win" instead of advancing truth for its own sake. The simultaneous treatment of AI as an existential risk and also as a path to utopia, both positions routinely held by the same people. The open contempt for centuries of philosophy and the unironic belief that it can all be ignored because one guy in the late 2000s figured out all important truths from scratch in a fanfic and some blog posts. The creepy conflation of "people who want to think more accurately" and "people who live in the Bay Area and love orgies" with very little apparent embarrassment at this widespread conflation.
I accept that not all of these are fair descriptions. In fact, my prior is very high that some (maybe even all) of them are inaccurate or irrational accusations. But I don't actually really know what the response to them is supposed to be, because very rarely do I see any rationalists actually confronting the criticisms head-on and making a full-throated defence of their movement. You've sort-of done it a few times (e.g. the consequentialism FAQ, the tower of assumptions) but not nearly as often as I think ought to be done by someone who mostly treats all the above things as presumptively reasonable and good, despite their plethora of obvious serious objections, and who is otherwise very careful at pointing out the problems and blind spots with other movements and ideologies.
And mostly when these things are brought up in the comments I see a lot of vague deflection, a lot of detailed nitpicking, a lot of lazy linking to The Sequences, and a lot of irritation. The vibe I get (which I could of course be getting completely wrong, I totally accept that) is that there's very little interest among most rationalists in seriously relitigating the foundations of their movement and ideology. Which, if actually true and unless I'm missing something, seems to almost utterly invalidate the entire movement on its own terms.
(And even if I'm wrong and there's an enthusiastic willingness to keep debating the foundations, which there should be for any movement that cares primarily about truth, there are just a lot of important facets that are routinely ignored. Kriss is the first one in ages who I've seen, in discussions of utilitarianism and EA, bring up the really basic point of the Phil Papers survey results, for example.)
2. Despite all that, I'd completely understand why you might see that Kriss essay as a bad-faith, overly mocking hit piece. It lacks charity, it's too snarky, it makes broad sweeping claims, it condemns a belief system without doing enough to understand and engage with its real claims, it sounds very arrogant. This is all objectionable and I'd completely agree with your "don't feed the trolls" dismissal except that...
Eliezer Yudkowsky writes in the *exact same way*. Over and over. The immensely arrogant tone. The wholesale vaguely-mocking or outright-mocking dismissal of whole categories of thought (theism, Cartesian Rationalism, dualism, non-consequentialism to name a few) with little substantive engagement with their claims and arguments. The near-total lack of certain basic virtues (he's one of the only people I've ever seen unironically call themselves a genius; even most of the vilest partisans and fanatics have more humility than him and know that decent people don't talk like that). He writes in this style all the time.
*You* don't! I can never imagine you writing something with the level of uncharity and mocking arrogance that both Kriss and Yudkowsky display. But I've never seen you once say a word against the latter. Even when you've been asked point-blank by multiple people, including me, whether you agree that he's unacceptably arrogant (among other things), it's always silence. Every time.
And yet you're happy to call Kriss a troll in a one-line putdown.
So I don't see how there's any explanation other than that you are living by a principle of "different rules for my friends and other people". Loyalty above all else, truth and fairness be damned.
All in all, one of the following is true: (a) you *have* repudiated Yudkowsky and I've missed it, or (b) you need to reread your own Outgroup essay and take the part at the end about your own tribe more seriously.
Sorry for the pointed criticism, I really love your writing and find you overwhelmingly truthful and charitable. But I think you have a major blind spot. And the former is precisely the reason the latter is so jarring."
...
As for this: "But it's sounding from the outside less about unwillingness to criticise his tribe than about unwillingness to criticise specific aspects of his tribe with which you personally disagree."
Obviously I don't think that's the case. I think it's about unwillingness to do the same kind of internal criticism that he wanted and demanded from the woke left back in the SSC days, and which I applauded him so much for demanding.
I think disagreeing gently about a factual matter is the sort of thing even tribalists can do. From my perspective I want the same person who rightly wanted reasonable feminists to clearly and publicly say "you are a cruel and hateful bully who has no interest in truth or dialogue with those different from you; stop it!" to the likes of Amanda Marcotte to himself clearly and publicly say "you speak like an obnoxious and egotistical asshole and wannabe cult leader who has no interest in charity towards those you disagree with; stop it!" to the likes of Yudkowsky.
Eh, criticising the lifestyle is tough; that verges on moral judgement, and moral judginess isn't part of EA (apart from "Drowning Child, you must agree or be a moral monster" type judgements. And shrimp. We can't be mean to the liddle shrimpies!)
"bunch of weirdoes should not be weird" is going to be a tough message.
A finalist was disqualified last year due to vote manipulation. But that case was relatively easy to spot, because the reviewer was from a geographical area where not a lot of ACX readers live, and the initial voting form responses giving the review a 10/10 were all from this same geographical area. (Meanwhile, the other votes from elsewhere rated it much lower.) This review was disqualified before it was posted to ACX as a public-facing finalist.
1. I just want to wish everyone well.
2. I want to recommend Joseph Heath's blog and specifically his last post on Populism.
https://josephheath.substack.com/p/populism-fast-and-slow
[Error 404: comment not found]
[Error 418: I'm a teapot]
[Setting a reminder to repeat this joke on Dec 29th]
Error 404: Blog not found (substack.com was down with an error page for me earlier today, though its own status page brazenly denied it; possibly related to the general AWS outage?)
Strikes me that if Alexander signs up for the (un)conference, he could find some guinea pigs (I mean volunteers) for the flashing lights experiment there!
Since it's being hosted in a place called Newspeak House, which makes me wonder *very* strongly what the heck was going through the minds of the founders of "the London College of Political Technology" (why are you naming your headquarters out of "1984" by Orwell?), it seems like the perfect place for "we just want to see if we can alter your brain, trust me it's harmless" experiments.