Interesting. Seems a big issue in society in general (not unique to EA) is critiques "that everyone has agreed to nod their head to." This post helped me understand how such critiques can be perpetuated because they help people ignore more serious, specific critiques that threaten individuals' relative status or would require them to work/think harder.
By the way, is this argument a tiny bit reminiscent of the old-line Marxist argument that identity politics is a distraction from class conflict? (Which is not to say their solution to class divides is correct...)
You’ve identified two different modes of critique, which one could call the philosophical and the journalistic.
It’s a lot easier to do the philosophical critique than the hard work of finding the anomalies within the existing paradigm, whether it concerns the relative benefits of two similar psychoactive chemicals, the efficacy of a philanthropic intervention, or the orbits of the planets.
Note that we have a plethora of theories of dark matter and quantum gravity but rather few, and quite expensive, ideas for how to gather the data that can prove some of them right and some wrong.
This mismatch reflects the fact that theory is relatively cheap and high energy physics experiments very expensive. I suspect that is equally true of pharmacology and social interventions.
To bring this back home to EA: Perhaps not enough money is being spent on measurements.
Or, alternatively, we should accept that systems change slowly, that some money will be spent ineffectively or even counterproductively, and that too much worry about optimization is really a path to neurosis that does nobody any good.
Has "EA is too high-neuroticism" been done yet?
"The fact that this has to be the next narrative beat in an article like this should raise red flags. Another way of phrasing “this has to be the next narrative beat” is that it’s something we would believe / want to believe / insert at this place in our discourse whether it was true or not. "
In a word, *no*.
It is the next narrative beat because many of us *smell* it. It has been the smell of EA for years. It has been the smell of EA for numerous readers of this blog (and its immediate predecessor) ever since you (very gently) said they made you feel bad for being a doctor instead of working on EA. That was a bad smell and it never, ever went away.
I don't know precisely what is the problem with EA, because I'm not putting in the epistemic work. But I feel *very* comfortable saying "I know it when I smell it; EA smells like it has good intentions plus toxic delusions and they're not really listening".
If you want the best definition I can personally come up with, it's this: EA peer pressures people into accepting repugnant conclusions. Given that, *of course* it doesn't want real criticism.
Side note: Zvi did an extremely good job writing the coherent and detailed breakdown of EA's criticism problem from the other side, and I understand that the whole point of this essay is you don't want to give that kind of thing any air, but it's not very nice or epistemically reasonable not to give it air.
A lot of this could be extended to explain why /r/unpopularopinion is full of popular opinions.
This is an unusually good ACX post, easily top 10%. I've long been frustrated that society has made paradigmatic self-criticism the ultimate high status move, to the point that folks become suspicious of any movement that doesn't do enough of it. You implore the reader to treat EA as a variable — I would really enjoy a fleshed out version of this post with EA actually replaced by X, and merged with the points about self-critique from I Can Tolerate Anything Except The Outgroup.
>Go to any psychiatrist at the conference and criticize psychiatry in these terms - “Don’t you think our field is systemically racist and sexist and fails to understand that the true problem is Capitalism?”
Of course, there's a sense in which this isn't really criticism of psychiatry at all; it's just advancement of one's own ideology. Most of these critics have their own version of "sufficiently progressive" psychiatry that they think would be just fine.
I think EAs are motivated to make those "broad structural critiques" because they feel like things outsiders are thinking, or saying, and EAs want to anticipate those external criticisms so they can (1) refute them and (2) proudly claim they already thought of that and it's a good point, but here's why all things considered it doesn't hold (kind of like a memetic immune system). And they do _this_ because they think it'll win people over and grow the movement.
>I don’t know if it’s meaningful to talk about EA needing “another paradigm"
Beyond this, I'm not clear how EA could reach another paradigm while still being EA. My conception of EA is a charity selection and funding paradigm based on a rationalist quantitative approach aimed at producing maximum utility for everybody in the world (sometimes including animals).
If something is fundamentally wrong with that model, a paradigm shift wouldn't entail 'changing' EA, but instead shifting the substantial amounts of philanthropic money in the rationalist-sphere away from EA and towards a different grantee-search model altogether.
Of course, anybody with a job in big EA would very much prefer that the big donors stay within the EA paradigm when choosing how to give away their money. Openphil and etc. would likely change a bit if asked nicely, but if the fundamentally best way to perform charity was something far away from EA I am skeptical they would be able to resolve the massive conflict of interest. It seems to me that a much better exercise would be to convince the money behind EA to pick their grantees somehow else rather than convincing professionals to kill their industry.
Two bits of wisdom from my grandfather you may enjoy on this topic. He was a Master Chief in the US Navy.
“As soon as nothing starts to become something it can’t be everything, and that always pisses someone off.” Or: whenever you bring something into the real world and hit actual constraints, people being mad about it is unavoidable.
“Saying that there should be more goodness and less badness isn’t an idea, it’s just a complaint from someone who doesn’t want to spend the effort to figure out how to fix something.”
EA folks seem nice, if a bit at odds with some of my own beliefs. Though, per one of your posts I particularly enjoyed, I think those things seem especially prominent to me because I am otherwise identical to them in 95% of the rest of my beliefs. I am always happy to bend over backwards with my spiritual generosity toward complete heathens whose beliefs don’t overlap with mine in any regard.
Re III. “Zen and the Art of Motorcycle Maintenance” was available at the Scholastic book fair in approximately 1990 so I bought & read it. It was published in 1974. IIRC it raised all these points about duality (maybe I’m thinking of “What the Buddha Taught,” but still, widely available.)
Not only is the criticism no longer attached to the ostensible target (in this case EA), I think the machine got stuck in about 1980 and now all it can do is clank and mutter “too individualistic…”
Sometimes it adds “ego death” and “transcendence.”
Things wear out after a while, lose their zing. So back in 1970 when so many American intellectuals were getting high and writing about what their high felt like in Eastern-philosophical terms, it was surprising and potentially impactful. It is clearly not surprising anymore (re-watch Koyaanisqatsi if this needs to be clarified.) The medium moved on. But the habit of saying those same things has never gone away. I think it shouldn’t count anymore though. The more everyone tries to consciously group themselves, the more their uniquenesses are revealed.
Part II of the APA conference topics is a great example, thank you. Maybe it boils down to criticism that requires me to change my behavior versus not.
It is easy to blame capitalism/racism/the system, and no one needs to change anything because capitalism will stay at least for another 50 years. So, we have a wonderful scapegoat. It's not too different from blaming politicians.
In contrast, if anything requires individual people to change something, you'll face resistance. The no-show fee is a great example - even a bit related to capitalism.
My personal preference: I love unfounded criticism, because I can easily rebut it. But if other people criticize me rightly, then I have a problem 😊
(Original author of the Anti-Politics Machine review here -- writing anonymously to keep anonymity, but it seems fair to respond since Scott commented on it directly.)
EDIT: as Ivo points out below, I read this article somewhat defensively and I no longer think this comment is useful. See my reply to this comment labeled “Update:” for less defensive thoughts.
I have a lot of thoughts on this that I do not have time to write up properly, but I do think you’re kind of missing the point of this kind of critique. “The Anti-Politics Machine” is standard reading in grad-level development economics (I’ve now had it assigned twice for courses) -- not because we all believe “development economics is bad” or “to figure out how to respond to the critiques” but because fighting poverty / disease is really hard and understanding ways people have failed in the past is necessary to avoid the same mistakes in the future. So we’re aware of the skulls, but it still takes active effort to avoid them and regularly people don’t. My review gave a handful of ideas to change systems based on this critique, but in a much more fundamental way these critiques shape and change the many individual choices it takes to run an EA or development-style intervention.
RCTs are a hugely powerful tool for studying charitable interventions, for all the reasons you already know. But when you first get started, it’s really easy to mistake “the results of an RCT” for “the entire relevant truth”, which is the sort of mistake that can massively ruin lives (or waste hundreds of millions of dollars) if you have the power to make decisions but not the experience to know how to interpret the relevant evidence. I wrote the review not to talk people out of EA (I like EA and am involved in an RCT I think will really help add to our knowledge of how to do good!) but because I think being aware of this kind of shortcoming and when to look out for it is necessary to put the results of RCTs in context and use them in a way that’s more responsible than either “just go off vibes” or “use only numerical quantitative information and nothing else”.
It's easy to talk about changing the paradigm, because that's not a lever you have. In some organizations, criticism gives you status by giving the appearances of being intellectually courageous and ahead of the curve. Some dogs bark loud on a leash.
Start talking about decisions people actually make and you see what they hold dear.
But you should talk about the decisions people actually make, because that's what they can change. No metric in the world matters unless it's helping you make a decision. Horses were an awful way to fight in WWI, what with machine guns and artillery and all, but for a time they alone filled critical recon and maneuver functions and so the cavalry charged again. Command knew what the casualties would be. They hated it. But they (felt that) they didn't have a decision to make; they needed that intel, they needed those maneuvers. So they just accepted the casualties as the cost of business and moved on. It was useless to talk about a war without horses until cars and trucks and tanks could actually fill all those roles, which is why horses were used again in combat in WWII ... and are still occasionally used today.
Should you reverse any criticism you hear?
Scott: "Not because EA is bad at taking criticism. The opposite: they like it too much."
Reminds me of an old joke:
"Masochist: Beat me.
Sadist: No ..."
Not to go off topic, but this does suggest an alternate history where Mercury is closer to Venus and nobody discovers relativity for another fifty years.
1. Seems to me a large part of the reason specific criticism stings more than general paradigmatic criticism is that it effectively pushes certain members of a community away from the community. That could generate more anger/sympathy.
2. The development of Lagrangian mechanics seems to have come in part from a long-term paradigmatic push to incorporate the principle of least action?
3. Asian countries seem to be more collectivistic than western nations and they do enjoy pretty rapid growth?
I don't know much about the organizational scale, but at the individual social level, a preemptive apology -- even one that's fairly vague and open-ended -- is often the best shield against criticism. If I greet guests with "sorry the house is such a mess" it's rarely really an apology, doesn't communicate genuine guilt or a desire to change, but it makes it socially difficult for anyone to say anything regardless of how messy my house is, because making a specific criticism after a general apology makes the other person look rude. (Note: I understand *many* people feel genuine guilt when their house is anything but spotless, and am not asserting that everyone is like me).
From your description, this feels like the same behavior scaled up. "Oh, I'm so sorry we're terrible in every way" makes it much more difficult for someone to say "you are failing in this comparatively minor but very specific way".
I have some issues with the whole EA movement, but this is a nice piece that gets at some broader truths.
Best work yet! I laughed so hard reading this I woke my wife up. Also, it really crystallized what I dislike about all those vague complaints and explained why people are willing to jump on board things that seem to say really awful things about them. This is the kind of essay that makes it worth my subscription!
"But the specific claim at the end of Part I above - that the people in power prefer specific to paradigmatic criticism, because it’s less challenging - seems to me the exact opposite of the truth."
The 'Sadly, Porn' analysis of this would be that the *purpose* of paradigmatic criticism is to defend against having to act on specific criticism. (The false meta-criticism that the "people in power prefer specific criticism" is the repressed true meta-criticism returning as its inversion, but localized on "the people in power" (i.e. not *you*) to further defend against action.) I don't speak psychoanalysis so my formulation is probably wrong, but you can see the parallels.
This sounds like basic business self help: get specific. Management studies contains a lot of nonsense, but one thing that everyone says, and which seems to me to be true, is that you have to be specific. Things must be planned with total specificity or they don't get done; meetings must end with the assignment of specific tasks; job responsibilities must be explicitly listed. The corollary is: if you have a criticism but don't have an action that can be taken to remedy the problem, then the criticism is (next to) useless.
Brilliant post - amazing it came from the same person who thought it was a good idea to hand tens of thousands of dollars to woke loonies like Alice Evans
How much sense does it make to think this way about a book written in 1990? The book’s criticism may seem vague and unactionable now, but isn’t that likely because the paradigm it targeted no longer exists?
As a response to "why people in this area love criticism so much" moreso than "why global critiques/narratives with clear political agendas are bad" aka "I don't like the woke"
"Effective" altruism means there is always going to be criticism about how "effective" it really is. It lends itself to min/maxing, because there is an optimization implied in the very first word. And really that's where the value is added here IMO.
So you have to decide whether you are really trying to be "effective" in an optimization sense, or just being altruistic, where anything positive is good.
I learned a new word ☺️
Great post! I think you should share it on the EA forum so that more EAs will read it.
Or do you prefer if others cross-post your EA-related posts to the EA forum? (Or do you prefer that no one does this?)
I love you inordinately for the tricyclics paragraph
Do "people in power" prefer "paradigm shifts" versus "specific issues"? The answer might surprise you ... it depends!! On the people and the issues and the paradigm!! WoW!!!
I think "the EA community" should actually do something and then these debates would be in a little more focus.
Rationalism Critics: "Rationalism doesn't work. Stop it."
Rationalists: "A basic tenet of rationalism is that we'll consider your idea. Should we stop doing rationalism? In this essay-"
Rationalism Critics: "No you're doing it wrong."
Yes, these other people are pushing their agenda/viewpoint: race, communism, partiality; but aren't you dismissing them out of hand?
I think it's obvious that EA takes a capitalistic worldview, it's basically squaring the circle of how can I make money and still be virtuous without dedicating my life to doing what I know is good. Because I spend money efficiently to have others do that via excess value.
Which is fine but then you should have to deal with the stock criticisms of capitalism, like its colonial racist foundations etc.
Speaking of the APA - or rather of a permutation of that acronym - wonder if you've seen or have any plan to weigh-in on the apparent efforts of the American Academy of Pediatrics to gag the "debate on a resolution calling for an independent and rigorous review of the evidence on treatment of youth gender dysphoria":
Apparently many there are not terribly enthused about any "criticism" of "the 'gender-affirming' treatment model."
Knowing good and evil was the seduction of the sorcerer. The sorcerer still seduces.
If your point is that specific, well-researched criticism is harder than general, paradigmatic criticism, I agree. I think that's why you tend to see more of the latter, though much of it is low-quality.
If your point is that paradigmatic criticism (or this specific paradigmatic criticism) is without value, I strongly and specifically disagree.
I admittedly haven't read any of the other entries, but I would be happy to see Zvi win (at least some of the prize pool of) this contest. I briefly considered entering this contest, but was put off for the same reasons he expresses in his post.
To distill what he's trying to say: Imagine if the Catholic Church had an essay-writing contest asking to point out the Church's sins. But then, in the fine print, they strongly implied that they will be judging what is a sin based on the teachings of Jesus Christ, and that it would be judged by a select group of Cardinals. That would drive away anyone trying to point out cases where their interpretations of Jesus's teachings might be wrong, or where the teachings of Jesus don't work on a fundamental level.
This is the same deal. The criticism contest asks for criticism, but then implies that it's going to be judged within EA's interpretation of utilitarianism, thus pushing away any potential criticism of the fundamentals.
Could most places stand to be a bit more utilitarian? Sure! Most places could also stand to follow the teachings of Jesus a bit more closely. Those are both in the general vicinity of "good" in my book, if bounded by general common sense.
But both of them have problems, or at least diverge, if you take them to the extreme. You know this and wrote about it in a post of yours, which I think about a lot. That's when you start getting things like Zvi describes, of non-vegans being treated "as non-serious (or even evil)".
Another red flag is EA focusing on "community building" as a core focus area. You can easily torture utilitarianism into justifying that: sure, you could research malaria cures yourself, or you could talk to ten undergrads and convince them to go into malaria research, and get ten times the probability of success!
Meanwhile everyone starts thinking you're a cult. And they're not… totally wrong? EA isn't yet a cult, but is arguably becoming a religion, even more so than the way "every social movement is a religion". It's built on a core moral foundation (utilitarianism), does free distribution of holy books, and has convinced itself that missionary work is of the utmost importance. (Seriously, please read .)
And what tends to happen to religions? They tend to start believing in their own importance a bit too much, sometimes at the expense of actual social good. They're at risk of being captured by people that are more interested in improving their social status than actually making the world a better place. They have a tendency towards purity spirals that take their morality farther and farther into Extremistan.
If (as Zvi suggests) we're at the point where people who might be able to work in AI safety or research a cure for malaria or whatever are being treated poorly because they eat chicken, then that's a red flag that EA is starting to fall into these traps.
When you attack a religion, you've got to attack its roots. Not "This person isn't following Jesus properly," but "There is no God and it's absurd to think that there is."
The problem is, this usually results in the destruction of the movement.
How can EA survive this? I think if it took a diminished view of its own importance, you could still salvage a lot from it.
Instead of convincing a large number of people to be good little EAs/utilitarians, have only a small number of core utilitarians bringing up potential cause areas for broader consideration. This is what GiveWell does, and it works pretty well. Their top recommended charities are hard to argue with, even if you don't buy into the overall utilitarian bent behind their work.
Instead of recruiting undergrads to EA as a whole, try to recruit them to explicitly work in specific cause areas you think they may be well-suited for and are understaffed.
To borrow from a different religion's sacred texts: the goal is to cut your enemy. The goal of EA should be to move the needle on these cause areas, not to move the needle on the acceptance of EA or utilitarianism more broadly.
…I guess I sort of ended up writing that contest entry in this comment.
I’m curious about the way you apologize to other EA thinkers. Your audience is big, and you really don’t want to be nasty. But the apologies signal to me that your thoughts are cut to a system of cordiality that always underpins conversation. I wonder if you had rougher, cleverer comments that you deleted for kindness? I hear you already express unease for easy and harmless conference talk, but its polite air still emerges in your own writing here. Maybe the Kuhnian anomalies we’re looking for can only show up on the fringes of the conversation, away from the padding of politeness.
Best Ending Ever
An example of a specific EA critique that I’ve found really seems to invite serious pushback, downvotes, etc. is the suggestion that more should be spent on Public Relations. Not sure if I’d be feeding the beast by adding my critique to the conversation though.
I’m confused. You’re saying that criticizing criticism of EA for not actually listening to “real” criticism against it is the expected (and wrong) move, but then for the rest of the essay you give excellent arguments for why most criticism that’s received well is generic and not the “real” critique that actually needs to be said! Am I misunderstanding, or are you coming to the same final conclusion as the original expected one?
Love this - working in global health & development research, everyone (especially US/European elite) furiously agrees that the whole enterprise is fundamentally inequitable, post/neo-colonial, unconsciously biased, patriarchal and probably racist, and we should all be doing much better.
So that's fine - journals and pundits compete to signal their commitment to decolonising, reframing, and power-shifting, and we all nod along.
BUT, when someone tries to publish a paper unpicking why a particular trial or treatment programme doesn't live up to its vaunted claims (looking at you, deworming), or how Bayesian analysis shows that some hard-won data might be wrong, the knives come out sharpish... but that's how science works right?
Paradigm, schmaradigm... Mercury is notoriously unreliable, so those hidebound Newtonians were also justified by nominative determinism
We welcome complete paradigm shifts as long as everyone's status remains the same.
I went to my first (and probably only) EA meeting by accident last month, because I know a guy who attends.
They should call that stuff AA. I am purposefully not expanding that acronym.
So far, I've assumed that many EA-aligned people welcome criticism because of a culture of (performative) openmindedness. I think the point this essay makes is better, though.
If criticism is vague enough, no one feels personally attacked – it's easier to nod along. You can feel productive and enlightened while changing nothing. I'm not sure if that's everything there is to it. What I am convinced of now is that being specific when criticising is valuable. Suddenly whatever you're talking about becomes tractable.
This isn't quite where I thought this was going. I thought this would be, "If you indiscriminately listen to all criticism, you waste too much time listening to idiots. Biologists are best off ignoring the criticisms of creationists. Physicists should ignore the flat earthers."
This title is a reference to the young Marx (the Hegelian stage of his life) in which he criticizes the Bauer brothers? If not, it is an amazing coincidence. (ref: https://www.marxists.org/archive/marx/works/1845/holy-family/index.htm)
Having spent a lot of time in the humanities, I think there is a simpler explanation that probably gets closer to the truth: paradigmatic criticism is an easy win.
I recall being in philosophy seminars and somehow these types of critiques would always emerge, even when they were barely tangentially related to the subject matter.
I eventually came to admire the brilliance of it, since it is almost impossible to argue with. On the one hand, you cannot make an argument within a paradigm if someone completely refuses to engage with that paradigm. It's like stating that 1+1=2 and someone replying that the existence of numbers is culturally relative. On the other hand, no one wants to argue with paradigmatic criticisms because they are often morally loaded. To use the previous example, any failure to acknowledge the cultural relativity of mathematical truth might be suggestive of some kind of bigotry against people who are bad at math.
To make matters worse, these kinds of critiques were always given the most appreciation. I suspect this is because no one dared question their relevance, and because they make everyone feel like they're doing something important without doing anything. It's like educated middle-class white people embracing their 'white privilege': it makes them feel virtuous without having to do anything costly to help disadvantaged groups.
In sum, it's an easy rhetorical tactic that gives status to the speaker, status to the audience for acknowledging it, and it's difficult to argue with for logical as well as social reasons.
> Are we sure that becoming less individualist would be a better use of our energy than becoming more individualist? How did we achieve that certainty? It sure seems like more individualist countries are richer and better places to live. And that within those countries, the most individualist regions and social networks are the richest and best. Aren’t more intelligent people generally more individualist when you do the psych tests and surveys?
There are many books that do allude to how people get to this certainty, that show people are generally happier when they are religious say. I don't feel comfortable summarising these arguments because I think there are entire schools of such thought. I think asking these questions suggests that this case has been assumed, rather than argued for, which is a little unfair.
But I was mainly intrigued by the 'intelligent people more individualist' idea - is this true? It doesn't mesh with my experience. There are so many terrible articles saying 'intelligent people are more likely to do X', for obvious facebook clickbait reasons, which makes researching this area hard. Do we have good data on actual traits that exist for more intelligent people, and where can I find them?
The question is, will OpenPhil print this and put it on a wall?
Now this is, finally, some proper criticism of EA. You should submit it to the contest for extra meta points.
This is probably my favourite article this year. Predictably so of course, since I'm an EA.
It's easy to reuse paradigmatic critiques from other fields/disciplines, and people will likely either not notice or praise you for being widely read. (This isn't necessarily a bad thing; there's immense value to cross-pollinating insights! It just lowers the barrier to entry.) Specific critiques can only be made from scratch, which takes much longer. It's also much easier to falsify a specific critique, so people are more likely to work on one and give up or not publish it.
This means that there would be a lot more paradigmatic critiques than specific, even if people in a field collectively spent the same number of hours on each. To balance the two types, you'd need a culture that goes out of its way to reward specific critiques and acknowledge the higher effort involved. (Which might ruffle the feathers of some paradigmatic-critics, unless you're really sneaky about it.)
Really enjoyed this sentence:
"It’s so fun that it can be hard to resist the temptation to believe you’re in it: just as economists have predicted ten of the last two recessions, so science journalists have predicted ten of the last two paradigm shifts."
This is a bit like reverse bikeshedding (https://en.m.wiktionary.org/wiki/bikeshedding).
"The architecture of our office perpetuates systemic bias" -> solemn nods of agreement.
"The bikeshed should not be that hot pink colour" -> angry designer questions if you read his vision document carefully enough.
Phase 1/phase 2 paradigmatic shifts:
"We need to rethink architecture with sustainability in mind" -> unactionable, unfalsifiable.
"We should sacrifice arbitrary amounts of parking space if it allows optimisation of cycle storage" -> now we can have an interesting debate.
I don't want to nitpick examples too much, but Remmelt's piece was largely an attempt to lay out what Glen Weyl thought about the Rationalist Community and EA Community, not just to provide a criticism of his own, and he succeeded at this so well that he wound up in an extensive private correspondence with Weyl that helped motivate Weyl to be more nuanced/charitable towards Rationalists/EAs in general. I don't know how well this generalizes, but I think you could have spent some time on the idea that, between overshooting and undershooting with criticisms, correctly shooting may be implausible as a community-wide standard, and overshooting is often valuable to community health for reasons other than just the criticisms themselves all being substantially good.
Hey, I am the author of the ‘some blindspots in EA’ post.
Just started reading Scott’s post, after a friend shared the link with me. She, like me, thinks there is a self-criticism fetish in this community – which can get quite unproductive *in terms of* how people seek out and address criticism. The intro of the ‘criticism of criticism of criticism…’ post resonated. I agree also that criticism tends to be quite broad and abstract, and this is a fair portrayal of the list of specific distinctions in my post.
In terms of Scott’s responses to parts of the ‘some blindspots in EA’ post, I appreciate Scott being transparent and humble about what he focussed on and selected out for his writing, and what he cannot or is not making claims about. It’s always hard to portray others’ work you are criticising in a fair or at least open-minded way, and I appreciate the care Scott put into doing this.
The main clarification I need to make is that the brightspot-blindspot distinctions I wrote about are not about prescribing 'EAs' to be, e.g., less individualistic (although there is an implicit preference, with non-elaborated-on reasoning, which Scott also seems to have, but in the other direction).
The distinctions are attempts at categorising where (covering aspects of the environment) people involved in our broader community tend to focus more ('brightspots') relative to other communities, and what corresponding representational assumptions we are making in our mental models, descriptions, and explanations.
These distinctions do form a basis for prescribing that the community not just make hand-wavy gestures of 'we should be open to criticism' but actually home in on the different aspects other communities notice and could complement our sensemaking in, if we manage to build epistemic bridges to their perspectives. I.e. listen in a way where we do not keep misinterpreting what they are saying within our default frames of thinking (criticism is not useful if we keep talking past each other). I highlighted where we are falling short and where other communities could contribute value.
~ ~ ~
I’m coming at this subject from a very different angle than Scott, which is going to take too much time to clarify. I do not want to waste my and everyone else’s time by reading and writing long comment exchanges. If you are interested though to hear my thoughts on a specific comment I may have missed, ping me at remmelt[at]effectiefaltruisme.nl with a link to the comment.
Thank you for capping this excellent analysis with the last sentence. Bingo!
It may lead straight to "But then what?", but it gets the essential problem perfectly.
A couple of instances of racism in medicine-- racism of the failure-to-pay-attention variety rather than active malice.
Skin discoloration problems which are more common in people with darker skin.
Malone Mukwende, a med student from Zimbabwe who's studying in London, is putting together a handbook/website about diagnosing problems in patients with dark skin, because symptoms like blue lips or the classic target pattern for Lyme disease look different on darker skin.
Pulse oximeters don't work as well on people with dark skin. People are working on developing better pulse oximeters, but it isn't finished yet.
At this point, the only solution might be to convince doctors to not trust pulse oximeters too much if other symptoms are present.
Is there a predictive processing angle here?
Scott’s argument seems to be that effective criticism requires generating a prediction mismatch: do this treatment instead of that one. Fund this intervention instead of that one.
Paradigms themselves are much harder to argue for because they are so much harder to develop and transmit. Usually one or two prediction mismatches or gaps can be explained away, but the problem is worse than that: understanding a new paradigm is often very hard to do from “inside” the old paradigm.
I suspect paradigm shifts happen more through an NP-type approach: young people see what works better and copy it, while older people have arranged so many memories and experiences in terms of the older paradigm that they can’t change, since the cost and risk of transition are extreme.
So perhaps most people aren’t constructing paradigms from scratch or evaluating new ones beyond a certain age, probably for cost and risk reasons. “Our paradigm has problems at the edges” is part of most current paradigms; hence the focus on “marginalized people”, a tacit admission of flaws while avoiding being explicit about them. I suspect this notion outcompetes “our paradigm is perfect” but will eventually lose out to paradigms that highlight their own failures in specific ways.
As you embody with your apology, effective criticism deals in the particular. Therefore to actually do it risks offense as well as being disputed or disproven. My suspicion is most people just don’t have the guts. I also suspect many a career is built on vagueness and virtue signaling. It’s entirely rational not to take an unnecessary risk, isn’t it?
>If we had a hundred such complaints, maybe we could figure out some broader failure mode and how to deal with it.
I want to complain about the passivity of your phase one.
I agree it seems good to have more complaints like the criminal justice criticism. I also agree with the broader point of this article about looking for specific problems over generally accepted broad challenges to the whole system.
Your phase one and the sentence above paint to me a picture that after sufficient independent clues accumulate, inspiration strikes (as it is now likely to do), we quantum tunnel through the insight barrier, and from there follows more mundane work following the road, developing the new paradigm, planting the signposts.
However, a subtle inaccuracy that observably keeps recurring -- as in the Mercury story -- might look more like EA being inefficient with funding due to a specific repeated error that requires concrete changes to the framework to stop systematic miscalculations.
This is a (transparently) wild guess about the shape of what the next paradigm shift might be caused by, and I'm drawn to it by trying to walk closer to the Mercury story.
Where I think this is incompatible with the picture of phase one above is that looking for one such systematic concrete issue is very different from waiting for a pile of independent problems to accumulate, until we have enough 'inspiration fuel' to finally light the fire.
The idea of waiting for a hundred such complaints so we might, maybe, see something broader emerge seems to still follow a pattern of liking broad non-specific patterns emerging over hard contradictions rooted in one or two solid anomalies that you can keep coming back to for support and/or gently bash people/yourself over the head with.
The older I get, the more I notice that people who have actually tried to build things (companies, institutions, organizations) usually don't make broad paradigmatic criticisms. I think this is simply because they understand the practical uselessness of such critiques.
If there were some sort of ACX Survey Question that could come out of this, I would be curious to probe the age dependence. I would suspect that younger people are more prone to either making or vaguely nodding along with paradigmatic critiques.
I imagine it would be hard to ferret out the real variable I'm looking for: "have you, yourself, actually tried to or succeeded in building a human organization." A lot of managerial types would likely count themselves into this bucket, despite merely being hired into a leadership role, and thus probably never learning the relevant lessons. (In fact, hired-in managers are likely to have an even more skewed sense of what is required to build an organization, because they take existing structures and norms for granted.)
Great article! It spells out many loose thoughts and beliefs I have had for a while but never quite managed to pin down. I strongly agree that EA has a fetish for criticism, even outright bad-faith criticism, and I am glad someone of significant cultural power pointed it out.
Scott, you've always had a great talent for pointing out arguments I've felt a vague frustrated discomfort with in some way I couldn't figure out and exposing them as part of a broader pattern made of bad thinking and weird status dynamics.
The next narrative beat *would* be that I recognize that these arguments always have the result that I'm more comfortable with the beliefs I already had and should be skeptical from now on of such things, but as far as I can tell from my memory of such cases I still mostly agree with your arguments even after accounting for that.
So far as I understand it, Einstein did not develop his theory of general relativity to deal with the Mercury "error".
Rather, "Einstein felt a compelling need to generalize the principle of relativity from inertial motion to accelerated motion. He was transfixed by the ability of acceleration to mimic gravity and by the idea that inertia is a gravitational effect. " See, the following for an accessible treatment: https://sites.pitt.edu/~jdnorton/teaching/HPS_0410/chapters/general_relativity_pathway/index.html#:~:text=Einstein%20felt%20a%20compelling%20need,static%20gravitational%20fields%20in%201912.
Newtonian physics was replaced (sort of replaced - because it is still taught thru most of high school and college) not really because of prediction errors per se but because it could not explain the motion of light. So, I'm not sure your distillation of Kuhn does Kuhn (or even critics of Kuhn) justice. Not every 'revolution' is born of the accumulation of errors during 'normal science'.
Part of the lens thru which you are viewing EA is as if EA is a "new paradigm". And perhaps most EAers think they are doing something "new". But I'd have to say EA is not really new at all. The incorporation of "science" into political and eleemosynary structures and endeavors has quite a long history.
Not to miss the point entirely, but there’s an extension to the collectivist vs individualist argument that may not be in the critiqued article but is implicit wherever this argument is made, giving it relevance as a moral argument opposed to a philosophical or factual one.
By encouraging processes that lean towards individualism in more communal cultures, you are essentially supporting the changing of the culture from above, a kind of soft imperialism. King Cnut provides specific counters to the benefits of individualism like the greater happiness of religious folks, but even if it *is* on-net better, one should recognize and reckon with the paternalism inherent in the approach — that those sad collectivists would be happier if they did it more this way — our way. This is precisely what empire-lovers have said about savages for centuries.
There are of course good bitter pill arguments for this approach; Rome gave the Isles civil infrastructure, the Isles gave much of the world common law. And no one is trying to dominate others here. But there should be acknowledgment of this trade off, especially by a philosophy trying to maximize the good.
Really good article. I can recall instances of myself doing paradigmatic criticism of technical architectures, getting the flat "nod along and ignore" response, but having the spiciest email threads come out on fiddly details that look like bikeshedding from the outside.
Do it! Be specific!
Have you read NOISE? Seems relevant.
Quote from the above review:
To be strictly fair, the authors do acknowledge the existence of algorithmic bias, although they perhaps underestimate its magnitude. A crucial point they do not acknowledge, however, is that algorithms don’t merely replicate human biases, they amplify them – and by a significant amount. One that was trained on a dataset where pictures of cooking were 33% more likely to involve women than men ended up associating pictures of kitchens with women 68% of the time. Until these issues are ironed out we should beware of social scientists bearing algorithm-driven gifts.
Good post. Also in the category of specific criticism that raised enormous hackles on the EA forum: On Deference and Yudkowsky's AI Risk Estimates.
The post argues that EY has made some very bad predictions in the past, including a since-disproved prediction that the world would end due to nanotech, that should influence the weight we give to his future predictions. It's about as stone-cold a Bayesian take as one could ask for, and people absolutely lost their shit over it.
EA is a paradigm and as such won't be the source of the next paradigm. It should get comfortable with that and just place the bets it wants to place.
The discussion about EA's self-flagellating (my words) "love of critique" made me think of the following (with apologies to the original parable):
There once were two workers. When the boss was dissatisfied with their work, he began to critique them. The first grumbled and complained about being criticized, responding that he was already doing everything right. But, in the end, he changed his behavior according to the critique. The second accepted the critique gratefully, exclaiming that he was glad to be corrected and that the boss should feel free to let him know instantly when something wasn't right. But then changed not a thing. Which one of the two was more responsive to the critique?
Also, insofar as the EA paradigm is made up of things like utilitarianism, hedonism, altruism and claims about moral obligation, it would be kind of silly to solicit paradigmatic criticism via a contest like this. Philosophers have been examining these ideas under a microscope for hundreds of years. Anyone capable of thinking up a truly novel and mind-changing objection to one of them would have published it in Philosophical Review rather than waiting around for an internet EA contest.
"But we’ve got to change paradigms sometimes, right? How do we do that without soliciting paradigmatic criticism?"
I write about this general principle more here.
Specific criticism towards the scientific community here.
Implementation testing here and here.
The TL;DR is that scientists and academics operate along the same incentives of any other profession and are reluctant to admit that they are wrong because it will mean they lose status and power. The only way to force them to change their habits is to make it clear that the consequences of NOT admitting that they are wrong (when they are) are going to be far more severe and harmful to them than the consequences of admitting they are wrong. You do this by building up a cult following that is willing to hurt the shitty scientists. Also, if you have any sort of useful knowledge (memetics, superforecasting, etc) that the scientific community refuses to believe in, you need to deliver that knowledge correctly. Instead of humbly petitioning the "respected scientists" to look at your findings, weaponize them and use them to destroy people's trust in the scientific paradigm altogether. Don't come to arrogant people as a supplicant pleading for them to "please notice me senpai", instead come to them as a conqueror and hurt them until they are FORCED to notice you.
Wait, what are you prescribing levothyroxine for?
To state somewhat of an obvious point: the reason EA criticism gets so much traction on the forum is because it’s the most interesting thing you can write. All other posts are about cause areas and they’re (a) only appealing to people who understand or are interested in the cause area and (b) don’t lead to a lot of discussion in that the most you can say is usually “that’s a great idea, keep it up / someone should try that!”
Why is the final part of this article solely about Kuhn? Surely there are other thinkers on the subject of paradigms?
I feel like the example of paradigmatic criticism given in the article -- how do we know reality is real, or that capitalism is good -- is a bit of a straw man. I've always thought paradigmatic criticism of EA work was more about points like:
- Giving in the developing world, as EA work often recommends, is often used as a political tool that props up violent and/or corrupt governments, or has other negative impacts that are not easily visible to foreign donors
- This type of giving also reflects the foreign giver's priorities, not the recipient's
- This type of giving also strangles local attempts to do the same work and creates an unsustainable dependence on outsiders
- The EA movement is obsessed with imaginary or hypothetical problems, like the suffering of wild animals or AIs, or existential AI risk, and prioritizes them over real and existing problems
- The EA movement is based on the false premise that its outcomes can in fact be clearly measured and optimized, when it is trying to solve huge, complex, multi-factorial social issues
- The EA movement consists of newcomers to charity work who reject the experience of seasoned veterans in the space
- The EA movement creates suffering by making people feel that not acting in a fully EA-endorsed manner is morally bad.
This is the kind of criticism I would consider paradigmatic and potentially valid but also not, as far as I can tell, really embraced by EAs.
> But we’ve got to change paradigms sometimes, right? How do we do that without soliciting paradigmatic criticism?
> I don’t know, man, I don’t know.
But this one is so obvious that the answer is a well-known aphorism. Change comes one funeral at a time. New entrants to the field -- any field -- choose a paradigm to work in, and to a first approximation they all stick with their choice forever. A paradigm fails when its practitioners end up dying as bitter, lonely losers instead of rich celebrities; this prevents new entrants from joining that paradigm.
The remoteness of the determinant of "success" from the truth explains why paradigms need not be tightly connected to the truth.
> The anomalies with Newtonian gravity weren’t things like “action at a distance doesn’t feel scientific enough” or “it doesn’t sufficiently glorify Jesus Christ” or even “it’s insufficiently elegant”. The one that ended up most important was “its estimate for the precession of the orbit of Mercury is off by forty arc-seconds per century”.
The post gives this due credit, but the issue of Mercury has the most historical prominence because it's easy to measure. "Action at a distance doesn't feel scientific enough" is, unlike the other two 'bad' examples, an important criticism that has, in the case of gravity, completely carried the day. We do have a modern theory relying on action at a distance, quantum entanglement, and opinions range from virulent loathing to resigned neutrality.
I'm happy to see the post explicitly equating EA's fetish for content-free criticism of EA with the "structural racism" white fetish for content-free criticism of whites. I would go so far as to suggest unkindly that in fact it's a white fetish in both cases, and the reason EA is like this is that it consists mostly of self-hating whites. In that context, I kind of wish people with this level of need for masochism would get into EA where the same fetish does less damage. If they would, then stamping the impulse out of EA would be productive for EA on its own terms, but counterproductive for society.
More realistically, if instead of getting really into EA or antiracism the same people got really into Catholicism, or any other well-rooted traditional belief system, there would be an established channel for their impulses (maybe several!) with the advantages that (1) a fairly robust support system already exists; and (2) people around them would already have well-calibrated theories of how seriously to take them. I think this model -- "when people go nuts in predictable ways, they get channeled into a system that absorbs their quirks, maybe gives them something useful to do, and lets everyone else know what to think about them" -- is underappreciated as an aspect of traditional religious practices.
Dualism / objectivity / disconnectedness / analysis is a constellation of mental skills that have one use, while non-dualism / subjectivity / interconnectedness / synthesis is a constellation of mental skills with a different use.
If you're writing a short story, for example, you're usually in a very non-dualist, subjective, synthetic, interconnected mental mode while you're composing the first draft. You're asking, "Is this cool?" "Does this resonate?" "How does this seem?". Whatever you put down, that's the right thing to put down. But to edit and proofread, you have to switch to a dualist, objective, disconnected, analytical framework: "Is this right or wrong?" "What does this section do?" "Is this sentence grammatically correct?" In this mode, it turns out that lots of things were the wrong thing to put down and you need to cut them.
They say, "Write hot, edit cold." This is what they mean. I think of these as "expansive" versus "contractive" phases. The expansive phase is characterized by waste, inefficiency, creativity, contradiction, high spirits, productivity. The contractive phase is characterized by perfectionism, miserliness, efficiency, focus, goals.
I don't know EA deeply, but the essential idea ("current charity is inefficient; how can we do better?") is fundamentally contractive. EA is editing / proofreading / analyzing / criticizing charity. Of course, everything partakes of both phases; EA needs expansive-phase thinking too, but it's going to be secondary.
"Here’s my proposal: ask why they prescribe s-ketamine instead of racemic ketamine for treatment-resistant depression."
Isn't it common for one enantiomer to have more of the desired effect and the other to have more of the undesired effect? (Escitalopram outperforms citalopram in studies, if I remember correctly, and you questioned whether Johnson and Johnson picked the correct ketamine enantiomer. And then there was that whole thalidomide thing...) Perhaps there should be more psychiatrists willing to prescribe racemic ketamine to take at home, but if regulation or insurance forces psychiatrists to monitor anyone on ketamine, and racemic ketamine has significantly worse side effects (if I remember correctly, a dose of esketamine is similar to a moderate recreational dose of ketamine, and an antidepressant dose of racemic ketamine may need to be double that), it's reasonable to be unwilling to do so.
Specific vs general criticism reminds me of an exchange between Peggy and Bobby Hill. Paraphrased from memory:
Peggy: All my life the media has tricked me into believing that my big feet are ugly.
Bobby: Who? Who in the media tricked you?
> So instead, they’ll preach things the old paradigm says are good, which haven’t been implemented because they’re vague or impossible or not worth the tradeoff against other considerations. Listen too hard, and you’ll go from a precise and efficient implementation of the old paradigm, to a fuzzier implementation that emphasizes trying to do vague, inefficient, or impossible things
Could I get an example of this?
I'm wondering what two million dollars (or two hundred million) could do to improve criminal justice in the US.
I've at least been following news stories about failures and occasional successes in the project for decades, and it's much harder than I thought.
One possibility would be to just publicize existing problems-- bad forensics, coerced confessions, and so on.
One problem is a double standard-- relax standards for imprisonment before trial, and you'll be blamed for any serious crimes committed while waiting for trial, but you won't get credit for not screwing people's lives up.
In regards to damage done by the criminal justice system, it's really hard to compute, because it goes well past the individual-- costs to family and friends, and general damage caused by pulling useful or somewhat useful people out of a community. There's also a corrosive effect of putting innocent people in jail.
Much as I detest the war on drugs (and how would you spend two million dollars on ending it? it's probably a problem of comparable size), it isn't the main problem, and neither are private prisons, even though the incentives are ugly. The major problem seems to be excessively long sentences for crimes, and carelessness in applying them.
Maybe just give the money to the Innocence Project and say the hell with it. It isn't even enough to make another tv show about the problem, though maybe it's enough to get people to produce one.
"What do you mean I'm not engaging with your criticism? I spend all my leisure budget on dominatrices telling me what a bad boy I am!" This is a silly argument.
Glad to see a return to the kinds of posts by Scott I prefer
Software engineers seem to love to complain about the general paradigm of software development within their organization. Stuff like "We never have any documentation for our code," or "Our whole application architecture is convoluted, ad-hoc, and fragile" or "We never think about usability or accessibility until the very end of the development cycle." Everyone knows on some level that the current paradigm is broken and can complain about how broken it is for hours.
But, when you try to propose some specific solution to address those concerns, like "We should run a nightly build task to ensure that all symbols have Doxygen-formatted comments of at least 500 characters," or "We should have a giant three-hour Zoom meeting with stakeholders from the architecture team, the interface design team, and the accessibility team before beginning work on any task," you'll start getting pushback. Because replacing a flawed paradigm with a magically perfect paradigm is always going to be better for everyone by definition, but specific solutions are always going to have tradeoffs and are always going to have some kind of negative impact on someone.
I don't know much about EA; I am too far from it. All I can react to is what has reached me.
And what has reached me is EA giving money to weird and self-serving things, such as printing physical copies of Harry Potter fan fiction, or paying someone to relax and learn to ride a bike. When I bring this up, for some reason all the EAs defend this instead of just saying "that wasn't representative".
If EAs keep defending printing physical copies of HPMOR, it does certainly look like they cannot take criticism. (Or at least it looks like *something* has gone badly wrong with them.) So that's where I, for one, am coming from: I find EAs are blind to their self-serving bias.
(I know that most of EA money is in global poverty and AI risk or whatever, but then it should be easy to disavow paying someone to learn to ride a bike, and yet EAs resist it for some reason.)
I don't have a problem with people inviting "in-paradigm" criticism while rejecting out-of-paradigm criticism. If I ask for specific tips on how to improve my piano playing, I'm not interested in hearing your opinion that pianos suck and I should learn the guitar instead. If I've already decided on the piano you're not helping, you're just being annoying.
Having in-paradigm discussions about how to better do the thing you're doing is always valuable, but out-of-paradigm discussions tend to be louder and more distracting, especially as there's always more outsiders than insiders.
Out-of-paradigm discussions are valuable too, but they can't always be the order of the day.
"If you incentivize people to preach at you, they’ll do that. But they can’t preach the tenets of the new paradigm, because they don’t know it yet. And they can’t preach the implementable tenets of the old paradigm, because you’ve already implemented them."
I see a lot of this in the social justice movement. We've already implemented the easy parts of the paradigm (e.g. de jure desegregation, gay rights), so we get a lot of non-specific, non-actionable complaints ("Black lives matter!"). The irony is that if there was a new paradigm, it wouldn't be about fighting oppression. It can't be; that's the current dominant paradigm. I strongly suspect most social justice activists would react to a new paradigm by calling it racist, i.e. by becoming the hidebound retro-thinkers who don't get the new thing.
There's a fetish for criticising the government in the US. Like saying anything negative about the government automatically notches up your social status and in-groupness. It's an unquestioned cultural observance, a birth-right, plus it has that attention and stress release that sustains and naturalises habits of thought. It doesn't get much reality testing. There are downsides to this social game.
Any organisation that resembles a government will get it too. The EA movement is a kind of alternative government (ok, alternative-to-government, if you must) populated by those both high on the fetish scale coupled with heads full of ideas. EA occupies a paradoxical government-anti-government niche, it's going to get hit.
The real and awful downside is the manifold ways it screws up process of government, now at web speed and scale. Governments aren't going away anytime soon. Magical thinking can work, but it often makes problems worse. In this case, clearly worse, in my view.
This is classic salesmanship and propaganda. You point out your own faults ahead of time so it seems like you've addressed them and/or are open to that sort of thing, then steamroll ahead with the things you already wanted to do anyway. It derails real criticism and fills in the part of the conversation which might get filled in by something actually controversial. Like replacing the paint by numbers psychiatrists of the world with a simple form patients fill in.
It is one of the 'dark arts': artificial self-flagellation, a faux show of how open and progressive you are. Yeah, yeah, we've already covered this part about me being wrong...and it turns out I'm not wrong!
It is a framing tactic to do just as you said...create a false, non-actionable appearance of things that sound controversial or critical (which are even self-nominated!), but pose no concrete challenge to any specific thing anyone is doing, no demand that they stop doing that, do it differently, or do something new or actually risky. This is a classic herd-mentality defence mechanism.
People play along with this unwittingly and need no sophisticated understanding of what they're doing...just as in all things in life. This is the way all propaganda infects the mind, and people parrot empty words at the right time to forestall any actual action or change on their part. People love doing nothing - if you give them an 'out' to already be right, and offer meaningless 'awareness' of things, they'll pick it almost every time. People get to highly paid professional positions and niche intellectual places by jumping through all the hoops put in front of them like good doggies. A lifetime of successful hoop-jumping and barking on command doesn't lend itself to deep reflection on or critiques of what they're doing. As with soldiers or doctors, they were 'just following orders'.
It supports the inertia of their mentality to just keep on trucking and not think about things like $225 fees for a missed appointment from someone who only makes $18k a year who had to pick up an extra shift or get fired - or who had a breakdown or depressive lay down episode or panic attack or whatever mental problem they are seeking treatment for. The financial penalty IS THE treatment....maybe? Let's go back to talking about something that doesn't personally challenge things I'm doing to make money only treating those who can afford to pay! I've got a lovely batch of approved criticisms we can distract ourselves with over cocktails.
Like how 'the system' is to blame for them being too poor to pay for my expensive services...as if any economy exists where peasants can afford the services of professionals - doctors, lawyers, accountants, and psychiatrists will always be too expensive for serfs. But choosing to personally earn less money to help more people? That one goes straight in the taboo basket!
If you want to know if you have a real criticism, people will get angry about it and you may well get punished for speaking it. That ketamine guy, so annoying, let's not invite him back next year.
A really hard one would be: if you're just going to treat people with a protocol spelled out by some professional body, based on asking simple questions about how they feel, can't most psychiatrists simply be replaced by a questionnaire? That'll really get them angry!
"You personally are a fraud"...but then, as Scott did, ask them about making a choice outside the guidelines, such as with the ketamine example, and boom, they're upset and don't want to talk to you anymore. For them to personally take a risk, or have to learn the differences rather than follow the guidelines like a damn recipe book? They've got insurance and lawsuits to worry about! Taking a risk with 0.5% blowback could lead to a lawsuit and the loss of a decade of their damn lives spent training and getting certified as doctors!
They plan to treat more than 200 patients, so why risk 10 years of YOUR life for other people who are already failures? They're also here to help! But not at any cost or risk to themselves. It's the insurance industry to blame, go look there! Leave me alone! I've got a trip to Sonoma and am taking my 3rd long weekend in the first half of this year and you're really cramping my mood! Even a tiny risk to me is unacceptable. Racism is still bad, let's go back to that topic!
No way they'll risk their personal status or have an independent medical view! So why do they exist again?
So tell us again why the vast majority of paint-by-numbers psychiatrists can't be replaced by a simple form, which doesn't cost $300k-plus a year in salary? You can get a 5-10x savings by having some $30k-60k-a-year nurse make sure people fill in the form correctly to discover which pill is right for them! Hurray! Cost disease dispelled!
A lot of what's being described as "paradigmatic criticism" is not closely related to Kuhn's idea of paradigm change.
The most common view in the philosophy of science before Kuhn was Popperian falsifiability. Popper believed that theories were rejected when they made false predictions. Kuhn rejects falsifiability because (1) all theories have always been falsified and (2) then what? You can't do science with no concepts to understand the world at all. Kuhn instead argues that a paradigm is only replaced when there is something better to replace it with.
Telling EA that they are too individualistic does not contribute to a Kuhnian paradigm shift. It's trying to falsify EA without offering a replacement. What would move towards a potential Kuhnian paradigm shift would be to create an alternative altruistic organization that was more communitarian. Initially, this alternative organization would only be better than current EA in a few specific areas. But as they work more on those areas, and as more people start shifting to their way of doing things, eventually they might become the dominant paradigm for this community.
Watching crypto markets develop was fascinating because they reproduced a huge number of historical scams and frauds that had largely been driven out of 'real' markets - it was a chance to see history replay, with some small tweaks in the initial parameters.
Watching EA & Rat spaces independently work their way through the same issues and dilemmas as eg Aquinas, 800 years later, has a similar vibe. There are probably other parallels that could be drawn between that space and monasticism.
I have mixed feelings about this. It infuriates me to hear EAs trying to claim credit for their willingness to hear criticism. But it also seems odd to require them to do so in the first place.
Let's say I have a negative view of socialism. I think it's terrible and has brought more suffering to the world than anything else human beings have ever foisted upon themselves. I'm really not going to be impressed by how willing socialists are to receive criticism, unless of course they come round to the understanding that socialism is truly terrible, they should stop advocating for it, and join me in ridding the world of its malign presence.
What I'd actually see in socialists being willing to entertain criticism would be some perverse attempt to strengthen their misbegotten ideology. They'd believe that having survived criticism it had become even more worthy than it was before. The criticisms allowed would never be paradigmatic, merely concerning minor matters of implementation, or whatever went wrong the last time socialism was inflicted on some unfortunate population.
But from the point of view of people who really like socialism, my demands are ridiculous. I'm seemingly only interested in socialists accepting the kind of criticism that immediately and irrevocably ends socialism itself. Why would anybody do that concerning something they already fundamentally believe in?
I feel something very similar with EA. I think it is a bad idea in its entirety, so I'm unimpressed by EAs attempting to strengthen their belief in their ideology by welcoming 'criticism', because to me this criticism can only be superficial and non-paradigmatic. For the "Yeah, we're open to criticism" to mean much, it would have to include the realistic possibility that the believer might renounce the EA worldview as a whole. And of course, in general, nobody is willing to do that, me included. For the very good reason that you'd be offering up something akin to all your values, all at once.
If you're an EA, do you ever countenance the possibility that what you're doing is a properly bad idea and you should just cease and desist? No? What then do you think being open to 'criticism' is going to do for those of us who think your whole enterprise is a terrible mistake?
When you have an institution with self-criticism as a cultural centerpiece, the self-criticism is always ritualized. Oftentimes the *self-criticism is extremely accurate and correct* (e.g. X is too authoritarian, X is too political, X is missing a bunch of obvious corruption due to a self-serving bias towards legibility), but still ritualized, as a symbolic protection mechanism. Even this exact mechanism can be called out, and people will still nod their heads sagely, rather than feeling curious or fearful, both of which are emotions that could motivate actual change. Getting people to read The Uruk Series won't help; only focusing on the object level and discomfiting people will help.
I think this is what is happening with EA. When you take a normie and have them do things they know are wrong as part of an institution, you will get confabulations and complete failures of self-reflection. When you do the same thing with hyper-reflective hyper-scrupulous autists (term of endearment, I am one), you will get reflection-as-ritual, which strips away its semantic value. This paragraph is more speculative.
With advance apologies if this point has already been made, I think one should consider the possibility that the reason the questions cited as possibly starting a real fight would start a real fight is because they actually challenge very profound issues in medical practice. I can't speak to levothyroxine and so on, but "asking if they’re sure it’s ethical to charge poor patients three-digit fees for no-shows" certainly does touch some of the most fundamental questions. Our age's medical practice, indeed our whole society, is based on a model of organized, bureaucratic rationality (I mean rationality in the sociological, Weberian sense). In this model, rules are established and must be consistently followed. Schedules are set and punctuality is a moral value. Good practice involves following those rules. If patients are informed in advance that no-shows will be charged for their appointment, then all no-shows must in fact be so charged. This ensures fairness, predictability, and (crucially) unimpeachability before supervisory organs. The other model is essentially feudal. Here, the dr's practice is the dr's property, and the dr may do what they like with it, "to the displeasure of anyone who says otherwise," as one 14th-century Aragonese hidalgo boasted to a royal officer. If a patient pleases the dr, perhaps by being earnest and respectful and really trying to get better, but has problems in life that make it hard to attend appointments--all, and this is key, in the dr's independent, unreviewable judgment--the dr can exercise grace and favor and forgive the fee (a term that derives etymologically from "fief," incidentally). If the patient is disrespectful, high-handed about making appointments, or anything else that displeases the dr, the dr is entitled to damage them with a punitive fine.
In other words, in the final analysis, to ask whether it's ethical to charge poor patients for no-shows challenges one of the most profound and important drives of western civilization in the last 500 years, which is to undo that feudal independence and arbitrariness of judgment, for better or worse, and replace it with bureaucratic, depersonalized rule-following. I suspect this all sounds a little flip (or, conversely, grandiose), but I'm deadly serious.
What if the point of criticism is not to criticize at all, but rather to associate oneself with (in Lacanian terms the object signifier of) a movement or organization or social group which has adopted criticism as part of its identity. This would correspond to the student who attends the university, not out of a desire for knowledge (especially not knowledge about himself) but instead to associate himself with the university (which purports to have knowing, as its object signifier). Actually knowing anything that might threaten his association with the University would defeat his purpose in pursuing this association in the first place.
Since Lacan's day, perhaps the "university" (outside of STEM, obviously) is no longer so much associated with knowing as with 'critical studies'; those of us who come up through such a system have learned to identify ourselves with being critical, but have no actual interest in criticism per se. When, for instance, I talk about my white privilege, I am communicating to other university-trained people that I am a critical sort who thinks deeply about race and equity and justice. This, of course, is in sharp contrast to people who are not like us university types. Those people aren't critical at all, and in fact are so uncritical as to accept at face value that professional wrestling is real and that their politicians vote the way big oil wants them to because it's what's best for America.
Lest I appear one sided here, imagine instead that I am beating my breast publicly on a powerlifting forum about how we all need to be trying harder and pushing through for more weight, for more reps, for more mass. Here I am associating myself with a particular kind of "no pain, no gain" self improvement, and I'm not interested in any self improvement that might challenge why it is that I wish to identify with this particular narrow kind of personal growth. I might be able to post in good faith a criticism of ignoring zone two cardio if I tie it narrowly to metabolic gains and long term muscle development. But if I post something about how cardiovascular health is more important than size, I would be dismissed at once as an obvious interloper. Talking only about how real serious lifters have good metabolic health and castigating myself for not doing enough to develop myself in this regard would be a signal in favor of a "more pain, more gain" self improvement, as well as a kind of shot across the bow of the lazy and untrained (of which of course there are some on this hypothetical forum kept around as targets for abuse but which mostly exist in the outside world -- not in this community!).
Perhaps what Scott here is calling specific criticism is in fact just true criticism. (Lacan would say that this is criticism in the mode of the 'hysteric' rather than the 'student' -- one desperate to actually know because of the suffering one is already experiencing, rather than merely wanting to be associated with the signifier of "knower".) Even this (hypothesized) 'true' criticism has little power to reach others. Nobody who is deadlifting six plates is going to want to hear about how they need to spend more time walking briskly uphill. And nobody who is comfortable discussing their white privilege wants to be called out for obvious classism and a genuine hatred of the poor.
Final point: I am way out over my skis here in any attempt to apply Lacan to section four of Scott's article, so I'll switch to an older but more familiar-to-me form of discourse. I can't help but question whether or not actual paradigmatic shift ever happens as a result of genuine self criticism (at least in the kinds of paradigms we're talking about here with Effective Altruism, or things that seem similarly cultural or at least culture-adjacent). Perhaps rather people who are critical of something gain in numbers and power and eventually change the thing that once they could merely snark about. This looks like paradigmatic change as a result of criticism, but it's just business as usual. The old guard who favored X regressive policy weren't convinced by reformers who favored Y new progressive policy, they were simply replaced and our (internal and external) Press Secretary stitches this process together into "look at how this organization changed its mind!" Comparing this to the accumulation of scientific development seems overly optimistic, though of course, I acknowledge that the scientific communities of the past functioned according to all sorts of power dynamics as well (perhaps they do just as much today in a way that is less visible to me personally).
The real problem with EA is they devote 100% of their time and effort trying to bring up the bottom 10%, and 0% of their time and effort trying to do something to move the boundary forward for everyone. I'd be interested in giving to charity, and would like recommendations of which charities give the most bang for the buck, but all their charities are laser-focused on helping the poorest of the poor. That's all very well and good, but where are their recommendations on how to most effectively support curing aging? Or terraforming Mars? Or fusion power? There are hundreds of important randomised controlled trials that aren't being run because of lack of funding. But they've got nothing -- not a single recommendation -- for the entire field of medical research; nothing for people who want to donate to help science or technology working at the cutting edge.
> All of these are the opposite of the racism critique: they’re minor, finicky points entirely within the current paradigm that don’t challenge any foundational assumptions at all. But you can actually start fights if you bring them up, instead of getting people to nod along and smile vacuously.
I think you nailed it, minor finicky points are actionable by individuals, and that makes those individuals defensive.
> I don’t know, man, I don’t know. Thomas Kuhn seemed to think of paradigm shifts as almost mystical processes. You don’t go in some specific direction carefully signposted “Next Paradigm”.
I think making surprising progress in a *specific* direction can do it, rather than focusing on the "big picture" (this seems to overlap with your conclusion as well). Kind of like Musk with Tesla. That wasn't really a paradigm shift, but it was disruptive and disproved the decades-old "prevailing wisdom" that electric cars just wouldn't sell. Pretty much every engineer knew the reasons automakers cited were mostly bullshit, it just took someone with the courage and funds to push and see it through.
For a real paradigm shift, like general relativity and quantum mechanics, most physicists knew there were fundamental problems with prevailing theories at the time as well. How those questions all get resolved is an open question, but it seems like what matters most is that someone is paying attention and puts in the work in specific directions. Sometimes they'll then notice a generalization that then becomes a new paradigm.
I just want to know where the Deontological and Virtue Ethics EAs are at. Especially since EA is something of a natural fit for Virtue Ethics, with the whole deal with being wise and how that relates to charity (insert Maimonides' writings, etc.). The trouble is that at the moment EA is dominated by Utilitarianism, so it's hard to separate out a critique of EA from a critique of Utilitarianism. Would arguing that you can't really assign utils to human and animal lives, and so measure how many pigs are worth a human, be a criticism of the former, or the latter?
It's hard to come up with a good criticism of the idea "If your goal is to do X, you should find the best way to use your resources to accomplish X". But criticisms of utilitarianism go back as long as utilitarianism does; I don't think anybody's going to have anything new to add.
>The racism critique doesn’t imply any specific person is doing any specific thing wrong.
I don't buy this for one second. It may not imply any specific thing is wrong all by itself, but it's easy to *attach* to some specific thing. Just claim that it's racist to believe or support X. Voila, anyone who supports X is doing something wrong. Or claim that since your paper is about racism, it's racist to deny X. Voila, you can call critics of your paper racist.
In other words, please stop with the mistake theory. Racism accusations are a powerful weapon. Go ahead, say "this paper about racism is trash, and also, some races are smarter than others". You'll quickly discover that yes, you will be accused of doing specific wrong things.
I have a general feeling of unease whenever the powers that be preach radical social justice, that they're waffling vaguely about paradigms in order to avoid having to make specific changes that would actually produce material differences to disadvantaged groups. It seems like it's much easier to host a talk about how overcoming capitalism is the only way to demolish patriarchy, than to change the wording of criterion x.y.z in the official promotion process document in a way that would less disadvantage staff who are also mothers.
I didn't think the Anti-Politics Machine offered vague, paradigmatic criticisms of the development model - I read it as offering very specific criticisms of a particular development intervention (cattle were functioning as pensions, trying to encourage people to replace them with things that couldn't function as pensions was never going to work, etc.), and a plea for finding out how a system actually operates before trying to change it. I think this is the kind of criticism people like because it feels like you're finding out How Things Really Work - I can't imagine that the project directors responsible for the interventions criticised would have found the book's criticisms unthreatening. The book claimed that they were idiots.
Agree that this practical, specific criticism is how things change, kicking every support for a model to see which ones collapse and then seeing what you've got left.
> A case against RCT-driven development aid, somewhat related to the one in Anti-Politics Machine, got 389. It was the #6 highest upvoted post of all time;
What kind of consequences did this have? If any.
It seems to me that if any of these left-wing criticizers got their way and radically overhauled EA, all it would mean is that many of the big EA donors would simply find somebody else to give their money to. So it's strange that they should be trying to radically transform EA rather than create their own thing, or focus on the more obviously left wing charities that already exist. Perhaps they feel that EA has enough name brand recognition and relatively unconditional financial support that they can trade off of even with their new left wing paradigm?
Interesting paper about the impact of individualism and collectivism. Collectivism may well be better for low IQ countries.
I really appreciate both this post and Zvi's and don't think they're (very) contradictory!
I'm _extremely_ sympathetic to EA and mostly think it's fine – tho I'm at a considerable 'personal distance' from it and mostly just read some forum posts and posts like this one (or Zvi's). It feels like one of the big successful 'children' of our 'rationality movement'. It's grown up enough to have moved out and been living independently for years now :)
I do believe there's probably a lot of Sad angst and 'self-flagellation' internal to EA that I'm happy to not have witnessed up-close, and maybe that's all worse _because_ of how EA draws the relevant selected people together, but I think they mostly would have found other outlets for the same thing regardless.
And the more general critique just seems like (yet another) Sad fact about people. At sufficient remove, it's pretty fascinating, but being able to find someone that can even hold it in their heads as a coherent idea is like finding manna in the desert!
One problem is that people insist on there actually being a paradigm for a given sector in the first place.
What if there aren't any actually useful paradigms, and we just have to do our best to analyze things on an ad hoc basis because all the "rules" we think exist are incorrect and so the only way to tell if something is good or bad is by looking at it and analyzing it?
This is basically what we have to do with art and writing. There are no rules for either, just warning markers that if you go this way, it's a lot harder to produce something good.
A lot of people want there to be rules because they serve as a heuristic for judging whether something is a good idea or a bad idea, or is good or bad, but in a lot of fields of human endeavor, there are no hard and fast rules.
You suggested in this essay that we have to adopt some new paradigm after we discard the old one, but I would argue that this is wrong; we don't actually have to accept that there is a paradigm at all, especially when the evidence for any paradigm's existence is lacking.
What I find fantastic about the criticisms of the medical system from within is that they have obvious actions the criticizers could take to fix the problem. It's full-on virtue self-signalling. What do I mean?
* Complain that not enough black people have access to X care? Move your practice to a predominantly black area.
* Complain that poor people can't afford your service? Charge less for your services!
* Too many disadvantaged people can't travel to your office? Do house calls!
Alas, that would be inconvenient. Instead, people want to feel good for doing something *about* a problem while not having to *do* something about a problem.
Yes, doctors who prescribe esketamine instead of racemic ketamine are abusing their patients for the benefit of the bottom line of pharma companies. And more to the point, making it more likely that those patients will eventually commit suicide.
I mean, I'd say that sounds like "malpractice" to me, but of course it can't legally be considered such because they have the blessing of the state to not actually treat their patients and make a lot of money doing it.
Yes, I'm still *very* evangelical about this topic since it's the only reason I'm still breathing and didn't kill myself. (Four year anniversary of the day after my planned exit, next Friday!)
Thalidomide kids would like to have a word with you.
You may be good at math, but you're shit at chemistry. Don't encourage people to poison themselves, please.
I think a simpler explanation for what you're observing is simply that these "paradigmatic criticisms" are actually tacit reassurance that the existing paradigm is fine.
Consider an evangelical priest giving a speech to his congregation about how they all need to stop thinking of themselves and think only of what Jesus wants. To someone with no knowledge of Christianity, that would probably sound like a criticism which calls for them to adopt a new paradigm, but anyone familiar with the cultural context recognizes that that is the exact opposite of what's actually happening. It's the same with telling a conference of academics that they need to consider minority perspectives more; the actual message that's being sent isn't "you need to change your beliefs" but "all of your existing beliefs are completely correct, the only problem is that you and others aren't actually doing the things you think everyone should be doing".