234 Comments
author

Housekeeping question: I accidentally sent this to paying subscribers only first, then edited it so everyone can see it. So:

1. Can nonpayers see this?

2. Did nonpayers get an email about it?

3. Did payers get two emails about it?

4. Any other weird bugs caused by this?

Expand full comment

3: just one email

Expand full comment

I, a nonpayer, can see this, and it appeared on my RSS feed as normal.

Expand full comment
author

Do you usually get emails about my posts, and did you get an email about this one?

Expand full comment

I don't, and I didn't; I'm following you purely via RSS.

Expand full comment

I, a non-payer who usually gets emails, saw this in RSS and not in email.

Expand full comment

The "origins of technocracy" post is visible in my RSS feed, despite your commitment to the contrary.

As I'm not planning to subscribe, I would greatly appreciate it if this problem was solved.

Expand full comment

One email (payer)

Expand full comment

I'm a nonpayer. I can see this post, but did not get an email.

Expand full comment
author

Argh.

Expand full comment

To add a data point: Same; non-payer (as of writing, but hopefully not too much longer :P), I can see the post, but did not get an email.

Expand full comment

Also a non-payer with email subscription, also no email

Expand full comment

I'm a non-payer too, but got an email 28 hours ago.

Expand full comment

I (nonpayer) can see this, but I didn't get an email.

Expand full comment
author

Can someone confirm that they're a nonpayer and they got my *second* email, the one informing them that a first email should have happened?

Expand full comment

I haven't gotten it.

Expand full comment

Non-payer. Showed up in my RSS feed, but still no emails.

Expand full comment

(And I usually do get emails)

Expand full comment

I'm (currently) a non-payer. I did not receive the first or second email. The most recent one in my inbox is for 'Contra Weyl on Technocracy'.

Expand full comment

Since signing up for emails on the 27th, I have only gotten two emails: one for "Contra Weyl" and one for "Ontology of Psychiatric Conditions".

Expand full comment

1. Yes, I can see it.

2-3. No email, but I did see it in my RSS feed.

4. Not that I know of.

Expand full comment

Oops, #3 was for payers and I'm a non-payer. My bad.

Expand full comment

Can second (ha!) that I as a nonpayer didn't get an email about this, but I can see it.

Expand full comment

The RSS feed includes subscriber-only posts (https://astralcodexten.substack.com/feed), which I guess is unintended.

Expand full comment

Non-payers didn't get an email. I had no idea you'd written this until I went to Weyl's Twitter.

Expand full comment
author

My response to Weyl: Thanks for your thoughtful response. I'm not as familiar with your other work as I should be and I took the essay as a stand-alone document (and excused my laziness because I got the impression from the essay that you were repudiating some of your other work). I can't figure out a good way to get full-text access to the David Levine review of your work but I can certainly believe he erred in the other direction.

You're right that I was unfair to accuse you of using only those examples. Many people (I'm thinking particularly of a group who call themselves "post-rationalists") obsess over those particular examples, and you used a set of examples including those, and I unfairly used you as my target in attacking people who rely on those too much.

I'm a bit confused by the broadband allocation example though, even after reading your separate piece on it. The impression I get is that some technocrats designed an auction mechanism, and you (another technocrat) wanted a different, better auction mechanism. It doesn't sound like they failed to consider alternative perspectives or things outside their models (their first draft included the dynamic reserve prices you wanted, and then they took them out). It sounds like maybe some companies that were going to profit from the bad design pressured them to remove it.

And it sounds like your preferred response isn't to be less mechanismy in how broadband gets auctioned (eg have Congress allocate spectrum to whoever they like without any auction or rules), it's to try to explain these issues to the people, so that they can demand the good mechanism instead of the bad mechanism. I'm pretty skeptical of this. I will admit that after reading the article somewhat closely I still don't feel like I understand the issue well enough to know if you or the other technocrats are right (I assume you were, via social proof, but I can't intellectually prove it). I imagine if this somehow became a vibrant topic of public debate, it would devolve into AOC saying that dynamic reserve prices are necessary to prevent greedy Wall Street fat cats from stealing your spectrum and Ted Cruz saying that dynamic reserve prices are a plot by anti-progress extremists to destroy broadband, and then half the spectrum ends up going to Halliburton and half to queer women of color. It's hard to imagine it ending with everyone understanding the math behind dynamic reserve prices and rising up to have them included in the finished proposal at the exact right level. Maybe I'm being too cynical here, but keep in mind I work in health care, which is a field where complex technocratic proposals constantly get elevated to the popular consciousness and we get to see what happens next.

(also, I'm unclear how this involves rationality being dangerous, or needing to incorporate perspectives from Continental philosophy, etc. It sounds like lots of people thought rationally about how to design an auction, and you did it better and more rationally than they did, which made your proposal better, and potentially they were also corrupt in some way. How would being less rational, or incorporating perspectives from Continental philosophy, have helped with this?)

>> Furthermore, the positive examples of technocracy @slatestarcodex refers to are...surprising. Two examples. To call school desegregation a technocratic invention papers over decades of community activism for desegregation. Perhaps even more dramatically looking at the coronavirus as an example of the success of technocracy runs against pretty much any reasonable reading of the international data. Danielle Allen and I have a piece coming

There were also decades of community activism for Soviet communism; in fact, there was a whole popular Revolution in favor of it. Does that mean collective farms aren't technocratic?

Overall I continue to be concerned that you're trying to channel James Scott but seem to be thinking of this on a completely different axis than the one he is. There being decades of community activism for something is completely consistent with (maybe contributory to) the sorts of top-down technocratic policies he focuses on. Well-intentioned reformers, many of whom may be some kind of "community activist", demand that the government interfere in a bottom-up emergent system and make it better. Then the government agrees to do that. That's the essence of Scottian technocracy.

This is also why I'm including the coronavirus. For Scott, the alternative to technocracy isn't "some even better government policy". It's ordinary people living their everyday lives unaffected by government reformers. I think the government letting ordinary people make their own choices on the coronavirus (ie no official lockdowns, everyone chooses individually or perhaps on a neighborhood-by-neighborhood basis whether or not to quarantine) would have been a mistake.

I continue to think we have some deep definitional disagreement on what technocracy is and what non-technocracy is. Usually I would be more embarrassed about this and understand that the onus is on me to resolve it, except that we both seem to think we're using it the same way James C Scott does, and so I would rather figure out how our interpretations of him are differing so badly.

(Substack is forcing me to cut off this comment here, second half continues below)

Expand full comment
author

(continuation of comment above)

>> Perhaps the sharpest point here is that the country, Taiwan, which performed best in the virus was led in part by Audrey Tang who moved back to Taiwan after being immersed in and repulsed by the rationalist movement in Silicon Valley (see e.g. https://www.wired.com/story/how-taiwans-unlikely-digital-minister-hacked-the-pandemic/) and dedicated herself to doing things differently in Taiwan (see her amazing poetic job description here: https://twitter.com/audreyt/status/767953441746411524).

There is nothing in that article about Audrey being in the rationalist movement. I've been in the rationalist movement ten years and never met her or heard of her (other than in the news like everyone else), unless she was using a different name before. I'm not going to say I've literally met everyone in the movement, but I would be really surprised if I had missed someone that interesting for that long. Gwern also says he's worked with her and she never seemed interested in or affiliated with the rationalists.

I'm worried you think of "the rationalist movement" as "anyone in Silicon Valley who likes reason", whereas when you're referring to it in the context of Eliezer Yudkowsky and Overcoming Bias, it means "a specific set of a few hundred people, sometimes we literally have group singalongs". It looks like Tang was in California around 2000, before the rationalist community in that sense even existed.

I'm kind of defensive here because I think when you talk about "the rationalist movement and Eliezer Yudkowsky" you're intending to refer to an entire worldview as vast as "liberalism" or "postmodernism" or something. But what you're actually asserting, based on the terms and names you're using, is that the group of people who used to come to my house for choir practice once a week are responsible for alienating the Cabinet Minister of Taiwan.

>> Is through public communication across lines of difference and in different value systems/communicative modes. One thing I find striking in the history of technology is that the vast majority of technologies that are actually useful today were pioneered by people who had similar critiques to mine here of technocracy, while those who zealously defend technocratic approaches have generally either not themselves actually developed successful technologies or have great technological dreams that have generally led to poor social outcomes. Consider Douglas Engelbart, Norbert Wiener, Jaron Lanier, etc. Calling people like this, most of whom were not even willing to express their views in the rationalistic terms I wrote in, "anti-technology" redefines technology to be only rigid and inhuman systems that fail. The process of socio-technological change has a far greater element of the "socio" when it succeeds than those focused on autonomous "technology" allow. Communication and collaboration outside of affordances of the technology itself are always critical to success. See, for example, Don Norman's Design of Everyday Things, or anything else in the field of human-centered design.

I agree we have some kind of extreme definitional confusion here on what technocracy is. I think I'm being faithful to James Scott and to what you imply by critiquing the rationalists and EAs, but at this point we should probably just agree that mechanism is potentially good and being stupid about it is potentially bad.

My main concern is that your use of "technocracy" for this position sets up for a motte and bailey here ( https://slatestarcodex.com/2014/11/03/all-in-all-another-brick-in-the-motte/ ) where you criticize Brasilia and then believe you can extend the criticism to EA or AI research or whatever. In other words, I'm worried your definition is so vague that you can apply it to anything done by a scientist, smart person, or rationalist that you don't like, while also choosing not to apply it to anything done by a scientist, smart person, or rationalist that you do like.

>> I think @slatestarcodex's insistence on breaking apart mechanisms v. judgement from top-down v. bottom-up misses a key part of the argument and of what sociologists of science have long said. There is no unitary thing called "science" or "mechanism". There are a variety of disciplines of information processing across academic fields, across cultures, across communities within a culture, etc. "Mechanism" is just how one group of people seeks to claim that their mode of reasoning is uniquely unbiased and unaccountable to other ways of processing information. It is precisely this move, the unwillingness to think, speak or justify oneself on terms acceptable to those who think differently from you, that his response manifests and that concerns me.

I think mechanism is a useful term (though like most terms, it's a spectrum and not a taxon). There is a real difference between tiling a state with compact polygons, and getting all the legislators together in a chamber to figure out how to best gerrymander something. That's not a "fake claim to objectivity", that's actually putting some effort into not being biased. Yes, we should worry that some people might claim objectivity falsely, but we're also allowed to worry that things genuinely aren't objective enough and we should try to improve them.
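
(To make "tiling a state with compact polygons" concrete: here is a minimal sketch of one standard compactness score, Polsby-Popper, defined as 4π·area/perimeter². The function and the sample districts are illustrative only, not anyone's actual redistricting code.)

```python
import math

def polsby_popper(area: float, perimeter: float) -> float:
    """Polsby-Popper compactness: 4*pi*area / perimeter**2.
    1.0 for a perfect circle; approaches 0 for long, snaking shapes."""
    return 4 * math.pi * area / perimeter ** 2

# A square district, side 10 (area 100, perimeter 40): score ~0.785
print(polsby_popper(100.0, 40.0))
# Same area stretched into a snake with 4x the perimeter: score ~0.049
print(polsby_popper(100.0, 160.0))
```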

You describe yourself as a mechanism designer. I think that's good. It sounds like you try to create systems (like your spectrum auction proposal) which are harder to exploit or bias than some other system would be. I think that makes you a good person who is working to improve the world. I think if you accepted that and stopped trying to argue against yourself, we would be on the same team, Team Create Mechanisms That Work For The Common Good And Are Hard To Bias. Instead, I see you as doing this with one hand, then with the other saying it's impossible and anyone who tries it is a fool or a stooge. I agree it's possible to try to do this badly and end up as a fool or stooge, just like it's possible to do *anything* badly and end up as a fool or stooge, but it's not clear to me how your discussion of technocracy helps us avoid that fate. I still don't feel like I understand how you're thinking about this, but I definitely object to my possibly-false best guess at what it is.

Expand full comment

I think a lot of what is going on here is that Glen senses that there are a lot of people involved in policy making/analysis who don't sufficiently take into consideration what broad swaths of people want, don't take the time to explain to them what is going on, and, to make matters worse, ignore all their valuable insights/knowledge that may be less legible - all while implementing sub-optimal policies.

I suspect Scott's view is that making good policy (or being a rationalist) includes taking into consideration all of these things as a way to make better policy.

Expand full comment

Yes, but sensing/perceiving the wrong thing. Less "sufficiently" and more "deliberately". Incompetence is the wrong sense here.

"...that there are a lot of people involved in policy making/analysis who don't sufficiently care what broad swaths of people want, want to take the time to explain to them what is going on, and to make matters worse, attack all their valuable insights/knowledge that may be more legible than their own - while implementing sub-optimal policies."

I suspect Scott's view is what you describe, without a sufficient appreciation for the maliciousness and arrogance involved in keeping these things as far as possible from making better policy.

Expand full comment
author

I think I do take the maliciousness seriously; I might make a post on this next week.

Expand full comment

I assume you'll write about this, but isn't the ethos of SSC (or ACX now) to assume people aren't malicious? That's how I've always interpreted "This blog does not have a subject, but it has an ethos. That ethos might be summed up as: charity over absurdity."

Say people make some policy by avoiding rationality/mechanism/anti-bias-tools and you think said policy failed. You could interpret it either as 'they made a bad policy because they had these good ideas about how to make the world better, but those ideas failed in these ways'. Or you could interpret it as 'they made a bad policy because they hate rationality and mechanisms'. The first one would follow that stated ethos, but I don't think the second one does.

Expand full comment

I thought it was (and I very well might be mistaken) more to the effect of most people being mistaken, rather than malicious (https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/), not that nobody is malicious.

In other words, I think Scott (saying this out loud so somebody can correct me if I'm wrong) believes that *most, but not all* problems are problems of coordination, not defense from evil.

Expand full comment

This but not just this. It's not just a question of which mechanism extrudes the most Common-Good nuggets; it's a question of determining what the Common Good even is. Housing prices are one component, so is continuity of community. So are any number of competing aesthetic preferences. Fun easily available for singles. Access and safety for small children (and peace of mind for the parents who mind them). Proximity to gainful employment. Lack of pollution. And on and on into approximately infinity. Any combination of these can be weighted in just about any way in someone's preference ordering, and some of these are a lot harder to justify outside of personal preference than others - Scott's first reply started getting at this. THAT is one of the big things that gets lost in a focus on mechanism. Spend too much time in the weeds and you lose sight of where you're going (or why you'd want to go there in the first place).

Expand full comment

I too am on "Team Create Mechanisms That Work for the Common Good And Are Hard to Bias". However, as someone who has actually worked with redistricting software (albeit as a hobbyist) I did want to point out that your comment "There is a real difference between tiling a state with compact polygons, and getting all the legislators together in a chamber to figure out how to best gerrymander something" is an over-simplification and caricature that makes me more sympathetic to Weyl's point of view rather than less.

The "legislators" in question (or rather their consultants) are in fact using a mechanism (i.e., their own automated redistricting software), they're just deploying that mechanism for partisan ends, motivated by particular (political) values. And the "mathematicians" are not using a single seemingly neutral criterion like compactness of districts. There is a whole set of parameters that go into automated redistricting, many if not most of which involve some sort of value judgment over which there is or could be political disagreement.

For example: the redistricting mechanism needs population counts as input, but who counts for this purpose? Do you include or exclude non-citizens? children? prisoners located in an area but prevented from voting? Do you try to balance the voting power of racial/ethnic groups across districts? If so, which racial/ethnic groups, and how are they counted? Do you similarly try to balance the voting power of parties across districts? If so, based on registered voters? election results? And so on...
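
(For illustration, here is a hypothetical sketch of what those inputs might look like as parameters to redistricting software; every name below is invented, but each field corresponds to one of the judgment calls above.)

```python
from dataclasses import dataclass

@dataclass
class RedistrictingInputs:
    # Each field encodes a value judgment, not a mathematical fact.
    count_noncitizens: bool           # who counts as "population"?
    count_children: bool
    count_prisoners_where_held: bool  # or at their home address?
    balance_racial_groups: bool       # and under which group definitions?
    balance_party_power: bool
    party_measure: str                # "registered_voters" or "past_results"
    compactness_weight: float         # trade-off against the criteria above

# Two "neutral" runs of the same algorithm with different values baked in:
run_a = RedistrictingInputs(True, True, False, True, False, "past_results", 0.7)
run_b = RedistrictingInputs(False, True, True, False, True, "registered_voters", 0.3)
```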

I can't speak for Weyl, but I think he may be responding to two rhetorical strategies that often get employed in discussions like this: First, the claim (or at least implication) that a mechanism being proposed is uniquely value-free or value-neutral (e.g., "tiling a state with compact polygons"). Second, if value judgments do need to be made, the claim (or at least implication) that there is some group of people uniquely equipped to best make those judgments (e.g., "mathematicians" as contrasted with "legislators"). We can quibble whether the people employing this type of rhetoric should be characterized as "rationalists" or "technocrats" or "experts" or whatever, but in any case I think Weyl is pushing back against a real phenomenon.

Expand full comment

This is mostly going to be baseless speculation, but hopefully it's relevant and kind enough to pass the filter.

You mention many different districting considerations that legislators might have to balance. The way I read Scott (Alexander)'s point was that despite their promise to do things fairly and equitably, many times districts end up suspiciously favoring certain parties, conveniently the ones who happened to draw up the boundaries. This may be analogous to the doctor diagnosis case, where even though the doctors have more knowledge and context, the algorithm still outperforms them. Another example might be a chess engine. We are nowhere near solving chess, so engines aren't making the optimal move every time. And there are even many positions that you can construct where an engine wildly misreads the situation. But despite all that, engines are a thousand Elo higher than the highest humans.

This is all mostly a repeat of Scott's point that contrary to human intuition, oftentimes people aren't the best optimizers. And that's before you start to take perverse incentives into account. To go back to the districting, I don't think it's necessary for a tiling algorithm to be perfectly value free, however you define that. It's necessary for it to be inspectable and public. There may be cases where value judgments are needed, but they may not be as necessary or relevant as you think. Or we might find that those doing the value judging are doing less valuing and more judging. In fact (this is the baseless speculation part), it's quite possible that most any way you account for values in your tiling algorithm will end up fairer by any sane metric than what people are doing.

With all that said, your point is well taken that just coming in and saying "we're gonna fix all your problems with MATH!" usually doesn't go over well. Or the part where you accuse the decision makers of being partisan shills and say they're ruining the country or whatever. But the solution to that isn't to ditch the tiling algorithm, it's to get a better PR department. (I wonder how many of Weyl's concerns would be assuaged by that? Can we just talk to each other in different ways and solve half our problems? The more I think about it, I think it's at least part of what he's pushing.) Do what Scott's doing, acknowledge that your solutions aren't perfect, but also educate people on the merits as well.

Expand full comment

I don't disagree with your comment, especially the point that "I don't think it's necessary for a tiling algorithm to be perfectly value free, however you define that. It's necessary for it to be inspectable and public." Doing redistricting in an open and public manner, with all code available and all decisions on input parameters made explicit, is exactly what's needed to make the process of making any necessary value judgments potentially open to wider input. This is different from just accepting the assurance of "experts" that "this algorithm is bias-free".

I do take issue with your statement that "despite their promise to do things fairly and equitably, many times districts end up suspiciously favoring certain parties". Parties overseeing redistricting are often quite explicit that their goal is to elect more of their own, and if you accept their values and agree with their goals you'd conclude that that would be a good thing. You are correct though that automated redistricting algorithms (given the right inputs) can do a much better job of gerrymandering than any human could.

Expand full comment

I actually wasn't aware of any redistricting parties that explicitly touted their partisanship. My statement should more accurately read "even if they promise to...". And if that's your goal, then sure, closed-room meetings are great. (Though as you say, a computer could probably do even better.) I suppose I was approaching that section from the perspective that some measure of district fairness is a goal to work towards, hence the need for a better system.

Expand full comment

Just this month, the January 11 AP story "Redistricting power at stake in 2020 legislative elections" quotes Republican State Leadership Committee President Austin Chambers as saying of Republican redistricting efforts, “This is the long-term investment. This is about making sure that we have a congressional majority and a conservative majority across the country at the state and local level for the next decade.” There are quotes from Democrats that are not quite that explicit, but are still far from stating that "some measure of district fairness is a goal to work towards".

And in any case, if one were a partisan who firmly believed that one's own party was a force for good, and the other party the opposite, why wouldn't one be willing to be open about one's desire to maximize one's own party's seats and minimize the other's? There are values that people hold that go beyond fairness per se.

Expand full comment

Why have districts at all? If you want the election to elect people who are as representative as possible of the voters' wishes, surely you'd have everything as one big district and use proportional representation?

(If, on the other hand, you want the representatives to be as representative as possible of the voters demographically, just choose them by sortition).
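
(A minimal sketch of one common proportional-representation rule, the D'Hondt method, applied to a single at-large district; the parties and vote totals are made up.)

```python
def dhondt(votes: dict, seats: int) -> dict:
    """Allocate seats one at a time to the party with the highest
    quotient votes / (seats_already_won + 1)."""
    won = {party: 0 for party in votes}
    for _ in range(seats):
        winner = max(votes, key=lambda p: votes[p] / (won[p] + 1))
        won[winner] += 1
    return won

# 15 seats, one big district:
print(dhondt({"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}, 15))
# -> {'A': 6, 'B': 5, 'C': 3, 'D': 1}, close to the parties' vote shares
```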

Expand full comment

"surely you have everything as one big district and use proportional representation?" I agree. In the redistricting example I was playing around with, I used three five-member districts with PR-STV. (I thought electing 15 members in a single district would be too unwieldy.)

Incidentally, the redistricting software I used for my testing was Auto-Redistrict (autoredistrict.org), which has support for multi-member districts and for balancing the voting power of parties. The data you'll need (population counts and map boundaries) comes from the US Census Redistricting Data Program, and for the current round of redistricting should be released later in 2021. (June 30 is the last date I saw.)

Expand full comment

The point of districts is to make sure that people have a national representative who specifically represents them, in the narrowest sense possible. The bigger the pool you draw from - and the broader the constituents of that pool - the less directly you are being represented.

Expand full comment

I disagree; I thought Scott's point with redistricting was spot on.

I worked professionally on redistricting - specifically, I worked for the Florida Senate on the 2012 redistricting cycle writing their custom redistricting software. I like to say we made history - an anti-gerrymandering amendment to the Florida constitution had been passed (in 2010, IIRC), and thus we were the first legislature to have our plan thrown out by the state supreme court for being gerrymandered on its face.

You can google that particularly sordid affair, but two main points - 1. the amendment required districts to be drawn in a compact way, and 2. partisan legislators (led by then-Senate President Don Gaetz, father of Matt Gaetz) both actively tried to dodge that objective mechanism and were prevented by it from doing what they wanted.

There are tons of problems and biases of the kind you raise, but Scott's point still stands - namely, that an objective mechanism can function to thwart bad-faith action. No one involved thought "compact" was well defined or without wiggle room, but it was solid enough that it really fucked up Gaetz's efforts.

Now of course, bad faith can be baked into the mechanism itself - Florida wetlands mitigation credits are a sterling example. From the Florida DEP - "Mitigation banking is a practice in which an environmental enhancement and preservation project is conducted by a public agency or private entity (“banker”) to provide mitigation for unavoidable wetland impacts within a defined region (mitigation service area). The bank is the site itself, and the currency sold by the banker to the impact permittee is a credit, which represents the wetland ecological value equivalent to the complete restoration of one acre."

The wiggly bit is "ecological value equivalent" - it's defined such that it's quite open to abuse, and I'm personally pretty appalled by it; the externalities for destroying wetlands are expensive, and aren't being adequately captured.

Expand full comment

I don't think our areas of disagreement are that large. I favor redistricting done by an independent body via an open public process with inspectable software, all assumptions documented, an acknowledgement of any trade-offs, and an identification of decisions that have to be made that are essentially political in nature.

My irritation with Scott's comment was not because I am against objective mechanisms. Rather, after he started off well by acknowledging that there's a spectrum involved, I felt his subsequent statement contrasting "mathematicians" vs "legislators" was overly simplistic. I thought it was close to falling into the trap of saying "just trust the (objective) experts" and "this algorithm is guaranteed bias-free", a rhetorical strategy I think is counter-productive given the mixed track record of "experts" and the resulting public distrust in them.

Expand full comment

Scott can we just have a conversation, perhaps on a podcast? It would be more efficient.

Expand full comment
author

I don't like real-time communication. I'm happy to talk via email, scott@slatestarcodex.com.

Expand full comment

I hate real-time communication, because my memory is for shit. "I said that??"

Expand full comment

The only way a podcast conversation is "more efficient" than text is if both parties usually compose text via speech-to-text, neither party ever uses their backspace key or its speech-to-text equivalent, and all listeners consume text via screen-readers.

Expand full comment

Are you not assuming here that both parties can type faster than they can talk?

That's not true in the general case, and it certainly isn't true for me. Also I can talk and think at the same time to a greater extent than I can type and think.

Expand full comment

It depends on where you put the goalposts. I put the goalposts at "finished, polished thought ready to be shared".

With that in mind, 10 minutes of drafting a paragraph of text, which is then consumed by 100 people in 5 minutes, is much more efficient than saying something in fifteen seconds that isn't quite right and requires fifteen minutes of back-and-forth with a co-host to get to an almost-good-enough version of the thesis, which is then suffered through by 100 people in 15 minutes. The former is one util of information at a cost of 510 man-minutes, the latter is one util of information at a cost of 1530 (assuming one co-host) man-minutes.
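
(Spelling out the tally above as a toy calculation, with one co-host and an audience of 100:)

```python
# Text: 10 minutes of drafting, then 100 readers at 5 minutes each.
text_cost = 10 + 100 * 5           # 510 man-minutes
# Podcast: ~15 minutes of back-and-forth for each of two hosts,
# then 100 listeners at 15 minutes each.
podcast_cost = 15 + 15 + 100 * 15  # 1530 man-minutes
print(text_cost, podcast_cost)
```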

Expand full comment

Don't know about podcasts vs long articles, but I vastly prefer (video-)calls to text chat. I notice that I'm missing a ton of important subtext when using a text-only medium.

Also, specifically for podcasts vs articles, you can get a much quicker back-and-forth on a podcast. YMMV on whether that is a good or a bad thing.

Expand full comment

Video-calls are nice for client-provider situations, because it renders the whole "like this?" "no..." "like THIS?" "hm, better, but no" "like... *this*...?" "yes bingo perfect!" exchange a lot less arduous.

However, the lag with video-calls puts them in this uncanny valley emotionally. That brief 100-150ms pause after every statement makes everything end up hitting like an awkward comment. So when it comes to less-formal, more "friendly" sort of chats, I find video calls nearly as horrific to sit through as an audience member as I do actually participating in one.

Expand full comment

This isn't so much about voice vs text: all 3 of podcasts, (video-)calls and text chat are real-time (for the speaker/typist). Well-done long-form writing either lets you accurately guess the subtext, or directly turns it into text.

The good thing about real time is that you can more easily catch someone being disingenuous.

The bad thing is that if you're good enough at rhetoric, you can basically convince anyone of anything: this is much harder to do when someone can dispassionately examine any portion of your text for as long as they like.

Expand full comment

> I'm kind of defensive here because I think when you talk about "the rationalist movement and Eliezer Yudkowsky" you're intending to refer to an entire worldview as vast as "liberalism" or "postmodernism" or something. But what you're actually asserting, based on the terms and names you're using, is that the group of people who used to come to my house for choir practice once a week are responsible for alienating the Cabinet Minister of Taiwan.

I think there is some legitimate confusion here with "rationalism" in philosophy (https://en.wikipedia.org/wiki/Rationalism), as I mentioned in a comment on the other post. (And see also the recent Twitter non-debate between David Chapman and Eliezer.)

On the other hand, insisting on "rationalists" as a few hundred Bay Area peeps who go to the meetups or live in houses with other rationalists seems a bit too narrow.

Like, there is some sense of "rationalists" in which that's the referent, but the readership of LessWrong and Codex is in the tens of thousands (at least).

And we're a bit bigger and more influential than just a community of a few hundred friends. And I think it's not too weird for someone to have been on the periphery and interacted with some of the people (and ideas, and blog posts, etc.) and had a negative reaction and bounced off, without any central people having met the person.

Expand full comment

Taking "rationalism" here to mean the philosophical school doesn't make much more sense than Yud et al. The Silicon Valley zeitgeist is not particularly inspired by philosophical rationalism; it tends toward disdain for philosophy altogether, but inasmuch as there's a philosophy behind it, I'd look for it in empiricism.

Expand full comment

> Taking "rationalism" here to mean the philosophical school doesn't make much more sense than Yud et al.

I agree. I think Glen is kind of equivocating between using it overly broadly like this, and using it to refer to the LW rationalists. But then Scott retreated to a definition of it that is even smaller than the LW rationalists and inappropriately narrow, in my view.

I think Glen should distinguish between when he means philosophical rationalism, and when he means the LW rationalists. And Scott should acknowledge that the LW rationalists are a much bigger group than a few hundred friends in the Bay Area (even if there is a sort of central core about that size).

Expand full comment

Seconded. Also, Paul Crowley looked into it and she's read HPMOR, which isn't being a rationalist, but it's not never having heard of it:

https://twitter.com/audreyt/status/1355363180285882371?s=20

She doesn't seem to have an aversion, but describes herself as having a different ethos. Hard to figure out exactly how different it is in practice, and what she's saying as a political figure.

Expand full comment
Jan 30, 2021 · Liked by Scott Alexander

As someone living in Taiwan at the moment: in reference to Tang, I'd also like to point out that her contribution to the otherwise rather mechanistic/technocratic response to the virus here was minor. It certainly was nothing like "doing things differently" from rationalists here; rather, she seems to just have made a suggestion that reduced waiting times for people standing in line to buy masks (at least, for those who use the app, which most people don't need to). This was relevant for a rather brief period, but for quite a while now you've been able to buy masks literally everywhere here.

I do admire many of the things she stands for, but she is not at all a figurehead of the coronavirus response here. The real leader here is Dr. Chen Shih-chung, Taiwan's Health Minister, whose policies have been (IMO) quite mechanistic/technocratic, consisting of border controls, mask rationing, and contact-tracing (as well as a system of restrictions that take effect immediately when potential domestic spread takes place).

The portrayal of the response here being led or influenced by anti-rationalists or anti-technocrats is profoundly distorted.

Expand full comment

Thanks Scott. Since you dislike live interactions and I have a very slight reading and writing impairment, responding to all your points is beyond my bandwidth. I'll have to somewhat cherry-pick, so please forgive that.

1. On the spectrum stuff, drawing an equivalence between the design we advocated and the one used, in terms of their simplicity/sociotechnical properties, is inappropriate. The piece Vitalik referred to (https://www.radicalxchange.org/kiosk/blog/2018-11-26-4m9b8b/) expounds on the distinction, but to put it very briefly, the basic concept of the depreciating license design is so simple that it can be described in a single sentence (see the sketch after this comment) and so graspable that there have been dozens of tweet debates about hundreds of properties of it that are pretty close to the research cutting edge on it. Compare this to the Milgrom-Segal-Leyton-Brown design, which is literally so complex and opaque that I have never seen an even reasonably adequate description of it outside an academic paper; no one even in the academic economics literature identified its most important weaknesses until about 5 years after it was proposed and a year before the auction began, and even now I have not been able to get a major reporting outfit to go into it because they find it too technical/arcane. David Levine's review (publicly viewable here: http://www.dklevine.com/papers/radical.79.pdf) basically ridicules simple designs like this, and public engagement around them, for being insufficiently optimal. My point was to highlight that things other than optimality matter a lot, namely the ability to create technologies that are sufficiently comprehensible to people outside the technical community that created them that they are not just able to use them well, but to reuse and recombine them for purposes never imagined by the designers.

2. I continue to find it challenging to understand how you think that by insisting on design in a democratic spirit I am somehow entirely undermining any possibility of technology playing a role in society. It is very clear from my essay that this is not my intent, as I hope you will agree. And if you are having trouble parsing the distinction, I would suggest the work of John Dewey (founder of progressive education), Douglas Engelbart (inventor of the personal computer), Norbert Wiener (founder of cybernetics), Alan Kay (inventor of the GUI and key pioneer of object-oriented programming), Don Norman (founding figure of human-computer interaction), Terry Winograd (advisor of Sergey and Larry) or Jaron Lanier (inventor of virtual reality). Distinctions of exactly the kind I am making have been central to all their thinking, and many of them express it in much "airy-fairier" humanistic terms than I do. I particularly enjoy Norman's "Design of Everyday Things" and his distinction between complexity and complication. If you want empirical illustrations of this distinction, I'd suggest comparing the internet to GPT-3 or the iPhone to the Windows phone. In fact, I would go as far as to challenge you to name a major inventor of a technology as important or widely used as those above who publicly expressed a philosophy of invention and design that didn't include a significant element of pushback against the rationalism of their age and an emphasis on reusability and redeployment.

3. I agree that contemporary American politics is in many ways broken. Based on the way you draw things, I suppose one way you think about this is in terms of the cycle of distrust between the left and the right. I'd invite you to imagine that left-right isn't the only divide with a cycle-of-mistrust dynamic. The technocratic-populist divide is another. Lack of transparency and inattention to public justification fuel distrust of technocrats and science, which leads technocrats and scientists to lack attention to public communication. The way out of a cycle of mistrust is to expose yourself, with an outstretched hand, to the other side. This is why I write in a self-critical tone about rationalism, because only when people in the rationalistic worldview take some responsibility for the present state of affairs do we have a realistic chance of repairing it.

4. I think an extended exchange around what James Scott does or does not believe is probably not especially salutary...he was far from being the only or even primary influence on what I wrote, and it seems from what you are writing that you have read precisely one of his writings, while I have read much of his corpus and he is a colleague of my wife's. But let me just say, I don't think your mental model of his perspective is especially accurate.

5. I included a few canonical examples in my essay because they are canonical, out of respect to those who helped establish the canon. But I have endless other examples and listed many of them, including e.g. much of what appears in the movie The Social Dilemma, the spectrum example (and my wife can give you dozens more like that from the Latin American contracting and procurement space), financial markets in the 2000s, etc. Actually, your own readers have given some nice examples, including an expert on the redistricting rules you hold up.

6. Yes, it is possible to do anything badly. But it is certain you will do something badly if you are optimizing for the wrong things. One critical thing to optimize for, I and the other thinkers I mention above contend, is that the mechanisms you design make enough sense to people who think in very different terms from the designer that they are able to reuse them in ways that are hard for the designer to imagine, and thus that they empower people to be authors of their own fates. This is a much squishier thing to optimize for, and involves large servings of humanistic ideas like culture, semiotics, etc. But the fact that this idea is hard for you and others in the community to parse is, I think, primarily a reflection of the fact that the community is systematically uninterested in engaging with content with a significant humanistic, religious, cultural, etc. component, even when produced by prominent technologists. And that is precisely the point I am trying to make. It is a point about the praxis and sociology of a set of people as much as it is about any purely logical distinction, but that shouldn't be surprising given that the point is precisely about the limits of ways of thinking that imagine themselves to be purely logical.

One thing that I find salutary about exchanges with folks from the rationalism community is that they usually illustrate, better than anything I could write about them, the points I am trying to make. For example, in a similar exchange on Twitter with Eliezer a year and a half ago, he insisted I was so ideological that I could not literally take a segment of text he had tweeted, copy and paste it and retweet it, and offered me a monetary reward for doing so. That is not a behavior pattern I would recommend to those seeking to make their interlocutors feel treated as epistemic peers, whatever the background.
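
(The sketch referenced above: a toy rendering of the depreciating-license rule as commonly described, i.e. the holder names a price, pays a recurring fee on that price, and must sell to anyone who offers it. The class, field names, and fee rate are invented for illustration, not taken from the actual proposal.)

```python
class DepreciatingLicense:
    """Toy model: self-assessed price, recurring fee, forced sale."""

    def __init__(self, holder: str, self_assessed_price: float,
                 fee_rate: float = 0.05):
        self.holder = holder
        self.price = self_assessed_price  # holder announces this publicly
        self.fee_rate = fee_rate          # fraction of price paid each period

    def periodic_fee(self) -> float:
        # Overstate the price and you overpay fees; understate it and
        # you invite a forced sale. That tension is the whole mechanism.
        return self.fee_rate * self.price

    def force_buy(self, buyer: str, new_price: float) -> None:
        # Anyone may take the license at the stated price, then must
        # announce their own price going forward.
        self.holder = buyer
        self.price = new_price

lic = DepreciatingLicense("IncumbentCo", self_assessed_price=1_000_000)
print(lic.periodic_fee())                 # 50000.0
lic.force_buy("ChallengerCo", 1_200_000)
```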

Expand full comment

"he insisted I was so ideological that I could not literally take a segment of text he had tweeted, copy and paste it and retweet it and offered me a monetary rewards for doing so. That is not a behavior pattern I would recommend to those seeking to make their interlocutors feel treated as epistemic peers, whatever the background."

I find this bit really interesting. On the one hand, yeah, it seems like Eliezer is heavily implying you've got something wrong with your epistemology... in fact he's outright stating it. On the other hand, sometimes people really are too ideological and sometimes that really is a flaw in their epistemology? Scratch that, *almost all people are too ideological almost all the time, and this is almost always a flaw in their epistemology.* So, what is to be done? Currently I lean towards the norm of "People have a basic right to be treated with respect, their arguments listened to politely and open-mindedly. But people do not have a basic right to be treated as epistemic peers. For example, being too ideological is one of several important and common epistemic flaws and it's OK to call it out."

Expand full comment

Danielle Allen has argued, quite persuasively I think, that treating fellows as rough epistemic and moral peers is perhaps the core value of democracy. This value is clearly on the decline in many places, probably closely related to the decline of democracy. In some cases this is "populism"; in this case it is "technocracy". So, I would say you are, as I noted above, nicely exhibiting the point I was trying to make.

Expand full comment

You say "decline" - when would you say it was at its peak?

Expand full comment

The '50s-'60s were a pretty good period, along certain dimensions.

Expand full comment

I haven't read Danielle Allen, but I certainly agree that democracy is declining and that this is in large part because collective epistemology is declining, and that that's in large part because people don't listen to each other anymore. But I am not sure the best way to solve this is to tell everyone to treat everyone else as their epistemic peer. Rather, I'd advocate telling everyone to treat everyone else with respect and open-mindedness etc.

Do you agree that being overly ideological is indeed an epistemic flaw? Do you agree that it's fairly widespread in society today?

Under what circumstances, if any, do you think it is OK to call it out?

Expand full comment

I rarely say never, but I’d say this is one of the most overused and least informative ways to dismiss someone’s argument and is almost never an effective way to persuade someone unless it is used in a very surprising way (to label something rarely called an ideology as such).

Expand full comment

"treating fellows as rough epistemic and moral peers is perhaps the core value of democracy."

Also, and perhaps less obviously, of sensible child rearing.

Expand full comment

This has been quite fascinating to follow because I'm broadly sympathetic to Glen's philosophy by temperament but also generally trust Scott as a synthesist or at least a good asker of questions. Glen is making what feels like a real distinction here between innovation *I* (me, myself) like and innovation that feels imposed from on high in opaque, needlessly obfuscatory, trust-the-experts sorts of ways that foster resentment towards expertise. Scott has correctly pointed out that this is a harder distinction to make than it might appear, and wondered how closely Glen's gut reaction towards styles of technology actually cleaves reality at the joints. This is good wondering even if my sentiments remain strongly in Glen's camp about how to do technology.

And then they're both using terms from *Seeing Like a State* about (mostly) much earlier societies that don't really capture the modern technological landscape very well one way or the other.

Glen is also correct that some of the Eliezer types are annoying and off-putting -- even when right. This blog in its various incarnations is noteworthy for starting with Bayesian tools and general epistemic humility and then incorporating or engaging critically with a really broad swath of ideas from different disciplines and obscure ideologies, in a way that I think Glen should appreciate, always with charity and a willingness to be proven wrong.

Come on, guys, you're on the same team here!

Expand full comment
author

Thanks. I'm going to keep this short because I'm getting the impression you're getting tired of this.

I'm still having trouble nailing down your position. I'm certainly not accusing you of being anti-technology. I think my current best guess at it is something like "you want expert planners to use very simple ideas and consult the population very carefully before implementing them". I agree that all else being equal, simple things are better than complex ones and consulting people is better than ignoring them. But I feel like you're taking an extremely exaggerated straw-mannish view of these things, until you can use it as a cudgel against anyone who tries to think rigorously at all.

It sounds like it's genuinely disappointing that nobody listened to your plan for a superior auction system. But I think it's important to note it wasn't a member of the populace with deep illegible knowledge outside the technocrats' system who noticed the flaws in the government's plan. It was you, a better technocrat, who knows more about auction theory than they do (or maybe is more honest/less corrupt than they are). I think the project of figuring out how to get better, more honest, less corrupt technocrats (like you) to contribute to arcane disciplines like auction theory, and improve the mechanisms within it, is a very important one. That's part of the reason I disagree with your essay, which I interpret as pooh-poohing technocrats, mechanism design (I know you're going to say you're in favor of it, but in the essay you explicitly call it out in the same sentence as rationality and AI alignment), and any pretension people might have to honesty or objectivity. I still don't think you've given any evidence that getting the public involved would have been particularly productive in that case. I'm reading you as saying that the auction-theoretic principles involved are actually quite simple. But keep in mind that among actual members of the public, only about half can answer "what is 3.15 divided by 3?" (see https://phys.org/news/2018-03-high-adults-unable-basic-mathematical.html ), and people tend to do even worse at math issues once they become politicized (see https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2319992 ). What we need is for good technocrats like you to feel emboldened to challenge bad policies using language that acknowledges that some things can be objectively better or more fair than others, and I think you're hamstringing your own ability to do that.

My usual tactic is to trust non-experts near-absolutely when they report how they feel and what they're experiencing, be extremely skeptical of their discussions of mechanism and solution, and when in any doubt to default to nonintervention. When my patients say we need to stop a medication because they're getting a certain side effect, I would never in a million years deny the side effect, but I might tell them that it only lasts a few days and ask if they think they can muddle through it, or that it's probably not a med side effect and they just have the flu (while also being careful to listen to any contrary evidence they have, and realistically I should warn them about this ahead of time). Or more politically, if someone says "I'm really angry about being unemployed so we need to ban those immigrants who are stealing jobs", I would respect their anger and sorrow at unemployment, but don't think they necessarily know better than technocratic studies about whether immigrants decrease employment or not. I think generally talking about how technocrats are insular and biased and arrogant doesn't distinguish between the thing we want (technocrats to listen to people's concerns about unemployment and act to help them) and the thing we don't want (technocrats agreeing to ban immigration in order to help these people). The thing that does help that is finding ways to blend rationality with honesty and compassion - something which I feel like your essay kind of mocks and replaces with even more denunciations of how arrogant those people probably are. I'm not sure *what* I would be trusting the general public with in terms of spectrum auctions. Maybe to report that they want fair distribution and don't want to unfairly enrich the already privileged? But it hardly takes an anthropological expedition to understand that people probably want this.

I think your last paragraph is unworthy of you. Anyone who wants to read the actual exchange can see it at https://twitter.com/ESYudkowsky/status/1164215730293788672 , though maybe someone else can find a better/more navigable version.

Expand full comment

Yes, we are probably hitting diminishing returns. Just a few clarifications quickly:

1. I was not advocating an alternative at the time of the auction, and the point of this was very explicitly not "use my thing not theirs". The point was that there are lots of alternatives, and alternative processes (such as that used in Taiwan) that could have been much more open and engaged with the public, and that there is an attitude of "leave it to the experts" that I think in this case became a cover for a lot of inside dealing and harm to the public interest. I do not think the specific designs I have been advocating for this and related things are nearly as subject to this problem, partly because I have been disciplining their advocacy with a significant degree of participation in public debate, which was never the case with the other approach. I would suggest this is an important desideratum of designs, and one that is underappreciated.

2. I think it would be helpful if you clarified the minimum degree of rigor with which you think a criterion needs to be stated in order to be worth considering in an analysis. If your contention is that I am cudgeling anyone who is rigorous, my contention is that you wish to exclude from analysis many of the most important factors, which we do not yet know how to state with your "minimum degree of rigor", despite the fact that these are probably more important than those we do know how to state that way, leading to a lot of "precisely wrong" outcomes.

3. I wrote the last paragraph late at night, when I found the condescension of your tone frustrating. I feel that frustration much less now, so I am happy to apologize for it. My frustration with Eliezer's behavior was longer term and hasn't changed.

Expand full comment

Another way of putting all this is:

1. Do you see no discernible or meaningful distinction between the operating philosophy of these pairs: Audrey Tang v. Eliezer Yudkowsky or Paul Milgrom, Douglas Engelbart v. Marvin Minsky, Don Norman v. Sam Altman, etc.? Or are you unaware of the former cases?

2. Let me agree that it would be good if we could formalize more what this distinction is and define it more precisely. However, I am trying to understand whether you think that the inability to define the distinction more precisely means we should just ignore the distinction/never attempt to invoke it, or simply that it would be good to work harder on refining the definition?

3. Was the focus of the essay, on legibility v. fidelity trade-offs, simply not comprehensible? Or was there another reason you didn't focus on it?

Expand full comment

2. Formalizing the distinction would be really helpful for me. It seems that you see yourself, Danielle, Audrey (and Vitalik?) as a new kind of designer - a more human-centered designer. At first I thought the distinction you were making between technocrats and yourself was similar to the distinction between an engineer and a product manager. But maybe that's not right?

3. My take is that this community is starting to internalize some of the legibility vs. fidelity tradeoff. Tyler Cowen recently highlighted Matthew Yglesias's piece: https://www.slowboring.com/p/making-policy-for-a-low-trust-world.

Expand full comment

I guess I'm confused about what specifically you plan to implement. My (possibly off base) impression is that policy doesn't get written by technocrats or the public at large. It gets written through a combination of ideology and vested interests from the most powerful players in whatever field the policy is about. So for example agricultural, land and water use policies don't get written by experts on natural resource management, soil and nutrition - instead they get written by the biggest farmers, lobbyists and politicians who live in big agricultural states who campaigned promising pork for their farmer constituents. That's just my cynical impression, but I think there's an element of truth to it - and I don't see how you change the incentives so that politicians listen to experts or the broader public. They are accountable to the people who vote for them and donate to them.

Expand full comment

Would it be a fair characterisation of your dispute to say that: you think that there are costs to what one might call 'consent of the governed' involved with government decisions (whether intrinsically or in the method by which they are arrived at), and that you don't think Scott Alexander is correctly accounting for them when he evaluates how good government decisions are, or what the best choices might be?

Expand full comment

I think https://twitter.com/ESYudkowsky/status/1163984973910556672 has the specific thing Glen mentioned. Eliezer apologized for it here: https://twitter.com/ESYudkowsky/status/1164280778399703040

As a fan of both Scott Alexander and Glen Weyl (especially some of the ironically technocratic-ish ideas in Radical Markets), this debate sure is riveting! Really appreciate the good faith effort from both of you, even though it's genuinely hard to pin down the crux of the disagreement (if there even turns out to be one!).

Expand full comment

I clicked through to the study on math skills and it honestly looks like shit. I couldn't get access to the actual questionnaire, but going by the linked article the actual question was not "What is 3.15 divided by 3?" - it was "Suppose, a litre of cola costs US$3.15. If you buy one third of a litre of cola, how much would you pay?"

Expand full comment

That's a shockingly badly-worded question. The questionnaire doesn't say anything about people's maths ability, but it does say a lot about the questionnaire-setters' questionnaire-setting ability.

Expand full comment

I am unable to understand how Weyl thinks that Alexander claims that "by insisting on design in a democratic spirit I am somehow entirely undermining any possibility of technology playing a role in society", when Alexander said nothing like this. I wish Weyl would respond to direct quotes by Alexander instead of responding to made-up paraphrases. Weyl said he has a slight writing impairment, so wouldn't it be easier to copy+paste direct quotes than to paraphrase?

Anyway, nostalgebraist summarized Weyl's essay in a way that Weyl endorsed, so everybody should check out his comment.

Expand full comment

"But I feel like you're taking an extremely exaggerated straw-mannish view of these things, until you can use it as a cudgel against anyone who tries to think rigorously at all." was what I was responding to.

Expand full comment

FWIW I think you've got the comment order swapped. The comment where Scott wrote "... use it as a cudgel ..." was a reply to the comment where you wrote, "... by insisting on design ...", not the other way around.

(Though I don't blame you for mixing that up -- this Substack comment UI seems not great for long comment threads. Been having a bit of trouble myself tracking which comments are responses to what.)

Expand full comment

At a tangent... The idea of auctioning off spectrum was, I believe, originated by Coase at a point when almost everyone else regarded it as an absurd alternative to regulatory allocation. Do you count Coase's proposal as mechanistic, or the procedure by which it eventually convinced enough of his fellow economists of its obvious advantages as non-mechanistic, bottom-up?

Expand full comment

Agreed...I think that folks like Coase and your father were not technocrats in the sense I am critiquing...I disagree with many of their ideas, but they had a profound commitment to public engagement and a spirit of epistemic democracy

Expand full comment

"For example, in a similar exchange on Twitter with Eliezer a year and a half ago, he insisted I was so ideological that I could not literally take a segment of text he had tweeted, copy and paste it and retweet it and offered me a monetary rewards for doing so."

Did you do it? Because if not, it seems like he was right.

Expand full comment

I did, but refused to accept the payment.

Expand full comment

I think there's a lot of interesting points here, but a cynical and pessimistic part of me thinks there's a good reason why those in power prefer complex inscrutable systems that only experts understand over a more open decision making process. Anything that actually shows the people how the sausage is made is going to end a lot of people's careers.

I think it's obvious that Eliezer Yudkowsky often comes across as insufferably arrogant; the only question is whether that's a bug or a feature, and that's probably just a matter of whether or not you agree with him at the time.

Expand full comment

I think I fully agree...

Expand full comment

That's discouraging to hear, but I respect you for trying anyway!

(I really enjoyed your talk with Rob Wiblin on the 80,000 hours podcast about Radical Markets, I just expect Britain to continue to be ruled by the same posh elites for the next few centuries, and would rather focus on things I can actually change.)

Expand full comment

The fact that there are real barriers to change doesn't mean it can't happen if people become aware, mobilize, etc. There is already a lot of populism, it is just flailing around. With real social connections, you can build real alternatives. This is what happened in Taiwan.

Expand full comment

"Lack of transparency and inattention to public justification fuel distrust of technocrats and science, which leads technocrats and scientists to lack attention to public communication...The way out of a cycle of mistrust is to expose yourself with an outstretched hand to the other side."

It's dangerous to rest the future on a hopefulness that 'if we reach out with open arms, we'll come to a middle ground'. It leaves us unprepared for anyone opting out or spoiling your plan. No, rather when people are *over-confident* then they *refuse* to learn - ignorance of the importance of a policy is self-perpetuated when they *ignore*. The flat-earthers will tell you that they're smarter than the average Joe (arrogance, over-confidence) DESPITE their lack of knowledge or facility with reasoning. The reason populists distrust experts is because "that just doesn't make SENSE to *ME*!" When someone *refuses* to learn from their errors, they are *destined* to be disappointed by good policy, because it doesn't match their favorite plan. You seem to think that 'if I just *say* the plan better, then it *will* make SENSE to them...' *Willful* ignorance is going to rain on that parade. (Dunning-Kruger)

"But it is certain you will do something badly if you are optimizing for the wrong things."

Preface: I'm not an in silico Bay-team "rationalist"; I'm wandering in here and disagreeing with everyone. :) I'd previously written up an example of a technocratic *policy-finder* which is mandated according to "best performance along the set of CONCERNS and INTERESTS expressed by the total, combined populace." That is, everyone can update a list of their own concerns and interests at any time; no waiting four years; no being-ignored-for-losing-the-election. EVERYONE is heard, ALL concerns and interests. Constantly. That list is transparent, anonymous and verifiable - your key lets you see that your interests were properly pooled with those who do share them (instead of being misrepresented). That aggregation of concerns and interests is the *mandate* for measuring the value of each PROPOSED policy. Where is your "optimizing for the wrong thing" and "not listening to public feedback" in this case? Step two, in this example...

Anyone can propose solutions in an attempt to meet those concerns and interests; experts are only tasked with turning these into simulations, prototypes, experiments, pilot programs (never give ANYONE'S *WHIM* control of anything!). Whichever MEASURES best, according to the aggregate of all peoples' *concerns* and *interests*, is mandated. New proposals, testing, and feedback are *always* necessary as circumstances change. Oh, and detailed interviews are data, too... I don't know why "learn from data" would *exclude* learning from interviews, forums, and longitudinal surveys of all the people impacted.
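
(A minimal toy sketch of the pooling-and-scoring step I have in mind - every name, weight, and effect number below is invented for illustration; the real thing would feed in simulations, pilot data, and interviews:)

```python
# Toy sketch of the aggregation-and-scoring idea above. All names,
# weights, and effect numbers are invented for illustration.
from collections import Counter

def pool_concerns(per_person_concerns):
    """Pool every person's weighted concerns into one shared mandate."""
    pooled = Counter()
    for concerns in per_person_concerns:
        pooled.update(concerns)  # adds each person's weights per concern
    return pooled

def score(proposal_effects, mandate):
    """Measure a proposal by how well its (simulated) effects serve the
    pooled concerns; concerns absent from the mandate count as zero."""
    return sum(mandate[c] * effect for c, effect in proposal_effects.items())

people = [{"clean air": 2, "jobs": 1}, {"jobs": 3}, {"clean air": 1}]
mandate = pool_concerns(people)  # Counter({'jobs': 4, 'clean air': 3})

proposals = {
    "transit expansion": {"clean air": 0.8, "jobs": 0.5},
    "factory subsidy": {"clean air": -0.2, "jobs": 0.9},
}
best = max(proposals, key=lambda name: score(proposals[name], mandate))
print(best)  # "transit expansion" (scores 4.4 vs 3.0)
```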

In general, we have historical proof that if we don't measure something, it bites us in the butt. If we don't listen to people, that bites us in the butt. If we assume *without* testing, it'll bite us in the butt. I only hear of people acting against these facts ("Let's NOT measure that one thing..." "Let's ignore these CONCERNS..." "Let's pass this law without having a CLUE what it'll do...") when they are acting in BAD FAITH or IGNORANCE. And, as I mentioned, that ignorance is fed and tended by an underlying *arrogance* - they refuse to listen long enough to notice their favorite solution is harmful... which is why we need to look for alternatives to the "reach to them with open arms" naivete.

Expand full comment

>...the group of people who used to come to my house for choir practice...

wow, so the choir thing https://unsongbook.com/chapter-2-arise-to-spiritual-strife/ actually happened.

Expand full comment

In reality, choir practice was more like a dozen people in a living room and the "people all over the Bay Area gathering [for a sing-along]" only happened once a year, and in an auditorium rather than a house's basement, but the creative inspiration is definitely there.

Expand full comment

>I see you as doing this with one hand, then with the other saying it's impossible and anyone who tries it is a fool or a stooge.

No offense, but I don't think the author of the world's most pro-libertarian Anti-libertarian FAQ gets to complain about that.

Expand full comment

" I've been in the rationalist movement ten years and never met her or heard of her (other than in the news like everyone else), unless she was using a different name before"

If it helps, Audrey Tang was using the name Autrijus in 2000, having transitioned since.

Expand full comment

I think the most comprehensive entry point for Glen Weyl Thought is his appearance on the 80,000 hours podcast

https://80000hours.org/podcast/episodes/glen-weyl-radically-reforming-capitalism-and-democracy/

Expand full comment

I have a problem with your Covid example. You are only counting the parts of the mechanism that you approve of.

Roughly speaking, the vaccines we are now using took a week to develop, eleven months to get FDA approval. That's mechanism too. In the non-mechanism world, the vaccine producers would have been free to sell to anyone who wanted to buy. A month of challenge trials would have demonstrated that they sharply reduced infection and had no serious immediate side effects, at which point people would have been willing to buy as many as they could produce and they would have an incentive to immediately start scaling up production. It's hard to believe that that wouldn't have saved a million or so lives.

Expand full comment

The safety trial process went very quickly for all of this (while still being long enough to be thorough, itself longer than a month) and was done early on more or less as you described.

Various governments also funded infrastructure build up to ensure that production capacity was going to be available.

There is also a delay associated with building out the necessary production systems, verifying their products etc.

You also wouldn't have seen the production capacity we have now without government incentives, because vaccine candidates do not traditionally have a high success rate. In an unregulated market without the subsidies there would be incentives to wait for proof of efficacy (not just safety) before starting to scale up and make capital investments. Alternately, you would get groups that scaled first and then continued to sell ineffective product in order to pay for the capital expenditures. Likely this would be accompanied by a disinformation campaign to encourage continued sales.

For reference, since this is probably not obvious to the public: a lot of vaccine production infrastructure (tanks and systems, but also microorganisms and production knowledge) is only useful for a given type of vaccine/platform (i.e. RNA vs. weakened virus, etc.) and sometimes only for a specific vaccine (i.e. production of viral proteins for a specific virus in a suitable host or modified microorganism).

This can make it hard to switch production on a capital investment to another product/different vaccine quickly particularly for new tech like the RNA vaccines.

I also think removing the controls and safety checks on what could be sold would actually decrease vaccination rates in the long run, as people stopped trusting vaccines because so much of the market was garbage or dangerous. Given the importance of vaccine-induced herd immunity for actually controlling spread, that decrease in vaccination rates might well render the entire effort a bit pointless.

Expand full comment

I would recommend reading his book, Radical Markets. It is really quite wonderful and worthy of review.

Expand full comment

I'm actually sort of surprised that Scott hasn't already reviewed it! I assumed he had!

Expand full comment

Is it just because I'm hung over that I have absolutely no idea what he's talking about here or in anything linked here? It seems almost like word salad.

Expand full comment

I found his book very clear. It really is worth reading. Weyl's writing on AI is deeply confused, presumably a confusion inherited from Lanier, who (though a genius) is - in my opinion, having read much of his work - mindkilled on the topic of AGI by his mysterianism and psychedelicism.

Expand full comment

I'll take that as evidence in favour of the "too hung over" hypothesis.

Expand full comment

FWIW, I believe Glen now disavows significant parts of that book (or maybe something like the overall "framing" it presents, which I would say is fairly technocratic).

Expand full comment

Wouldn't it make sense to put this in the main post?

Expand full comment

>> The impression I get is that some technocrats designed an auction mechanism, and you (another technocrat) wanted a different, better auction mechanism.

I think there is another, more subtle idea in the article about the auction thing. Weys is comparing the flawed mechanism the FCC went with, the better mechanism he proposed, and a theoretical mechanism that he hasn't figured out yet but would be even better.

From the article about the auction thing:

>>This will almost always require distillation, simplification and imperfect optimization

This theoretical mechanism would *not* be as good at efficiently allocating broadband to the best users, but would have the advantage of being easier to understand and communicate to the masses. Weys believes in sacrificing efficiency on purpose to get more clarity, just like most economists believe in sacrificing some amount of growth on purpose to get more equality.

Just like there is an equity-efficiency tradeoff in economics, there is also a tradeoff between efficiency and legibility in epistemology. This is also why Weys uses the word legibility kinda backwards. He wants to make technocratic solutions legible to the masses - force the technocratic solutions into philosophical evenly-spaced square grids so that the populace can understand what the heck is going on.

Expand full comment

As far as I can tell, it's also that more clarity would in fact result in more efficiency, despite what the supposedly logical arguments for the complicated, opaque system say: things being complicated and opaque causes problems that haven't been taken into account, while things being clear and simple allows people to contribute in unforeseen ways?

That is, clarity doesn't even need to be a terminal value, even if we're optimizing for efficiency it is valuable to sacrifice some theoretical efficiency in exchange for simplicity and clarity.

Expand full comment

Right. Opaque things have higher variance, which usually resolves on the negative end, because the opaqueness creates discretion that is easily corrupted. Optimality gains must be large to justify opaqueness. https://www.radicalxchange.org/kiosk/blog/2018-11-26-4m9b8b/

Expand full comment

"Weyl", not "Weys"

Expand full comment
founding

>>You're right that I was unfair to accuse you of using only those examples. Many people (I'm thinking particularly of a group who call themselves "post-rationalists") obsess over those particular examples, and you used a set of examples including those, and I unfairly used you as my target in attacking people who rely on those too much.

Do you (or anyone else who sees this) happen to have any links to this sort of thing? I'm working on a piece on post-rationalist thought, and I'm having some trouble collecting evidence to back up my impression (or any impression) of it. Everyone seems to have some idea, but it seems like most people are running on vague impressions picked up from conversations in person or in siloed and hard-to-search social media threads. "Citing Brasilia at the drop of a hat" sounds totally in-character though.

Expand full comment

The places to look are mostly Ribbon Farm and twitter. People like to include David Chapman in this, too, but he doesn't think of himself as responding to this kind of rationality much at all.

Expand full comment

fwiw here is a response from Paul Milgrom on Glen Weyl's accusations/criticisms of that spectrum auction thing:

The Market Design Community and the Broadcast Incentive Auction: Fact-Checking Glen Weyl’s and Stefano Feltri’s False Claims (https://digitopoly.org/2020/06/14/the-market-design-community-and-the-broadcast-incentive-auction-fact-checking-glen-weyls-and-stefano-feltris-false-claims/)

Expand full comment

"I was blowing the whistle on one (promarket.org/2020/05/28/how…) at roughly the same time I wrote the technocracy piece."

Link seems to be broken.

Expand full comment
author

Thanks, it's fixed now.

Expand full comment

I don't think that it is...

Expand full comment
author

How about now?

Expand full comment

Now it is the same link as the next, which I don't know if it is supposed to be.

Expand full comment

"Blowing the whistle on one" link doesn't work, it has ... in the URL instead of the actual rest of the URL.

Expand full comment
author

Huh, I think the original posted the same link twice but it only worked once, I'm going to just put the same link in in a working way both times.

Expand full comment

Some valid critiques. Also read through the AI article. The idea that “AI is an ideology” is, um, extremely contrived. It seems to be based primarily on the tired argument that “image classification isn’t really AI” and similar arguments. It doesn’t matter what you call it, it’s still an important technology and the advantages that China is cited as having are still advantages.

Expand full comment

Agreed. It is just not seeing the forest for the trees. Currently, in any technical discussion, one can replace the word "AI" with Machine Learning - our current best approach to generating AI. However, AI is a broader concept than just Machine Learning - the main promise is to create strong AI. What Glen seems to be doing is using an array of concepts, from algorithms to Machine Learning to strong AI, interchangeably.

Also, I do not quite understand his concept of "human data" in relation to AI. My issue here is that there is a spectrum of human input. Early chess engines relied thoroughly on human input from software engineers. The engine could "learn" something new only if an engineer coded it that way. Now we have systems where, broadly speaking, all that is required is the input of rules. AlphaGo is perfectly capable of using the initial input of rules and board and playing against itself. Some input is still required, but there is a qualitative leap from the old engines. One can hook AI into different data - let's say data from meteorological sensors - and it will do its job. Would this also count as human data for Glen, since the sensors were installed by humans? I do not know where Glen is going with this idea.

Expand full comment

The very article addresses both of those?

Expand full comment

Scott, thanks for posting Glen's response.

I think Glen is one of the more interesting critics of EA/AGI concern/rationalism precisely because his way of thinking about the world is such a close cousin of the rationalist approach. I see Glen as someone who has infused his natural tendency towards economic/rationalist thinking with ideas about social justice and inclusive community building. I think it's a really valuable mindset, and an effective bridge between people and concepts that are often separated by a much greater chasm.

Still, sometimes it seems like Glen is trying to create a bridge--or deal with tensions--that lead to him coming across as hypocritical or self-defeating. That's annoying, and it has definitely set off my alarm bells in the past: "Oh really, this *Princeton economics prodigy turned Harvard fellow turned U Chicago professor turned Microsoft guru* is also suspicious of complex, technocratic solutions for public problems?" But, actually, yeah. That's valid. I'm glad that some of the brilliant people doing technocratic, rationalist-adjacent work are occasionally hypocritical, almost confused critics of what they are doing. That seems pretty useful, at least in Glen-sized doses.

Expand full comment
author

I think it's completely fair for him to be critical of a group he's in, I just don't understand the difference between the cases where he works on creating complex mechanisms that are hard to bias and the cases where he condemns smart people working on complex mechanisms and claiming they're hard to bias. I understand he's probably going to say something like "democratic feedback" or something, but I don't have the same sense that this bridges an interesting gap that he does.

Expand full comment

I think the differences between RadicalXChange and EA are actually quite clear. Sure both movements have some similar qualities but as you say these things exist on a spectrum and EA is definitely more technocratic.

RXC is focused on creating new tools/social systems to empower a broader base of people. Simplest example of this would be quadratic voting: everyone gets a certain number of vote credits a year and can apply several votes to a candidate/issue to express the intensity of their opinion, with n votes on one issue costing n² credits. Quadratic funding follows the same principles: https://vitalik.ca/general/2019/12/07/quadratic.html.
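
A minimal sketch of both formulas, assuming the standard definitions from the linked post (my own toy illustration, not RadicalxChange code):

```python
# Toy illustration of the two quadratic formulas (standard definitions,
# not production code from any real voting/funding system).
import math

def quadratic_vote_cost(votes):
    # Casting n votes on a single issue costs n^2 voice credits.
    return votes ** 2

def quadratic_funding_total(contributions):
    # A project's matched total is (sum of sqrt(c_i))^2 over its donors.
    return sum(math.sqrt(c) for c in contributions) ** 2

# Intensity is expressible, but gets expensive quadratically:
assert quadratic_vote_cost(3) == 9      # 3 votes on one issue: 9 credits
assert 3 * quadratic_vote_cost(1) == 3  # 1 vote on three issues: 3 credits

# Breadth of support beats concentration of the same total money:
assert quadratic_funding_total([1.0] * 100) > quadratic_funding_total([100.0])
```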

EA in comparison is far more 'technocratic' in attitude in that it prescribes not only methods (have smart people in research institutes/universities write papers and think in rationalist terms, everyone else donate to support these people), but also the problems (Ex risk, animal welfare, global health) and often times what solutions should follow. It's a vertically integrated chain from the philosopher class down.

Expand full comment

Having had some contact with Weyl's thought before, his radical egalitarianism is interesting to me, because I agree with his values but I feel like some of the substantive views he combines it with are fairly far-out.

Like I remember a podcast with Rob Wiblin where Weyl said something along the lines of claiming that there is no such thing as a difference in overall epistemic skills among humans. Rather, when it looks like there is such a difference, it's really that people have specialized in different ways.

Can others who are more familiar with his views explain why he thinks this?

Expand full comment

I found the quote, I wrote it down because it was so striking: "Glen Weyl: I know nothing about my local candidates either. So the notion that there are a class of people who are just epistemically crap, I think, is just wrong. I think the answer is that people focus on different things, because they are adapted to different areas. And yes, I absolutely believe that there should be division of labor, and that people should be allowed to opt in to things that they care more about and opt into things that they care less about. But I think that the notion that some people are epistemically great and other people are epistemically crap is just really wrong and deeply problematic."

Expand full comment

Sounds like a much less extreme claim - it's saying that most of the difference in relevant knowledge about political candidates is due to specialization, and not aptitude. While I suspect there might be some sort of aptitude that is a necessary condition, I think the overall point is right. (I don't think most of my well-educated friends are particularly knowledgeable about the *relevant* difference between political candidates - most would still just do better voting based on the single binary bit indicated by party affiliation, than anything else.)

Expand full comment

I looked up the transcript and in context it's more general than that, he's talking about policy knowledge in general: https://80000hours.org/podcast/episodes/glen-weyl-radically-reforming-capitalism-and-democracy/

Expand full comment

I agree with the first part; it's undoubtedly true that much (even most) of the differences in accurate beliefs on a given topic are due to prioritization, not epistemological skill.

But there are definite counter-examples. People who both believe in and strongly care about conspiracy theories don't have a prioritization 'out' here. They opted in to caring about this issue a great deal, but still ended up very wrong. That's simple epistemic failure, not specialization.

Expand full comment

What do you mean by epistemological skill? For instance, let's say the domain of expertise is "string theory". I doubt I could have accurate beliefs at a sufficiently sophisticated level even if I focused on the field. So I will not even try, and will instead rely on Ed Witten to condense the knowledge, with the hope that my limited capability can grasp even these simplifications. Also, not to be smug, I can of course offer an example from a different field - let's say music. Apparently only one in 10,000 people can achieve perfect pitch - the ability to recreate a musical tone just from hearing it.

So in the end, Witten should defer to the famous composer Kaija Saariaho when it comes to music, and she should probably defer to Witten when it comes to theoretical physics.

For instance, I believe that crowdsourcing works exactly because you cast your net wide enough that you can find a wide array of experts who can then provide their unique skill relevant to a unique problem. In other words, it is a search for technocrats. It does not exactly work in the sense that more people automatically means better outcomes because everyone magically gets good. But people in general of course also play a crucial role - their participation is necessary in order to create conditions where such a search can flourish.

Expand full comment

He's 100% right that it's deeply problematic. This does not imply that it's wrong. I don't think I can provide a stronger counterexample than the conspiracy theory one mentioned in a previous reply. Possibly the differences among the (self-selected) participants in the Good Judgment project?

Expand full comment

Just to be clear, I meant factually wrong, not morally wrong.

Expand full comment

Yeah, I think superforecasters refute his claim. Good point.

Expand full comment

GW is likely correct about one thing, which is that EA types and rationalists should pay (more) attention to insights from the humanities and humanistic social sciences. Those disciplines are not in fact synonymous with some simplistic "woke ideology" or whatever, as Scott seems to imply.

He is also of course correct that democratic accountability is good, but I am not aware of anything from the EA movement (which is more institutionalized than rationalism and whose principles are thus easier to pin down) that would suggest it is against democratic accountability. That seems to be a rather crude strawman. I guess we can find someone who identifies as rationalist and thinks that the world should be ruled by unelected technocrats, but they are most definitely not representative of the EA movement.

Also GW's examples about the Phillips curve and Eastern European "shock therapy" are kind of empirically shaky, i.e. imho he arguably got his history wrong. I am hedging here a lot, since those are two huge rabbit holes of a discussion, for which I don't have time at the moment.

Expand full comment

Powell, Fauci, Wray, etc. are unelected technocrats who made many important decisions that Trump, the democratically elected President, was unable to overrule. People who regard this as a good thing and identify as rationalists express a revealed preference for rule by unelected technocrats. If you want to understand how the government actually works, and why democratically elected officials cannot overrule the unelected technocrats, here it is:

https://en.wikipedia.org/wiki/Administrative_Procedure_Act_(United_States)

Expand full comment

I respectfully disagree. It is perfectly possible to think that an unelected expert is on some issue substantively correct and an elected official is in the wrong, but nevertheless maintain that as a matter of general principle deciding power should be with elected officials.

Expand full comment

Yeah, and there's an ambiguity here. For example, say you said, "I wanted Trump to be President." You could mean, "I wish that more people had voted for Trump, such that he won the election lawfully." Or you could mean, "I want Trump to remain President despite the actual outcome of the election."

I think usually when people express regret over a democratic outcome, they do not mean, "I want to change the mechanism that resulted in this outcome," they just mean, "I wish that the inputs to that mechanism had been different such that it went the other way."

But clearly not always!

Expand full comment

It's worth making at least two distinctions here. First is the difference between unelected experts creating policy, and them having no democratic accountability. I must confess to not closely following recent politics or reading the entirety of the APA, so I'm not sure of the exact status of Fauci, say (though the Secretary of Health and Human Services, who administers said department, which is the parent department of NIAID, is appointed by the president, so surely that counts for something?) But one can point to a similar example of the Supreme Court, who aren't elected, but are certainly subject to democratic oversight by virtue of presidential appointment and congressional confirmation. (You might say that after appointment they aren't beholden to anyone, to which I answer, that's the point of confirmation, and also only as long as everyone agrees to not pack the court.)

The second point is that just because the president can't unilaterally overturn something doesn't mean that that thing is undemocratic. Congress is the obvious other factor here, but also the state and local governments. Trump has many times fought with the states over issues of immigration, but it's hard to say that this is due to rule by unelected technocrats. It seems to me there is precious little that could not be changed if all voters actually agree on what the change should be. The issues arise when you have a bunch of conflicting voter mandates.

All this is not to say that you may not be right in your specific examples. I'm not going to take a stance there. And I don't think you're actually arguing that the president is the sole arbiter of democracy. But I think you're being just a tad cavalier about how you're equating praise for Fauci et al. with people having a revealed preference for "unelected technocrats", especially when you seem to be trying to imply that they have no democratic accountability and are undermining the rightful "democratically elected officials".

Expand full comment

I generally agree that there are probably good insights out there in the vast universe of stuff I haven't read, but that isn't at all helpful when deciding what to read.

Such criticism seems kind of empty compared to saying "I read this great book" and explaining why you like it. More communities should have FAQs pointing you to the best stuff to read about their subject of interest. Or failing that, maybe we should do more link blogging.

So, I guess I should recommend something? Regarding "democratic accountability" I am skeptical of the whole idea, and in particular I'll refer to you Edmund Morgan's book *Inventing the People*. The "Will of the People" is a sort of myth justifying the rule of the few over the many, much like the divine right of kings. Of course it's *our* myth and I think it's a better system, but not by much as of late.

Anywhere I see "accountability" I want to know which concrete mechanism is intended and what it's supposed to accomplish.

Expand full comment

I see people claiming that rationalists should pay more attention to the humanities. It follows a trend I've observed in many volunteer organizations: some people want the organization to do something, so they put forward a proposal about what someone should do. But volunteer organizations (and "movements" like the rationalist movement) don't work like this: people in the movement already know what they want to do. Time and energy are the limit, not ideas.

If you want something done, do it. Don't call for others to do it. That might work in politics or in the workplace, but it doesn't work for movements and volunteer organizations. I'm sure everyone would be super excited about high-effort LessWrong posts that bring up insights from the humanities and social sciences. But someone else won't write them; they have other projects.

https://www.noisebridge.net/wiki/Do-ocracy

Finally, I'm sure rationalists would benefit from knowing more about music theory or shark behavior or historic West-African construction techniques or whatever. Knowledge is good. But work always has an opportunity cost. What do you propose rationalists do less of so that they can spend more time on the humanities? (To me it is quite clear that the world would be better if more people in the humanities engaged in EA, but the EA-rationalism overlap can be discussed.)

Expand full comment

"Also GW´s examples about Phillips curve and Eastern European "shock therapy" are kind of empirically shaky, i.e. imho he arguably got his history wrong. I am hedging here a lot, since those are two huge rabbit holes of a discussion, for which I don´t have time at the moment."

Agreed that these are very poor examples. But technically it is also irrelevant. There were different approaches to the transformation of command economies into market economies, ranging from Cuba retaining the command economy to a large extent, through China doing only partial reforms, to careful privatization as in Slovenia, or "public" privatization where all citizens were distributed shares in state enterprises, as in Czechoslovakia. Virtually all of these measures were highly technocratic - they were imposed top-down. I do not recall any country doing a referendum about what to do or anything like that. So whichever method Glen prefers, it probably does not matter, because something akin to it was probably tried by technocrats.

Expand full comment

yowza, i don't want to get into the larger issues, but that audrey tang tweet is insanely dumb. maybe it's ok as a poetical musing, but to pretend it has any non insipid content is really something :\

Expand full comment

I read that tweet as representing the age-old, never-ending cosmic battle between squares and hippies. That tweet was telling technocracy that tie-dyeing should take precedence. I don't think it was dumb per se, but it is profoundly representative of a large population cohort's preferences. Those aspirations are all 'equality of result' code phrases.

But that's just like my opinion, man.

Expand full comment

i have no beef with her, and wouldn't complain about "equality of result" stuff in this way. but like... man...

why is it YOUtube and not UStube bro

Expand full comment

> why is it YOUtube and not UStube bro

Why do we spend so much time focused on the thumb pointing up, and not the four fingers clenched in a fist...? O_o

Expand full comment

Joke answers:

Because US Tube manufactures tubes and selfishly kept the naming rights like the capitalists they are. (Actually might be true)

Or

Because it was marketed at millennials so it had to be about them. (Wait, we're all adults now and supposed to be complaining about Gen Z, right?)

Expand full comment

I really enjoyed her recent appearance on Conversations With Tyler podcast... my first exposure. I'd like to see what the context of the tweet was. But if she meant it the way Weyl is trying to use it, then yeah.... pretty weak.

Expand full comment

yeah, i don't know anything about her, and wouldn't assume anything negative based on one tweet. just... yeah

Expand full comment

The reversal of the cover art... funny! Might have been cool to reverse it on the original and then reverse it back for the response. But it would have been hard to think of that in advance.

Expand full comment

"AI" is anything we don't know how to do yet. Once we know how to do it, it's just programming.

First, I programmed in machine language. 0 means AND, 1 means add, 2 means increment and skip if zero, 3 means store, 4 means jump to subroutine, 5 means jump, 6 means I/O, and 7 is a bunch of math operations.

Next, I programmed in assembly language, and a robot translated it into machine language for me.

Next, I programmed in C, and a robot translated it into assembly language for me. Mostly I could do a better job than it could, but my time was better spent programming in C.

Next, I programmed in Python, and a robot ran the programs for me. I could have turned the Python into C, but that would have been too much work, so I let the robot just run the programs.

People think I should be scared that a robot will take my job.

I'm not. A robot has always taken my job. Robot, come take my job so that I can get more valuable things done.
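
(To make that layering concrete - a minimal sketch using CPython's standard-library `dis` module; exact opcodes vary by interpreter version:)

```python
# The interpreter is itself one of those translating "robots": it
# compiles Python source to a lower-level instruction stream before
# running it. The standard-library `dis` module shows that stream.
import dis

def add(x, y):
    return x + y

dis.dis(add)
# Typical output (varies by CPython version):
#   LOAD_FAST     x
#   LOAD_FAST     y
#   BINARY_ADD        (BINARY_OP (+) on newer CPython)
#   RETURN_VALUE
```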

Expand full comment

Huh? This sounds like a massive strawman to me. No one ever claimed that a C compiler (or similar) is an "AI". Such tasks are much more mechanical and narrowly defined than the kind of stuff AI is used for these days, or what people predict/fear it will be used for in the future.

Expand full comment

Natural language processing is definitely AI, and now that we have programs that do it, it's not AI anymore. AI is anything we can't currently program.

Expand full comment

GPT-3 is still AI. Just because the term AI was misused for decades doesn't mean it will continue to be misused every time.

Expand full comment

It won't be AI in a few years. Just you watch.

Expand full comment

AlphaZero has been AI since at least as far back as 2016. I'm willing to bet GPT will still generally be considered AI in, say, 2025.

Expand full comment

sure, nlp is AI, but compiling programs is not nlp!

Expand full comment

You can read Python. Can a computer read Python? Of course not -- that is natural language processing. Just because we can do it *now* doesn't make it not AI.

Expand full comment

Python is a language, but not a natural language - in fact, it was specifically designed with the goal of being readable by computers. I don't think anyone developing Python, or using it in the first days of its existence, ever considered it "AI".

Expand full comment

If you think that's the case, then you don't know much about programming. It is merely incidental that a computer can read Python. It was created so that you can communicate with yourself six months from now, and with others. Other languages are not as successful with that property.

Of course it wasn't considered "AI", because people were programming it. That doesn't fit under my definition of AI as "anything we can't do right now." If we can do it, it's hardly AI, is it?

Expand full comment

Yes, a computer can read Python. A valid Python program can be parsed into a syntax tree, using a well-defined, deterministic algorithm (given by a formal grammar). If a string of text can't be parsed in such a way, then it's by definition not a Python program.
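
Concretely, a minimal demonstration using CPython's standard-library `ast` module (exact dump format varies by version):

```python
# Parsing is deterministic: the grammar either accepts the source
# string and yields a syntax tree, or raises SyntaxError. No
# statistical "understanding" is involved.
import ast

tree = ast.parse("x = 1 + 2")
print(ast.dump(tree))
# Module(body=[Assign(targets=[Name(id='x', ...)],
#        value=BinOp(left=Constant(value=1), op=Add(),
#                    right=Constant(value=2)))], ...)

try:
    ast.parse("this is not valid python !!")
except SyntaxError:
    print("rejected: not a Python program")
```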

Respectfully, that's as far as I'll engage here. You seem to have a very non-standard definition of terms such as AI and NLP, I hope you're at least aware of that.

Expand full comment

Thanks for the reply.

Expand full comment

He's talking about the historic definition of AI, not the current buzzword that pretty much means "Neural Networks". https://en.m.wikipedia.org/wiki/AI_effect

And indeed there is some overlap with "robot".

Expand full comment

You guys are talking past each other on critique of "rationalism." It's clear it doesn't mean the same thing to the two of you, but what does it really mean at all? Is it people in and around the software and venture capital fields on the west coast who have gone all in on the notion that billionaires have some special mental property that makes them qualified to disrupt and hack any industry, including government? Is it a few hundred people who all lived in San Francisco at around the same time who attended cuddle parties and coalesced around a few high-profile autodidacts highly committed to intellectual parable via Harry Potter fan fiction?

There's clearly some overlap, but these are not the same groups. I think Scott is particularly sensitive about this because one of these groups has tremendous social power and the other not so much and may in fact have spent much of their lives being mocked and bullied, and sure, a few now have advanced degrees and decent salaries, but don't fundamentally see themselves as being aligned socially with the Adam Neumanns and Travis Kalanicks of the world.

And then there's the rest of us, who for some period of time possibly now spanning decades, possibly well before Less Wrong or even Overcoming Bias ever existed, have tried in some way to systematically overcome all of the various failure modes of human cognition for one reason or another, possibly profit, possibly fun, and possibly in some small subset clustered around the cuddle party people specifically to stave off the AI apocalypse. I don't know that this really has to entail specific policy opinions. That the commentariat may in practice lean toward some anyway I think is a consequence of Scott's commitment to charitably reading all opinions as long as you aren't being a dick. If you build a place where culture warriors can speak their minds without fear of being canceled, they're gonna show up, even to a blog nominally meant to be principally about psychiatry, philanthropy, and AI risk. This creates a very skewed view of "rationalism" because Scott is seen as probably dignitary #2 of the entire movement, but his blog is largely populated with people who don't identify as rationalists and strongly disagree with Scott on nearly everything except the commitment to open discourse.

So I think you end up with two broadly wrong ideas about who rationalists even are based on cherry picking people who are socially adjacent but ideologically committed to very different goals. Then you have the broader group that is probably a more accurate view but may not even consider themselves rationalists. Then you have Scott's very specific peer group who would definitely call themselves that and in fact made up the name and they take it very personally when you criticize a different group but implicate them.

Expand full comment

This is exactly the type of infuriating talking-past-each-other conversation I find myself having with partially-aligned people who've ended up in a different bubble than me and with whom I haven't had the same tipsy, late-night, intention-clarifying conversations.

Whatever the result, I'm grateful to see y'all do it in public.

Expand full comment
founding

Glen keeps nodding towards Human Computer Interaction and user-centered design as a model forward, which makes me as uncomfortable as he is as a mechanism designer. As someone whose job is to "communicate across lines of difference" and then bring /something/ back to the technocrats who actually build (and design) things, I know how hard and flawed the work is. I have to use all my own technocratic skills to build a better map of the territory that is still legible to the product team. The outcomes he celebrates (99dots - so cool!) are some successes he heard about that survive in a brutal space out there.

I'd also like to understand what "democratic communication" means. Is it a style of communication with some characteristics but not others? Likes and votes don't count, but do survey responses? Open-ends only? Does it have to be public, or can private or anonymized communication count? Who is part of the demos for a given topic? (I'm hoping this isn't completely dependent on point 7 above - what does "democracy" mean?)

Expand full comment

> the country, Taiwan, which performed best in the virus was led in part by Audrey Tang who moved back to Taiwan after being immersed in and repulsed by the rationalist movement in Silicon Valley - see e.g. https://www.wired.com/story/how-taiwans-unlikely-digital-minister-hacked-the-pandemic/) and dedicated herself to doing things differently in Taiwan

Did anyone else read the fairly long cited article and find anything to support this claim? As far as I can tell he just made it up...

Expand full comment
founding

Yea, I read that entire article looking for the part about Audrey "being immersed and repulsed by the rationalist movement in Silicon Valley", but there really is none. If that claim is true, it must come from some other source; as someone who's never heard of Audrey or been to California, I have no idea what to make of it.

And even the claim about her in part leading the response to the pandemic seems a bit weak: she basically saw an app for determining where masks are available for sale and used it as a reason for the government to distribute masks through NHI (National Health Insurance) pharmacies and keep the data about availability open-source.

Expand full comment

See this comment from gwern for a deeper look at the claim: https://astralcodexten.substack.com/p/contra-weyl-on-technocracy#comment-1150379

My impression is that the claim doesn't hold. Weyl has a valid point on the importance of human-centric design. But he is also making quite strong attacks against rationalists, like this claim about Audrey. I don't think these attacks are supported by his main thesis, they don't seem constructive, and they make me feel quite uneasy about him and his motivations.

Expand full comment

He also made up the bit about Stalin's Ukrainian famine in the Technocrat article that this all began with - the author he cites actually says just the opposite of what he claims in the article: "In the waning weeks of 1932, facing no external security threat and no challenge from within, with no conceivable justification except to prove the inevitability of his rule, Stalin chose to kill millions of people in Soviet Ukraine. It was not food shortages but food distribution that killed millions in Soviet Ukraine." Whatever serves his arguments apparently.

Expand full comment

I'm not sure if I ought to rephrase things to be politer, but:

I thought the stuff on rationality and effective altruism was pretty obviously wrong. The next step is wondering how much this is an aberration vs. representative. And isn't it an interesting coincidence that the section of Weyl's essay that this audience knows the most about (and is thus best equipped to evaluate for ourselves) is also the part that Weyl is *apparently* the most wrong about?

Naively this will be some cause for suspicion for the rest of the essay, and perhaps enough that he's not worth engaging further with.

Yet as far as I could tell, nobody else has made this (seemingly obvious) point so far. Are other people just virtuously being silent? Is there collective Gell-Mann amnesia? Or am I being dumb for object-level reasons (e.g. his other points are reasonable and have a nontrivial correlation with reality, and the coincidence I noted happens to just be an interesting coincidence, rather than an "interesting coincidence")?

Expand full comment

The section on AI was so misinformed that I will admit it makes me somewhat suspicious of the reliability of the rest of the piece, which I am less equipped to evaluate. Still, I think it is best to assume that Glen has made a misstep there and that the points which are closer to his expertise (i.e. mechanism design in general) are more reliable.

Expand full comment

Great to see you back Scott.

I have an issue with the Firefox reader mode on Substack. When navigating through your blog, only the first page I visit works in reader mode; after that, the icon disappears from the URL bar. I have to refresh the page to get the reader mode icon again. It's just a small annoyance, but maybe you can pass it on to someone at Substack to look into.

Expand full comment

This is probably because clicking a link here doesn't just navigate you to that page like on most other sites. It swaps out the HTML on the current page, then changes the URL to make it look like you clicked a regular link. From Firefox's perspective, you didn't change pages at all - the content just got replaced - which is likely what's messing reader mode up.

At the risk of shameless plugging, I've created an extension that, among other features, ends up refreshing the page whenever you change articles. Find it at https://github.com/Pycea/ACX-tweaks if you're interested.

Expand full comment

Yeah I already noticed that and really dislike this behavior... This might actually break accessibility for some people (haven't verified yet)

Your plugin looks nice. I'll give it a try.

Expand full comment

Again, I think that this misses the real division, which is Coerced Vs Chosen. Ultimately, the idea that you can force people to do what you want because you're LARPing science is no different than the idea you can force people to do what you want because you have Divine Revelation / Dialectical Materialism / Whatever on your side.

If your ideas are actually good, you don't need force; you just need people's rational judgement. It's when your ideas aren't up to snuff, that's when you need force.

Notice how this is completely independent of whether you are claiming your ideas are good because of lots of bottom-up practical, hands-on experience, or whether you're claiming that they're good because of lots of top-down, formal-studies knowledge.

This also gets to why it's people who actually do technology who are critical of technocracy, while the wannabes like technocracy.

Expand full comment

> If your ideas are actually good, you don't need force; you just need people's rational judgement. It's when your ideas aren't up to snuff, that's when you need force.

This is not clear to me. Or at least, the implicit claim that we live in a world where people are sufficiently rational. Cf. the pushback to vaccines, or people's opposition to mandatory seatbelt laws.

(I am not making the claim that all good ideas are unpopular, or that all unpopular ideas are good. But not all good ideas are popular, and not all popular ideas are good.)

Expand full comment

> Or at least, the implicit claim that we live in a world where people are sufficiently rational.

I think it's best to go one layer deeper: The issue is not that people are or aren't sufficiently rational, it's that they're not sufficiently rational *sufficiently quickly*.

To possibly-paraphrase Scott possibly-paraphrasing someone else, the arc of justice points towards truth, but is long (from https://slatestarcodex.com/2017/08/09/the-lizard-people-of-alpha-draconis-1-decided-to-build-an-ansible/).

I believe that nearly all good ideas will win out in the marketplace of ideas given time, but oh my do those last two words carry a lot of weight. Anyone who says "if your ideas are actually good, you don't need force; you just need people's rational judgement. It's when your ideas aren't up to snuff, that's when you need force" and leaves it at that has almost certainly never been privy to the part of enterprise where decisions more complex than "should the background on our Mission Statement page be tan or olive" have been made, and has never found themselves forced into a holding-pattern while waiting on bureaucracy to resolve an issue.

The sad, simple fact is that we don't always have enough time to let everybody involved in a consensus-based decision come to terms with everything at their own pace. As such, implementation often becomes a balancing act between "natural consensus" and "speeding along the inevitable", where the ultimate goal is to establish a good Pareto point where a high-enough percentage of intelligent-enough people have reached consensus in a thoughtful-enough manner that the project can commit, and pivot towards implementation while handling the concerns of those who aren't-quite-there-yet in a manner that finds an optimum balance between efficiency and compassion.

Expand full comment

Instant karma for me - I dismissed Weyl in toto on the basis of one article, and here is someone doing the exact same thing to me. Go figure. Suffice it to say that you have no idea what my job is, or how much consensus building it involves.

As to the rest of this - it is too vague and abstract to begin a response. I'll take an example from Scott's last post, namely global warming.

Why is there pushback on this? It's not because of the rational, argument side - the "here's why we have good evidence that human activity is causing warming and that's a problem" side. It's because of the irrational, technocratic, coercive side - the "and that's why you need to be forced into energy poverty, the world's poor need to be permanently immiserated, and maybe we need to work out who'll die and get rid of democracy" side. That's the bit that gets the pushback. And, yes, I can give you the quotes to substantiate all of those views, from leading lights in the movement.

The important division is chosen / coerced.

Expand full comment

If people aren't rational enough to accept your good idea unforced, they won't be rational enough to apply it correctly.

More importantly, you are part of the set labeled "people". If "people" aren't rational enough to rule themselves, how much less are they able to rule others?

This is why laws should only be there to make sure one person's irrationality does no harm to another.

Expand full comment

I think defining harm and irrationality in this context is a very very tricky thing and leads down all sorts of paths that aren't initially obvious.

Expand full comment

What about co-ordination problems? E.g. in the spectrum allocation case, complete anarchy would suck. The FCC needs to be able to force people not to use wavelengths that other people are using, or else the spectrum would barely be usable at all.

Expand full comment

As regards airwaves, that's best handled through something like the American "Homestead Act": first come, first served. Again, the point of law is to protect individuals.

"Co-ordination" problems are, to my mind, a bogus category, that packages together legitimate concerns about individual rights with ideas of how people can be organized and regimented. The flaw in worrying about co-ordination problems, and in Scott's anti-Libertarian F.A.Q. is that once you've granted someone the power to co-ordinate society, he can co-ordinate it to some truly horrific ends.

Expand full comment

Doesn't that encourage squatting when the amount of 'land' (spectrum) is much less abundant than the physical land being homesteaded on the Frontier? (see also: patent trolling)

Expand full comment

I concur with everyone who pointed out this is two people who mostly agree and mostly say correct things talking past each other.

Scott, you made a mistake by not heeding your own warning ( https://slatestarcodex.com/2018/12/18/fallacies-of-reversed-moderation/ ) and not realizing Glen is himself a "rationalist" talking about how "maybe we should also think about other people". This in turn makes you argue for expert opinion and top-down intervention being viable at all. It turned out poorly. Except for vaccinations, your examples are highly debatable for any position more nuanced than "it's possible for top-down not to be 100% harmful", and this actively distracts from your point.

Other than that, or perhaps because of that, I think Scott is more correct in the big picture. Glen, you're worried about people justifying their selfish, close-minded decisions with rationalism. This happens, but, as Scott points out, so does people justifying their selfish, close-minded decisions with "human values", etc. At some point you have to come around to the notion that it's selfish, close-minded people who are the problem, and their excuses are just that, excuses. (I believe Scott (James), being an anarchist, would support that conclusion.) Forget that, and you're just arguing about which values selfish people should use as a cover for their selfishness. (And as Scott forcibly but correctly points out, the humanities are already on top of that ranking nowadays, to the point where even naive rationality looks like a useful correction.)

This directly ties to Scott's point about mechanisms being useful to restrain the selfishness and biases of those in positions of power and expertise. Yes, mechanisms can themselves be just an extension of this selfishness and bias. But we're nowhere near the point where the realistic alternative is a more human-friendly process. The realistic alternative is blatant self-interest and corruption.

Say, voting may not be the best form of democracy (much less the only one), but Scott never claims it is. He says it's a democratic mechanism that's been successfully implemented, and it beats other available mechanisms. More importantly, it actually serves as a check on selfish people and their top-down planning. (It's hardly perfect, and you could argue it's acting as a tool of legitimization that disables other checks on them. But it did not arise as an alternative to participatory democracy, it arose as a replacement for dictatorships and tribalism. The nearest alternative to it is a reversion to dictatorship.)

Also, Glen, setting aside what (James) Scott actually meant, what Scott takes from him is important for the same reason your praise of the iPhone is important.

I think you're wrong and Scott is right: the actually important part is for people to be able to express themselves legibly, and making the environment legible to them is a necessary part of this, but nowhere near sufficient.

I say this because modern technologies (ostensibly) aimed at making electronic appliances more legible, of which the iPhone is the poster child, are in fact actively taking control away from their users, and the result is the social hellscape we're currently witnessing: collapse of creativity and individuality, atomization, and epistemic bubbles on one end, and forced conformism and ever more authoritarian control on the other. (Moreover, "it was certain that this would happen because they optimized for the wrong thing".) It's the geeks away from "user-friendly" ecosystems who still keep us going, against all odds, and it's not because their tools are arcane, because they never stay arcane for long. It's because their tools are made to serve people, not to let them perform the exact curated set of options someone wants them to perform. Yes, sometimes "the people" are just the geeks themselves and not a random person off the street, but that's always more easily fixed than Apple Corp.

Expand full comment

This engagement is valuable and food for much consideration. Both positions are well articulated and thoughtful, willing to accept scrutiny and without ad-hominem attacks or malice. It is EXACTLY what I want to see in my Internet of people. Thank you both for taking the time to write so clearly on your thoughts and engage with each other as fellow travelers should.

Expand full comment

“There is no unitary thing called "science" or "mechanism". There are a variety of disciplines of information processing across academic fields, across cultures, across communities within a culture, etc.”

He means standpoint theory, from postmodernism. Has this stuff permeated Everything? 👆👆👆

Expand full comment

I'm kinda surprised that this debate has not explicitly touched on Weyl's argument about "fidelity" vs "legibility." I had thought that was the core point of the original essay!

Specifically, I thought Weyl's central line of argument went something like this:

-------

1. Currently, the practice of government involves a lot of "mechanism design," i.e. proposing and adopting rules and procedures that will guide or constrain how the government does a particular thing.

2. Currently, the mechanisms are designed and argued for by a small elite quite different from most people. This elite has trouble communicating with much of the populace.

3. Mechanisms designed by this elite tend to leave out important factors in a way that matters practically. This happens for general "all models are wrong" reasons, but is exacerbated by the elite's lack of communication with most people.

Even when communication happens, it is delayed by the need to "translate" the opinions of the masses into the language of the elite before the elite can respond to those opinions. And it occurs unreliably, depending on whether someone's around and willing to do this "translation."

4. For the sake of the present argument, let's assume 1+2+3 are fixed for the foreseeable future.

So we're assuming we *will* have mechanisms and they *will* be designed by an out-of-touch elite. The question is how this elite ought to behave if 1+2+3 are true.

5. To help with #3, the elite needs to provide ways for the masses to directly intervene in a way that corrects the elite's own errors. There are two ways to do this:

- Try to mend the communication breakdown between elite and masses

- Design mechanisms which the masses can directly modify to their own ends, somewhat like open-source software

Weyl mentions both, but focuses mostly on the second one, about mechanisms. I take this to be the central *goal* articulated in the essay -- to make this kind of mechanism feasible.

6. Currently, the elite tends to focus on making their mechanisms better when judged using existing models ("optimality"), and on making the models more realistic ("fidelity"). Pushing for optimality can make ideas more *or* less complicated, but pushing for fidelity usually makes them more complicated.

7. Due to 6, the elite's models and mechanisms tend to get ever more complicated with time. Thus, they get steadily more difficult for the masses to understand. (Indeed, the highest-fidelity mechanisms available to the elite may be incomprehensible even to the elite themselves, e.g. black box neural nets.)

8. What needs to be true for a mechanism to be open to modification by the masses? For one thing, the masses need to understand what the mechanism is! This is clearly not *sufficient* but it at least seems *necessary*.

9. Elites should design mechanisms that are simple and transparent enough for the masses to inspect and comprehend. This goal ("legibility") trades off against fidelity, which tends to favor illegible models.

10. But the elite's mechanisms will *always* have problems with insufficient fidelity, because they miss information known to the masses (#3). The way out of this is not to add ever more fidelity as viewed from the elite POV. We have to let the masses fill in the missing fidelity on their own.

And this will require more legibility (#8), which will come at the cost of short-term fidelity (#9). It will pay off in fidelity gains over the long term as mass intervention supplies the "missing" fidelity.

I take this to be the central *piece of advice* articulated in the essay.

-------

This argument is interesting, novel (to me anyway), and very different from hoary old complaints about the downsides of mechanism itself.

On the other hand... I don't really buy it? It's all technically true as far as it goes. But it seems narrowly interested in a necessary condition that's far from sufficient. There are plenty of laws, etc. that are simple and easy for most people to *understand*, yet very difficult for people to *change*.

Weyl seems to want mechanisms that are easy to customize for different local circumstances -- perhaps even mechanisms that are more like templates or genotypes, specifying not the rules themselves but how to produce a set of rules for your local context. It's an appealing idea, but it would require all kinds of work that his essay doesn't argue for.

Meanwhile, the change which the essay does argue for -- towards more legibility -- feels only tangentially relevant to the problem. Yes, designs that are easier to understand are often easier to customize. But sometimes a design is easy to customize precisely because you *don't have to* fully understand it to usefully customize it. (The Apple vs Microsoft example is illuminating. Early Macs were not simple machines! They didn't turn the average person into a computer expert. Instead they succeeded by carving out a valuable kind of interaction with a computer which *didn't require computer expertise*.)

It's as if Weyl had written an ode to open-source software which assumed the user was carefully reading every line of source code, and that no one could adapt a piece of OSS to their own ends unless they fully understood it. And had then argued for more readable source code, even at the cost of performance, correctness, and *meaningful* adaptability. (I.e. how easy it actually is to adapt the code in practice, as opposed to the coarse proxy "is the code readable?")

Expand full comment

This is the core of the piece and precisely what people should be talking about if they took the piece seriously and did not intend just to dismiss/ignore it, in my view. A central frustration with Scott's response, for me, is that the substantive arguments of the piece were ignored in order to focus on things that were clearly not, nor were intended to be, its central arguments.

Expand full comment

I think that may be part of the misunderstanding.

From my experience, I wouldn't disagree with this statement at all, and I wasn't seeing it as the central thrust of the discussion. To me it seems self-evident that this is how a good technocracy should run.

This then makes it an argument over word definitions, plus a whole lot of disagreement over how to interpret specific examples.

Expand full comment

Big thumbs up for this comment!

Expand full comment

Excellent comment. But this to me seems like just a call for a different type of technocrat. We do not want enlightened absolutist monarchs; we want founding fathers.

Look at one of the most technocratic projects of the last century: the European Union. Its core principle is that of subsidiarity: the Union where necessary, the national level where possible.

By the way, I'd love to hear from Glen how he views the EU project as a whole. Is it technocratic? Is it, in its entirety, a good project, or is it a project doomed to fail because of distant elites? Which aspects are good and which are bad? It is easy to talk in general principles, but in the end talk - and especially criticism - is cheap.

Expand full comment

As many have said, Scott A and Glen seem to mostly agree with each other, but I think part of the difference is the concept of "taking people with you". Sometimes you actually can put in place a really good system that doesn't need huge buy-in. A pretty trivial example is a deposit-refund system on beer cups at sporting events. You pay £1 for your cup and get £1 back at the end when you return it. Or you can keep it if you want. If anyone leaves their cup on the floor, someone just picks it up and pockets a very easy £1 when they return it. The problem of having loads of cups left everywhere has been solved by a smart (I guess technocratic) solution, sketched below.
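(To spell out the arithmetic of the mechanism, here's a toy sketch in Python; every number in it is invented for illustration, not taken from any real venue:)

    # Toy arithmetic of the cup-deposit mechanism (all numbers invented).
    DEPOSIT = 1.00           # pounds: paid with the first drink, refunded on return
    cups_sold = 1000
    kept_as_souvenirs = 50   # these unclaimed deposits fund the scheme
    abandoned = 100          # cups left on the floor...

    # ...but every abandoned cup is a free pound to whoever picks it up,
    # so strangers scavenge them and litter goes to roughly zero.
    scavenged = abandoned
    litter = abandoned - scavenged
    scheme_revenue = kept_as_souvenirs * DEPOSIT

    print(f"litter: {litter} cups, retained deposits: £{scheme_revenue:.2f}")

The trick of the design is that it needs nobody's buy-in: the refund makes picking up a stray cup privately profitable, so cleanup happens without any persuasion.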

However, this is a small thing, and bigger things often require people to "buy in". Implementing a company policy, government policy, etc. rarely works if everyone thinks it's a terrible idea. It's not impossible, but you are swimming upstream. This is where there's some tension between Glen's and Scott's views. Even though I think Glen is just obviously, objectively wrong in saying (as referenced in another comment) that everyone's epistemology is equal and people just specialize in different things, that sort of viewpoint is (I think) much likelier to get people to buy in. "We think you are equal and value your feedback" works far better than "you are too stupid to understand why this is a good idea".

However, I think Glen is going too far the other way. If you're a mechanism designer and you really believe that designing better mechanisms will lead to better outcomes, just defend your view! Yes, there are caveats. Yes, there are some things which you absolutely have to take account of that are better expressed as general principles than precise graphs. But ultimately, not everything is just a matter of preference. Some things do work better than others.

I love Scott's writing and it took me a while to figure out that this was unusual. I don't think he's writing for the benefit of everyone, it's a pretty small self-selecting group that will enjoy it (which Scott seems totally fine with). I don't think an SSC piece is going to have much impact on 100 randomly selected individuals. But as Scott sort of alluded to, I don't think the average Joe who's upset with an overmighty, arrogant authority is necessarily bemoaning their lack of focus on continental philosophy.

I do think these two views are worth discussing though. In the UK, by the time lockdown was announced in late March 2020, I recall seeing an opinion poll giving the policy 98% support (have that, lizardman!). This is certainly taking people with you. But it was also too late, and thousands of people died as a result. A technocratic early intervention would have been the right thing to do, even if there had been a backlash to it. The irony here is that the government pretty much followed the medical advice it got early on in the pandemic; it was just wrong.

I suppose a middle ground is to take people with you where possible, but when time is scarce, the best thing to do is use all the tools you have to figure out the right course of action and do it. Does this seem fair to both?

Expand full comment

Me: "Audrey Tang repulsed by rationalist movement"

Google: "It looks like there aren't many great matches for your search"

Citation needed.

Expand full comment

Speaking as someone whose academic experience is entirely on the 'humanist' side, I am sceptical that it is generally better on any of Weyl's criteria or capable of offering the insights he hopes for.

Expand full comment

Reading the excellent comment (probably the best one in this whole conversation) by nostalgebraist, as well as rereading some of Scott's responses, makes me realize that most of what I was at the core trying to argue and call attention to was almost completely lost in this discussion.

There is a fundamental and extremely central element of how most positive social change at scale takes place that is mostly ignored in rationalistic discourse and technocracy (not just the rationalist community, but most of the economics community etc.), yet is studied extensively in literatures that I am just starting to learn about. A key goal of mine is to call attention to these other literatures and ways of thinking. I have tried doing a search and may have missed something, but as far as I can tell there is basically zero discussion of these literatures on LW, SSC or OB.

These literatures include the work of John Dewey, the whole field of human-centered design (e.g. Don Norman's The Design of Everyday Things), the related field of participatory design, etc. I won't go too much into a lit review here, as I am about to put out a paper with an extensive one. These fields have a pretty deep methodology for thinking about the role of technology in society and how social change takes place. They have been both tremendously influential and successful; contributors and participants have originated many of the technologies we use most today.

These areas emphasize a basically different model of social change than shows up in the way Scott poses the distinction between top-down v. bottom-up. Social change happens through a range of communities coming up with designs and then other communities experimenting with, having a range of experiences with and reshaping and repropagating technologies. Little successful change comes from a single center gathering "evidence" and then "implementing". Think about the internet (which diffused in a very complicated, polycentric way and was reshaped half a dozen times along the way), the personal computer (which was invented in universities, developed at a corporation, then redeveloped in a much weaker form by hobbyists, spawning an industry that eventually rediscovered the earlier work, etc.), democracy in America (which grew out of local self-government, diffused into colonial governments, etc.) and so forth.

The success of change thus depends crucially on practices that facilitate comprehension, reuse, refashioning, participation in the design process from a range of people etc. This does not mean, as nostalgebraist points out, only or perhaps even primarily making the entirety of a system completely comprehensible (though that can sometimes help). It has to do with allowing accurate mental models of various parts of systems and understanding that the ways a system will be used relate less to how it is originally intended and more to the basic ways it can fit into a complex system. The internet is probably the single best example of this, but the iPhone is quite good as well.

Is this hard to model or make fully rigorous? Absolutely. Is it very easy to completely mess this up by totally ignoring and debasing its importance? Absolutely as well...as the examples from economic policy making I give illustrate. Have I frequently made this mistake myself? Countless times. What led me to want to bring this to the attention of this particular community? I advocated, in a very rationalist way, a range of things in my book Radical Markets. I took a lot of flak from people along the way for technocratically ramming things down people's throats (e.g. see the reaction to this piece: https://www.politico.com/magazine/story/2018/02/13/immigration-visas-economics-216968). I was initially defensive, and of course I do think some of the reaction was extreme/unfair. But in the end the conversations I had with diverse publics and people from a range of fields tremendously enriched and expanded my thinking, not because, as Scott suggests I am advocating, I took everything they said as literally true, but *because the discipline created by having to take seriously folks' objections and reimagine my own thinking synthetically in light of them, so that I could justify my views in their own language, stretched and improved my designs*. If you want to see my own personal learnings from this, see here: https://www.radicalxchange.org/kiosk/blog/why-i-am-not-a-market-radical/.

Treating other people as rough epistemic peers does not mean making them read tons of source code, nor does it mean taking every comment they make as literal truth or treating them as experts on precisely the issues you, the designer, are. It means holding yourself accountable for the ability to articulate your ideas in their language, and realizing that you are likely missing something important, and will thus be ineffective in making change, if you cannot. Ignoring the need to do so and instead using power to implement change usually results in harm even when there is underlying merit to the ideas, because people resist, and the ideas are usually broken in a place that is unappreciated by the designer and could have been fixed if the designer had tried to focus more on communication and less on optimization.

Expand full comment

Thanks a lot for this comment, Glen; it has clarified a lot for me. As an aside, my local EA chapter was very fond of your book, so it's good to know that we should read the update as well!

You are right that the rationalist community does not engage much with the sources you allude to (for a counterexample, though, see the latest episode of Julia Galef's Rationally Speaking podcast, which focuses on criticisms of RCTs in international development), but this is hardly a unique feature of rationalists or even tech bros. It did not come up in my few sociology classes either, which is all the more reason to look forward to your review.

My guess is that it is true that groups tend to make decisions that are biased towards themselves, if only because of missing information, and that many rationalist writers miss that. However, I also think you are kind of glossing over the fact that there are some good technocratic decisions that you just cannot expect non-technical folks to understand, especially with the limited bandwidth you have in public communications. Should countries, for example, not use the new mRNA vaccines, in favour of inferior conventional vaccines, because the former are difficult to explain and people are worried that their DNA will change as a result of the vaccination? You might well say that yes, they should, as in the long term political stability will increase.

Fwiw, my intuition is that it's better to address the bias problem by making sure to include technocrats from underrepresented communities, so that biases are minimised but everybody involved still understands the issue. You make sure to build trust by holding experts and their decisions accountable to outcomes that normal people can understand. Everybody gets the concept of "skin in the game".

I hope you continue to engage with the communities mentioned; your contributions are very welcome!

Expand full comment

I would fully agree that there are *some* domains, like medicine, where a more technocratic/evidence-based style is useful (and my original essay states this, along with the conditions I think make something eligible for this approach). In fact, I have in the past advocated, and still think I stand by my advocacy for, expanding this approach to financial innovation (https://chicagounbound.uchicago.edu/cgi/viewcontent.cgi?article=8369&context=journal_articles). I simply don't think most of what folks in this community discuss meets these conditions (nor do the examples of failed technocracy I gave). I also agree that all fields have their limitations, human-centered design, RadicalxChange etc. not being exceptions.

My call is *not* for "just do things my way" but rather for pluralism and democratically egalitarian respect for people who communicate in different styles than you do, styles which you might call "ideological" or "irrational" but which are usually just limited in different ways than your own discourse is limited. What it is to aspire to make progress or become rational is to realize that there is often much more to be gained by communicating with people with very different communication styles than by following existing communication styles further and further. This applies to all fields (and I tell e.g. sociologists and activist communities about the limits of those approaches quite a bit too). The reason I have devoted so much time to critiquing the rationalist community is that I think Silicon Valley has an enormous amount of power in the world today, which it is abusing for these reasons, and I think one of the greatest leverage points for setting the world on a better path is correcting this.

Expand full comment

So you're interested in the architecture of Bazaars, not Cathedrals? (Does anyone host a copy of CatB in a sensible markup language, as opposed to completely unstyled XHTML?)

Expand full comment

"the discipline created by having to take seriously folks' objections and reimagine my own thinking synthetically in light of them, so that I could justify my views in their own language stretched and improved my designs"

This is at least the fourth time I have heard this rough idea of fully engaging with other people's ideas, even if their style of reasoning and communicating is different or maybe even "wrong" in your framework.

The first was by Ezra Klein in his final episode of his (first) podcast. https://podcasts.apple.com/nz/podcast/what-ive-learned-and-what-comes-next/id1081584611?i=1000503053986

Second was when a friend recommended the book Sand Talk, in which an Aboriginal Australian author describes the Aboriginal perspective on modern issues. I started reading it, and it is tough going because the style of communication/reasoning feels 'wrong' to me. (The author even says that the mere fact it is written down, rather than told orally, already changes the true meaning of what he is trying to convey.)

Third, on the DataExchange podcast, the host often talks about how the data science/tech community is increasingly aware of the need for more diversity in their teams, because people simply do not realise what they take for granted or where their blind spots are.

Expand full comment

I'd be interested in your thoughts on Civium, a contemplative take on societal master planning: https://www.youtube.com/watch?v=yXBAtdyBto0

I haven't looked that closely, but I'm getting a sense of fragility and taking-things-personally in many of the frames used in the broader discussion, and I *think* Civium points to things upstream of that.

Completely unrelatedly, I also just want to note here that afaict Jane Jacobs' stances still involved top-down policies/structure (and she loves grids, Scott!); that almost all existing cities and societies involve way, way more top-down structure than you might expect (and Brasilia just changes a few street-layout variables and specifies building heights?); and that the parts of this blueprint that result in Paris or Los Angeles are actually quite easy to tease out! (Mostly housing supply restrictions, mass transit, and street width/connectivity.)

Expand full comment

I don't feel like either Scott's post or Weyl's succeeded in clarifying what they disagree on. It definitely feels like a "bravery debate" and/or a definition debate, in that "rationalism" and "technocracy" and "high modernism" are all fuzzy ideas with different meanings in different contexts. So I'm trying to think through what, in practice, would be a source of disagreement between them.

First, it seems like what they disagree on is all in the sphere of politics / policy / public decision making. All the examples they discuss fall into that category. It doesn't seem like Weyl would object to, say, his doctor using "expert knowledge" or "formal training" to make decisions about how to treat him. What he's claiming is specific to decisions that amount to a policy for society more generally.

Second, let’s divide policy disagreements into values (what do we want) and beliefs (what courses of action will do what we want). It seems like Scott and Weyl disagree on the correct process to determine beliefs, not values. If Scott were to learn in hindsight that, say, the economic and social damage caused by COVID lockdowns were more harmful than unchecked COVID would have been, that would change his opinion about the correct course of action. Weyl nods at this when he mentions (in the intro to his original essay) that "technocrats" exist in, and are responsible to, a variety of democratic and authoritarian regimes.

So I'd frame the disagreement as: "On the margin, would shifting public policy toward expert input and sophisticated plans, or toward community input and simple plans, lead to more successful actions?" I do think it's fair to say that Scott and the rationalist community lean towards the former, though it's not their primary focus and Weyl shouldn't have picked on them specifically. Interest in utilitarianism and QALYs, effective altruism as a social norm, interest in complex tweaks to democracy (ranked-choice voting, quadratic voting, futarchy), and the long-term vision of friendly AI running the world all point in that direction.

I don't know who's right on balance, but I'm sympathetic to Weyl's side. A key point here -- one which rationalist responses have neglected so far -- is that engaging and mobilizing the community is itself a huge part of what determines a policy's success in practice. A policy that won't be "correctly" implemented due to lack of democratic support is, ipso facto, a poor one. Again COVID is a good example: lockdowns were pitched as a way to stop the spread or flatten the curve on a timeline of weeks, but instead led to a miserable year of R~=1 due to people's reactions and especially their low confidence in public officials (thanks in part to misleading messaging from experts early on!). Another example: I favor approval voting over ranked-choice voting even though it's less expressive, because approval voting is a more intuitive and less disruptive tweak to the current model; a toy comparison of the two tallies follows below.
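(To make "more intuitive" concrete, here is a minimal Python sketch of the two counting rules; the candidates and ballots are invented for illustration, not drawn from any real election:)

    from collections import Counter

    # Invented ballots: each voter ranks candidates (for ranked choice /
    # instant runoff) and separately approves a subset (for approval voting).
    ranked_ballots = [
        ["A", "B", "C"], ["A", "C", "B"], ["B", "C", "A"],
        ["B", "A", "C"], ["C", "B", "A"],
    ]
    approval_ballots = [{"A", "B"}, {"A"}, {"B", "C"}, {"B"}, {"C", "B"}]

    def approval_winner(ballots):
        # One counting pass: whoever is approved on the most ballots wins.
        return Counter(c for b in ballots for c in b).most_common(1)[0][0]

    def irv_winner(ballots):
        # Repeated rounds: eliminate the candidate with the fewest
        # first-choice votes until someone has a majority.
        ballots = [list(b) for b in ballots]
        while True:
            firsts = Counter(b[0] for b in ballots if b)
            top, votes = firsts.most_common(1)[0]
            if votes * 2 > sum(firsts.values()):
                return top
            loser = min(firsts, key=firsts.get)
            ballots = [[c for c in b if c != loser] for b in ballots]

    print(approval_winner(approval_ballots))  # "B"
    print(irv_winner(ranked_ballots))         # also "B" on these ballots

The asymmetry is the point: approval voting is a single pass over the ballots that anyone can verify by hand, while instant runoff requires multi-round bookkeeping that is harder to audit at a glance.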

Expand full comment

I really appreciate you trying to think through the fuzzy and hard to pin down aspect of this, that I have trouble getting my head around. This was a helpful frame for me.

Expand full comment

Given the radical scope of Glen's vision for societal restructuring, I'm heartened by his effort toward the antithesis of the technocratic spirit, especially considering he seems to be largely animated by its thesis (smart people can do smart things to make things better).

I really liked the first three-quarters of the essay, and the discussion of legibility in particular, but found the critique, especially of EA, tricky to absorb from my position inside that ideology.

Expand full comment

respect to this guy for defending his viewpoints; not a strong defense but still respect

Expand full comment

Is this an example of what Glen is talking about/advocating for? https://urbankchoze.blogspot.com/2014/04/japanese-zoning.html

Tl;Dr: It's a post praising the Japanese zoning system, as opposed to the US 'Euclidean' model.

The national government of Japan has defined 12 zones that are applied consistently across the country; local governments have to fit things more or less into these 12 categories. Compare the US, where a local government can apply whatever byzantine system it wants, short of de jure racial discrimination.

Another difference is that height limits are set by simple, consistent geometric rules rather than arbitrary maximum heights, as sketched below.

This results in a system where the government is forced to behave in ways legible to homeowners and property developers. In the US, by contrast, property development seems to be more about knowing the correct masonic handshake that will get the zoning authority to approve your plans.

In theory, the American system lets local governments tailor things more closely to the local situation. In practice, that power is mostly used for nepotism, redlining, housing-market manipulation and general graft.
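(For a sense of what a "simple geometric rule" looks like: Japanese height limits are largely slant-plane rules, where the allowed height grows with distance from the road or lot boundary. A minimal Python sketch, with illustrative coefficients rather than the actual statutory values:)

    def slant_plane_height_limit(road_width_m: float,
                                 setback_from_road_m: float,
                                 slope: float = 1.25,
                                 absolute_cap_m: float = 31.0) -> float:
        """Toy version of a Japanese-style slant-plane rule: a building may
        not pierce a plane that starts at the far edge of the road and rises
        at a fixed slope. The slope and cap here are made-up illustrations,
        not the real statutory coefficients."""
        plane_height = slope * (road_width_m + setback_from_road_m)
        return min(plane_height, absolute_cap_m)

    # A lot set 4 m back from a 12 m road:
    print(slant_plane_height_limit(12.0, 4.0))  # 20.0 m under these toy numbers

The legibility claim is that anyone can compute their own building envelope from two tape-measure numbers, with no discretionary board in the loop.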

Expand full comment

Hi Scott, I am a nonpayer. I received a copy of something that was blocked by a paywall on RSS, but I am not sure if it was this or another article (and it would be a bit time consuming to check). Would it be possible for you to create two RSS feeds (one for payers and one for nonpayers)?

Expand full comment

Who else thinks Scott and Glen should do an Adversarial Collaboration?

For it to work, the title would need to be well chosen. Something like 'Technocrats are overrated' seems like a good first stab. One obvious issue is that 'overrated' is not a well-defined concept, but that could be a good thing: the need to define 'overrated' would probably help clarify what they do and do not disagree on.

Expand full comment

I'm a bit confused about desegregation being a technocratic thing. Are we saying that every moral crusade imposed from the top down represents technocracy? What is the *technical* angle to school desegregation? That it would raise test scores?

Expand full comment

This is an interesting discussion. I just wanted to add that I think the populist revolt against what Glen describes as the technocratic decision-making process is not only, or even mainly, about technocrats living in some kind of technical bubble that would be popped if they consulted the common person a little more. I get the impression that populists are mostly worried that the system designers don't have their best interests at heart and are either power-hungry or unduly influenced by vested interests (of a different flavour depending on whether you are left- or right-leaning). Regardless of whether that is true or not, I think most people would be happy to leave it to the experts if they thought the experts had the right intentions. I'm not sure this is a good thing, but I think it's true, because most people lack the time, inclination or ability to thoroughly explore any public policy topic, and the best they can offer is a casual impression. IMHO, populism isn't aiming to improve the mechanism-design process; rather, it is an attempt to assert the interests of populists who believe their interests have been excluded.

There are also at least some mechanisms (or non-mechanisms) that couldn't be described as technocratic in nature, but are a result of the internal workings of the political process. Established political parties aren't run or staffed by technocrats, but instead usually bring them in as consultants. So I don't know if technocracy and populism are really the only approaches on show here.

Scott and Glen's discussion partly hints at this, but I'd like to see them explore that further. Regardless, great discussion, thanks and really happy to see the site up and running!

Expand full comment

(I haven't read Glen's essay, only what has appeared in this post, so apologies if he has raised this issue and I missed it.)

Expand full comment

I read the link about Taiwan's covid response. It's about as "technocratic" as you could get. It's more-or-less exactly what we'd expect a Bay Area rationalist to do if they were in charge.

Weyl wishes the "rationalist" response was as bad as the US medical establishment's actual response, because that would tell the David and Goliath story he's trying to push.

What Audrey Tang *did* do differently is recite poetry and say Daoist things to the press. Apparently that's all it takes to convince some skeptics that you're a new kind of benevolent leader, even as you solve the actual problems through the same technical data-driven mechanisms.

Expand full comment