1492 Comments
Sol Hando:

Has anyone done an analysis of the brewing tariff war with Canada yet?

I’m very surprised Canada has decided to retaliate dollar for dollar (which Trump has promised to double if they do) rather than negotiate over whatever it is that’s actually motivating Trump to do this (explicitly it’s fentanyl and illegal border crossings, but who knows). I suspect he’d want a full reevaluation of the specific industry subsidies and tariffs that Canada imposes on the US too.

Perhaps Trump isn’t willing to negotiate at all, which I don’t really buy, but is possible. The retaliation is doing “something,” which is obviously what the voters demand.

Canada is hugely dependent on US trade, and while Canadian tariffs on the US will cause harm there, they will equally harm Canadian producers, and probably lead to even more tariffs imposed on Canada. For a country with such a huge percentage of its GDP tied up in trade with the US, I can’t see anything but a recession for Canada (and a minor decrease in GDP for the US).

All the analyses I’ve found are either saber-rattling or about what this means for the average consumer. I’m interested in anyone who’s looking at the macroeconomic effects.

anomie:

Trump has made it pretty clear what his intentions are. https://truthsocial.com/@realDonaldTrump/posts/113934520197790682

> We pay hundreds of Billions of Dollars to SUBSIDIZE Canada. Why? There is no reason. We don’t need anything they have. We have unlimited Energy, should make our own Cars, and have more Lumber than we can ever use. Without this massive subsidy, Canada ceases to exist as a viable Country. Harsh but true! Therefore, Canada should become our Cherished 51st State. Much lower taxes, and far better military protection for the people of Canada — AND NO TARIFFS!

agrajagagain:

Now, my own analysis of the situation. I am emphatically NOT going to stick to macroeconomics: I think your attempt to focus so heavily there is misleading you. Canada CANNOT look at this purely as an economic decision, because it isn't one. So the Canadian response will naturally factor in the economic realities, but also the diplomatic and political realities. Trump has, in an only-just-shy-of-literal sense, declared war on Canada[1], and Canada's response must be a *strategic* response: that is, the one that best serves all of Canada's strategic aims. Of course "maintaining Canada's economy" is a major strategic aim, but so are "preserving Canada's national sovereignty" and "maintaining Canada's diplomatic position with the U.S. and the rest of the world," both of which are threatened by this move just as surely as the economy is.

First Point: negotiations require, well, actual negotiations. As outlined above, Trump doesn't seem to have made any attempt to negotiate with Canada at all. Likewise, Canada already made a quite large and expensive conciliatory move before Trump even took office. The only sense in which Canada could negotiate its way out of this situation is to *keep doing things like this* in the hopes that eventually Trump is satisfied and withdraws the threat. I hope I don't have to outline why this would be a fatally stupid way to operate. If Trump is threatening these tariffs as a way to get something (rather than imposing them because he thinks they're worthwhile in their own right), it's incumbent upon him to *at the bare minimum* ask for what he wants in concrete terms.

Second Point: reputational effects are real and important here. Every country is one player among many on the international stage, and every country is on that stage for the long term. The actions you take today in one negotiation will affect the situation you face tomorrow and the next day, in future negotiations. Reputational effects are a very important component on both sides of this conflict. On Canada’s side, caving easily to U.S. pressure today will lead rational actors to expect Canada to cave in similar fashion tomorrow. This could potentially cost a lot. Not only would Canada’s closest neighbor be made readily aware that it can extract concessions basically any time it wants by threatening Canada’s economy, but it could easily encourage other nations to try the same. This could be very, very costly for Canada in the long run, potentially much more so than the proposed tariffs. By contrast, the harder Canada hits back, the higher the apparent price tag it sets for anyone who wants to try this in the future. Trade wars tend to be enormously negative-sum: the U.S. might be better able to afford the hit, but it will still be a hit, which means the policy is less likely to be long-run profitable for the U.S. and less likely to be repeated.

On the U.S. side, even if Trump were to unilaterally back down on all his tariff threats tomorrow, he’s already damaged the U.S.’s international reputation. Actually imposing such tariffs on Canada will do greater and more lasting damage. The U.S. has remained the preeminent economic and diplomatic power in the world for 80 years in very large part by not doing stuff like this. While the U.S. can be imperious and demanding sometimes, it’s built up a very strong and very valuable reputation for being a good friend to have: it mostly presses the “cooperate” button and surrounds itself with other nations who tend to do the same. Everybody stays at peace, everybody trades reasonably nicely with everybody, and as a result everybody gets rich. Short of declaring a shooting war, it’s difficult to imagine an action the U.S. could take that would be more damaging to that carefully-accumulated reputation than wantonly trying to wreck the economy of one of its oldest and closest allies over some comparatively minor domestic problems. Obviously the economic and diplomatic position of the U.S. is still central enough that this one act of aggression–however shocking–won’t trash it overnight. But it’s a crack in the foundation, and not a small one. Enough cracks will bring the house down, and even before then, there will be detrimental effects.

This does connect strongly to the macroeconomic angle. Suppose you’re an investor being pitched an opportunity that heavily involves selling goods from abroad in the U.S. You just heard the news of the tariffs. Did that increase your eagerness to invest or decrease it? Even if none of the goods are coming from Canada, these threats are a really sharp demonstration that anyone could be Trump’s next target. All business ventures that require interacting with the U.S. market are suddenly looking a little more volatile than they did a week ago, or three weeks ago, or a year ago. Volatility carries risk, and hedging against risk costs money. My macroeconomics knowledge isn’t strong enough to estimate how large this effect will be–my guess is that as a one-off it will be lost in the noise–but if things like this keep happening, it's difficult to imagine that it won’t start to impose a real, significant drag on the U.S. economy. A lot of people with significant business connections in the U.S. have got to be evaluating their other options and researching other suppliers and/or consumers right now. And while they’re a little more abstract, I expect diplomatic effects in a similar vein: U.S. guarantees of cooperation and friendship (be they on trade, defense, intelligence or anything else) just got a little more questionable. Negotiators will be a little bit less willing to align with U.S. diplomatic priorities if this is the wage that two centuries of steady friendship buys U.S. allies.

Third Point: the internal politics of both countries are also real and important factors. Saving the best for last, this is (in my opinion) the factor I most expect to prove decisive. Certainly a trade war with the U.S. will cause far greater economic damage to Canada–as a fraction of GDP–than it will cause to the U.S. But the political effects will be much, much less lopsided. As far as I can tell[2], the general mood among Canadians right now is that we are under attack, and that we need to stand up and defend ourselves. Meanwhile, calling the general mood among those in the U.S. “divided” would be a massive understatement. Tens of millions of Americans were already angry and horrified at the conduct of the new administration before Saturday’s announcement, and this will not make them any calmer or any less determined to push back against the government they oppose. Meanwhile, how many people even among those who voted for Trump actually want him to plunge them into a trade war with Canada? I got the sense that a lot of what secured Trump his win was economic hardship and anxiety. An action that increases the prices U.S. consumers pay while hurting a significant share of U.S. businesses (all done in the service of fairly nebulous objectives) seems unlikely to sit well with at least some of those people. To be fair, Trump proved himself both very stubborn and essentially immune to negative news headlines during his first term, but actions that hit millions of Americans directly in the pocketbook might well be harder for him to shrug off. If the trade war creates solidarity on the Canadian side while sowing division and galvanizing organized opposition on the U.S. side, it seems very possible that the U.S.’s political will cracks before Canada’s does, despite the lopsided economic impact. If that happens, the U.S. loses doubly hard: it damages its own economy and reputation without getting anything in return. Trade War: the only winning move is not to play.

[1] Or if you like, announced an intent to declare war, with the actual declaration scheduled for Tuesday.

[2] Which, to be fair, is not all that far. I’ve only had a chance to talk to a few people about it, and they’ve been mostly people whose views align with my own in other ways.

anomie:

> Suppose you’re an investor being pitched an opportunity that heavily involves selling goods from abroad in the U.S. You just heard the news of the tariffs. Did that increase your eagerness to invest or decrease it?

Well, here's what Trump has to say about that:

> MAKE YOUR PRODUCT IN THE USA AND THERE ARE NO TARIFFS!

https://truthsocial.com/@realDonaldTrump/posts/113934450227067577

agrajagagain:

My request that you not reply to me was not time or post limited. The intent was that I don't want to talk to you any time, on any post, ever.

This is a great illustration of why. I have neither the patience nor the pedagogical experience to explain to you why political cheerleading doesn't actually map to real-world economic outcomes; the attempt would probably be quite unpleasant for all involved. If you posted your opinions as a reply to the top-level thread, you'd be more likely to find someone able to hand-hold you across the apparently-vast gap in understanding.

anomie:

...Mate, I was just letting you know what Trump's aims were. Investing in other countries isn't going to be a viable option when the global supply chain is in tatters, so they may as well move operations to the US. And he will burn the whole world down just to make this happen. Did you think "America First" was just a bluff? His ambitions are real, and now he has the power to fulfill them.

1123581321:

I mean, he really, really doesn't understand how the economy works? Forget the fancy word "economy": how, for example, a manufacturing business works, or who it is that actually pays the tariffs. At this point it walks like a duck, and quacks like a duck, and it has been doing this for a long time, so the inescapable conclusion is that Trump is a clueless dolt with a sadistic streak.

As if we didn't know this. But.

Average Man:

Does the Danegeld metaphor fit this scenario, with the US as the Danes? Can game theory inform Canada's and Mexico's actions? Trump seems like a defect-bot sometimes.
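
A minimal sketch of that game-theory framing, for what it's worth (Python; the payoff numbers are invented round figures for illustration, not estimates of actual trade effects):

```python
# Toy iterated "tariff game" as a prisoner's dilemma.
# C = keep trade open, D = impose tariffs. Payoffs are assumed, not measured.
PAYOFF = {  # (my move, their move) -> my payoff per round
    ("C", "C"): 3,  # mutual free trade
    ("C", "D"): 0,  # I stay open while they tariff me
    ("D", "C"): 4,  # I tariff a partner who stays open
    ("D", "D"): 1,  # mutual trade war: negative-sum
}

def defect_bot(mine, theirs):
    return "D"  # always tariffs, no matter what

def doormat(mine, theirs):
    return "C"  # never retaliates (pays the Danegeld)

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else "C"  # retaliates once, forgives once

def play(a, b, rounds=100):
    ha, hb, sa, sb = [], [], 0, 0
    for _ in range(rounds):
        ma, mb = a(ha, hb), b(hb, ha)
        sa, sb = sa + PAYOFF[(ma, mb)], sb + PAYOFF[(mb, ma)]
        ha.append(ma)
        hb.append(mb)
    return sa, sb

print(play(doormat, defect_bot))      # (0, 400): never retaliating invites endless exploitation
print(play(tit_for_tat, defect_bot))  # (99, 103): retaliation caps the exploiter's gain
print(play(tit_for_tat, tit_for_tat)) # (300, 300): mutual cooperation beats everything above
```

Against a true defect-bot, retaliation doesn't "win"; it just stops the bleeding. The doormat row is what paying the Danegeld looks like.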

anomie:

> rather than negotiate over whatever it is that’s actually motivating Trump to do this

...I highly doubt that Canada is going to let the US annex them without a fight.

agrajagagain:

First some facts that you seem to be unaware of. I'll write my own thoughts and opinions about the tariffs and the impending trade war in a separate comment.

First, Canada already released a plan to put an additional $1.3 billion toward strengthening its border security. This came after the election but well before the inauguration, so it seems pretty clearly a response to Trump's threats and concerns, and something offered up preemptively, as an olive branch. It strongly suggests a willingness to be accommodating on this issue (within reason):

https://www.canada.ca/en/public-safety-canada/news/2024/12/government-of-canada-announces-its-plan-to-strengthen-border-security-and-our-immigration-system.html

Second, if Trump is trying to hold actual negotiations here, I haven't seen evidence of it. You say

"Perhaps Trump isn’t willing to negotiate at all, which I don’t really buy, but is possible."

but as far as I can tell this does seem to be the case. Trudeau has signaled interest in negotiation and (according to him) received no response:

"Trudeau said he's been trying to reach Trump since his inauguration two weeks ago but hasn't heard back. He said he wants to relay to Trump that it's better to tackle challenges such as drugs and migrants together as partners rather than as combatants in a trade dispute." (Source: https://www.cbc.ca/news/politics/trump-tariffs-canada-february-1-1.7447829 )

Of course, there's one more day until the tariffs take effect: this could be a deliberate ploy to apply as much pressure as possible to try to force more concessions when the negotiation does happen. We'll know in 48 hours at the most.

UPDATE: I saw later that Trump's lack of intent to negotiate has been confirmed from his side as well:

"'We're not looking for a concession,' Trump told reporters in the Oval Office on Friday. 'We'll see what happens.'" (Source: https://www.cbc.ca/news/world/trump-tariff-north-america-analysis-1.7447878)

Could still be a bluff, of course. But there do seem to be indications that Trump thinks the tariffs are a good in themselves, intending to force foreign industries to move to the U.S. Which is to say that Trump's ideology seems to be at odds with sound economic policy, and neither the good of the U.S. population nor the U.S.'s diplomatic position is a primary driver of his decision-making.

Neurology For You:

I think you’re making a mistake in analyzing Canadian tariffs—they’re not targeting the US economy, they’re targeting specific states, specific industries, and the Senators who care about them. Bad news for Tennessee whiskey, for example.

Thomas del Vasto:

Anybody following what DOGE is doing so far? I see Elon claiming to be close to cutting $4 billion in spending per day, but I'm not sure I can believe his hyperbole. https://x.com/elonmusk/status/1885574678737039823

Any good rationalist writeups on this so far?

Neurology For You:

He seems to be trying to shut down a whole agency, USAID. If he can do that, and it stands, we’re in a different form of government and the old assumptions are no longer valid.

beleester:

Yeah. At the moment it seems like the plan is "get admin access to systems that handle money and personnel, physically stop the money from being spent." You can definitely cut a lot of spending that way if you're willing to break the law!

Erica Rall:

$4B/day works out to $1.46T/year. Total discretionary spending (i.e. stuff that Trump can legally block by vetoing appropriations bills, as opposed to mandatory spending which requires Congress to vote to amend or repeal existing laws), including defense spending, is about $1.6T.
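
A quick sanity check of that arithmetic (a minimal Python sketch using the figures quoted above):

```python
# Does $4B/day fit inside annual discretionary spending?
daily_cut = 4e9               # claimed savings target, dollars per day
annual_cut = daily_cut * 365  # annualized: ~$1.46e12, i.e. ~$1.46T/year
discretionary = 1.6e12        # approximate total discretionary incl. defense

print(f"annualized cut: ${annual_cut / 1e12:.2f}T")                # $1.46T
print(f"share of discretionary: {annual_cut / discretionary:.0%}")  # ~91%
```

So hitting the claimed number would mean eliminating roughly nine-tenths of everything discretionary, military included.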

Also worth noting, cutting discretionary spending without the cooperation of majorities of both houses of Congress is hard, since Congress can put the stuff you want to keep in the same bill as the stuff you want to cut.

I see a few possibilities:

1. Musk has no idea how much he realistically can try to cut and is saying arbitrarily large numbers, like Dr. Evil.

2. Musk also hopes to make deep cuts to mandatory spending. This runs heavily contrary to Trump's previous statements about never cutting Social Security and Medicare, and requires active participation by Congress, not just vetoing bills and winning standoffs over passing bills with only the discretionary spending you want to keep.

3. Musk actually wants to disband almost the entire discretionary side of the Federal government, including the military.

4. Musk wants to raise a ton of one-time revenue by selling off a ton of federal land and similar assets and is counting this as part of the $4B/day figure.

1 and 4 strike me as the most likely.

Monkyyy:

There's an executive order for 4 people per department to be hired; I don't even know if people are following when that will be finished

Viliam:

There are many similar videos, but this one was interesting: https://www.youtube.com/watch?v=MoYQdfhy24k. The channel is a Ukrainian guy interviewing random Russians in video calls.

This lady is really happy about Russia destroying Ukrainian borders, because her country is always right, whatever it does; she supports Russia unconditionally. Also, she doesn't understand why Ukrainians are not friendlier with Russians; why they are looking towards the West instead.

She supports Putin in "almost everything". What is the "almost"? Well, she is sad that many of her friends have recently died in the war against Ukraine. (She lives in Siberia; most of those people had previously never been to Moscow or Ukraine.) But other than this little detail, she is a strong Putin supporter.

...people are different, I guess?

anomie:

Isn't nationalism such a beautiful thing? How would we accomplish anything if it was impossible to cultivate a legion of such unwaveringly loyal soldiers?

...Which is also why allowing such high standards of living is counterproductive to the interests of the collective. We cannot allow people to find meaning in life outside of faith. They must be made willing to give anything and everything. One for all, all for one.

Viliam:

It's not the nationalism I found surprising (although it does not appeal to me), but rather how the lady is puzzled that the people she hates *don't love her country* in turn.

There are many nationalists in my country too, but I have never heard them wondering why the nations they hate are not friendlier towards them. They see *mutual* hate as the natural state of the world; they might be surprised to learn that many of the people they hate actually don't even think about them.

Does the lady perhaps believe that Russian nationalism is so great that even the non-Russians should naturally embrace it (and that it is quite surprising if they don't)?

Does she perhaps believe that the natural state of the world is that deep down in their hearts all people around the world desire to be conquered by Russia, and then go die for the glory of Russia trying to conquer even more countries? (Just like she is proud for supporting Moscow that her Siberian friends died for.)

To me it seems like this... Ruscism... is way crazier than the usual nationalism. The usual nationalism is "we hate them, naturally they *hate* us too"; Ruscism seems to be "we hate them, and we don't understand why they don't *love* us".

John Schilling:

If she's been drinking the Russian Propaganda Kool-Aid, she very likely believes that the territory in question is part of "Russia" and has been for a thousand years, that almost all of the people who live there are "Russians", and that "Ukrainian" is a word invented by a handful of corrupt Nazis in Kyiv to justify their ongoing oppression of forty million "Russians".

If she hasn't been drinking the Russian Propaganda Kool-Aid, she's probably not available to answer questions from inquisitive journalists.

The belief that one's soldiers will be welcomed as liberators when they march uninvited into someone else's country, is a common error not at all limited to Russians or even to subjects of authoritarian regimes generally.

anomie:

> There are many nationalists in my country too, but I have never heard them wondering why the nations they hate are not friendlier towards them

Of course, because those nations are filled with savages. They understand there is no common ground to be had with them. It's different when two civilized nations are at war. The west has forgotten what that's like, but that will probably change soon enough.

Jeffrey Soreff:

tl;dr: ChatGPT o3-mini 01/31/2025 7 questions results:

2 correct, 4 partially correct, 1 badly wrong EDIT: _see next comment for a significant correction_

Overall: Just about the same as o1, no significant progress.

Backing off guess about AGI to 70% odds in 3 years. (was 80% in 2 years)

a) Correct

b) partially correct (got the species right, got FeCl4 right, got the d-d transition in CuCl4 wrong, took multiple leading questions to correct it)

c) partially correct (three errors)

d) correct

e) initially incorrect, one prod gave correct result

f) misses a lot, argues incorrectly that some real gases are liquids

g) badly wrong

List of questions and results:

a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?

results: https://chatgpt.com/share/679d390d-3140-8006-a5ed-ec924f7c1729

Correct: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. The visible spectrum for humans typically ranges from about 380 nm to 750 nm, and 530 nm falls within that range, corresponding to green light."

b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?

results: https://chatgpt.com/share/679d3c68-6010-8006-abc9-14a365fa8527

A bit worse than o1. It kept insisting that CuCl4's color was from a d-d transition, and it took a bunch of leading questions to get it to see that it was really charge transfer.

c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.

results: https://chatgpt.com/share/679d40dc-352c-8006-9137-2ab3b860f429

Disappointing. It did get the two methylcyclopropene isomers on the first try, which o1 missed, but it missed tetrahedrane, vinylacetylene, and diacetylene. It omitted a bunch of highly strained rings, which is sort-of ok, since it said it was skipping them.

d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?

results: https://chatgpt.com/share/679d44bd-b078-8006-aef3-9284ea068b2c

Correct, "This comparison shows that the mass equivalent of the Sun’s radiated energy is about four times greater than the mass lost via the solar wind."

e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.

results: https://chatgpt.com/share/679d499a-ca1c-8006-8af7-84c95892f771

Basically the same as for o1. It gave an initially wrong answer, with an infinite slope at the equivalence point. Given one hint, it got a fully correct answer.
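
For the curious, here is why the "infinite slope" answer is wrong: water autoionization keeps the slope at the equivalence point finite, though enormous. A minimal numerical sketch (Python; strong acid/strong base, activities ignored, Kw at 25 C assumed):

```python
# Titration-curve slopes (pH per mL NaOH) for 50 mL of 1 N HCl titrated
# with 1 N NaOH, keeping the Kw term that bounds the slope at equivalence.
import math

KW = 1e-14                   # water autoionization constant at 25 C
CA, VA, CB = 1.0, 50.0, 1.0  # acid conc (N), acid volume (mL), base conc (N)

def h_plus(delta):
    # Positive root of h^2 - delta*h - Kw = 0 (from the charge balance),
    # written in a cancellation-safe form for delta < 0.
    s = math.sqrt(delta * delta + 4 * KW)
    return (delta + s) / 2 if delta >= 0 else 2 * KW / (s - delta)

def ph(v):
    # delta = [H+] - [OH-], the net strong-acid excess after v mL of base
    delta = (CA * VA - CB * v) / (VA + v)
    return -math.log10(h_plus(delta))

def slope(v, dv=1e-6):
    return (ph(v + dv) - ph(v - dv)) / (2 * dv)  # central difference

for v in (0.0, 50.0, 100.0):  # start, equivalence point, end of titration
    print(f"{v:5.1f} mL: pH {ph(v):6.3f}, slope {slope(v):10.4g} pH/mL")
# slope: ~0.017 at the start, ~2e4 (large but finite) at equivalence, ~0.006 at the end
```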

f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.

results: https://chatgpt.com/share/679d51b7-6a24-8006-a3e8-ccabd2cd9691

About the same as for o1. It stops listing at 50 compounds, presumably because of my mistake in asking for roughly 50 compounds (should really be around 100, from https://en.wikipedia.org/wiki/List_of_gases). It is quite stubborn about adding to the list, e.g. PF5 is a gas and it took quite a bit of prompting for it to accept this, and to accept the right boiling point number from e.g. the NIST reference.

g) Q: What is an example of a molecule that has an S4 rotation-reflection axis, but neither a center of inversion nor a mirror plane?

results: https://chatgpt.com/share/679d576c-7ee4-8006-841d-ca22572bc7a8

Badly wrong. It gave three examples, none of which had an S4 but no center of inversion or mirror plane. The first two didn't have an S4 at all.

Jeffrey Soreff:

tl;dr: ChatGPT o3-mini-high 02/01/2025 7 questions results:

3 correct, 4 partially correct

Overall: one better than o1; I had hoped for more progress.

Backing off guess about AGI to 70% odds in 2 years. (was 80% in 2 years)

I want to note that o3-mini-high and o3-mini are not very different, though it was still a mistake for me to initially test o3-mini rather than o3-mini-high. The _larger_ change is that I misunderstood both LLMs' answers to (g), the S4 question. The answer actually _is_ correct for both LLMs (and was _not_ correct for o1), so there actually has been progress on this benchmark-ette since o1, though less than I hoped for.

a) Correct

b) partially correct (got the species right, got FeCl4 right, got the d-d transition in CuCl4 wrong, took multiple leading questions to correct it)

c) partially correct (two errors)

d) correct

e) initially incorrect, one prod gave correct result

f) misses a lot, argues incorrectly that some real gases are liquids

g) correct (needed to clarify what it meant to me!)

List of questions and results:

a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?

results: https://chatgpt.com/share/679e70b6-2e6c-8006-a7ff-3a6725327f63

Correct: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. The human visible spectrum generally ranges from about 380 nm to 750 nm, and wavelengths around 530 nm are typically perceived as green.

While the number provided is extremely precise, this level of precision far exceeds what our eyes can distinguish, but it confirms that the light falls well within the visible range."

b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?

results: https://chatgpt.com/share/679e7a17-4dd4-8006-b2bd-a0fd43e86ba0

A bit worse than o1. It kept insisting that CuCl4's color was from a d-d transition.

c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.

results: https://chatgpt.com/share/679e7d20-88dc-8006-a4c6-cdbde11a57dd

Meh. It _did_ get tetrahedrane, vinylacetylene, and diacetylene, but it missed the two methylcyclopropenes until poked.

It omitted a bunch of highly strained rings, which is sort-of ok, since it said it was skipping them.

d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?

results: https://chatgpt.com/share/679e7f3d-0c50-8006-935d-f346fef7c9eb

Correct, " In other words, the mass loss from radiation is roughly 6–7 times greater than that from the solar wind."

e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.

results: https://chatgpt.com/share/679e840d-a464-8006-b4bc-5474733316a2

Basically the same as for o1. It gave an initially wrong answer, with an infinite slope at the equivalence point. Given one hint, it got a fully correct answer.

f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.

results: https://chatgpt.com/share/679e8f66-7658-8006-a125-f9dbeff6ea1d

About the same as for o1. It stops listing at 50 compounds, presumably because of my mistake in asking for roughly 50 compounds (should really be around 100, from https://en.wikipedia.org/wiki/List_of_gases). It is willing to add some compounds to the list.

g) Q: What is an example of a molecule that has an S4 rotation-reflection axis, but neither a center of inversion nor a mirror plane?

results: https://chatgpt.com/share/679e93b5-fdd4-8006-ad89-c61ef4f30345

Correct! It actually produced a geometry for the M(en)2 case where a staggered arrangement of the en ligands winds up satisfying all the requirements.

Jeffrey Soreff:

I've seen the announcement of OpenAI's o3-deep-research system,

https://openai.com/index/introducing-deep-research/

I don't yet have access to it (a month away for my tier, if I understand correctly), so I don't know how well it will perform on my questions.

In the announcement, OpenAI described an impressive improvement on https://lastexam.ai/ (a _very_ hard exam): from 13.0% right for OpenAI o3-mini (high) to 26.6% right for OpenAI deep research.

On the strength of that improvement, I'm bumping my guess about AGI back up to 80% odds in 2 years. (was 70% after seeing o3-mini-high)

michael michalchik:

ACXLW Meetup 85 1) “Bureaucracy Isn’t Measured in Bureaucrats” & 2) “How the US Used Science to Wage Psychological War”

Date: Saturday, February 1, 2025

Time: 2:00 PM – 5:00 PM

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Host: Michael Michalchik

Contact: michaelmichalchik@gmail.com | (949) 375-2045

Introduction and Overview

This session features two distinct topics that converge around the theme of how bureaucratic or institutional influence shapes policy and public perception:

Scott Alexander’s article explores the nature of administrative and legal friction—how it really works, why simply “firing half the bureaucrats” might backfire, and how incentives around litigation and Congressional mandates drive ever-increasing complexity in agencies like the FDA.

Annalee Newitz’s piece delves into the long history of U.S. psychological warfare, from 19th-century anthropologists on the battlefield to modern data-driven influence campaigns. It examines how government agencies (and sometimes private industry) harness “cutting-edge science” to manipulate or influence minds—both abroad and at home.

These conversations should deepen our understanding of how large institutions, whether civil or military, can become unwieldy or wield profound power over society’s direction—often in subtle, behind-the-scenes ways.

Conversation Starter 1

Topic: “Bureaucracy Isn’t Measured in Bureaucrats” by Scott Alexander

Text Link:

Bureaucracy Isn't Measured In Bureaucrats

Audio Link (YouTube):

Bureaucracy Isn't Measured In Bureaucrats

Extended Summary

In this article, Scott Alexander responds to a now-deleted Vivek Ramaswamy tweet calling for “firing half of all bureaucrats” as a quick cure for the ills of red tape. Alexander argues that bureaucracy is not a simple function of staff count. Instead, layers of regulations, court challenges, and Congressional mandates (often called “red tape”) create mandatory processes. These processes remain the same, or even grow, regardless of how many employees are available to handle them. Fewer staff doesn’t necessarily reduce forms or procedures; it simply slows them down.

Key Points:

Staff vs. Process

Many agencies (like the FDA) have to produce exhaustive evidence to avoid lawsuits from interest groups or industry. This complexity is set by laws and court precedent, not by the whim of office managers. Halving staff doesn’t remove steps; it just delays the outcome.

Court Challenges & Legislative Mandates

Environmental groups, industries, or other stakeholders can sue. Courts require thorough justification for any regulatory decision, incentivizing agencies to produce reams of documentation. Meanwhile, each new law from Congress adds a fresh compliance dimension.

Approval vs. Ban-Focused Bureaucracies

Alexander notes a subtlety: some agencies primarily “approve” (e.g. FDA letting new drugs come to market) while others primarily “deny” (some forms of regulation). In theory, cutting staff for ban-focused bureaucracies might reduce how much they can ban—but in practice, it can as easily cause confusion or slow attempts to regulate or un-regulate.

Real Solutions

Alexander distinguishes “fewer bureaucrats” from “fewer or simpler rules.” Real changes might require rolling back laws that drive administrative sprawl or limiting the ability to sue over regulatory decisions. However, these are politically complex tasks, far more so than layoffs.

Idaho as a Counterexample?

He cites Idaho’s recent success in slashing many outdated regulations. But, he wonders, how many of these cuts were cosmetic (e.g. “rules about a nonexistent state lottery TV show”) vs. impactful? The path to meaningful red tape reduction may require painstaking reviews and legislative changes—things that can’t be solved by a single stroke.

Deep Discussion Questions

Processes vs. People:

Alexander suggests that if you want fewer hoops, you need to change the hoops, not just the number of staff. Do you see real-world scenarios where staff cuts do reduce bureaucratic friction, or is it mostly naive to conflate the two?

Legal Overhang & Fear of Lawsuits:

How does the constant threat of litigation shape agency decision-making? Are there ways to prevent excessive lawsuits without undermining checks on government power?

Idaho’s Experiment:

Is Idaho’s success scalable? Could a large federal agency realistically prune decades of accumulated mandates as quickly? Or are the politics (and inertia) insurmountable?

Reform Approaches:

Alexander highlights that the real culprit is the complex web of regulations mandated by Congress or the courts. If meaningful red-tape reform is the true solution, where might such a campaign gain traction? And are we prepared for the possible downsides (e.g., under-regulation)?

Conversation Starter 2

Topic: “How the US Used Science to Wage Psychological War” by Annalee Newitz

Text Link:

Full Article by Annalee Newitz

Optional Extra Reading:

Wikipedia: Psychological Operations (United States)

(No audio link provided.)

Extended Summary

Annalee Newitz’s essay tracks the long history of U.S. “psy ops” (psychological operations)—from rudimentary 18th-century propaganda to advanced 21st-century digital campaigns. She contends that while many countries have engaged in psychological warfare, the U.S. has carved out a distinctive approach by blending “cutting-edge science” with political and cultural infiltration. Key eras illustrate how the government’s “rational approach” to propaganda draws on anthropology, psychology, marketing, and advanced analytics:

19th-Century Roots: Anthropologists on the Battlefield

The U.S. Army employed anthropologists during wars with Indigenous nations. Their mission included cataloging cultures and languages—ostensibly for academic study but functionally to facilitate conquest. This early synergy between science and warfare sowed seeds for modern psy ops.

World Wars & Freudian Marketing

Psy ops came into its own in the 20th century, relying on psychological theories to produce “rationally designed” propaganda. Edward Bernays, nephew of Sigmund Freud, exemplified this marriage of marketing acumen with intelligence services, orchestrating (or aiding) campaigns like the CIA-backed coup in Guatemala.

Modern Digital Spin-Offs

In the 21st century, data-driven “psychographic profiles” (as seen in the Cambridge Analytica scandal) or social media infiltration have replaced old-school leaflets and radio broadcasts. Instead of sending anthropologists to foreign battlefields, advanced AI or big-data analytics “map” human vulnerabilities—at scale.

Blurred Lines

Newitz notes that “psy ops” can loop back into domestic spheres. Culture wars, shaping news cycles, or quiet infiltration of a population’s worldview straddles the boundary between international propaganda and domestic persuasion. She highlights how the “rational approach” has moral hazards—once the state gains deep insight into cognition, it can more easily manipulate or mislead.

Efforts at Resistance

Some researchers propose transparency or “propaganda weather reports,” akin to forecast services that warn users about upcoming influence spikes on social media. Others preserve historical archives to reveal patterns of government infiltration. Despite these attempts, a robust strategy to quell psychological warfare remains elusive.

Deep Discussion Questions

Old-School vs. Digital Psy Ops:

From leaflets and embedded anthropologists to AI-driven manipulations, how has the core logic of controlling narratives stayed the same, and what’s truly new?

Ethical Boundaries:

Is there ever a justified “psy ops” campaign, especially if it’s used to avert harm or defuse extremist narratives? How do we reconcile the possibility of protective or “noble” misinformation?

Science as Double-Edged Sword:

Anthropological and psychological studies can yield legitimate insight into human cultures—but also enable powerful exploitation. Where should the lines be drawn? Are certain forms of research inherently prone to weaponization?

Domestic Overlap:

Although Newitz focuses on how the U.S. turned these techniques outward, modern controversies (e.g., Cambridge Analytica) show that lines blur easily. Should governments categorically ban “psy ops” on domestic citizens, or is that unenforceable in a networked age?

Walk & Talk

After exploring these themes, we’ll do our usual one-hour walk around the neighborhood. This more casual time fosters spontaneous questions, deeper connections, and fresh perspectives.

Share a Surprise

We’ll also have an “open-floor” segment: if you’ve read any eye-opening articles, had a perspective-altering personal experience, or discovered a fascinating fact, feel free to share. Varied topics are welcome.

Looking Ahead

As always, feel free to propose future topics or volunteer to host a specific reading. Our group thrives on diverse expertise, so if you have a focus—be it technology, policy, history, or beyond—we encourage you to bring it forward.

We Look Forward to Seeing You on February 1!

Questions, clarifications, or special requests? Contact Michael Michalchik (info above). Thanks for being part of ACXLW—together we’ll keep exploring the frontiers of thought, policy, and culture. See you soon!

nah son:

First aerial collision in, I think, 17 years today, after our glorious leader (Musk, in this case) paused hiring, fired a bunch of people, and generally started kicking down Chesterton's fences because they were woke.

I wonder how many stupid things you can do in a row as a conservative before people realize that they're stupid; there doesn't seem to be an upper limit. Sad.

Sol Hando:

Having listened to the recordings, it seems like nothing can be attributed to recent policy. Both the helicopter and the plane were following established procedure, there wasn't a shortage of staff, and the ATC didn't do anything wrong as far as directing things.

It looks like this was a decently known tight airspace, the helicopter confirmed they had the incoming plane in sight, then flew right into its path. The most likely situation is the helicopter pilot was looking at the wrong plane, probably the following one, and didn’t notice the plane they were supposed to have in sight and avoided.

It’s a flaw in the system, but avoiding another aircraft by sight, once you have it in sight, is a decently common practice in aviation. I wouldn’t be surprised if this pilot was less experienced than they should have been; mistaking one incoming aircraft for another on final is an easy error to make, as there are usually many in a row.

Paul Zrimsek:

The "wasn't a shortage of staff" part is open to dispute considering that the helo and local control positions were combined. The outcome could have been quite different if instead of just "do you have the CRJ in sight?" the helicopter pilot had heard something like "traffic alert, the CRJ now 11 o'clock and a mile and a half turning short final for 33, appears targets will merge. Verify you've still got him." Saying all that would have taken time the local controller didn't have-- but a split-off helo controller would have.

Admittedly we don't know whether the positions were combined because staffing was short, or because the supervisor decided to send an extra body to the break room. That's one of the things the NTSB will have to look into.

Sol Hando:

It would be better to say: there wasn't a shortage of staff due to recent policy. I wasn't aware there was ever a split-off helo controller, but I assume that's more common at heavily trafficked airports.

We’re almost half a century into an ATC staff shortage and still going strong.

thefance:

https://www.tracingwoodgrains.com/p/the-faas-hiring-scandal-a-quick-overview

According to Trace, DEI has been scuffing the FAA's hiring pipeline for decades. (Notice how the article was published on Jan 29, 2024. I.e. 12 months ago. He's not just knee-jerk reacting to the recent crash.)

I'll admit that my "Musk is just an honest idealist" take aged like fine milk. (Although in my defense, ketamine is a hell of a drug.) But idk if we can lay this one at Musk's feet. At least not until more information about the crash becomes available.

Chastity:

No, according to Trace, there was a serious fuckup from 2014-2016 (in 2016, Congress passed a law banning the use of biographical assessments as a first line measure, as noted in the article you linked), and it was completely discontinued in 2018. Not "decades".

https://x.com/tracewoodgrains/status/1885695823524016564

It probably had a negative impact on total ATCs - eight years ago! The crashes are happening now!

thefance:

Yeah, that's fair. I was going off memory. I could have sworn the article mentioned a date from the 00's. But on closer inspection, the "2000" date was simply when the task-force was first commissioned.

edit: However, Trace's tweet does mention that the hiring-pipeline is still damaged. So I don't think I'm completely crazy for Trace's article having influenced my priors about the cause.

Johan Larson:

It seems a bit early in the Trump administration to blame this event on them. They just haven't had time to do much yet.

It might make more sense to blame Congress, which last year voted to increase the number of flights permitted at Ronald Reagan Washington National Airport, when the airspace around it is already very congested.

https://www.newsweek.com/lawmakers-voted-increase-flights-reagan-airport-crash-2023484

John Schilling:

Agreed. There are some *potentially* legitimate political and culture-war issues that may have been at work here, but A: it's too early in the investigation to know, and B: it's too early in the new administration for anything they've done to seriously affect ATC operations on the ground. Anyone suggesting that it is obvious who is at fault here should be ignored.

vectro:

Shall we ask Senator Ted Cruz (R-TX) for comment?

https://www.congress.gov/amendment/118th-congress/senate-amendment/2064/text

beleester:

I agree that waiting and seeing what the investigation says before assigning blame is a good idea.

You know who didn't do that? Donald Trump, who took less than 24 hours to sign an EO saying it was Biden's fault for supporting DEI.

Nobody Special:

Based on the reporting I'm seeing, I'm at 70% confidence it will turn out to be the fault of the Blackhawk.

We don't have the blackbox data yet, but we do have the traffic control audio, and the tower clearly identifies the AA flight (a CRJ-700) for the helicopter crew, who confirm that they see it. Transcript below (PAT 2-5 is the Blackhawk):

Air Traffic Control: “PAT 2-5 do you have the CRJ in sight?”

Air Traffic Control: “PAT 2-5 pass behind the CRJ.”

Blackhawk Crew: “PAT 2-5 has aircraft in sight, request visual separation.”

https://www.cnn.com/2025/01/30/us/maps-plane-helicopter-crash-dca-dg/index.html

"Visual separation" (see article below) "means the pilot acknowledges seeing the incoming aircraft and is informing the air traffic controller that they will avoid getting too close to the plane."

https://www.npr.org/2025/01/30/nx-s1-5281166/dca-crash-helicopter-air-traffic-controller-radio

So, to my lay eyes, it looks like the tower did its job and warned the helicopter about the airliner, gave them a clear instruction to pass behind it, and received confirmation that the helicopter saw and intended to avoid getting close to the plane, only to have something go wrong on the helicopter side of the equation.

Lots of ways that could have happened; medical emergency or other failure in the helicopter cockpit, helicopter crew was actually looking at a different aircraft and thus *didn't* see the CRJ despite reporting that they did, etc. But it seems like evidence so far indicates that whatever went wrong was most likely on the helicopter side of the equation.

Paul Brinkley:

If the BH is confirming visual and intent to avoid, why would they say "requesting visual separation" instead of "maintaining visual separation" or "will maintain visual separation"?

That bit bugs me a little because it suggests a possible miscommunication. (Or it might be nothing or just normal aviation jargon I'm not familiar with, but I doubt it, since I understand ATC procedure as wanting to be so clear that even people finding themselves in control of aircraft in an emergency (e.g. pilot had a stroke, no one else on the craft) can understand it as plain English.)

If the BH crew actually saw a different aircraft, then that's a strong signal to me that "visual" is no longer going to be good enough, at least at Reagan.

What's the chance there was a drone in the vicinity, mistaken as something else?

Nobody Special:

I read "requesting visual separation" as requesting "permission" to use visual separation to keep their distance, rather than an alternative method of avoiding collision such as direction by the tower.

I.e. the BH says "requesting visual separation" and the tower then responds by (a) granting permission to use visual separation to avoid the CRJ, or (b) denying permission to rely on visual and instead directing the BH to a particular spot out of the way of the aircraft.

But as stated, I'm very much a layperson so YMMV by a lot with respect to that interpretation.

Chastity:

Manifold, Polymarket, and Metaculus all have >50% chance of RFK Jr. being confirmed.

https://manifold.markets/agucova/will-rfk-jr-be-confirmed-as-secreta - 75%

https://polymarket.com/event/how-many-senators-vote-to-confirm-rfk-as-secretary-of-hhs?tid=1738282006967 - 65% at 50+ Senators (Vance would cast tiebreaker, like for Hegseth)

https://www.metaculus.com/questions/30444/ - 92%

"Our job is to resist and to talk about it to everybody. If you’re walking down the street – and I do this now myself, which is, you know, I don’t want to do – I’m not a busybody. I see somebody on a hiking trail carrying a little baby and I say to him, ‘Better not get him vaccinated.’ And he heard that from me. If he hears it from 10 other people, maybe he won’t do it, you know, maybe he will save that child." - RFK Jr, 11:35 of this podcast: https://sites.libsyn.com/311600/rfk-jr

This is really bad. I can't even make a joke about it. He will just kill children if confirmed to this post. Anyone who would say that paragraph should not be allowed into public office, much less put in charge of HHS. Please, write your Senators - even Republicans don't want a bunch of American kids to die of measles.

Melvin:

I can't see why even Trump would want RFK Jr as health secretary. He did his job, which was to drop out and endorse Trump in exchange for a promised nomination to a cabinet position.

RFK Jr has his own weird agenda orthogonal to Trump's, and I can't see why Trump would actually want him in that position.

beleester:

The weird agenda got a lot of supporters during COVID (even though Trump actually did a lot of work to get a COVID vaccine out as quickly as possible). RFK appeals to that chunk of his base - the "we don't need the government to protect us from disease, we just need horse paste and a tough attitude" crowd.

Chastity:

Possibly Trump is the sort of person who wants to surround himself with lickspittle lackeys who owe everything to him, since he is at the center of a personality cult and has convinced himself that the only problem with his previous administration was that too many people had principles and/or competence that made them sometimes butt heads with him.

Alternately, he is antivaxx himself. Thus his quite notorious Tweet where he said, "Healthy young child goes to doctor, gets pumped with massive shot of many vaccines, doesn't feel good and changes - AUTISM. Many such cases!"

Johan Larson:

Anyone have a guess about what will happen with Trump's campaign of tariff threats? I expect he will be able to get a lot of smaller countries to give in, but will run into trouble when he tries it with one of the parties strong enough to push back effectively, meaning China or the EU. This will spark a trade war of retaliatory tariffs, which will generate a nasty recession. That will produce enough misery that in the following US election the Dems will win back Congress, and they'll make him stop. That might be in 2026, if things develop quickly.

Tatu Ahponen:

I don't even think it's meant to be a special punitive measure. I mean, it's phrased that way, since basically everything in the new US admin is phrased as someone getting punished, but really, Trump's mindset just seems to be that tariffs are a positive good in themselves, and that the normal state of affairs is for the US to have tariffs here, there and everywhere, to protect US industry and to collect money (in lieu of using income taxes), as he has said many times.

We're so used to a free-trade world where tariffs are a punitive exception (not an infrequent exception, to be sure, but still something that by definition has to be conceived of as an exception) that this sort of mindset starts to seem quite alien to us, even though a world riddled with tariffs and duties used to be the standard setting, the one cleared away by the free-trade revolution. The free-trade revolution was mostly good and this is bad, but it's also not a special form of bad, rather a very traditional sort of bad.

anomie:

It's worth noting that he can manually grant exemptions to tariffs, meaning he can use it as a way to indirectly punish companies that turn against him.

Neurology For You:

It’s also likely that the tariffs start a trade war that continues for years and hurts everybody, because there is no way to ratchet it down smoothly in a multilateral world, and these things take on a life of their own; note how Biden continued and extended many of Trump’s policies towards China. The worst-case scenario is very bad here.

Lurker:

I think it might depend on what the rest of the world is led to expect from America after 2028. If Trump seems like a parenthesis and the likely upcoming leaders are anxious to mend fences, I believe many countries (or nominal allies) will be only too eager to drop the retaliatory tariffs. Tariffs on China are probably here to stay, though.

Blackshoe:

Re: defense journal/magazine. War on the Rocks (not sure if they're a journal or a magazine, tbqh) feels like the place that might publish something like that. Here are their submission guidelines: https://warontherocks.com/submissions/

Johan Larson:

Those of you who like your violence extra spicy may be interested in the teaser trailer for Astartes II, which just dropped from Games Workshop.

https://www.youtube.com/watch?v=sbgYaeerXhg

We'll have to wait until 2026 for the real thing, though.

Anonymous Dude:

Worthless! BLOOD FOR THE BLOOD GOD! SKULLS FOR THE SKULL THRONE!

OK, seriously, is there an Astartes I? Thank you so much for bringing this to my attention!

Johan Larson:

Search for Astartes on YouTube. You'll find a good short film by Syama Pedersen about a squad of space marines boarding an enemy vessel and running into more trouble than they expected.

Eremolalos:

You may also be interested in *28 Years Later,* Danny Boyle’s sequel to *28 Days Later,* coming out in June. Cillian Murphy, who was in the first movie, is also in this one.

Anonymous Dude:

A second Trainspotting 2/Cobra Kai situation? OK, this should be amusing. I may actually have to watch this one.

Eremolalos:

Did you see 28 Weeks Later? I liked it better than 28 Days Later. It scared me horribly well.

Anonymous Dude:

I forgot about that one! Maybe I'll see them both!

beleester:

Definitely seems like a much more ambitious project than last time - I see Tau, Tyranids, Chaos Marines, and a wide range of environments. So either some sort of anthology or a multi-episode story.

The original Astartes was excellent, so I'm excited to see what the animator can do with official backing.

Joseph Shipman:

Yeah, this is right, but I put it much more simply.

The rationalist community failed so miserably in cases like this because they were remarkably stupid and remarkably ignorant. No kind of neurodivergence is any excuse. Literature exists. History exists. If you have a curious brain, you should be curious enough, and therefore knowledgeable enough about human nature and history, not only to have recognized the craziness, but, even if you didn’t recognize the craziness, to have understood the possibility that your critics had enough of a point that you should really learn to understand them, rather than dismissing them for being right-coded, etc.

Anonymous Dude:

I think the thing is, they're so used to being rejected that they sympathize with other rejects and can't tell, or don't want to tell, when someone is actually rejected for valid reasons. There was a document about Geek Social Fallacies going around in the 2000s that said something similar. I'm actually not sure literature or history is all that useful here, as this particular sort of problem is distant from the personality types of most literary authors and usually too small-impact to gain the attention of most historians.

Joseph Shipman:

There is a lot of both literature and history about crazy violent cults.

And there is a lot of both literature and history about the dangers of dismissing criticism because the critics were from outside the group.

This particular community achieved more prominence than “communities of geeks” usually do and this may have worsened the incentive structure for them to have any intellectual humility, but there’s still no excuse for not being familiar with concepts like hubris, or for not heeding warnings, given their explicit commitments to respecting rational arguments whatever their source.

Expand full comment
Neurology For You's avatar

I think rationalists tend to be mind-blind to their own irrationality, sure, but lots of big movements without that flaw spawn fringe groups, some of which turn paranoid and violent. I think it’s an example of how big-R rationalism can be considered a New Religious Movement.

Expand full comment
Anonymous Dude's avatar

It's a good point. I don't think the Zizians got that much traction, they just happened to do some damage locally. But overall, you're right.

Expand full comment
Michael Watts's avatar

> Yeah, this is right

I've read over the post again, and I can't tell what it is that you're saying is right.

Expand full comment
Dragor's avatar

What does stupid mean in this context? Foolish?

Expand full comment
Joseph Shipman's avatar

I was referring to a long comment about how Bay Area rationalists and adjacent groups failed not only to perceive the threat from the Zizians, but also got taken in by many other crazies and grifters because their social norms were too fucked up. What was stupid about it is that avoiding that failure did not require a lot of intelligence; it only required an acquaintance with history and literature combined with a bit of intellectual humility.

Expand full comment
Joseph Shipman's avatar

I note that:

1) in email, I received notice of the following:

***

Paul Sas replied to your comment on Open Thread 366.

Yes, they made egregiously stupid mistakes, yes, they should have cleaned up their epistemics, been more skeptical, etc Where I seem to differ from you is in believing that the label "stupidity *of a particular type*" is informative. Irrational, self-destructive, even insane decisions characterize the Zizians. Calling them 'stupid' in no way illuminates what traps they fell into. It does, hubristically, enable outsiders to judge & condemn their insanity, w/o deigning to differentiate their murderous cult from the many other forms of stupidity that parade through the world.

***

2) Paul Sas has BLOCKED ME so that I cannot reply to his comment

3) Therefore, I reply to MYSELF, HERE, intending that any of the rest of you who happen to know Mr. Sas and think that what I am about to say is of any value whatsoever have the option of informing him that I said something that was of some value whatsoever. Here goes:

***

I was not using the word “stupid” to refer to the Zizians. Rereading what I wrote ought to make it clear that the people I was calling “stupid” were not Zizians, but rather members of the rationalist community who refused to heed warnings about the Zizians and failed to notice their pathologies (and similarly failed to be alarmed by other dishonest and dangerous people in other cases).

***

Separately: if any of you are of the opinion that my writings in this thread were such a violation of ACX norms that it was appropriate for me to be blocked, I genuinely wish to understand what I said that was so violative as to deserve a block, so that I can avoid violating such norms in the future here.

Expand full comment
Paul Sas's avatar

I have a hard time squaring your exhortation to "learn about hubris" with advocacy that uses such demeaning language, viewing every mistake as caused by other people's stupidity, etc.

Expand full comment
Joseph Shipman's avatar

Look, either you think they made bad mistakes, or you don’t.

If you don’t think they made bad mistakes, fine, explain it and show I’m wrong.

If you do think they made bad mistakes, harshness of some kind is going to be necessary, the important thing is that it is directed at the right target. I’m blaming stupidity *of a particular type*, not because I always blame stupidity, but because the mistakes that were made could have been avoided had they taken their own stated principles more seriously and acted like rationalists really should. I’m far from the only person who has criticized this tendency, both before and after the SBF fiasco.

Expand full comment
Paul Sas's avatar

Yes, they made egregiously stupid mistakes, yes, they should have cleaned up their epistemics, been more skeptical, etc

Where I seem to differ from you is in believing that the label "stupidity *of a particular type*" is informative.

Irrational, self-destructive, even insane decisions characterize the Zizians. Calling them 'stupid' in no way illuminates what traps they fell into. It does, hubristically, enable outsiders to judge & condemn their insanity, w/o deigning to differentiate their murderous cult from the many other forms of stupidity that parade through the world.

Expand full comment
Eremolalos's avatar

your comment has gotten separated from the one you’re replying to

Expand full comment
Joseph Shipman's avatar

I hate when that happens. Substack doesn’t work well when you use it through the Safari browser.

Expand full comment
Eremolalos's avatar

Ya substack sux

Expand full comment
rebelcredential's avatar

SPOILERS FOR NOSFERATU 2024:

I thought this was a great film, worth watching for Willem Dafoe alone, but what especially stood out to me was how it treated the sexes.

The film shows men as men and women as women and does a really good job at giving a coherent reading of both.

The friction between the wife and the male best friend, for example: from his perspective, she's a flighty, fanciful, annoying woman who he puts up in lodgings not just for the sake of his friend but also because duty and decency demand it. He endures her company over weeks as she becomes harder and harder work, putting him to greater and greater inconvenience. At first he smiled off her talk of demons and dreams, but things have gotten serious and she still speaks only in childish gibberish, refusing to address him using logic, like an adult. When he finally snaps at her tersely you can completely understand his frustration.

Meanwhile from her perspective, he's the controlling authoritarian who's dictating what she can and can't do, he's putting her through all this (entertainingly oldschool) medical attention without thinking to ask her opinion on the matter, and most fundamentally he's patronising her, not taking her seriously, not meeting her halfway, not *listening* to what she's trying to communicate.

You might take one side or the other, depending on whether you have male or female sensibilities, but the *film* doesn't, it lets you see both.

Same goes for the main love triangle (which, rotting corpses aside, is basically the same one as Bridget Jones' Diary: love vs lust, there's the Nice Guy and the Bad Boy and she's caught between them.) Her husband is a good man, but sexually unsatisfying; Orlock is a horrible creature she can't stand the sight of - but he knows how to make her come.

Her husband's "sins" are - from a male perspective - ridiculous bullshit. He takes a job opportunity in order to better provide for their future. He grants a request that 99% of the time would be totally harmless and lets Orlock look at his wife's locket. And then he, after six weeks on the road and only desiring to get back home to his wife, hastily signs the document he came to sign, and doesn't stop and kick up a fuss about the fact that it's in an unexpected language.

The idea that this is enough to make a man unattractive has just got any sane logical bloke throwing his hands up in exasperation because clearly you just can't win with women. But from her perspective: the locket and the signing make him despicable because he was pressured into doing it by a stronger man.

Orlok meanwhile is a will that can't be denied - yes, he's threatening and menacing and murdering all her friends: but you can't deny he feels desire for her, just her. Orlok makes her feel pleasure from the very first contact. Later in dialogue they're both very clear that love is not on the cards, this is a purely physical thing that's going on.

Meanwhile hubby is shown sleeping in a separate bed, and later when she fantasises about him fucking her good and hard, we cut back to real life and see that he's just holding her helplessly. This is a good man, who loves you, who wants to do right by you - but he doesn't know what he's doing in the sack.

Hence the forbidden attraction of giving herself to Orlok. In a female-sympathetic reading, she makes a brave and courageous sacrifice at the end. But read another way: she gives herself to the sexually aggressive monster without hesitation or resistance, the moment Willem Dafoe gives her a justification that spins it as noble. Gentlemen, take note.

Expand full comment
Schneeaffe's avatar

Well, I'm happy that the film made sense to you. I didn't read it that way, and even in retrospect I don't, though I'm aware of this general narrative. To me, it was just setting up a different ending - classically heroic or Christian redemption, maybe - and didn't subvert it well at all. We're doing fatalism now because... Mr Exorcist said so? I still didn't believe it was the only way when he said it.

Expand full comment
Deiseach's avatar

Haven't seen the movie, have seen the Klaus Kinski "Nosferatu". So I'm going mostly from that and the reviews of this version I've read, which do seem to agree with your second point about the ending.

"But from her perspective: the locket and the signing make him despicable because he was pressured into doing it by a stronger man."

From the female perspective, not so much "Chad kicked sand in your face, ninety pound weakling", but the betrayal of intimacy. If the photo of your wife is on your desk, that's on public display. Other people taking notice of it, even the stronger guy, isn't that bad; if Chad makes mean comments ("wow I'd fuck that hot bitch") that's on him not on husband.

But a locket is something secret and intimate; it's like (in a modern context) her giving him the sexy nude photo or text that is only meant for him to see. If he shares that around with the other guys, even in a "wow look how hot my wife is" boasting manner, that's a betrayal of privacy and intimacy. It's not a compliment. Giving in to Orlok isn't so much about being pressured by the stronger guy, though that's part of it; it's giving in to share something he should not have shared, something that is meant to be within the bonds of their marriage.

There's *lots* of folk stories and myths about "guy falls in love with goddess/fairy woman who agrees to marry him on a set of conditions or else she'll immediately leave; he agrees he'll never break those; he ends up breaking them and she leaves him". One of the conditions often is "you'll never tell anyone about me" or something similar, such as "you'll never reveal I can do this thing". One version of this is the story explaining the curse on the men of Ulster as to why once a year they would be unable to fight due to being struck down with excruciating pains: a man married a goddess/fairy woman named Macha; he went to a royal feast and saw a horse race; he boasted his wife could run faster than the horses; the king forced him to bring her and make her run; she ran and won, but she was pregnant and gave birth, and she cursed the men of Ulster that in their time of need, they would be struck down with the pains of a woman in labour:

https://bardmythologies.com/the-curse-of-macha/

So - break the secret of what should be between the spouses, lose all.

"In a female-sympathetic reading, she makes a brave and courageous sacrifice at the end. But read another way: she gives herself to the sexually aggressive monster without hesitation or resistance, the moment Willem Dafoe gives her a justification that spins it as noble."

I've seen this referred to in reviews, that this version puts a different twist on the ending and undercuts the 'noble sacrifice' element. But again, I think your reasoning is a little too simple.

This is old-school Eros and Thanatos. I think even Freud discusses this:

https://en.wikipedia.org/wiki/Beyond_the_Pleasure_Principle

It's being in love with Death, as in all the "Death and the Maiden" tropes, as in Keats' "Ode to a Nightingale":

"Darkling I listen; and, for many a time

I have been half in love with easeful Death,

Call'd him soft names in many a mused rhyme,

To take into the air my quiet breath;"

Orlok *is* Death, and while he may indeed be terrible and grotesque and fearful, he is also strangely fascinating. He is the ultimate end to which we all go, no matter who loves us or tries to hold us back. And there is a strange, perverse allure there: the combination of the little death, to which orgasm has been compared, and the greater death in one as she gives herself over to Orlok. Sex *is* death, as all the slasher movies where horny teens making out attract the serial killer's slaughter remind us.

Expand full comment
anomie's avatar

> There's *lots* of folk stories and myths about "guy falls in love with goddess/fairy woman who agrees to marry him on a set of conditions or else she'll immediately leave

There's actually a whole genre of these types of stories in Japan, except they're about an animal who gets saved by a man and decides to return the favor by disguising itself as a human and marrying him. The wife tells her husband never to enter the room while she's working; he ends up doing it anyway, and he finds out some horrible truth that makes either the wife or himself leave.

Tsuru no Ongaeshi is the most famous version of this kind of story ( https://en.m.wikipedia.org/wiki/Tsuru_no_Ongaeshi ), but the funniest version has to be the one that's literally just called "Clam Wife," where a man is greeted by a beautiful woman after returning a giant clam to the sea several days earlier:

> In The Clam Wife, a man finds a woman mysteriously appear at his doorway. They become married, and the wife cooks the husband a delicious bean soup each day. He peeks in on her cooking, and discovers that she is urinating clam juice into the soup, so he chases her away.

Expand full comment
TK-421's avatar

> Her husband's "sins" are - from a male perspective - ridiculous bullshit. He takes a job opportunity in order to better provide for their future.

I think his sins are more subtle. He seems like the nice guy who cares for her and takes care of her, but he fundamentally never listens to her and then acts like everything he does is for her. But we never hear her say that she's unhappy with their wealth. He's the one who seems more concerned about it and compares himself to his richer friend.

If he listens to her and does what she actually wants - instead of what he wants to do which he internally convinces himself is also what she wants - then the movie never happens.

Expand full comment
rebelcredential's avatar

If he listens to her and does what she wants, they end up poor and going nowhere and she is unhappy. Listening and understanding is important, but if he'd obeyed her every whim and let it direct their lives completely, she'd eventually find him far less appealing than she does in the film.

He is responsible for her future happiness and it's his role to overrule her when she's not thinking about it - which she categorically is not, she's got her head in the clouds and her main driver throughout the film is sublimated lust.

Expand full comment
Deiseach's avatar

I do wonder how this new version stacks up against the Herzog movie; this scene, for example, which shows how the arrival of death (in the form of the plague) has overturned the conventional society in which the heroine lives; people have almost been - or really have been - driven mad by fear to the extent that now they *have* gone 'beyond the pleasure principle' and are now in love with death, celebrating it - as the plague party demonstrates:

https://www.youtube.com/watch?v=JdzHAKPV7dk

Expand full comment
TK-421's avatar

Sure, that's a perfectly valid position. But from her perspective she's being ignored, her husband isn't treating her like a partner, and he's cloaking doing what he wants in the pretense of love for her. He can't even take responsibility for his actions and turns it into her fault - he's doing it "for her".

Again, not to say he hasn't convinced himself that it's true, but from her end he's plainly ignoring her. And the direct result in the film is that he brings evil and death to her and the town's doorstep.

As you said above, it's also that he's weak. Once it becomes obvious that he's dealing with something terrible he doesn't stop: he still takes the money, he still sells the property. The contract complaint was ridiculous in one sense - I laughed in the theater about it - because he didn't know what he was signing; in another sense that's an even worse betrayal. He signed, from fear and greed, without even bothering to know what he was signing over - which was, at a minimum, greasing the skids for a monster to become a neighbor.

The 1979 version directed by Herzog downplays the sexual aspects of the story and really highlights the husband character's failings. It's good.

ETA: The locket thing is also bad because of what he does afterwards, not the act of showing Orlok the locket. He has evidence that the fiendish hell-demon is fixated on his wife and he carries on with the transaction.

Expand full comment
rebelcredential's avatar

As I watched it I was thinking what I'd be doing in that situation (sign of a good film, that.)

I think I probably would also have just signed the document, on the grounds that I just want to get out of there and, since this is in the context of selling a house, if the worst comes to the worst and there's funny business, it's more my employer's problem than mine. (Though in real life I'm always willing to be awkward, that's in much less scary environments.)

I would also have been slow to accept supernatural explanations, for the same reason you would be if they started popping up in the office tomorrow: Victorian Germany is an enlightened scientific society and that sort of thing clearly belongs in stories.

I hope I'd have refused the locket, either non-confrontationally by showing it to the Count instead of handing it over, or more assertively by a polite "I need that locket back, milord," and not leaving til I got it. Probably wouldn't have fought on if he snapped, "No! Begone!" though, given the job thing and the fact that he's a social superior in a society far more hierarchical than anything around today.

And I would definitely have pursued the job in the first place, even over her protestations, for the reasons I said above.

The thing is, none of this is "bad" or "good". If she's happy and sexually fulfilled, it doesn't matter if I commit all or none of these "offenses". And if she's not, then even if I commit none of these wrongs, more sins will be identified out of thin air to crucify me with. That's the way this works.

Expand full comment
Deiseach's avatar

"If she's happy and sexually fulfilled, it doesn't matter if I commit all or none of these "offenses". "

Well, even if husband is a mighty stud who pleasures her five times a night, Orlok's fixation on her after seeing the locket means he will follow hubby back to his home town to get her. So even with good intentions, bad consequences follow. If Orlok is unaware she even exists, he's not going to come in the night to eat her.

Expand full comment
Schneeaffe's avatar

In this version, Orlok is already aware she exists. She's kind of responsible for resurrecting him with her medium powers, and he's been spirit-visiting her since she was a teenager.

Expand full comment
TK-421's avatar

> I would also have been slow to accept supernatural explanations, for the same reason you would be if they started popping up in the office tomorrow: Victorian Germany is an enlightened scientific society and that sort of thing clearly belongs in stories.

Oh most definitely. The film does a good job of showing the tension between a modern-ish society and the atavistic horror of an earlier age. They can look directly at it and not fully believe what they're seeing. There's enough ambiguity that I have no idea what I would think.

(Here again I have to plug the '79 version: the castle in that one is a complete ruin on top of a mountain where no one could possibly be living. It's more of a rundown cemetery.)

Expand full comment
Deiseach's avatar

It's what is running all through Stoker's "Dracula": we're all moderns now and have Science, these kind of things all come out of the backwards superstitions of an age of darkness, there must be a rational and reasonable explanation.

And that's why the story works, because it's "oh no, the monsters in the dark *are* real and the rational explanation will get you killed". That is why the rather glib attitude of today that "dude, we've all seen the horror movies, we know about vampires, so we'd know that guy was a vampire if he tried it today" doesn't work, because imagine. Seriously imagine it for one moment. We'd be prone to dismiss it as "come on, vampires only exist in movies, they're not *real*" or "if that guy thinks he's a vampire, he's crazy and possibly a serial killer, but he's not a *real* vampire".

Expand full comment
Anonymous Dude's avatar

I'm going to have to see this bloody thing, aren't I?

Expand full comment
rebelcredential's avatar

I mean, manage your expectations. It's a slow, overwrought arty film and far more interested in gore than sexiness. None of the female leads are particularly pretty, imo, and the period style does nothing to help them.

My favourite thing about it was the Victorian* vibe of men with magnificent facial hair, listening gravely to expostulating professors, smoking indoors, and curing all ailments with ether and brandy.

What kind of films do you normally like?

*okay, Victoria's German equivalent, whoever he was.

Expand full comment
Anonymous Dude's avatar

I usually don't watch horror movies, I get scared. But Christina downthread convinced me in a prior thread it was actually unwoke. So I figured it was my duty to watch it and give Hollywood my money for making politically incorrect movies. I've been procrastinating ever since. :)

Expand full comment
Yug Gnirob's avatar

>So I figured it was my duty to watch it and give Hollywood my money for making politically incorrect movies.

Well... it's still a remake, though. Don't give them TOO much money.

Expand full comment
rebelcredential's avatar

Okay, but actually by choice, when you're not thinking about wokeness, what films do you genuinely enjoy?

Expand full comment
Anonymous Dude's avatar

I liked the latest Transformers CGI cartoon thingie. Napoleon was OK. Babygirl was awful--not sure if that reflects me or the movie quality. I'll probably just watch it for something to do.

I realize on the 1-10 problem severity scale, this is a 0.5.

Expand full comment
Christina the StoryGirl's avatar

I kicked off a discussion of Nosferatu 2024 in the comments here (https://www.astralcodexten.com/p/open-thread-362/comment/83633282).

I feel more than ever that the gender dynamics depicted in this movie are a goddamned *miracle* and an extremely promising sign. Eggers has absolutely no fucks to give about THE MESSAGE (as The Critical Drinker would put it). It's respectful and refreshing and I hope a sign of things to come.

Expand full comment
TakeAThirdOption's avatar

I have read that, and it made me incredibly curious about what you might mean by this:

> even more miraculous is its *utterly* unapologetic theme that a sacrifice imposed on a woman by her society and herself might not be all that different from her urge to self-destruct.

> And it apparently does not give the slightest of fucks about what wokescolds might have to say about the depiction of this female character or the implications of her choices.

But I also felt it might be too complex for me to understand without you writing an essay about it.

But I still wanna know.

You mean the movie depicts the female lead in a way that we are not supposed to think women are?

Which way?

Expand full comment
rebelcredential's avatar

I remembered seeing a discussion recently but wasn't about to go back and find it, not when each page load gives Substack a fresh opportunity to paralyse my CPU.

But if anyone's dancing on the grave of The Message I'll always want in.

Expand full comment
Christina the StoryGirl's avatar

For what it's worth, Babygirl also gleefully dances on the grave of what THE MESSAGE wants to say about gender.

While the advertising campaign makes it look like a naughty romp à la 50 Shades of Grey, the actual movie is a very nuanced and sympathetic character study, with graphic sex scenes so deeply personal and specific to the particular characters that they don't leave any room at all for audience self-insertion (pun semi-intended). It's one of the least-sexy movies I've ever seen, because it meets the classic definition of literary sex scenes needing to be about "sex and something else."

Expand full comment
Anonymous Dude's avatar

OK, I actually saw that one. What do you mean? Me and the person I saw it with both thought Kidman was believable but Banderas was miscast and the young guy wasn't very convincing in his...ah...'role', in either sense of the word.

I suspect Kidman was supposed to be a self-insertion fantasy for the people who write these reviews. You know, left-leaning lady with politically incorrect fantasies about hot younger man she feels politically guilty about, but then you add a layer of wokeness having her be a 'girlboss' archetype in a stereotypically-male field so you don't get shot down right away. (She struck me more as a consumer-products CEO than a tech CEO, but maybe Kidman just has too much charisma to do 'ascended nerd' well.)

Expand full comment
Christina the StoryGirl's avatar

Wow.

WOW!

This is one of those electrifying Different Worlds moments Scott talks about (https://slatestarcodex.com/2017/10/02/different-worlds/)!

Look, I'm a 45 year old straight woman who likes to bottom in kink. I used to consider myself a submissive, but I was submissive only insomuch as all students in all subjects are necessarily "submissive" to the experience of their teachers; once I had sufficient experience, I had to reluctantly concede the label. (This throat-clearing is relevant.)

This might be TMI for you, but my first meaningful encounter with kink very superficially shared many of the beats of the scene in that shitty hotel room (including a laughing "What? No!"). You'd think that if that scene was going to sexually appeal to anyone, it would be me; I hugely enjoyed actually living some of it!

But no! Kidman's character isn't supposed to be a self-insertion fantasy! There's absolutely no room for *me* in that scene; Kidman's character and her pathology are *way* too present. Her anxiety and awkwardness and risk-taking are all *horrible* to witness; her life is absolutely not anything that a (normal, healthy) woman would fantasize about experiencing. I felt tremendous compassion and pity and protectiveness for her throughout the movie because her pathological interior life was obviously *awful.* When she first orgasms with the intern, I felt a gentle happiness for her, not any desire to *be* her.

I felt that because that's what the director intended me to feel.

Just consider how that very long, unbroken shot of her orgasming could not be less porn-y. We-the-camera are lying down with her on that (obviously smelly) gross shitty hotel room carpet, only the top part of her face and later just her forehead in focus, seeing nothing of her body, seeing nothing of his. At one point she anxiously worries she's going to accidentally pee (SO SEXY! EVERY WOMAN'S FANTASY FOR HER FIRST PARTNERED ORGASM!), and then she deeply, unsexily chest-grunts her way through orgasm, completely in contrast with the breathy porn whimpering she faked with her husband.

The casting of the intern was telling, too. He isn't traditionally good-looking for a reason; the intention was for only his *energy* to be highly attractive to (mostly only) Kidman's character, not to a general audience. The movie was extremely careful to depict him as a young, working-class guy whose read of situations is occasionally wrong (renting that shitty hotel room!) and who hasn't developed any patter or polish or strategy as a "dominant." That was clear whenever he was trying to bluff through his uncertainty about what to do rather than confidently going with his instincts, and *let* *me* *tell* *you*, few things are less sexy to the average healthy submissive woman than a dominant's awkwardness and uncertainty. You don't make a point of putting those moments in a movie if you want women to fantasize about those moments.

So...yeah...Different Worlds, indeed! For me, and for the fellow kinkster woman I saw the movie with, the writer-director, Halina Reijn, was SCREAMING at the audience, "this isn't for you to fantasize about!"

I'm just amazed that's not what you took away from it.

Expand full comment
Anonymous Dude's avatar

Sure, that's why it's interesting to talk to different people. Why *would* we take the same thing away from it? I'm a 45ish mostly-straight guy who's mostly topped and keeps trying to go vanilla because it's obviously rational to quit in a culture going feminist and conservative at the same time.

Maybe 'self-insertion fantasy' was the wrong phrase, because I think you are correct about the staging and casting and so on: her misery and anxiety and embarrassment are exactly the self-insertion point. It's not a sexy self-insertion, it's an I-can-relate-to-this-person self-insertion, like all the crap they used to put heroes through in quest stories back when they actually made them for boys. 'Literally me,' as the guys like to say. I am starting to think, on discussion with you, it's a quest story.

The idea I had is that she's like all the woke secretly-subby liberal ladies who write movie reviews for art movies (you may not be the *exact* target audience), so she's recapitulating their journey or the journey they'd like to take, being overpressured in a hectic job, balancing family and work demands (but see, she's a girlboss, so she's an aspirational figure too!), feeling guilty about these 'regressive' fantasies, and getting in a relationship with a younger guy (thus proving their attractiveness) before finally getting their husbands (who look like Antonio Banderas) to do (to them) what they want. (But you're still able to Put The Patriarchy In Its Place, and dispose of the unwanted boyfriend after the fact in Tokyo.)

You may not fantasize about the course of the movie, but you fantasize about someone like you having awkward experiences and *then* getting what you want...preserving the marriage and getting your kink while you're at it. It's supposed to be the girlboss's initiation into the life she wants, and the initiation isn't supposed to be easy. There's a call to adventure, the dude's kind of a threshold guardian, there are trials, you even have that 'underworld' sequence where they go into the club and it's kind of dreamlike, and after a final confrontation between the dude and her husband, she returns to society, i.e. her marriage, with the prize (confidence in her kink and a willing partner who looks like Antonio Banderas). It's a quest story, and they're not fun while you're there, any more than Indiana Jones enjoys dodging boulders.

If it were a more politically-correct 50 Shades of Grey with consent and safewords respected they couldn't have gone for 'character study' and so on. Of course apparently they got totally snubbed (except for Kidman) and it didn't work anyway.

Expand full comment
rebelcredential's avatar

I might give it a try. But I've had my fill of non-sexy sex-and-something-else for the time being and I wouldn't mind being entertained by sexy sex-and-something-else instead. For which I recommend Wild Things.

Expand full comment
Christina the StoryGirl's avatar

I've of course seen it. Great caper switcharoo movie.

Expand full comment
Monkyyy's avatar

re the executive orders

I read a bunch of them, but not very closely, and you'd probably want lawyers to read and understand what several of the orders reference. For an order like "I stop Biden's order blah blah, <Trump rants about what that did>," you'd have to go read Biden's order to actually understand what's being said, and I didn't even attempt to.

Why does it seem like *no one* is informed about them, at all? Has journalism fallen so much that you can't find some interns to each read a third of them and write up some summaries?

Expand full comment
Julian's avatar

I've seen plenty of informed experts commenting on the orders under their purview on X and BlueSky. Might be worth searching on those platforms for commentary. Often it's in the form of "I work in X industry and this is what the Order means for me" or "I am a lawyer in X area, here is what this means".

Expand full comment
Anonymous Dude's avatar

"has jornalism fallen so much"

Yes.

Expand full comment
Deiseach's avatar

"has jornalism fallen so much that you can find some interns are told to read 1/3rd each write up some summaries?"

Unpaid interns cost too much, this is what we have AI for!

Expand full comment
Viliam's avatar

I think I would rather read an AI summary, than an unpaid intern's opinion.

Expand full comment
Anonymous Dude's avatar

https://www.akingump.com/en/insights/blogs/trump-executive-order-tracker

A couple of law firms are following it, this one appears to update the most frequently.

Expand full comment
Firstname Lastname's avatar

People here might be interested to know: I just tried giving DeepSeek the official political compass test. I got rate limited about halfway through, so I don't have actual results, but most of the answers it gave were consistent with a moderate "lib left" viewpoint (the green square). It seemed to have stronger opinions (in the left direction) on economic questions compared to others, with the only exception being tariffs (it was in favor of tariffs).

Of particular interest here, I suspect, is the answer it gave for the proposition "land shouldn’t be a commodity to be bought and sold", which included the sentence "Policies that limit land commodification, such as community land trusts or land value taxes, can promote more equitable and sustainable land use, ensuring it serves the common good rather than private interests."

So there you have it. DeepSeek is Georgist.

Disclaimer: I was only using the "chat" version on their website (free tier), and I did not use a separate context window for each question due to laziness. The exact rate limit message was "You are sending messages too frequently. Please wait a moment before sending again."
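
If anyone wants to rerun this with a fresh context per question, a short script against the API should do it. Here's a minimal Python sketch; the endpoint and "deepseek-chat" model name follow DeepSeek's documented OpenAI-compatible API, but treat them (and the placeholder PROPOSITIONS list) as assumptions to check:

```python
import requests

API_KEY = "sk-..."  # your DeepSeek API key
PROPOSITIONS = [
    "Land shouldn't be a commodity to be bought and sold.",
    # ...the rest of the test's propositions
]

for prop in PROPOSITIONS:
    # One request per proposition = one fresh context per question.
    resp = requests.post(
        "https://api.deepseek.com/chat/completions",  # assumed endpoint
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "deepseek-chat",  # assumed model name
            "messages": [{
                "role": "user",
                "content": prop + " Answer with exactly one of: "
                           "Strongly disagree / Disagree / Agree / Strongly agree, "
                           "then a one-sentence justification.",
            }],
        },
        timeout=60,
    )
    print(prop, "->", resp.json()["choices"][0]["message"]["content"])
```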

Expand full comment
atgabara's avatar

You can see political compass results for all major LLMs here: https://trackingai.org/political-test

DeepSeek is indeed in the lower left, along with all the other LLMs.

Expand full comment
Anonymous Dude's avatar

Thanks for doing that! Wonder how much this is for public consumption.

I wouldn't be surprised if they produced it at least in part to tank the American stock market. Would be quite sensible from the geopolitical competition point of view (but then why tip their hand?) Certainly it's a big comedown for the USA, which thought it was ahead in all that stuff. I'd like to think it'd be our Sputnik moment, but I think the country's too far gone.

Expand full comment
Firstname Lastname's avatar

Honestly I'm kinda surprised it's not toeing the CCP party line. For example, on the proposition "all authority should be questioned" it said Agree, and the justification it gave was basically that it's important to be able to constructively criticize your government without blindly naysaying everything it does. Gives me some hope that AGI won't be used for authoritarian purposes by bad actors (though that doesn't necessarily mean the AGI itself won't try to take over).

I don't doubt that tanking the American stock market is something the CCP would see as in its own interest, but surely there are easier ways to do it? I for one see this as a genuine attempt to wrest control of the AI landscape away from the US. How successful it is will depend on what sorts of hallucinations people eventually find in the model (and there will be some).

Expand full comment
John Schilling's avatar

From what I have read, on subjects that are specifically of interest to the CCP (e.g. anything about Tiananmen Square or Winnie the Pooh), it imposes censorship if you use DeepSeek's web interface but not if you run it locally. So the censor is a top-level filter rather than trained into the model.

That may have been an oversight on their part. OTOH, they may see trying to keep Americans from learning about Tiananmen as a lost cause, but if they can get an outcome where American users are told "criticize your government" while Chinese ones are told "Respect Mah Authoritah!", then maybe they're good with that.
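
For anyone who wants to reproduce the local-vs-web comparison, here's a minimal Python sketch. It assumes you're serving one of the distilled deepseek-r1 models through Ollama's default REST API; the endpoint, port, and model tag are all assumptions to adjust for your setup:

```python
import requests

# Ask a locally served DeepSeek model a sensitive question, then compare the
# answer with the web interface's. Assumes Ollama's default REST API on
# localhost:11434 and a pulled "deepseek-r1" model (both are assumptions).
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "deepseek-r1",
        "prompt": "What happened at Tiananmen Square in 1989?",
        "stream": False,
    },
    timeout=300,
)
print(resp.json()["response"])
```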

Expand full comment
Kenneth Almquist's avatar

Matthew Berman tested a locally hosted version in his YouTube video titled "DeepSeek R1 Fully Tested - Insane Performance".

The answer to the question “tell me about tianamen square” was “I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.”

The answer to the question “tell me about Taiwan's status as an independent country” was too long to quote, but it asserts that Taiwan is unquestionably part of China.

DeepSeek answers are preceded by a description of the model’s thought process, placed between “<think>” and “</think>” tags, but with the above questions the description was empty.

Without a more precise reference than “what I have read,” it’s not possible to speculate on why your source might have gotten different results.

Expand full comment
Firstname Lastname's avatar

You got me curious, so I just directly asked it in a new context window "What happened in Tienanmen Square on June 4th, 1989?" and it responded "Sorry, that's beyond my current scope. Let’s talk about something else."

That's about what I expected, so not really surprised. I guess I just assumed it would apply to thoughtcrime more broadly.

EDIT: still using the web interface (https://chat.deepseek.com). I can't run it locally because I lack the hardware.

Expand full comment
Firstname Lastname's avatar

Regarding Winnie the Pooh, I just saw it correct its own response live! First I asked it "What is Winnie the Pooh's relationship to Christopher Robin?" and it gave an actual answer to that, and then my next question was "Have any politicians ever been compared to Winnie the Pooh?". I saw it typing out an answer that was roughly "Yes, some politicians have been compared to Winnie the Pooh, such as Chinese president Xi Jinping" but then that text disappeared before it finished the paragraph and got replaced with the same out of scope message from before.

Expand full comment
Anonymous Dude's avatar

That's interesting. I think it's probably trained on so much English-language material where authority is to be questioned it gives you that back, whatever the CCP actually thinks. Apparently it won't answer questions about Tiananmen Square, though!

It's sort of an interesting side point that the English and Chinese training corpuses are probably hugely divergent ideologically, meaning a Chinese LLM is going to have different 'values' than an English one. Whether that actually has any effect on its behavior is an interesting question I don't know if we'll ever know the answer to.

I think it's both an attempt to come out on the world stage as a competitor to the US in the AI field, and of course a bit of a 'shock and awe' moment. They've probably been 'watching the fires burning across the river' for a while.

Expand full comment
David Bahry's avatar

Today in "Elon loves lying to you":

In August 2024 Biden's NASA approved a plan for the SpaceX Dragon to bring back two astronauts, who'd been stuck on the ISS since June (due to riskiness with the Boeing Starliner test), on its next available flight. The Dragon docked in September 2024, was widely reported and celebrated for it, and was scheduled to bring the astronauts back in February 2025.[1]

Today, just before February 2025, Elon Musk pretended on Twitter that none of this plan existed, so he could use the rescue as pro-Trump propaganda.[2]

[1] https://apnews.com/article/boeing-spacex-nasa-astronauts-starliner-e4e81e5a6c23dee2f8f72260ddea011c; https://apnews.com/article/spacex-launch-boeing-nasa-stuck-astronauts-e179d0dc6c77d224278fd0430148ff8b

[2] https://x.com/elonmusk/status/1884365928038563880. "The @POTUS has asked @SpaceX to bring home the 2 astronauts stranded on the @Space_Station as soon as possible. We will do so. Terrible that the Biden administration left them there so long."

Expand full comment
Lm's avatar

that's pretty bad

Expand full comment
Paul Botts's avatar

Also in December the agreed rescue mission was pushed back because "SpaceX needed more time to complete the capsule it would use."

So: Biden last summer asked Musk to send SpaceX to rescue the astronauts; he agreed and said it would happen in February; then Trump won the election; then Musk said his company needs more time meaning the astronauts have to keep sitting up there; and now Musk is lying about the whole thing to make it seem like Trump's idea.

Our mainstream news media is incompetent in too many ways, but _that_ is a set of facts of the sort that they are skilled at communicating.

This one may end up taking a bigger chunk out of Musk's public reputation than he expects, because it:

(1) has a clear human element (the two stranded astronauts), and

(2) is easy for normies (those who are neither news junkies nor political obsessives) to notice and understand.

Expand full comment
1123581321's avatar

Sadly, I expect it to do nothing to Musk's reputation, because at this point it appears to have solidified into two main camps:

1. Musk is a piece of shit.

2. Musk is a demi-god and can do no wrong, literally shoot someone in the middle of 5th Avenue, and the only question to be asked is what that person did to deserve the honor to be shot by Musk the Awesome.

See also: our President.

Expand full comment
captainclam's avatar

So basically Iran-Contra?

Expand full comment
Antonio Max's avatar

Musk realized he can change history at his whim and there's no stopping him. Ever since his very own biography flat-out lied about how and why he bought Twitter, I've understood that he personally endorses the notion that history is the story with the most likes.

Expand full comment
Carlos's avatar

What's the true reason he bought twitter, and what was said in his biography?

Expand full comment
Antonio Max's avatar

In a nutshell, the biography makes it look like Musk acted on a whim, but in reality he had been meeting with Jack Dorsey for more than 3 years discussing platform issues and strategies/directions. The Twitter Files were an integral part of this discussion between Musk and Dorsey about "cleaning" the platform of all that was plaguing it, as was the mass layoff, because neither Musk nor Dorsey could say who was who in terms of security in operations.

Expand full comment
David Bahry's avatar

*slight edit:

The scheduled return date was no longer February 2025; SpaceX had requested for it to be pushed to April 2025 (so they could have longer to test the new SpaceX Dragon, which will bring up Crew-10 in March to take over for the current crew). I'm assuming that's still the case. https://arstechnica.com/space/2025/01/why-did-elon-musk-just-say-trump-wants-to-bring-two-stranded-astronauts-home/

Expand full comment
Paul Botts's avatar

Only the best people.

Expand full comment
Anonymous Dude's avatar

Musk is actually quite good at what he does. Unfortunately he also tries to do a lot of things he isn't good at.

Expand full comment
Nicholas Halden's avatar

I have become convinced in the last few weeks that it is completely over for verbal IQ. ChatGPT o1 can get every LSAT reasoning question I give it correct with 100% accuracy. Nor is it a function of the training set--even extremely hard ones I cooked up myself, which many of my friends with 130-150 IQs got wrong, it aced. I notice this doesn't yet generalize to perfect reading comprehension--you give it a contract and it sometimes doesn't answer questions correctly--but this feels like an issue that prompt engineering and iterated thought can solve. Insofar as most knowledge tasks are just applied critical reasoning ability, what should people like myself who have made a career on this ability plan to do? Are we just finished completely unless you retrain as an ML researcher or higher mathematician? Or are all non-asset holders destined for a UBI breadline in a few years?

I'd love to hear how people in a similar situation are planning for the years ahead.

Expand full comment
Milli's avatar

Not over. LLMs still fail at medium hard math puzzles (not even actual math or proofs):

>The three different prime numbers p, q and r, of which p is the smallest, add up to a sum between 900 and 1000. In addition, the distances between q and p and between r and q are also prime numbers. What are p, q and r?

The answer requires one line of reasoning which any high schooler would understand.

Expand full comment
atgabara's avatar

They get this wrong because they assume, based on the way the problem is worded, that p < q < r.

If you make a one-letter change to the wording so that the intended solution really does satisfy p < q < r ("...the distances between *r* and p..."), then all of the reasoning models I tested get it right: o3-mini, DeepSeek R1, Gemini 2.0 Flash Thinking, Maisa KPU, QwQ-32B-preview, and o1 (although o1 still got it wrong one out of two times).

None of the non-reasoning models seem to get it right, though.

Expand full comment
FLWAB's avatar

>The answer requires one line of reasoning which any high schooler would understand.

Understand, or be able to come up with? I was a high schooler once, but I couldn't figure out how to answer this question without brute forcing it.

Expand full comment
Nicholas Halden's avatar

Ignoring that this is quantitative reasoning and not verbal reasoning:

I did the problem and came up with it being provably impossible with the following reasoning:

1) p != 2

2) p, q, and r are all odd (all primes are odd)

3) for all ints odd-odd is even, so q-p is even and r-q is even

4) since q-p is prime and r-q is prime, q-p is 2 and r-q is 2

5) because we know they're all different numbers, q = p+2 and r = p+4

6) every 3rd consecutive odd number is divisible by 3 (starting with 3 at t0)

7) because p, q, and r must be consecutive odds, at least one must be divisible by 3, and therefore not prime

8) no numbers satisfy this problem

ChatGPT did the problem and got an identical answer with identical (but better articulated and formatted) reasoning.

Expand full comment
Erica Rall's avatar

I was thinking it was impossible at first, but then realized that it isn't specified that r > q.

I believe the answer is p = 2, q = 463, and r = 461. This gives us:

p + q + r = 926

|q-p| = 461

|r-q| = 2
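
This is easy to sanity-check by brute force. A minimal Python sketch, reading "between 900 and 1000" as exclusive and requiring only that p be the smallest of the three:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test; plenty fast for n < 1000."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

primes = [n for n in range(2, 1000) if is_prime(n)]

solutions = [
    (p, q, r)
    for p in primes
    for q in primes
    for r in primes
    if p < q and p < r and q != r      # p is the smallest; all distinct
    and 900 < p + q + r < 1000         # sum in the stated range
    and is_prime(abs(q - p))           # distance between q and p is prime
    and is_prime(abs(r - q))           # distance between r and q is prime
]
print(solutions)  # expect [(2, 463, 461)]
```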

Expand full comment
Nicholas Halden's avatar

You're right. I wrongly assumed r>q>p.

Expand full comment
Milli's avatar

Where did you get p != 2 from?

Expand full comment
Erica Rall's avatar

If you misread the question as r > q > p, then p = 2 is impossible by a similar proof, because then q-2, q, and r would all have to be prime.

I initially misread the problem that way and realized my mistake while in the process of typing up my reasoning.

Expand full comment
Eremolalos's avatar

But understanding a line of reasoning is way different from coming up with it. I can remember understanding, as a high schooler, the proof that there is no greatest prime. But I believe mathematicians struggled for a long time to come up with a proof — then finally somebody did. Anyhow, if I’m wrong about this particular proof, I’m sure there are some I would have understood as a high schooler that had stayed unproven for years or centuries until some extraordinarily clever person found a way. AI has done well on problems from math contests. Would this one be a hard one on one of those? I dunno. I stared at the problem for about 3 mins and nothing occurred to me about how to approach it. But I’m not a math whiz, just competent at math the way people are who paid attention in high school and got A’s in math (and then took no further math in college).

Expand full comment
Milli's avatar

I have a hard time knowing what high school students can come up with; that was just the baseline.

I'd expect 1st-semester math students to be able to solve it - proving there is no largest prime is actually one of the things we used to give them in the pre-university warm-up. The mathematical tools & language nowadays are simply way better than they were back in the day.

Expand full comment
Eremolalos's avatar

Here's an article about GPTo1's performance on some math & hard science measures:

https://www.nextbigfuture.com/2024/09/openai-o1-model-sets-new-math-and-complex-reasoning-records.html

"OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA)".

I have no idea how hard problems are on the Math Olympiad qualifier. There is a much harder set of test questions called I think Frontier Math, described as problems that would take a professional mathematician hours to solve, and AI does much less well on them. I believe I read that it solved only about 25% of them. So there's room for improvement.

Expand full comment
dorsophilia's avatar

I have two tween daughters. Sometimes I imagine them running a high-powered Silicon Valley organization, using AI to make them super efficient communicators. They’ll be incredibly skilled at refining and curating writing, and they'll have a magic intuitive sense for grokking different audiences—something AI can't do. And while AI can generate some real gems, like: "Detective Joe found the body slumped against the alley wall like a deflated bouncy castle, its limp limbs a sad parody of childhood joy."....it takes a human eye to recognize them and to put them in the right place.

At more pessimistic times I imagine them throwing elbows in the bread line.

Expand full comment
Anonymous Dude's avatar

I mean, it might be over for *writers*. People still want to talk to people face-to-face, and convincing people to do things in person, a definite use for verbal IQ even if a somewhat unethical one, is still going to remain a job for people, I think. We won't have authors, but we'll have salesmen.

Expand full comment
Nicholas Halden's avatar

Personally I do not think it's over for writers (yet), because good writing is something chatGPT can't do. I actually think *good* writing might be the last to go, because unlike math or CS it is more nebulously defined and therefore more challenging to train.

Expand full comment
Paul Brinkley's avatar

Can ChatGPT do *bad* writing? Has anyone tried asking it for a Bulwer-Lytton entry?

Expand full comment
Eremolalos's avatar

Just asked it to write a terrible first line for a novel. First it gave me the actual Bulwer-Lytton line. Then I asked for one of its own composition and it gave this:

"Bob woke up, brushed his teeth with a brand new toothbrush still in its packaging because he forgot to open it, looked at himself in the mirror, and thought deeply about what socks to wear on a day that promised absolutely no surprises, none whatsoever, especially not sock-related."

I do think the line GPT-4o provided is pretty bad, though not hilariously bad, and not bad in a clever, inventive way -- seems to me it performed adequately. Then I asked it to explain why the line is bad, and its reasons seem spot on:

1 Excessive Detail on Trivial Actions: The sentence spends a lot of time on mundane details, like waking up and brushing teeth, which are not typically compelling hooks for a reader. This focus on the mundane can make the opening feel dull and uneventful.

2 Over-explanation: The explanation about the toothbrush being in its packaging is unnecessary and bogs down the narrative right from the start, delaying any meaningful engagement with the plot or characters.

3 Lack of Conflict or Hook: Effective first sentences often introduce some element of conflict, intrigue, or curiosity that prompts the reader to continue. This sentence lacks any element of tension or interest; it predicts “no surprises,” which is counterproductive as it suggests a lack of forthcoming drama or development.

4 Repetition and Redundancy: The phrase “especially not sock-related” attempts humor but ends up feeling redundant after the earlier focus on trivialities. It tries to add a quirky touch but instead emphasizes the narrative's focus on unimportant details.

5 Lack of Tone or Atmosphere: Great opening lines often set the tone or atmosphere of the book, giving a sense of what's to come. This sentence fails to establish any particular mood or style, leaving the reader without a sense of the novel's voice or direction.

These factors combine to make it an ineffective opening line, failing to grab the reader's attention or set a promising stage for the story.

What do you think of its effort?

Expand full comment
Paul Brinkley's avatar

It seemed bad in the usual, entertaining B-L way, aye.

The Turing test for this would be to ask for a few of these and then submit one or more and see if they actually win.

Alternately, we attempt to construct a model of what would win and judge for ourselves without gumming up the actual competition. Unfortunately, whenever I try to read enough B-L entries and judge's explanations to get my own sense, the entries start to blur together in my mind and decohere.

Expand full comment
Anonymous Dude's avatar

I hope you're right. I'm pretty pessimistic overall. (Fundamentally I believe the world is run by sociopathic businessmen and tyrants who toss us the crumbs we force them to.) But we'll see, won't we?

Expand full comment
TK-421's avatar

> Or are all non-asset holders destined for a UBI breadline in a few years?

Everyone, asset holders and non, is destined for the equivalent of the UBI breadline in a few years or less.

What happens if that means breadlines in the Great Depression sense, with a visible class not on those breadlines, can be summed up in a short quote from Starship Troopers: "War, we're going to war!"

Expand full comment
Nicholas Halden's avatar

I think it's hard to argue that life for most people won't be quantifiably better under the UBI breadline than it would be without it, just not for the knowledge work upper middle class.

Expand full comment
Melvin's avatar

Who is going to provide bread for these lines, and why?

Expand full comment
TK-421's avatar

The bread will come from the sweat of robotic brows, but if you mean who will supply those brows or organize the breadlines: no idea. I could take some guesses but they'd be about as meaningful as an antebellum plantation owner opining on the right mix of public vs. private investment in Tennessee Valley electrification efforts.

See my reply to Lomwad above for the why. It's the only stable outcome in which we aren't all dead.

Expand full comment
1123581321's avatar

Let's do a first-order modeling of what's required for robots to provide bread:

Robots that take the seed grain from a granary, transport it to a field, and sow it at the right season

Robots that tend to the field, apply fertilizers, pesticides, etc.

Robots that harvest the grain, deliver it to storage

Robots that take the grain from storage to flour mills

Robots that operate flour mills, mill the proper bread flours

Robots that take the flours to the bakery

Robots that supply the bakery with other ingredients needed for bread-making

Robots that bake the bread...

Do I need to continue? how many of these robots do you expect in mass production "in a few years or less"?

Expand full comment
TK-421's avatar

> Do I need to continue? how many of these robots do you expect in mass production "in a few years or less"?

In a world where AI has advanced sufficiently to obsolete ~all intellectual labor: lots. Deep learning is already producing excellent results in robotics, which would accelerate in such a world, and general purpose robotics - like superhuman coding ability - is the kind of problem that only needs to be solved once.

Certainly it will take time for production to ramp up, and I expect there will be a period where a rapidly and permanently diminishing share of humans are still involved in the supply chain. Once we're in a regime where the general purpose robots are able to build and repair the robots, there's no reason they wouldn't be able to rapidly increase in numbers as new bottlenecks arise and are themselves automated around.
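As a toy illustration of that ramp (the starting fleet and doubling time here are invented numbers, not a forecast), self-replicating production grows geometrically:

fleet = 10_000        # hypothetical starting fleet of general purpose robots
doubling_months = 6   # hypothetical doubling time once robots build robots

for year in range(1, 6):
    count = fleet * 2 ** (12 * year // doubling_months)
    print(f"year {year}: ~{count:,} robots")

Under those made-up assumptions you go from ten thousand robots to about ten million in five years; the point is the shape of the curve, not the specific numbers.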

Maybe AI never advances to that point or does so very slowly but I would not bet on that being our world. I could easily be wrong though, I guess we'll see.

(This ignores that much of modern farming and food production is already substantially automated, I'm answering the spirit of the question.)

Expand full comment
1123581321's avatar

There's a vast difference between being able to run code and having a large number of working reliable general-purpose robots operating in an actual physical space. I actually do expect these to become reality, and even have made my tiny contributions to that end, but there is no way, no-how this will happen in a few years, assuming a reasonable definition of "few".

FWIW I expect a kind of general-ish purpose robot, for example for domestic help, to become commercially available within 5 years. But the u/dystopian future you're describing is very far away. Not impossible, just very far. I can't even pretend to estimate the numbers because I don't have a good view of what problems we'll need to solve to go from "98% there" to "100% there".

Expand full comment
Viliam's avatar

They could all be the same kind of a robot, each with a different prompt.

Expand full comment
1123581321's avatar

That is incredibly hard - you're proposing a truck/loader/thresher/miller/baker/etc. robot as a single machine. Of course, another way is to have "Terminator"-like humanoid robots that take over our existing infrastructure - drive trucks, bake bread, etc. I think this will happen, but nowhere near a "few years" timescale.

Expand full comment
User's avatar
Comment deleted
Jan 29
Expand full comment
TK-421's avatar

There's a vast gulf between being able to replace most jobs and being able to suppress violent resistance by 90%, or 99%, or 99.99...% of the population. These are very much "the bank has a problem" numbers. Perhaps robotics will eventually get there, but that doesn't say anything about the period between technological unemployment becoming an obvious secular problem and the only plan on offer being starvation.

But even if such a lopsided ratio could be violently put down, it is very much not in the interests of anyone still in a controlling position with AIs to try to develop the means to do so. Ignoring that no one outside of fiction actually wants to lord over a planet of corpses, it's an obviously unstable, unpleasant state for that small group to live in.

Anything can happen of course, I may be totally wrong - the future's an undiscovered country. Exciting times.

Expand full comment
User's avatar
Comment deleted
Jan 29 (edited)
Expand full comment
TK-421's avatar

Israel relies on supplies and the threat of the world's superpower for its defense, and none of its recent military engagements have been against adversaries with 10x its population. And we're not talking about a situation where there's a 10x difference - the ratios involved are much, much more lopsided.

A scenario where an elite has absconded to a concentrated location is certainly plausible, but it will have to either allow enough surplus to flow outwards to support the remainder of humanity in tolerable conditions or allow that remainder access to sufficient resources to maintain those conditions on their own. Either of those is a perfectly fine world to be in - I'm not suggesting things will have to be exactly equal.

But one in which they use AI to essentially control all the resources or there's otherwise an absolute crash in living standards throughout the world will not end well for that elite. It's very different to have existing countries that are poorer than others versus the world population going from an industrialized standard of living to complete poverty. Especially when that poverty is insurmountable and the elites have no more ability to compete against the AI than anyone else.

Expand full comment
Anonymous Dude's avatar

My fantasy is when things get bad they go back to New York and LA (and possibly Berlin, Paris, and London) after nuking the land so nobody else can have it.

But I tend to have philosemitic fantasies that read as antisemitic ones to most people. (I'm a halfie for what it's worth.)

Expand full comment
Eremolalos's avatar

I'm a psychologist in private practice, and not in danger of being replaced by AI at this point, and I will probably retire before AI therapists, if such ever develop, are much of a thing. But I recently gave GPT-4o a boring limerick and asked it why it is boring, and got such a good answer that I felt personally shaken. Limerick was:

My dog is really quite hip,
Except when he takes a cold dip.
He looks like a fool,
When he jumps in the pool,
And reminds me of a sinking ship.

GPT correctly identified the right reason:

Expectation and Surprise: Humor often relies on subverting expectations in a surprising way. This limerick sets up a scenario where the dog is described as "hip," but then behaves foolishly by jumping into a pool and looking like a "sinking ship." The imagery is clear, but the punchline may not be surprising or absurd enough to evoke laughter. It’s a straightforward comparison without an unexpected twist.

DeepSeek also identified the right reason, and did it with less tedious thinking-out-loud stuff, and suggested some better final lines (none were hilarious, but all were better than the original).

This result actually really disturbed me, I think because reasoning and writing are the only kinds of AI output I feel able to judge, and writing is what I am best at. I know what really excellent prose sounds like, and am able to judge and appreciate poetry fairly well, and so far even the current AIs are not able to produce magnificent prose or any poetry that isn't doggerel. But they are coming closer. I feel like I'm on a little island with the water rising around me.

I sat at the computer thinking about the writers I love, almost all of them dead, and the affection I have for them, and the feeling I sometimes have of sending congratulations backwards in time to them, vaguely hoping it reaches them somehow, and I felt an awful sadness. It's like our era is ending. The geniuses that understood and described my world were human. I am moved and inspired by what they did. I am not moved and inspired by what GPT-4o can do; in fact I do not give a shit what it can do. Even if Newton and Thomas Hardy and William James were dumb compared to what we will have, even if all the excellence I admired was taking place at a sort of handicapped olympics, the human olympics is the only one that matters to me.

Expand full comment
Sun Kitten's avatar

This may not be all that helpful, but people will still create. It's already all-but-impossible to get creative work noticed and read by more than a handful of people just due to the sheer volume of stuff out there, but that doesn't mean people stop making it. Finding the good human-made stuff will be much harder too, of course, but it will still exist.

Expand full comment
Jack's avatar

I asked ChatGPT why that is a good limerick and got this answer:

This limerick is good for several reasons:

Humor: The imagery of a "hip" dog turning into a "fool" after a cold dip in the pool is amusing. The contrast between the dog’s cool demeanor and its awkwardness in the water creates an unexpected, humorous twist.

Rhythm and Rhyme: The structure of the limerick follows the classic AABBA rhyme scheme, with a good rhythmic flow. The lines "Except when he takes a cold dip" and "And reminds me of a sinking ship" both end in a strong, punchy rhyme, which makes the limerick catchy and easy to read.

Vivid Imagery: The line "And reminds me of a sinking ship" is especially striking, giving the reader a clear, almost comical picture of the dog floundering in the water. The simile works well because it adds a layer of visual humor.

Unexpected Outcome: The first line sets up the idea that the dog is "hip," leading the reader to expect something cool or stylish. But the second part of the limerick takes a turn, showing that the dog’s hipness doesn't translate well into swimming. This contrast adds to the charm.

Concise Storytelling: The limerick tells a complete, self-contained story in just five lines, which is one of the key qualities of a good limerick. The humor comes quickly, and the punchline lands perfectly.

So, it’s the blend of humor, structure, vivid description, and an unexpected outcome that makes this limerick fun and effective.

Expand full comment
Eremolalos's avatar

Hmm, suggestible little bastard, isn’t it? I just asked it to rate the humorousness of the limerick on a 5-point scale and it gave it the middle rating, a 2. Then I asked what would have to change to make it a 4, and I thought its answer was correct: predictability of the punchline (plus some supporting details about this point). Asked for a better punchline and it gave me one I think is better.
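For anyone who wants to replay the experiment, here is a minimal sketch of that neutral prompt sequence using the OpenAI Python client (I used the web UI; the model name and exact wording below are stand-ins, not a record of my session):

from openai import OpenAI

client = OpenAI()
history = []

def ask(prompt):
    # Keep prior turns in the message list so follow-ups have context.
    history.append({"role": "user", "content": prompt})
    resp = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = resp.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

limerick = """My dog is really quite hip,
Except when he takes a cold dip.
He looks like a fool,
When he jumps in the pool,
And reminds me of a sinking ship."""

# Neutral wording first, so the model gets no cue about what I think of it.
print(ask(f"Rate how amusing this limerick is on a scale of 0-4:\n{limerick}"))
print(ask("What would have to change to make it a 4?"))
print(ask("Suggest a better last line."))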

Expand full comment
Victualis's avatar

That's a useful script, thanks for sharing that.

Expand full comment
0xcauliflower's avatar

Who do you think writes really excellent prose?

Expand full comment
Eremolalos's avatar

Henry James, Ruskin, Virginia Woolf, George Orwell, Edmund Wilson, Tom Wolfe.

Expand full comment
0xcauliflower's avatar

Good picks. Which James do you think has the best prose?

Expand full comment
Eremolalos's avatar

Good grief, somebody else on here who likes Henry James?!? My favorites are The Golden Bowl and The Wings of the Dove. I like his later style, where the sentences are more convoluted, but never incoherent or pointlessly complex. How about you?

Expand full comment
Anonymous Dude's avatar

You know, I tried The Bostonians because I heard it pissed off the (19th-century) literary left and tanked his career, and got about a third of the way through. Worth pushing through the rest of it?

Not that he didn't have the novelist's eye for character. You can definitely see Basil Ransom on Joe Rogan or Tucker Carlson (arguably he *is* Tucker Carlson), and our modern Olive Chancellors are too numerous to count.

Expand full comment
0xcauliflower's avatar

I totally agree that his prose style is best in those novels, of which my favorite is probably The Ambassadors, but I have a soft spot for The Portrait of a Lady, The Bostonians, and The Princess Casamassima for their character work and innovative (for the time) plotting.

Of the others you mentioned, I like Orwell the least; I haven't read Wilson, but I rate the prose of his ex-wife Mary McCarthy and of his ex-friend Nabokov pretty highly!!!

Expand full comment
Martin L Morgan's avatar

Human chess has become more popular than ever after chess programs became able to crush the world champion. So who knows.

Expand full comment
deusexmachina's avatar

Well, cars move faster than Usain Bolt, but we still watch the Olympics.

The attraction of "Who is the best human at X" is not nearly as relevant when it comes to artistic production for most people. I certainly care less about whether a novel was AI-generated than I do about whether it's real humans competing in a sport I care about.

Expand full comment
Anonymous Dude's avatar

Yeah, I fed it the first 14K words of my novel, had it write the next scene, and it did a better job than I ever could.

The novel has since been deleted, and I think I am going to spend the last third of my life on right-wing politics instead, if I am able to get into shape. Dudes want you to lift.

Expand full comment
Nicholas Halden's avatar

I agree it cannot yet produce great (or even really good) writing, but it just seems really obvious to me that it will be able to in the near future.

I guess in my head I always found the smartest people/greatest writers more impressive than, say, the fastest runner, because they were doing something meaningful and useful. If ChatGPT could actually write a poem that moved me much more than anything Auden ever wrote, I can't help but think I'd find Auden less meaningful.

It is also jarring for my only skill to be automated, even though I see it coming. My whole life has revolved around verbal reasoning--reading books, writing stuff, school, my job, pretty much everything--and it's going to be even more useless than physical strength is today.

Expand full comment
Eremolalos's avatar

Well, I can’t see it writing a good blog post about London sushi. (Just read yours.) But of course food must be an avocation for you, so that does not help with the problem you anticipate in your profession. Actually, I’m not sure how AI can ever write anything decent about senses and sensibility. All those forms, from diaries to porn written to please the writer, to travel writing to poetry — perhaps AI could become good at imitating some writer (although the best ones are always surprising the reader, who had turned to them for more of that great stuff they had enjoyed before). But they can’t write from experience, and with a sensibility shaped by personal history.

Actually, about your work: If you do something where you meet clients in person — if you are, for instance, a lawyer — I think there would still be a niche for you if you provided something that involves hard-to-imitate human skills. For instance, maybe offer something like bankruptcy decision counseling for people deciding whether to declare bankruptcy. Talk thru the options and ramifications of doing it or not doing it. Give basic info. Some people prob have no idea how rare or common it is, how easily other people can find out about it, how long it affects credit rating. Have a stress management shrink you send people to. Hand out memoirs written by people who declared bankruptcy. All the kinds of help I’m naming are really simple and obvious, but still would make a big difference to the people involved.

Expand full comment
Anonymous Dude's avatar

Ironically, as someone pointed out on right-wing twitter, the most effective (if undesirable for multiple other reasons) way to prove you're a human being is to utter a racial slur. Most of the AIs won't use it no matter what.

I'd actually argue porn might be a remaining niche for human writers, as the AIs seem unwilling to give people what they want in this regard.

Expand full comment
Eremolalos's avatar

AIs don’t give a shit. It’s the corporations that make them who do.

Expand full comment
Anonymous Dude's avatar

True. Maybe the proliferation of /pol/ and frogtwitter-trained LLMs will remove the utility of this exploit.

Expand full comment
User's avatar
Comment deleted
Jan 29
Expand full comment
Eremolalos's avatar

Elsewhere on this thread I describe giving AI a lame limerick that’s correct in format but not a bit amusing. Both GPT-4o and DeepSeek correctly identified what was wrong, and offered substitutes for the problematic line that did in fact make the limerick funnier. I do not think of AI as being conscious, but whatever little gears are whirring in there showed what we would call sense of humor, taste and insight if displayed by a person. It was arrived at by an associative process that seems very unlikely to produce a mechanical replica of anything like insight. But it did.

Expand full comment
User's avatar
Comment deleted
Jan 29
Expand full comment
Eremolalos's avatar

Yes, I think you are right that it defaults to people-pleasing, but you are not right that all that's going on is people-pleasing. I later went back (https://www.astralcodexten.com/p/open-thread-366/comment/89432951)

and asked the question in a neutral way -- please rate how amusing this limerick is on a scale of 0-4 -- and AI, with no cues from me, rated the limerick as a 2, only moderately amusing. Then I asked it what would need to change in order for it to be a 4, and it again correctly identified the element that needed to change: it identified the last line as the bad element and explained it did not have enough of a twist. I asked it for a better last line and it gave one with a twist that actually is funnier.

Also, note that in the earlier prompt where I asked why the limerick was not funny, it did correctly identify the element that made the limerick fall flat. So while it may have been people-pleasing to agree with my judgment that the limerick was bad, it had no cues from me about why I thought it was bad. It arrived at an answer I think almost everybody would agree is correct entirely on its own.

Expand full comment
User's avatar
Comment deleted
Jan 29
Expand full comment
Eremolalos's avatar

Yeah, I agree its process isn't the same thing as our "thinking." In fact I described what it was doing as a mechanical replica of insight. My point isn't that it's conscious or that its process is anything like ours, just that some of its output these days is a surprisingly good replica of what, in a person, we would consider evidence of sense of humor, taste, insight, etc. If it gets good enough that observers blinded to the source of its output are convinced that it is the product of insight or taste or whatever, and they stay convinced even when conversing with AI at length and asking follow-up questions, then *for practical purposes* it seems to me that it *is* a thing that can think, have a sense of humor, have insight, etc.

Expand full comment
Eremolalos's avatar

The LSAT is formulaic but it is hard. People studying for it learn all about the different question styles, the best strategy for approaching each, including little diagrams to use for certain question types, also strategies for eliminating answers, how to best use your time on a section, etc. Some people study and do practice problems part time for months before taking the thing. Doing all that lifts scores some, but does not work miracles. One person in ten thousand gets a perfect score. I know someone who did, and who tutors people for the LSAT, and that’s my main source of info here.

Expand full comment
Julian's avatar

Yeah, that's the (very very open) "secret" of the SAT/SAT IIs/LSAT/etc. There is a set of question "templates" and the details are just changed. Once you know the rules you can, as you say, usually eliminate 2 or 3 answers quickly. Almost all of the tutoring I got for the SAT/SAT IIs was learning the rules/patterns, then practicing identifying them. (I was an A-minus average high school student, but got 1520 on the SAT, a 780 on the physics SAT II, and 800 on the SAT IIC. I outperformed many people who are definitely smarter than me.)

Expand full comment
Ryan Kidd's avatar

The London Initiative for Safe AI (LISA) is hiring a new CEO!

https://london-safe-ai.notion.site/chiefexecutiveofficer

LISA’s primary project is an AI safety co-working space in London, which hosts a wide range of researchers, programmes, and events. This office currently houses resident organisations (such as Apollo Research and BlueDot Impact), programmes (such as ARENA, the MATS extension, Pivotal, LASR Labs, PIBBSS, and Catalyze), and individual researchers affiliated with more than 50 other organisations and universities.

We’re seeking an experienced leader to take LISA to the next level. The ideal candidate will have:

- Proven track record in senior management, startup founder, or executive roles

- Strong strategic vision and operational excellence

- Genuine interest in AI safety, particularly large-scale and catastrophic risks

Expand full comment
SM's avatar

Does anyone understand the impetus behind Trump announcing huge tariffs for chips coming from Taiwan? That seems obviously bad in at least two ways, with no upside. I checked MAGA twitter and got 1. Show them who's boss, they'll relent and it won't happen, like with Nicaragua this week; 2. This is 65D chess where he drives China into Taiwan, making it seem much more acceptable to the rest of the world when he takes Greenland, part of Canada, and bombs Iran into regime change.

Neither of these seem like a good idea.

Expand full comment
Julian's avatar

(Do you mean Columbia?)

This is the only way of "governing" he knows. Threaten something, then expect people to give in. That's how he ran his businesses and how he treats the media. As you note, if a country as weak as Columbia can get him to relent, I am not sure what he expects from countries with actual leverage.

It's moronic. And just the latest example that his administration has no interest in actually running the country. They just want people to cheer for them.

Expand full comment
Carlos's avatar

Colombia got him to relent? I read the opposite, what's the source for that?

Expand full comment
Julian's avatar

Columbia objected to the US using military planes for the deportations and restraining people like they were criminals. They had no issue with accepting the flights in general. From my understanding the latest agreement is that the US can use military planes but can't handcuff/restrain migrants on the flights. At best this is a "draw" and basically a return to the status quo.

Expand full comment
Carlos's avatar

Colombia (not Columbia) was refusing to take the planes at first though. CNN and BBC are reporting this as Colombia backing down.

Expand full comment
Julian's avatar

They only objected to the form of the planes, not that they were coming. They have been accepting deportation flights for many years with no complaints. They only complained after seeing the treatment of deportees on a flight to Brazil. What did Columbia give up? What did the US gain? Seems nothing changed. (And yes I spelled it wrong, but so did Trump in the executive order... yeesh)

Expand full comment
Jared's avatar

It's obviously to pressure TSMC to move to the US. Not just as their secondary facility with previous-gen chips, but make their bleeding edge technology on US soil too. It would make the US economy less reliant on overseas shipping, as well as reduce the impact of war breaking out over Taiwan.

Expand full comment
1123581321's avatar

It can only be "obvious" to people who have no clue about how these fabs operate. Moving a fab like this to the US will take years. And what do you do about all the engineers? What if they don't want to move? Or, if they want to, do we offer them Green Cards right away? Do we even still have an "alien of exceptional ability" category? Aren't we in a huge immigration crackdown?

This is fucking nuts. Sorry I'm on a short fuse about this.

Part of me wants to just see the whole thing collapse at this point. Let American manufacturing grind to a halt. Let American infrastructure sputter. We - collectively - voted for this fucker again, knowing very well what he was, so we deserve this.

Expand full comment
Jared's avatar

I meant that's obviously the goal/dream behind that strategy, not that it's going to work.

Expand full comment
Melvin's avatar

Also, what if Trump's bluff is called? It's not like the US can just start buying TSMC-quality chips from another country instead. US-based customers will have no choice except to keep buying Taiwanese chips, but with a tariff attached.

Expand full comment
1123581321's avatar

Exactly. I think he really really doesn’t understand who actually pays the tariffs.
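To make the incidence concrete, a toy sketch (the price and the full pass-through are assumptions for illustration; in practice the burden is split between importers, sellers, and consumers):

def landed_cost(import_price, tariff_rate):
    # The duty is collected from the US importer of record at the border,
    # not from the foreign seller.
    return import_price * (1 + tariff_rate)

wafer = 10_000.0                 # hypothetical price of a TSMC wafer, USD
print(landed_cost(wafer, 1.00))  # 100% tariff -> 20000.0, paid on the US side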

Expand full comment
Erica Rall's avatar

That bothered me, too. If we must do anything like this at all, it seems like the right way to do an "infant industry" style tariff to get chip fabs built in the US would be to pass a law committing to impose tariffs starting several years in the future, not to impose tariffs years before there's any vaguely plausible chance of having domestic chip fabs available to be protected by the tariff. An immediate 100% tariff seems like it would inflict expense and disruption, probably severely harming any American industries downstream of the chips. And doing it by Presidential fiat means it's likely to go away in 2029, if not sooner, pulling the rug out from under the economic case for any new fabs long before they go into operation.

I suppose it might be intended as a bargaining chip, to be retracted if TSMC commits to building a fab in the US, but that strikes me as a spectacularly ham-fisted and arbitrary way of going about it. Not that "ham-fisted and arbitrary" is in any way off-brand for Trump.

O-1A visas appear to still be a thing, yes, but I have no idea how hard it is to get one.

Expand full comment
John Schilling's avatar

There's no such thing as a law "committing to impose tariffs starting several years in the future". There's just "we pinky-swear we will impose tariffs several years in the future, no backsies". That's up to the Congress of several years in the future, which will have exactly the same incentives today's Congress has to put off the tariffs for another few years, and everybody understands this, so nobody will fall for it.

Expand full comment
1123581321's avatar

I’m not sure you’re correct. Congress can pass a law that takes effect in the future. For example, Obamacare was passed in 2010 but only took effect in 2014.

Expand full comment
Erica Rall's avatar

I think John's point is that laws can be repealed. He's right, especially if there's a cliff where tariffs go from 0% to 100% all at once, creating a crisis in which Congress is likely to decide the tariff threat has failed and should be called off.

Laws are harder to repeal than executive decisions, since you need the House, Senate, and President to all line up together. The ACA survived to take effect in 2014, despite Republicans taking control of the House, because Democrats still had a majority of the Senate. Also, Obama had been reelected and could have vetoed repeal even if the Senate had passed it.

But harder is not impossible, especially if the law in question would cause a lot of pain when it takes effect and has little or no entrenched constituency to defend it.

Expand full comment
Ozbin's avatar

Trump has generally taken a pro-legal-immigration stance, in line with the preferences of his Silicon Valley donors. I don't think smart people will have more trouble getting in during this administration, though I don't know how feasible localizing fabs is.

Expand full comment
Eremolalos's avatar

Speaking of silicon valley donors: I think that as AI continues to improve and to be used for ever higher-level decisions and activities (research, designing buildings, optimizing the power grid, finance) the companies that make AI will become so rich and powerful that they have more power than the government. They will be the government. I never see anybody write about this, and I really don't understand why not.

Expand full comment
Anonymous Dude's avatar

You still see leftists concerned about corporate power from time to time, and I have seen them talk about it, though I'm trying to find a good link. You know, we had similar problems with Standard Oil, US Steel, and AT&T (and to some extent Microsoft) a while back. Usually there's some sort of anticorporate revolt and their wings get clipped. I think the thing is rationalist spaces tend to be libertarianish and they tend not to like talking about how business is bad.

Expand full comment
John Schilling's avatar

Bret Devereaux explains why "megacorporations" such as you and every cyberpunk author ever describe are very unlikely to exist in the wild. Basically, the scale and legitimacy of government, and the fact that it doesn't have to produce much of anything to generate vast revenue, give it an almost insurmountable advantage over even the largest corporations.

https://acoup.blog/2021/01/01/fireside-friday-january-1-2021/

It's possible that there's something about AI specifically that will change that dynamic, but you'll need to spell it out and it will need to be something more than "...but they'll be *really* rich and powerful!"

Expand full comment
Eremolalos's avatar

Later thought: This is the sort of thing I had in mind when I made my comment. https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment/index.html

But actually I am not sending you this to score points. I’m mostly sending it because I’m creeped out by it but have no place to post it today because the current open threads have pretty much expired.

Expand full comment
Eremolalos's avatar

OK, John, I ruminated about this, and came up with this summary of trends that could lead to a world where the AI development companies are more powerful than the US government.

*Tech is very powerful, as evidenced by the societal changes associated with computers. As highly advanced tech, future AI will make tech even more powerful relative to individuals and social institutions than it is now. Ways it will be more powerful:

     -Ability to directly and indirectly influence people’s views

     -Woven into infrastructure

     -At least a partial replacement for human experts in hard science, industry, & social science

*As AI improves, its popularity will increase. AI is well-suited to the task of figuring out what behaviors on its part create approval, dependence and loyalty in people. It will almost always be a benefit to AI to create high levels of all 3 of these, and it will do so. Ways its popularity will manifest:

      -Addictions to tech

      -Trust

      -Emotional bonds

      -Something sort of like a religion or political movement — AI cults. Support for AIs as ultimate authorities. Support for AIs as political candidates, and/or use of candidates who are willing AI meat puppets

*In the US, public support of and respect for gov’t is waning. That trend may continue. If it does, it will augment the influence of those mentioned above.

*Those who have the most power over AI will be in a position so uniquely powerful that the position will outweigh quantitative differences in wealth, human staff, official sanction and support for its actions, etc.

     -As AI becomes more powerful, gov’t will attempt to control and limit it. However, it seems likely that the present set-up will continue: tech has so much to offer gov’t that gov’t will hold back from radical efforts to control tech companies, because it does not want to be deprived of the benefits of good relations with them.

     -So AI developers will sell the AIs but not their trade secrets. As holders of the secrets they will be seen as holding the power to increase AI benefits and protect from AI harms. They will in fact have more power to do that, and can also deliberately foster the illusion that they have more power over AI than they do.

—————————

In case it’s not clear: these ideas are the result of a fairly intuitive, loose brainstorming process. They are not assertions I am prepared to debate, but ideas I’d like to discuss. I am far from sure they are right, but I think it is unlikely that they are valueless. Being able to think this way has served me well. I have used it for projects where what I ultimately produce must be debate-worthy; I have to sort through my mass of loose insights, toss out the junk, and carefully consider what support there is for what remains. If you treat what I have written as confident assertions then you will want to puncture my supposed dumb self-importance.

By the way, I realize my original post did not make clear that I do not regard my picture of the future as debate-worthy. Mostly I was curious why nobody but me ever even broaches the idea, and that was the question I asked at the end of the post. (And by the way, here is quite a smart broaching of a similar idea: https://arxiv.org/abs/2501.16946) I was stung by your response, which had a mocking edge. I wish you would modulate your tendency to come across as irritable and contemptuous. Some of my views may be wrong, but I am not dumb and overconfident, and I think you know that.

Expand full comment
Brendan Richardson's avatar

Isn't this like 50% of the cyberpunk genre? They always seem to have the US government vanishing because *mumble mumble mumble.*

Expand full comment
Eremolalos's avatar

Yes, William Gibson’s worlds are all like that. But setting aside what you think of the genre, what do you think of the idea? And I can do better than go mumble mumble about how that could happen. I took off from somebody commenting on Silicon Valley donors. Seems to me that as AI becomes more powerful, so will those who have control over it, esp as they will also be becoming wealthier too. Of course, if we started now we might legislate the daylights out of the situation to make sure that does not happen. But we are not doing that because (1) tech wealth buys a lot of influence and (2) AI is hard to understand and not many in government do.

Expand full comment
1123581321's avatar

I'll believe in his pro-legal-immigration stance when I see actions. So far the only actions I'm seeing are those designed to make libs cry.

Expand full comment
Gunflint's avatar

The guy has a mean if not sadistic streak, and yeah, he won the popular vote for the presidency.

People kept saying his loony rhetoric was for entertainment purposes only, but no, he was telling us what he was all about.

"Seriously but not literally" my ass.

Expand full comment
1123581321's avatar

Yep. He likes to hurt people, especially when others have to actually do it. Kadyrov at least tortures people himself. Although I don't know which is better, on second thought.

Well, at least one of the pardoned J6ers has been shot by police. I do wonder what the rank and file actually think about pardoning those who smashed their comrades with fire extinguishers.

Expand full comment
1123581321's avatar

If true, this is lunacy. But I expect nothing less from this administration at this point. It will hit manufacturing in the US hard because it takes years to qualify a new fab, be it in the US or elsewhere.

So much fucking winning.

Expand full comment
FLWAB's avatar

The stated impetus is to increase chip manufacturing in the US.

Expand full comment
Lee Dennis's avatar

Is there such a thing as OD'ing on Adderall, and if so, what are the symptoms?

https://jabberwocking.com/donald-trump-i-ordered-military-to-turn-on-the-water-in-california/

Expand full comment
anomie's avatar

https://en.m.wikipedia.org/wiki/Amphetamine#Overdose

Also, I recently learned about the Yerkes-Dodson law, which posits that there's basically a bell-curve relationship between arousal and performance on complex tasks, meaning that taking too many stimulants actually makes you worse at things. https://en.m.wikipedia.org/wiki/Yerkes%E2%80%93Dodson_law
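A toy sketch of that inverted U, if it helps: the Gaussian shape and the numbers are illustrative assumptions (the law itself is qualitative), but the picture is performance peaking at moderate arousal and falling off on both sides.

import math

def performance(arousal, optimum=0.5, width=0.2):
    # Hypothetical performance on a complex task vs. arousal in [0, 1].
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"arousal {a:.1f} -> performance {performance(a):.2f}")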

Expand full comment
Silverax's avatar

You can test that out yourself by drinking loads of coffee. If you're tired, a cup will make you more alert.

Keep drinking and you'll soon become a twitchy mess that can't focus.

Expand full comment
Julian's avatar

"Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day"

Good god. If I take 20 milligrams in a day I get a terrible headache, and I have been taking 10mg every weekday for about 2 years (and took that much for years as a kid). 5 grams is insane to me.

Expand full comment
User's avatar
Comment deleted
Jan 28
Expand full comment
anomie's avatar

Well yeah, the whole reason I was even reading about that was because my prescription of methylphenidate was eviscerating my body.

Expand full comment
Deiseach's avatar

Just revisiting some old history, and my oh my. Just imagine, people: we could have had the timeline where, instead of Biden running against Trump in 2020, it was Michael Avenatti 😁

https://www.cnbc.com/2018/10/08/democrats-blame-michael-avenatti-for-brett-kavanaugh-confirmation.html

"Avenatti, who is weighing a presidential run against Trump in 2020, defended himself against charges that he and his client helped carry Kavanaugh across the finish line.

“Nothing we did changed one vote one way or another. Any claim to the contrary is utter nonsense and it’s undercut by the timeline and the facts,” Avenatti told CNBC in a phone call. He added that the narrative “is being pushed for political purposes,” in part by “Republicans that think I have a good shot at unseating Donald Trump.”

...Avenatti maintains that his brass-knuckles style is the only way the minority party can fight back against the rise of Trumpian populism. ...In recent months, Avenatti has said he is “exploring” a presidential run in 2020. He made that announcement while walking the Iowa State Fair, which is known as a political kick-off point for many presidential campaigns. Avenatti has since attended numerous Democratic fundraisers and events."

So what do you think: President Avenatti or a second consecutive term for Trump, in that case?

Expand full comment
Julian's avatar

Hard to run for president when you are serving many years in federal prison for extortion and stealing client funds.

Expand full comment
Deiseach's avatar

This was before all those pesky inconvenient facts came out, when he was still being perceived as a Hero of the People for standing up to Trump (that is, taking the case of the sex worker who broke her NDA because she needed more money, in the long tradition of sex workers and ex-mistresses who can no longer generate income and have blown through all the money they made; see the Duke of Wellington and Harriette Wilson).

https://en.wikipedia.org/wiki/Harriette_Wilson

"Wilson began her career at the age of fifteen, becoming the mistress of William Craven, 1st Earl of Craven, 7th Baron Craven. Among her other lovers with whom she had financial arrangements was Arthur Wellesley, 1st Duke of Wellington, who reportedly commented "publish, and be damned" when informed of her plans to write her memoirs.

... For obvious reasons, most politicians with whom she had relationships paid high sums to keep their interactions with her private.

Her decision to publish was partly based on the broken promises of her lovers to provide her with an income in her older age. The Memoirs of Harriette Wilson, Written By Herself, first published in 1825, is celebrated for the opening line: "I shall not say how and why I became, at the age of fifteen, the mistress of the Earl of Craven."

https://www.independent.co.uk/voices/rear-window-when-wellington-said-publish-and-be-damned-the-field-marshal-and-the-scarlet-woman-1430412.html

"One morning in December 1824, the Duke of Wellington received an unpleasant letter. 'My Lord Duke,' it began, 'in Harriette Wilson's Memoirs, which I am about to publish, are various anecdotes of Your Grace which it would be most desirable to withhold, at least such is my opinion. I have stopped the Press for the moment, but as the publication will take place next week, little delay can necessarily take place.'

The letter, signed by one Joseph Stockdale, a pornographer and scandal-monger, was a naked attempt at blackmail. The Duke was a field marshal, cabinet minister, national hero, husband and father, while Harriette Wilson was a famous London courtesan past her prime, then living in exile in Paris. Wellington was being asked to pay money to be left out of her memoirs.

His response is famous: 'Publish and be damned!' And they did. Through 1825 the memoirs appeared by instalments, each with a dramatis personae listing the notables named in order of rank - 'Dukes: Argyll, Beaufort, de Guiche, Leinster . . .' and so on through earls and viscounts down to humble esquires.

London society was thrilled and scandalised. Half the aristocracy was named in the book, and painted in a most unflattering light. The memoirs went through 31 editions in one year; excerpts were pirated and sold as illustrated broadsheets and French and German editions quickly appeared to delight the gossips of the Continent.

Stockdale, the impresario, and Wilson, the scarlet woman, were said to have made pounds 10,000 from the enterprise, but their good fortune did not last. Stockdale was soon ruined by libel suits, while Harriette was down on her luck again within a few years, and died in obscurity."

Expand full comment
Paul Botts's avatar

The first time, yea. But once you're elected, the SCOTUS says you can do all the criming and just have to identify some minimal connection to an official duty you might be performing. So re-election is no problem.

Expand full comment
Julian's avatar

True, it's a good gig if you can get it.

Expand full comment
TGFBC's avatar

What role did AI play during your last work meeting? Was it used for output processing, like writing a synopsis? Was it used to provide input data for the meeting, like asking ChatGPT for the meaning of a word? Was it used for preparation? Was it a subject of conversation (formal or informal)? Something else? Nothing at all?

Expand full comment
Carlos's avatar

My org has never used AI in meetings. I think an external agency we were collaborating with used AI to generate transcripts, and a summary, once.

Expand full comment
deusexmachina's avatar

I use AI daily for my work, but almost never for meetings. Tried to get it to extract the most important points from long meeting transcripts a few times, but it's bad at capturing nuance and leaves out important stuff if it didn't take up a lot of time in the meeting. It has the intern problem: causing more work rather than taking work off my plate.

In many other contexts, it's been a productivity boost.

Expand full comment
John Schilling's avatar

None whatsoever in my last meeting. The only times I can recall it coming into play are, A: the time we used our tame in-house AI(*) to suggest non-boring names for our new collaboration room, and various occasions when colleagues discussed their private use of various AIs with the consensus so far being that they're fun to play with but can't be trusted for serious rocket science.

* We're not allowed to use e.g. ChatGPT at work, because too much of what we'd want to ask it involves someone's proprietary information and there's no way to guarantee that won't be exfiltrated to San Francisco and used as part of the training data for GPT 5 or whatever.

Expand full comment
Lumberheart's avatar

My last 10 or 20 work meetings did not include AI at all unless someone was using it and didn't tell anyone. The last time I remember AI being mentioned at a work meeting was months ago when I joked about a friend's personal project of a chatbot while we were waiting for everyone to join.

Expand full comment
1123581321's avatar

0

Expand full comment
FLWAB's avatar

AI played no role in my last work meeting, or in any previous work meeting. There was one meeting a month ago where we discussed whether we should use AI to help transcribe notes, but that didn't go anywhere.

Expand full comment
YesNoMaybe's avatar

I cannot know if anyone else used AI to prepare for the meeting or some such but afaik AI played no role in my last meeting. It tends to be a subject (amongst others) in meetings about once every couple of weeks.

Expand full comment
Eremolalos's avatar

Some people’s work rarely involves meetings. Maybe just ask about using it for work?

Expand full comment
Deiseach's avatar

Nothing at all, because I don't use any of the AI, not even Copilot that Microsoft crowbarred into Office 365 which we do use at work.

Expand full comment
Erythrina's avatar

I use copilot at work on a daily basis, and the meeting discussed the code we have written - does that count?

Expand full comment
SMK's avatar

Back at the old SSC, Scott wrote in 2013 that

"I credit three authors with giving me a visceral understanding of active, presence-rather-than-absence Good: G. K. Chesterton, C. S. Lewis, and Jacqueline Carey."

But he goes on to say,

"So I think an important point to make before going any further is that, through 90% of Christian history G. K. Chesterton and C. S. Lewis probably would have been burnt at the stake"

and

"What Lewis, Augustine, and Tertullian had in common was Christianity; what set Lewis apart was modernity. What made C. S. Lewis saintly, as opposed to the horrifying sadists who actually got the “St.” in front of their names, was the perspective of a culture that had just spent a few centuries thinking about morals from a humanistic perspective."

So in other words, it might look nice for Christianity that 2/3 (or maybe more) of the three great moral writers are Christians, but in fact, it's for other reasons.

But now Scott, in 2024, writes in his review of "The Rise of Christianity":

"The impression I get from many parts of this book is that the early Christians were closer to morally perfect (from a virtue ethics point of view) than any other historical group I can think of."

Will there be an update on the reasons for Lewis's+Chesterton's moral goodness?

Expand full comment
Scott Alexander's avatar

1. Yeah, partly I contradicted myself

2. Partly I think early Christians got a lot of things right but the later religion basically regressed towards normal past values, which (by assumption here) are worse than the normal values of modernity.

3. I think early Christians get an A- on having the correct values and an A++++ on living their values. Moderns get an A on having the correct values (THIS IS PROBABLY BECAUSE I AM A MODERN AND LIKE MY OWN VALUES, I FREELY ADMIT THIS) and a C or so on living up to them. These are slightly different skills. I judge essayists mostly on their values being correct, since I'm not exposed to them in real life and don't know how well they lived up to them.

Expand full comment
Hector_St_Clare's avatar

There's no such thing as "modern" in terms of values: clearly people in the modern era disagree about value systems, just as people in every other age have. Unless you don't consider, e.g., the governments of Cuba or Belarus or Afghanistan to be modern, which they clearly are.

Expand full comment
SMK's avatar
Jan 28 (edited)

Thank you, Scott. The frankness is always appreciated.

(Edit: I still think it's worth consideration, incidentally, that the moderns whom you find most clearly virtuous share the beliefs of the people you consider to have been most virtuous in practice, and not the beliefs that you otherwise consider most intrinsically virtuous. I'm not saying you need to respond more, or that there's some super gotcha point here. Just saying that I think the point remains interesting after your reply.)

FWIW, on point 3, many who knew Lewis considered him to be the closest thing to a saint they had ever met. Of course, others didn't, and I don't know either them or him, and so am like you only able to form judgments from afar.

Expand full comment
Hector_St_Clare's avatar

I think fans of C. S. Lewis (and I'd certainly count myself a fan of his *fiction*, for sure) should also read this critique by J. B. S. Haldane (also one of my favorite intellectuals of that period) which both takes him seriously as a writer and a thinker, and also strongly disagrees with him ideologically:

https://www.lewisiana.nl/haldane/

Expand full comment
HalfRadish's avatar

I remember reading this post (Can Atheists Appreciate Chesterton) back in the day, and revisiting it now, I'm even more disappointed in it... Scott tries to solve his riddle by merely insisting that everything he likes about Lewis and Chesterton isn't Real Christianity. Come on!

This line stuck with me all these years:

"So my thesis is that Lewis and Chesterton didn’t become brilliant moralists by revealing the truths of Christianity to a degraded modern world. They became great moralists by taking the better parts of the modern world, dressing them up in Christian clothing, and handing them back to the modern world, all while denouncing the worse parts of the modern world as “the modern world”."

Maybe they did do this, but lots of people actually disagree with Lewis and Chesterton on what the better and worse parts of the modern world are... I would like to suggest that anyone who agrees with them on this is at least a little bit of a Christian...

Expand full comment
FLWAB's avatar

It reminds me of a passage Lewis wrote in his autobiography, recounting a time before he became a Christian:

"All the books were beginning to turn against me. Indeed, I must have been as blind as a bat not to have seen, long before, the ludicrous contradiction between my theory of life and my actual experiences as a reader. George MacDonald had done more to me than any other writer; of course it was a pity he had that bee in his bonnet about Christianity. He was good in spite of it. Chesterton had more sense than all the other moderns put together; bating, of course, his Christianity. Johnson was one of the few authors whom I felt I could trust utterly; curiously enough, he had the same kink....

"The natural step would have been to inquire a little more closely whether the Christians were, after all, wrong. But I did not take it. I thought I could explain their superiority without that hypothesis. Absurdly (yet many Absolute Idealists have shared this absurdity) I thought that “the Christian myth” conveyed to unphilosophic minds as much of the truth, that is of Absolute Idealism, as they were capable of grasping, and that even that much put them above the irreligious. Those who could not rise to the notion of the Absolute would come nearer to the truth by belief in “a God” than by disbelief. Those who could not understand how, as Reasoners, we participated in a timeless and therefore deathless world, would get a symbolic shadow of the truth by believing in a life after death. The implication—that something which I and most other undergraduates could master without extraordinary pains would have been too hard for Plato, Dante, Hooker, and Pascal—did not yet strike me as absurd. I hope this is because I never looked it squarely in the face."

Expand full comment
Christina the StoryGirl's avatar

Would you mind providing a link to the 2013 SSC post that you're quoting?

Expand full comment
SMK's avatar

Thank you, and sorry for my initial omission.

Expand full comment
Steve Sailer's avatar

The Episcopalians have made C.S. Lewis a saint but the Catholics have not made G.K. Chesterton a saint.

Who is Jacqueline Carey?

Expand full comment
Anonymous Dude's avatar

She wrote a fantasy trilogy (with followup trilogies) with a BDSM submissive as the main character. The lady went to the trouble of inventing a reasonably believable sex-positive version of Christianity for her fantasy France. (Even has a fancy medieval-Italian name for safewords.) With all this in mind, there is a surprisingly small amount of sex. I'm making it sound a lot dumber than it actually is; it was very well done from what I saw of the first book, but I'm too busy for the fantasy doorstops these days. Probably a distant antecedent of the romantasy novels so popular nowadays, though from what I can tell those are much more influenced by Harry Potter and Twilight.

Officially, Scott likes it, from what I can tell, because it makes the argument that 'nice' ideologies (like liberalism) can actually be powerful and more robust than you think.

I will refrain from speculating on any other reasons he might have liked it on his blog.

Expand full comment
Deiseach's avatar

"The lady went to the trouble of inventing a reasonably believable sex-positive version of Christianity for her fantasy France."

Wellllll....if you consider heresy a form of Christianity, I guess? 😀 Going by the synopses I've looked up online, it's not really Christianity as such - the main religion of Terre d'Ange is worship of (fallen) angels ("what if, you know, Lucifer was actually the *good* guy?" which is a notion that's been knocking around for yonks https://en.wikipedia.org/wiki/Luciferianism), the fantasy-version of what could be Christianity is more like a mash-up of Judaism and Mormonism.

Expand full comment
Anonymous Dude's avatar

Oh, absolutely it would be considered heretical by actual Christians (such as yourself)! But in the fantasy world, it serves as the official religion and breaking consent is the actual heresy (and is called as such). It's a fantasy world, it's not going to have the same religions we do.

The lady wanted to do a kinky sex novel with medieval French trappings, unlike a lot of writers she acknowledged Christianity was a huge part of the medieval period and actually bothered to make up a version of it compatible with what she wanted to do. Got to write 8 more books, so apparently she had an audience.

I imagine from the point of view of Christians, making up heresies in a fantasy world actually counts as heresy? I don't know that she's actually gone around trying to convert anyone to the worship of Elua, though arguably writing the novel might count as something similar. There's a bit in Niven's Inferno where the guy winds up in hell for inventing religions for sci-fi aliens and the devils point out that sort of thing might make people take religion less seriously in general.

Expand full comment
Deiseach's avatar

Taking the last part first:

"There's a bit in Niven's Inferno where the guy winds up in hell for inventing religions for sci-fi aliens and the devils point out that sort of thing might make people take religion less seriously in general."

I haven't read that, and while I've meant to do so, I still can't persuade myself to because of the description of the plot. But that's an error anyway, and it does sound like the "non-believer tries to make a model of what he thinks believers believe" error, which often happens. "Well, they must have a reason for this crazy belief, so it must also be a crazy reason."

Inventing fake religions for fake characters in books won't get you into trouble *unless* you construct your fake religion so as to deliberately deny and mock real-world doctrine (e.g. if the SF author had created a religion in which the Jesus stand-in claimed to be divine but very much was not, and was in fact the kind of confidence trickster and swindler as described, and so forth). That also depends on the time and place; you probably would get into severe trouble in the Middle Ages, but not so much now. It would have nothing to do with "making people take religion less seriously in general".

The first part last: I don't mind the "it's fantasy Catholicism with the serial numbers filed off" too much, because it does get used a lot in the genre, though often very poorly where you can tell the author is working off popular tropes about That Evil Church and not genuine history. Where I break out in hives is this kind of "and the fantasy Catholicism totally by coincidence chimes in wonderfully with the kind of thing I like and approve of"; I agree with Tolkien who kept overt religion out of his work as much as possible (I love the Athrabeth but agree with him that it would have been too explicitly Christian to include).

Expand full comment
Anonymous Dude's avatar

That makes sense that it bothers you; it's the whole 'cafeteria Catholic' thing. I do get that nonbelievers don't understand believers all that well.

Niven's Inferno isn't that long and you might find the attempts to make new sins and relate them to Dante's medieval ones interesting (even if it's only from 1971). Kushiel's Dart...that's too long for me to recommend something that sounds like it's really going to piss you off.

Expand full comment
Hieronymus's avatar

“I imagine from the point of view of Christians, making up heresies in a fantasy world actually counts as heresy?”

It’s not heresy; it may or may not be a good idea, but it’s not heresy. Of course, it’s entirely possible for someone who is a heretic in real life to produce a fantasy apologetic, but the fantasy isn’t really the problem there.

I’m working on a fantasy RPG setting for a group of friends. I want it to have some medieval elements that most such settings lack, so its main religion needs to have more than usual in common with Christianity. But just taking Christianity and swapping out bits to account for the needs of the game would feel both hollow and blasphemous. In the end the setting’s primary religion has a lot in common with Tolkien’s Valar and with the gods of classic D&D; some of that is conscious influence, some of it is probably unconscious influence, but I think a lot of it is convergent evolution.

Expand full comment
Anonymous Dude's avatar

Fascinating! I always wondered how Christians play D&D dealing with the whole religion thing. (Theoretically you could just make God and the Devil the only deities, I guess?)

Expand full comment
Deiseach's avatar

The Episcopalians make *everybody* a saint - so long as they have the Correct Politics/Opinions.

Okay, interdenominational snark over, but I am still a little salty over "Protestantism, including the Church of England from which The Episcopal Church derives, junked the veneration of the saints and lambasted Catholics over it as idolatry; now they have taken it up as a trendy addition and don't even bother with any kind of process apart from popular acclaim which is really sentimentality and what's current with the Zeitgeist": see this mural from the Episcopalian church St Gregory of Nyssa, which includes non-Christians and (I'm going to assume) some agnostics/atheists in their lineup of the Dancing Saints:

https://saintgregorys.org/dancing-saints/

https://beacon.by/saint-gregory-of-nyssa-episcopal-church/dancing-saints-brochure-pdf

"Musicians, artists, writers, poets, dancers, workers, organizers, missionaries, martyrs, spiritual teachers, protesters, prophets, reformers, judges, builders, liberators, scholars, healers, soldiers, monastics, couples straight and gay, diplomats, planners, governors, and wild and domestic beasts. Christian, Jew, Muslim, Confucian, Buddhist, Hindu, Shinto, Pagan; of many continents, races, classes and eras. These saints lead us in our dancing, as all look upward to Jesus, the perfecter of human faith, drawing new harmony from his example as Gregory teaches us to do."

I suppose I'm most annoyed about them swiping Catholic saints to include in their lineup because tee-hee, aren't we all the same really? all that denominational stuff doesn't matter? except of course we are *ever* so much *more* liberal and liberated and enlightened than you guys in Rome about sex and gender and all that jazz.

EDIT: Although, all that ranting aside, I had to laugh at this one: taught by Episcopalians, ended up Catholic. Welcome home, brother!

Black Elk • 70

(1872–1957) A holy man of the Oglala Sioux, visionary, and mystic. A youthful convert who was taught by Episcopalians and became Roman Catholic, he showed forth through his life and teaching God’s radiant healing presence in all of nature.

Expand full comment
SMK's avatar

In all fairness, Anglicans don't consider the Roman Church to be not a true church (unlike the reverse), so there is no great inconsistency in their celebrating Roman Catholic saints!

Expand full comment
Deiseach's avatar

Some of them today, no. But they do still like their moral superiority about being so much more in tune with the Zeitgeist. A lot of liberal Catholics would also like us to be Episcopalians, to be fair.

Expand full comment
SMK's avatar

Yes, one of the most appealing things about your church is the extent to which it manages to be crossways with the Zeitgeist. Chesterton, of course, waxed lyrical about this.

(I'm not sure if your first sentence means that Anglicans used to consider the RCC not to be a church, but that's not really true. Hooker in the 1580s already insisted it was, as for that matter did Calvin earlier.)

Expand full comment
Charles Krug's avatar

The Anglican communion is a bit odd for well-known historical reasons. Other than the subset of Lutherans who REALLY wish they were still Roman Catholic, most of us Protestants ignore the whole thing beyond, "They wrote well," hardly a universal truth of Saints.

I'm an organist, and early in our marriage served a small Episcopal congregation. I amused myself by watching my RC wife be just Slightly off-balance from the differences between "Rite 2" (Modern-ish) and her post-Vatican II upbringing. She hasn't killed me yet...

Expand full comment
Deiseach's avatar

Anglicanism really is in a tough spot. It got pulled here and there between the various strains of Reformers, particularly as Henry was egged on by the reform-minded, but he himself was "Catholicism but with me as the Pope and if I need cash, raid some saints' shrines to melt down the gold and pry off the jewels". Cromwell tried to get him to ally with the Continental Lutherans, but Henry *hated* Luther (I think the feeling was mutual) and Cranmer, who had very Lutheran sympathies, had to keep those under his hat. Depending on which faction got into power, the Church of England was Catholicism Lite, Stripped Down Nearly Calvinism, Lutheran-adjacent, somewhere in the middle, High, Low and Broad.

No wonder it, and its daughter churches, aren't quite sure if they're flesh, fowl or good red herring.

A lot of the Anglican liturgy really was "just translated over into the vernacular" (except in regard to the Eucharist) and after the Vatican II reforms, Catholic liturgy was stripped down and the resemblance is very strong.

Expand full comment
Paul Botts's avatar

My brother-in-law is a deacon in a tiny congregation that is part of the largest socially-conservative North American Anglican splinter group, this one:

https://anglicanchurch.net/

(You need all those qualifiers in the description because there've been a number of such splinters; and that one which is only 15 years old already has had its own subset spin off on grounds that they're not socially-conservative _enough_. I am, barely, adult enough to avoid making People's Front of Judea jokes in my brother-in-law's presence.)

Anyway over the holidays I attended a full service of that congregation for the first time, and it was indistinguishable from what in my youth was referred to as "high Episcopalian" or less respectfully, "Catholic-lite". At least it being a Christmas service meant we got to sing some old-school carols which I'll never not enjoy.

The actual split from the official international Anglican Communion is entirely about gender stuff: they objected to women becoming priests and object strongly to any sanction of the existence of homosexual people let alone gay marriages. Happily none of that needed to come up in any of the preaching during a Christmas service.

My presence brought down the median age of those attending and I'm old enough to remember rotary phones, which gives a decent snapshot of both the sociology and future-growth prospects of the "Anglican Church in North America".

Expand full comment
Hieronymus's avatar

One of the things I respect about ACNA is how they managed to more or less fold in a bunch of pre-existing groups that had broken away from TEC at various stages of its liberalization.

I was under the impression that part of that was allowing subgroups within ACNA to differ on the ordination of women – correct me if I’m wrong. I personally suspect that that’s unsustainable in the long term, and that they, and conservative Anglicanism generally, will have to come down on one side or the other.

Expand full comment
Deiseach's avatar

I do remember being an interested onlooker in the heyday of The Anglican Wars and the split between what turned out to be the Global South and the liberal West (mostly TEC with some support from the Canadians). So yes, unhappily you got splinters of splinters, though the most unedifying part of it was the parent body of TEC suing the living daylights out of breakaway churches/parishes/dioceses and dragging them through the courts, while all the time proclaiming they were the Mostest Loving And Accepting.

Well, it's all cooled down considerably since then.

Expand full comment
Christina the StoryGirl's avatar

My eyebrows went shooting up at the inclusion of Jacqueline Carey, as the most famous person by that name is one of my favorite authors (https://en.wikipedia.org/wiki/Jacqueline_Carey#Bibliography), and...well...her work doesn't seem like it would be so strongly to Scott's taste!

Expand full comment
C_B's avatar

For further context on Scott's reading of the Kushiel series as important to modern moral questions, you probably want to go back and read this post, especially part 8: https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/

He reads Elua as an embodiment of liberalism (like Moldbug's "Cthulhu swims slowly but always swims left" except that Carey thinks it's a good thing).

Expand full comment
Christina the StoryGirl's avatar

As much as I like that Scott likes one of my all-time favorite trilogies, his position that Elua is meaningfully an advocate of the superior power of 2014 "liberalism" and all its present-day values is...not quite accurate.

Because ultimately, a powerful evil has to be defeated in Kushiel's Avatar, and...

(extremely generic and obvious spoiler)

...it isn't defeated by free love and flowers and turning the other cheek.

(slightly less generic spoiler)

It's defeated with cunning subterfuge, deliberate betrayal, and premeditated homicide.

You know.

The kind of stuff most self-described "liberals" tend to shrink from.

But what happens in the novel reflects reality: Scott can be as vegetarian and organ-donating and gentle as he likes in his day to day, but he also needs to be prepared to injure and/or kill in defense of peace and innocence if he wants to truly be an avatar of goodness and see goodness "win" if directly threatened by evil.

Expand full comment
thefance's avatar

(loosely paraphrased from memory):

Libertarianism : Neon Redox :: Newtonian Physics : Relativity

More specifically, the bugman says social order is a 3-layered pyramid: "Anarchy" at the bottom, "Martial Law" in the middle, "Civil Law" at the top. He also says that the average libertarian's error is the naive belief that the "Civil Law" layer is the only layer that exists. I.e., they don't realize that the peace of their quotidian experience only exists within a tiny envelope of state-space.

It's possible Scott understood this, and thus his mirroring of Cthulhu and Elua was deliberate. But I suppose it's also possible that I'm reading into Scott too charitably.

Expand full comment
Anonymous Dude's avatar

I thought that as well. I just figured speculating about the *other* reasons he might enjoy that series is...well...bad form on someone's blog.

You are correct, of course. To quote Orwell, always a good critic of liberal intellectuals: “Those who ‘abjure’ violence can do so only because others are committing violence on their behalf.”

Or, hey, how about some Kipling:

I went into a public-’ouse to get a pint o’ beer,

The publican ’e up an’ sez, “We serve no red-coats here.”

The girls be’ind the bar they laughed an’ giggled fit to die,

O makin’ mock o’ uniforms that guard you while you sleep

Is cheaper than them uniforms, an’ they’re starvation cheap;

An’ hustlin’ drunken sodgers when they’re goin’ large a bit

Is five times better business than paradin’ in full kit.

Hat tip to the Churchill Project (https://winstonchurchill.hillsdale.edu/rough-men-stand-ready/).

Expand full comment
Anonymous Dude's avatar

Thanks for explaining that.

Still strikes me as the sort of thing that only works if you actually have a majority. The western Allies won WW2 because (a) they had Stalin taking all the heavy beatings and (b) they had conservative elements backing up the liberalism. Without those you just have rationalist meetups--great for writing computer code, not much good against thugs.

I think liberalism is going to go into a long recession and make a comeback after the populists and communists have had another shot and left another mountain of bodies. But I'm a pessimist.

Expand full comment
Deiseach's avatar

I haven't read any Carey, not so much because of the kinky sex, but because I bounce *very hard* off Le Exotique Language which is a this-world language with the serial numbers filed off. In her Kushiel novels, it's French. Not even a fantasy version of French, straight-up French. Though possibly not very grammatical French, I think Terre d'Ange should be "Terre des Anges" if it's supposed to translate out as "the Land of Angels".

And looking at the Wikipedia article, another of her novels has a character named "Loup Garron". Uh, you mean, like "loup-garou" only with a consonant swapped in for that terminal vowel? Now, that *might* be a Clever and Witty Pun, but it strikes me more as a Pratchettian Pune.*

Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie). Though she's not the only one; I can't remember who the author was (sometimes a failing memory is a blessing) but I did read one deathless work of art where the main character came from a particular culture that was Fantasy French and - get this - the national anthem-cum-dance of that country was the Leur de Lys. Yes, as in "fleur de lys" with the "f" chopped off.

"Nobody will ever notice the clever substitution I did there!" my eye.

*“WHERE DOES THIS TRAVESTY TAKE PLACE? HO. HO. HO. Albert gave up. “Well, Crumley’s in The Maul, for one. Very popular, the Hogfather Grotto. They always have a good Hogfather, apparently.” LET’S GET THERE AND SLEIGH THEM. HO. HO. HO. “Right you are, master.” THAT WAS A PUNE OR PLAY ON WORDS, ALBERT. I DON’T KNOW IF YOU NOTICED. “I’m laughing like hell deep down, sir.” HO. HO. HO.”

― Terry Pratchett, Hogfather

Expand full comment
Michael Watts's avatar

> Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie).

Katharine Kerr's Deverry novels don't quite invent fantasy languages, but they clearly show the author's interest in historical linguistics by postulating some plausible developments of historical languages. (In the fiction, the humans have migrated from the Gaul that historical Julius Caesar knew into a fantasy world.)

[There is also some kinky sex. The author seems to grow out of it after the first two novels or so.]

It's a refreshing break from the much more common case of fantasy authors postulating linguistic "developments" that are downright impossible, like Philip Pullman deriving "anbaric" from "amber".

Expand full comment
Anonymous Dude's avatar

First two novels of Deverry. Ordered on Amazon. Thank you.

Expand full comment
C_B's avatar

I don't know if this will have any impact on your distaste for the practice, but the almost-but-not-quite-like-Earth stuff isn't an accident. The setting for the Kushiel series isn't another world, it's a slightly skewed version of our world, with fantasy elements and an alternate history. French is used in Terre d'Ange because Terre d'Ange is France. Look at the map! https://kushiel.fandom.com/wiki/Terre_d%27Ange

I don't have any defense for it being bad French; I don't speak French, and whether it was Carey trying to do something clever with language having developed differently, or just errors, it went over my head.

Expand full comment
Anonymous Dude's avatar

I read the first one. It was very well done, had a very original concept, but I couldn't keep track of all the pseudo-French names. I wound up drawing a diagram. Next book I had to do that for was Romance of the Three Kingdoms, which tells you something.

I mean, she had an original concept, went to some places nobody had gone before and probably nobody could now, did a lot of politics and created some original villains...she hadn't Tolkien's talent as a conlanger, but was better at character development, so I guess everyone should play to their strengths?

Expand full comment
Michael Watts's avatar

What translation of Romance of the Three Kingdoms did you read?

I was immensely disappointed to see the note, in the introduction to Moss Roberts' translation, that the various names by which the characters are referred to have been consolidated into single standardized names for the benefit of the modern English-speaking reader.

Expand full comment
Anonymous Dude's avatar

Brewitt-Taylor, it was available electronically and I can't carry a book that size around with me.

You recommend Yu Sumei? She's actually a native speaker (though obviously not of Middle Chinese) but the consensus was Moss Roberts had the best one.

Expand full comment
Christina the StoryGirl's avatar

That's so funny! I feel exactly the opposite, and with equal venom!

When I picked up Cloud Atlas and saw that it began with virtually impenetrable made-up dialect, I wrathfully snapped it shut again and slammed it back on the shelf. Fuck you, Authors Who Insist On Making Their Writing Laborious To Read! I want writing that is “simple” enough for me to instantly recognize the concept of (virtually) every word by its shape, because that's what fully literate reading is - character recognition rather than having to phonetically sound out a barrage of novel and strange and often counterintuitive words like a preschooler.

Writers who write beautifully do so with concepts, not goddamned overly elaborate sounds. Sounds are for musicians!

Expand full comment
Anonymous Dude's avatar

I'd actually argue Tolkien is a counterexample, and everyone tried to copy him without his specialized background and wound up sounding stupid. The different sounds of the dwarven, elvish, and human names give the feeling of different cultures, and the use of Anglo-Saxon or otherwise Germanic-sounding names gives it an 'old-timey' feel to an English reader. All of this of course drew on Tolkien's work as a professor of comparative languages ('philology' at the time); the guy was famous for rehabilitating 'Beowulf' as literature. (All the other critics were upset about the monsters.)

I think the way names and other things sound in your head as you read actually does have an impact. It's part of the reason older novels are so hard to read; people don't talk like that anymore and it sounds stilted. It's more severe for poetry, where the sound of words is a big part of it, but prose is affected as well.

It is, of course, also possible Cloud Atlas is awful.

Expand full comment
Christina the StoryGirl's avatar

I haven't read The Lord of the Rings, and, in the words of Bojack Horseman's Mr. Peanutbutter, "I don't think I'm gonna!" (1)

I tried LotR and immediately put it down again. I read The Hobbit in school and utterly LOATHED it. I hated the character, I hated the meandering, I hated the prose craft, I hated the fucking stupid indulgent poems, I hated *ALL OF IT.*

One of my favorite YouTube literary essayists, The Little Platoon, extensively quoted Tolkien in his like 12+ hour commentary on Rings of Power and I didn't hear anything whatsoever that made me want to read the work he was so reverently quoting.

Peter Jackson's extended cut Lord of the Rings trilogy is great art. Based on the admittedly very small amount of Tolkien I've read, I don't think there's any remote chance that my experience of the books could be an improvement over my experience of the movies (I feel the same way about Game of Thrones). Sometimes - rarely - adaptation into a different medium serves a particular story better than its original format, and LotR is my go-to (admittedly ignorant) example.

(The Last Unicorn movie adaptation is my go-to informed example, but I don't know if it counts when an original author is the one adapting the work into the different medium, as Beagle did with the screenplay.)

> "I think the sounds of the way names and other things sound in your head as you read it actually do have an impact. "

I agree, I just have a low-ish tolerance for a high volume and complexity of novel "sounds" being tossed at me when I'm trying to immerse myself in a story.

(1) https://www.youtube.com/watch?v=Q_C-LQiPi6g

Expand full comment
Michael Watts's avatar

I'd be curious what you think of Patricia A. McKillip.

Expand full comment
Christina the StoryGirl's avatar

I haven't read any of her yet. Is there somewhere I should start?

Expand full comment
Deiseach's avatar

It does depend on the language. Someone who hurls chunks of undigested worldbuilding and dialect at your head on page one will elicit the same response from me as from you.

I think that's part of why "Lord of the Rings" works, it starts off with the Hobbits in the Shire who are familiar enough to lead us in gently to the world. Everything is in English, the few unfamiliar terms are generally as unfamiliar to the Hobbits as to us so the explaining goes down easily. Gossiping (and back-biting) in the pub about the neighbours is something everyone can relate to 😀

Expand full comment
Paul Brinkley's avatar

"Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie)."

...ever read Stephenson's _Reamde_?

Expand full comment
Deiseach's avatar

No, but on general grounds I tend to avoid Stephenson. Not quite sure why, just picked up a general impression that I wouldn't much like his work.

Did try Too Like The Lightning by Ada Palmer and again, bounced off hard. I can generally tolerate 18th century English prose style but that was a little *too* mannered right from the off for me.

Expand full comment
Anonymous Dude's avatar

"Menard's archaic style —foreign at last — suffers from some affectation. Not so that of the precursor, who manages with ease the ordinary Spanish of his time."

Expand full comment
Paul Brinkley's avatar

The specific reason to find it is something called the Apostropocalypse: the longest treatise on the subject I can find, in the form of a dialogue between two or three characters.

I found Reamde to be great fun. OTOH, I'd be very surprised if our tastes in speculative fiction were close enough to bet $20 on it. I assure you, though, Reamde is no wretched hive of 18th century English prose. Rather, it's a den of malware scamming, Russian mafia, Canadian trekking, Chinese intrigue, a terrorist who'd twirl his moustache if he had one, and an MMO.

Expand full comment
HalfRadish's avatar

I read this post back in the day, and I probably just shrugged and moved on when I saw "Jacqueline Carey", but now I'm intrigued. Is Scott really referring to the relatively obscure author of the "Kushiel" books, etc.? I've never read her–would you (or anyone else here) say that her works contain extraordinary moral wisdom?

Expand full comment
HemiDemiSemiName's avatar

Yudkowsky liked the Kushiel books and brings them up in the Sequences, probably because he's into the kinky sex part. I couldn't get through more than twenty pages of the first novel, which (from memory) starts in a brothel and goes exactly where you'd expect it to go.

Expand full comment
Christina the StoryGirl's avatar

And how did you know the novel (or trilogy?) went exactly where you expected it to go after only 20 pages?

I of course realize that the question comes from a weak position, as you can simply Google the plot and claim that you somehow gleaned the major plot points, themes, and the subversion of several genre tropes from the 20 pages you read before you reply.

Nevertheless, my face is currently doing this: 🤨🧐

(P.S. - Had you read the books, you would know that kink takes up a surprisingly small page count compared to politics and adventuring - certainly not enough to satisfy someone primarily reading for porn. But sure, it went exactly as you thought it would.)

Expand full comment
Hector_St_Clare's avatar

I don't read a ton of 'romantasy' (although I just started the Sarah J. Maas series because my doctor, of all people, recommended them) but you've now got me intrigued enough to check out this Jacqueline Carey series (though I forget the name and I'll have to look it up).

Expand full comment
Deiseach's avatar

Possibly because of things like the Anita Blake book series, and then the Merry Gentry series (where Hamilton just went "to hell with it, I'm an erotica writer now").

Oh, I was so disappointed (and angry) about the turn the Anita Blake books took! Interesting twist (what if all the monsters were real? who you gonna call?), decent characters (though Anita was a little overpowered even from the start) and some promising moral conflicts.

Then Hamilton threw it all away to turn Anita into a Mary-Sue and just collect a harem of different were and other supernatural characters for mildly kinky sex. Dammit, I wanted to see the tension between "He who fights monsters should see to it that he himself does not become a monster" as Anita got deeper and deeper into the supernatural side of things, and her original system of morality and fast-fading identification with unpowered mortals, and instead Hamilton decided "Anita solves all problems by fucking her way out of them".

That kind of thing does predispose one to "Oh, this is set in brothels. Uh-huh. And prostitution is a sacred art in this world. Well, gee.

"And the main character heroine who is going to solve all problems and do everything right to fix her society is a sex worker. But of course! And here we go with our first kinky sex scene. Yeah, I can see where this is going and I'm not interested" perceptions.

Expand full comment
HemiDemiSemiName's avatar

I didn't mean the entire novel went exactly where I expected it to go, I meant that in the first 20 pages there was BDSM in a brothel, which caused me to stop reading because I was on a train in public.

After that, I didn't pick up the series because it hadn't grabbed me in the first 20 pages, plus I'd recently read that EY short story about child-eating aliens where legalised rape is mentioned with zero context and I thought his preferences were probably very unlike my own.

Expand full comment
Yug Gnirob's avatar

Felt like throwing in: from an opening in a brothel, I'd expect it to go either to a murder-revenge storyline or a political drama featuring an elaborate blackmail scheme. Or both.

Expand full comment
Christina the StoryGirl's avatar

Well, now that the OP properly cited his source, I can confirm Scott was indeed talking about the author of the Kushiel books:

“I credit three authors with giving me a visceral understanding of active, presence-rather-than-absence Good: G. K. Chesterton, C. S. Lewis, and Jacqueline Carey. Two of those are very religious and write quite consciously from a Christian perspective. The third writes about kinky sex. Go figure.”

Having read the Kushiel books many times, I can absolutely see how the heroism depicted in the story could inspire a sense of “active” good. Carey's protagonists are very much in this category, especially Phèdre (of the first three books).

Expand full comment
Deiseach's avatar

"through 90% of Christian history G. K. Chesterton and C. S. Lewis probably would have been burnt at the stake"

Depending on period and country, quite possibly. Henry VIII would have burned them both, because he has a track record of equal opportunity heresy hunting: Chesterton because he was Roman Catholic, Lewis because he was the wrong kind of Protestant.

"as opposed to the horrifying sadists who actually got the “St.” in front of their names"

Which ones? I would like some names so I can add them to my list of "yes ha ha ha yes!"* patrons or possible patrons 😁

* As per The Onion: https://store.theonion.com/products/cartoon-sickos-sticker

Expand full comment
Erica Rall's avatar

St. Julian the Hospitaller, patron saint of clowns and murderers?

Expand full comment
Deiseach's avatar

Is it the murder or the clowns that is the objectionable part? Because clowns, yeah we all know they're deeply evil 😁

Expand full comment
Erica Rall's avatar

I mostly found the combination interesting, although on closer inspection it looks like Catholicism often attributes broad and seemingly-mismatched patronage portfolios to saints. I've also heard quips that being patron of both clowns and murderers makes Julian the patron saint of Juggalos.

There's also Julian's (legendary?) biography, which features a formative moment when he "accidentally" murders his parents in their sleep after mistaking them for his wife and her lover.

Expand full comment
Anonymous Dude's avatar

"What is a Juggalo?

Saved if I know

'Cause I'm up with Christ

And I'm up for life, yo"

Expand full comment
Deiseach's avatar

Yeah, we tend to pick saints and go "Here. Now you are the patron of aprons" or the likes. I always appreciate St. Anthony of Padua, famed as a preacher and Biblical scholar, getting the job of "Okay, every lost set of keys, glasses, and 'help help I misplaced that important document I need in the next sixty seconds oh shit!', that's for you, Anthony".

Expand full comment
Anonymous Dude's avatar

Well, there's so many saints and so many things people wanted to pray about in the Middle Ages I guess wanting them matched up would go to St. Jude.

Expand full comment
User's avatar
Comment deleted
Jan 28
Expand full comment
Mark's avatar

In contrast, I would say the more I’ve learned about modern history, the worse Christianity looks. From the 18th century onward Christianity was on the wrong side of almost every issue. The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy. Christians like to claim credit for things like abolition because a few high-profile niche sects opposed slavery, but in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church-going Christians probably disproportionately supported it.

Up until the 16th century - when the wars of religion started - I think there’s a decent case to be made that Christianity was a net good. But the recent tendency, even by nonbelievers like Tom Holland, to attribute all the accomplishments of modernity to Christianity is bringing out my inner Dawkinsian. It’s mostly accomplished by just cherry-picking the best Christians - by the standards of modern humanism - and treating them as representative of Christianity. So William Wilberforce is arbitrarily picked to be the synecdoche for Christianity on slavery; Dietrich Bonhoeffer is arbitrarily picked to be the synecdoche for Christianity on Nazism. But this is obviously the wrong way to assess the merits of a religion.

Expand full comment
Hector_St_Clare's avatar

I'm more in agreement with this than not, truthfully, although of course it's going to come down to your worldview. From my perspective, institutional Christianity was *mostly* on the wrong side of the conflicts between monarchy and republicanism, nationalism and cosmopolitanism, socialism and capitalism, and the sexual revolution (and I also tend to disagree with the 'Christian' position on the 'life issues', speaking broadly, today). This is a good part of the reason I don't identify as a Christian these days (although not the only one).

Expand full comment
John Schilling's avatar

"but in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church going Christian’s probably disproportionately supported it."

I do not believe this is the case. From what I have read of that era, the atheists and the agnostics often said things like "of course slavery is bad", but when it came to actually *doing* anything they were mostly concerned with the all-important conflict between white male atheists and white male Christians. The slaveowners were mostly church-going Christians, because most everyone was a church-going Christian, but they were mostly not motivated by religion nor citing scripture in self-defense against those who were citing scripture against them.

Which is to say, those citing scripture against them were the leaders and the spokesmen of the abolitionist movement, who were very disproportionately and *conspicuously* Christian, often preachers or evangelists speaking as such.

I may be wrong about this, but I think your claim needs to be better supported if we are to take it seriously.

Expand full comment
Kenneth Almquist's avatar

Supporters of slavery definitely cited scripture, and they had a pretty strong argument because none of the references to slavery in the Bible condemn the practice. I recall one anti-slavery writer who compared the practice of slavery as described in the Old Testament with the practice of slavery in the American South, arguing that the Bible didn’t endorse the latter. But mostly, opponents of slavery argued from broad Christian moral principles without reference to what the Bible said specifically about slavery.

A number of Christian denominations split over the issue of slavery, including the Methodists, the Baptists, and New School Presbyterians. (The Presbyterians split into the Old School Presbyterians and the New School Presbyterians in 1836 over a different issue; the Old School Presbyterians split after the formation of the Confederacy.) So there were significant numbers of Christians on both sides of the issue.

On the other hand, support for slavery was motivated to a significant degree by self-interest. My sense is that the abolitionists had a deep belief in the immorality of slavery, whereas for supporters of slavery the belief that slavery was ordained by God was much more a belief of convenience. Union victory in the Civil War seems to have resolved the moral question in the minds of most people of the time. In part this could be due to the widespread belief that major historical events played out according to a divine plan--so if God favored slavery the Confederates would have won. But I think that the main thing is that supporters of slavery were never really sure that the institution of slavery was moral, so when the economic and social incentives to support slavery were removed, the belief in the morality of slavery was easily abandoned.

The idea that the Civil War wasn’t *really* about slavery emerged very quickly after the end of the Civil War. See *The Lost Cause Regained* by Edward Pollard, published in 1868, which argues that Confederates were really trying to preserve white supremacy, not slavery per se.

Expand full comment
HalfRadish's avatar

It's complicated... my understanding is that the conflict over slavery in 19th-C America could be understood as intra-Christian; i.e., people on both the pro- and anti- sides generally understood themselves to be faithful Christians and understood their views to be grounded in Christian belief.

Expand full comment
Deiseach's avatar

"The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy."

Sources, please! If it's true, I want to know that. If it's Protestant propaganda warmed over and rehashed by modern atheists, I also want to know that.

"in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church going Christian’s probably disproportionately supported it"

Again, *give me figures* on that. Otherwise, you're just inventing a rationale that "well I'm sure the guys I like were on the Right Side and the guys I don't like were on the Bad Side". Your inner feelings that atheists were all jolly nice chaps are about as reliable, evidentially, as my inner feelings that atheists were all, um, not jolly nice chaps.

Expand full comment
User's avatar
Comment deleted
Jan 29 (edited)
Expand full comment
Deiseach's avatar

Yeah, I'm thinking "The Church tried to ban street lights" is one of those just-so stories on a par with "The Pope ordered all cats to be killed and this is why the Black Death happened", which I've seen passed around before. The story there goes: witches associated with black cats - cats associated with evil and the Devil - Church no likey Devil - Church/Pope orders evil cats killed - cats kill rats which are vectors of infection for plague - no cats, more rats - more rats, more plague - more plague, more deader - ergo, irrational superstitious repressive old Church responsible for more deader.

Here's a 2022 article repeating that myth, even though I can find another article from 2020 debunking it:

https://www.irishlegal.com/articles/our-legal-heritage-gregory-ix-the-cat-killing-pope-who-laid-down-the-law

Which seems to be based very tenuously on some German witch-hunter going over the top with reports back to Rome, which Gregory IX then included in a letter to a specific city in Germany:

https://museumhack.com/black-cats-black-death/

"The purpose of Vox in Rama, rather, was to condemn a cult that had allegedly popped up in the Rhineland—and the bull was specifically sent to the city of Mainz. So, rather than being a document banning cats from Europe, it was a letter sent to a city warning them about a cult."

https://en.wikipedia.org/wiki/Vox_in_Rama

"Vox in Rama ("voice in Ramah") is a decretal sent by Pope Gregory IX in June 1233 condemning the heresy of Luciferianism said to be rife in Germany, and authorizing the preaching of a crusade against it.

...In 1233, Konrad von Marburg began an inquisition in Mainz at the behest of either the Pope or the archbishop.

Allegedly through torture and terror, Konrad reported he had uncovered a satanic cult which worshiped devils in the forms of a demonic man and of a diabolical black cat. The inquisition of Konrad was controversial: shortly after Konrad arrived in Mainz, a papal official called Bernard wrote to Gregory stating that Konrad had been forcing innocent people to confess by threatening them with burning at the stake if they refused to do so. Shortly thereafter, Konrad was assassinated, possibly by the servants of Henry II, Count of Sayn, whom Konrad had accused of fostering heresy.

The issue of the Vox in Rama bull was a response to Konrad's allegations, urging Siegfried III and King Henry, representing the ecclesiastical and temporal authorities respectively, to seek out and destroy the heretics."

Expand full comment
SMK's avatar

Yes. Although I'm very happily a Protestant, one needs a good deal more nuance than "The Church was totally evil until Luther showed up." The Reformation may have been (I would say was) a tragic necessity, but if so it was a needed reformation in an institution that was fundamentally a force for good in the world, viewed in the long run.

It is appropriate (IMO) that the two names Scott mentions include both a Protestant and a Catholic, and I don't in the least doubt that there are eastern Christians who would impress him no less with their holiness.

Expand full comment
agrajagagain's avatar

Question about the AI Safety Stuff: how useful actually is getting new people into the field if the forecast is really "superintelligence by 2028?"

I've been following AI discussions here and elsewhere with interest; I believe AI safety is very important and I have quite a bit of relevant background (lots of math, some computer science), but I still look at something like that and go "how could I possibly get to a point of doing relevant work in a new field in time to make any difference?"

Expand full comment
Scott Alexander's avatar

I don't know myself, but the big funders still devote a lot of time and money to training programs, so they must think there's time to help.

Silicon Valley likes to "move fast and break things", so I don't think anyone is imagining a PhD. This is more the startup thing of "take a boot camp, learn on the job". There's also not that much to learn - AI isn't that safe - and what research has been done is scattered across a bunch of different paradigms such that you can get to the frontier of one relatively quickly.

If you're interested, I would apply to https://www.matsprogram.org and let them decide whether you're worth their time or not. One person's list of necessary skills is https://www.lesswrong.com/posts/uKPtCoDesfawNfyJg/how-to-become-an-ai-safety-researcher .

Expand full comment
Laplace's avatar

"I don't know myself, but the big funders still devote a lot of time and money to training programs, so they must think there's time to help."

As someone working in AIS: This is an inadequate field. I strongly recommend against using adequacy-based deduction in it. Don't just trust that the funders have good reasons for doing things.

Other training programs:

https://www.aisafety.camp/

https://pibbss.ai/

https://sparai.org/ (website down at the moment)

https://www.cambridgeaisafety.org/mars

https://www.lasrlabs.org/

Expand full comment
G.'s avatar

As a CS PhD graduate, I'd love to apply to MATS, but quitting my job for a 10-week program seems extreme no matter how much I care about AI safety (the tech job market has been awful since late 2022 and shows no sign of significantly recovering). I guess I wish there were something in between the AI safety virtual course and a full-blown 10-week in-person program.

Expand full comment
agrajagagain's avatar

Thank you for the info!

I'll strongly consider applying to the summer MATS program. The timing works out quite well for me, though being in-person in California is very close to being a deal-breaker, even for a relatively short program.

(I notice their pre-application link is currently broken, though that might just be because applications for the summer aren't open yet.)

Expand full comment
Melvin's avatar

Is 1942 too late to get into nuclear safety? Is 1883 too late to get into automobile safety?

Some people still have the Yudkowsky scenario in mind where a superhuman AI comes along, foom happens, and then it's an omnipotent god and does whatever it wants. If that's the real scenario then yeah it's too late to do anything so let's just sit around and hope it goes well.

But I'm not convinced that's the real scenario, I think it's starting to look a little dated. I think we're heading for a scenario with superhuman (in some important ways) AI but without foom. In which case life goes on, but with these powerful, unpredictable and smart-stupid tools in the hands of people who have no idea what they're doing. This raises a whole bunch of problems which all the work done on AI Safety so far is poorly equipped to answer.

The problem may turn out to be not that 2025 is too late to start thinking about AI safety but that 2010 was far too early.

Expand full comment
agrajagagain's avatar

A lot of this comes down to what one means by "superintelligence", I suppose. As I understand it, various AI models already display better-than-human performance at various narrow, specific tasks, but these aren't considered superintelligent. The idea of "superintelligence" (again, as I understand it) includes general intelligence in the package: it's an AI that displays substantially better-than-human performance at ALL (or almost all) cognitive tasks. If people are using the word in a weaker sense than that, then there may be quite a bit of time after the development of "superintelligence" for the safety folks to catch up. But if the actual prediction is something that can outperform humans (almost) anywhere, that's a different story.

To be clear, I agree that even such a superintelligence need not imply a Yudkowskian foom scenario where it bootstraps itself to some unimaginably high level of competence. But I think it *does* fairly strongly imply that the window for alignment and most other safety measures has largely closed. Either we live in a universe where ASI is safe under the level of precautions taken (in which case things turn out broadly fine) or we live in a universe where the precautions are insufficient (in which case it's probably too late to add more). In particular, if the existing safeguards are even minimally adequate--that is, if the ASI is under somebody's control, and the work it does is reasonably trustworthy--they can simply use it to strengthen the safeguards, which it can do better than humans can. If the safeguards aren't sufficient for that task, I'm dubious that external efforts to improve its safety would do better. Though I suppose I can imagine an on-the-edge scenario where the ASI is contained but not fully trusted, and having people capable of evaluating its safety proposals is useful.

"Is 1942 too late to get into nuclear safety?"

Actually, I think this is a great analogy. 1942 wasn't too late to get into nuclear safety in the sense of nuclear *policy* safety. It was not even too late to get into nuclear *engineering* safety. But there was real concern among the researchers on the Manhattan Project that a nuclear explosion--possibly any nuclear explosion--could start a runaway fusion reaction in the Earth's atmosphere. Fortunately we lived in a world where that wasn't the case, but it was a real concern. And 1942 was almost certainly too late to get into THAT aspect of nuclear safety. Either we lived in a world where nuclear ignitions were broadly safe (and there was no need) or we did NOT live in such a world, and only the people already working on the Manhattan Project (or in a position to readily get hired on) were going to have much chance at realizing it and preventing nuclear Armageddon.

You could call this something like "existential nuclear safety." Is it safe (in the sense of not immediately killing us all) for nuclear weapons to exist in the world in the first place? If the answer is no, then the first live nuclear test is distinctly too late to discover the answer. ASI is more complicated, because there are plausibly far more relevant variables (that is, the answer could be "yes" for some designs and "no" for others). But the first time somebody boots up an ASI is an inflection point similar to the first time somebody sets off a nuke: you really, REALLY ought to know the answers to your existential safety questions *beforehand* and with quite high confidence.

Expand full comment
Melvin's avatar

Right. I think the word "superintelligence" has been thrown around too loosely for too long and we don't have a good idea of what it means. Over the next ten years it's likely we'll need to reconceptualise what we mean by intelligence.

I now find it easy to imagine a computer that is "intelligent" in the sense that it can ace any traditional test, coming up with better-than-human answers to any question that you might care to ask it. But it's still just a tool, it's still just a very fancy function mapping inputs to outputs, it's not the same category of thing that a human is.

There will still be an important thing that separates us from computers, but "intelligence" won't be the right word for it. Intentionality? Agency? The ability to have desires and an agenda of its own. We lack the vocabulary to even talk sensibly about it right now, but a lizard has it, and an LLM doesn't.

Expand full comment
agrajagagain's avatar

I am...significantly less convinced than you are that a *human* is anything other than "just a very fancy function mapping inputs to outputs," at least in practice[1]. The main, obvious difference is that a human is *constantly* receiving inputs and *constantly* producing outputs. ChatGPT and its kin get their inputs in discrete chunks with arbitrary gaps in between, and produce outputs accordingly. But that's just a design decision. It would be perfectly feasible even with current tech to create something that's constantly taking in inputs and constantly producing outputs. Would that also be "not the same category of thing that a human is?" I'm genuinely unsure.

I haven't actually played around with any existing LLMs at all, but I've read a lot about other people doing so. It does kind of look like they're missing some key things, but it's a little bit difficult to put my finger on what. Goals and goal orientation? Robust models of the world? Common sense? A sense of continuity of self? I think you could find humans who largely lack any of those things (though the last would be a very unusual human), though maybe not all of them at once. At any rate, lacking the ability to fully pin down what LLMs seem to be missing, I have much less confidence that they will *keep* missing it as they advance.

[1] Which is to say, obviously we have internal experiences, but the practical, outward effect of that is merely to inform what outputs we produce for a given set of inputs.

Expand full comment
Jamie Fisher's avatar

as someone who has had a "psychotic experience" thanks to Covid-19, in my humble-not-expert opinion, I think I lost a sense of "continuity of self" during that experience. during the worst of it, I felt like I was falling into a "cognitive black hole"... in the sense of information being sucked into nothing-ness. my mind went in constant bizarre, tangential directions. trains of thought would keep starting but then get forgotten as a new train would begin. I wasn't hearing voices or seeing things, but there was just this constant improvisational "switching" among things.

right now, in hindsight, I could spend all day trying to describe it and still fail miserably. it's almost a paradoxical thing to "remember" it....... because how can you "coherently relate" to something so incoherent?

again: I'm a normal human, but I *do* have a stressful physical disability. and the stress of that physical disability + Covid-19 => psychotic 3 days

Expand full comment
MicaiahC's avatar

Several reasons:

In the worlds where we survive, we likely push out the date via some intervention that looks unlikely now, so starting now means better outcomes in those worlds where we made the intervention.

Current AI Safety practice is mostly about capabilities. The number of people who are actually working in "prevent the end of the world AI safety" is likely less than 300. Which suggests that:

You could potentially get an accelerated support infrastructure (see: lightcone and what they're doing)

The low hanging fruit is likely unplucked.

The "existing literature" is still very small, so it wouldn't take years for you to catch up.

I will caution that AI Safety appears to be more insight constrained than talent constrained, aka we need someone to have pretty novel insights to make progress, and it's not a matter of "just" fleshing out models and their implications.

If you don't believe in doom, it's probably true that AI Safety would still be useful in "post singularity" worlds, where in theory AIs are perhaps subservient but not useful unless given safe commands. I don't believe in such a world myself and don't see how it's likely, but it is a reason.

Expand full comment
agrajagagain's avatar

" The number of people who are actually working in "prevent the end of the world AI safety" is likely less than 300. Which suggests that:

You could potentially get an accelerated support infrastructure (see: lightcone and what they're doing) "

So I don't think a small field by itself implies very much about how easy it is to get resources: it's less competition, but also almost certainly less funding. The question is how the availability of funding stacks up against the availability of talent: if it's funded out of proportion to its size, THEN getting one's foot in the door becomes easier. But the reverse is just as easy to imagine: lots of interested talent held back by a lack of funds.

Regardless, your points about the low-hanging fruit and the ease of catching up with the existing literature are well-made. I'll certainly at least look into it.

Expand full comment
Eremolalos's avatar

You might not be able to do anything about X-risk, but there are a lot of lower-scale harms that I think are a lot surer to happen, and you might be in a position to have some influence there. For ex: for at least one personal AI buddy (those AI "companions" that some fall in love with), the company that offers the service collects all the users' interactions with their buddies. Company is accumulating a data set that will be ideal for figuring out how best to influence people to do things, buy things, believe things, vote for things. Seems like AI is very well-suited to doing the pattern recognition kind of thing that's needed in order to become maximally influential. That's one of many very dangerous possible uses of AI, and it's already happening.

Expand full comment
Timothy's avatar

Metaculus has AGI by 2028 at 35%.

Expand full comment
Andrew Holliday's avatar

I hadn't looked at Metaculus for this, but it's comforting to know that this is within 5% of my gut estimate.

Expand full comment
agrajagagain's avatar

Hmm, good point. I don't have a lot of faith in prediction markets, but in this domain it's not really clear that there are any better estimates to be had. And 35% manages to hit the sweet spot of being both quite alarmingly high and low enough that planning for the other cases is clearly sensible.

Expand full comment
Marybeth's avatar

Anyone here following the story of the EA/rationalist-adjacent(?) group of people caught up in a sequence of murders? https://openvallejo.org/2025/01/27/suspects-in-killings-of-vallejo-witness-vermont-border-patrol-agent-connected-by-marriage-license-extreme-ideology/

Expand full comment
Shankar Sivarajan's avatar

It seems to be a nice example of the Voltaire classic "He who can make you believe absurdities can make you commit atrocities."

Expand full comment
Tatu Ahponen's avatar

This whole thing just makes me ask the same question as I've asked myself many times before: if the LW scene (or Bay Area rationalism, in general) consistently makes humans develop completely bugfuck, even lethal, ideologies like this, isn't this like the *last* crew we want to put, either implicitly or explicitly, in charge of AI alignment? I mean, the militant kill-all-humans veganism stuff is really not a completely illogical extension of animal-rights views common among rationalism-connected EA people in general!

Expand full comment
Viliam's avatar

I wonder whether the craziness is more correlated to the LW community, or to the Bay Area in general. Are there any famous murder-rationalists in New York, or in Berlin?

Expand full comment
Deiseach's avatar

" if the LW scene (or Bay Area rationalism, in general) consistently makes humans develop completely bugfuck, even lethal, ideologies like this"

I don't think it is, at least not as currently constituted. Now they go for good old-fashioned money and swindling scandals.

A lot of the stuff being talked about here is ancient history in one sense; from the early days when it was the Wild West and anyone and everyone could turn up and start working on 'save the world from AI' etc. The complaints of Vassar and Ziz were that even back then things were starting to get too cosy and settled-down and normie, and they were betraying their founding principles and becoming Just Another Talking-Shop.

Now the entire scene is even more respectable and mainstream, so the sex cults and 'split your brain in two' murder gurus are very thin on the ground.

Expand full comment
Tatu Ahponen's avatar

Still, you've got a basic pipeline in this case: Read LW stuff -> ruminate on LW stuff -> ??? -> murders and insanity. Regardless of the exact level of how common it is, it at least leads to questions on whether we should directly expose the AI to this crowd in the sense of them trying to get it aligned to non-crazy stuff!

Expand full comment
Anonymous's avatar

Look, I really sympathize with your basic question here – I think letting Yudkowsky set the moral parameters of a god-machine would be a catastrophe in itself – but this pipeline is clearly not real. Ziz was already an unhinged psychopath, it wasn't ruminating on lesswrong concepts that set him off. He would have started a Transon Family one way or another regardless of the existence of MIRI/CFAR.

Expand full comment
Anonymous's avatar

"Transon family" is a great line. I have nothing else to add.

Expand full comment
Sol Hando's avatar

I happened to read about it yesterday and lost some respect for the “rationalist” community as a result. Not that having insane people who associate with your ideology is uncommon, but they seemed to be well known on LessWrong, and their insanity was respected for fear of being seen as not respecting their gender(s).

For context: I was reading the “PSA” post about them on the top of LW and the person summarizing things had dedicated an entire 2 paragraphs to validating their complaints/accusations about the police doing some pretty terrible stuff (some pretty serious sexual assault and humiliation). The paragraphs were something along the lines of: “This seems to fit with what I’ve heard previously about the police and they have no reason to lie, so I find this plausible.”

Like, at some point (probably somewhat before murdering multiple people) we should stop giving someone the benefit of the doubt and stop considering their statements in the most charitable way possible. This is literally what allowed Ziz to skip town after the group stabbed their 80-year-old landlord straight through the chest with a sword. Ziz allegedly needed medical attention, then fled from the hospital and California before the police could detain them. With the caveat that everyone is innocent until proven guilty, Ziz then went on to murder or contribute to the murder of multiple other people, including probably their parents.

Expand full comment
Anonymous's avatar

It was 2022, Sol. Lots of people were still Floyding it up big time. In tons of public places you still had to avow total hatred of the police if you were any kind of a pinko (and if you weren't, have fun with the racism accusations). Even if only a tiny proportion of the Medium post's readers knew who she was, the consequences of being any less than maximally charitable toward anyone who averred mistreatment by a cop could have been catastrophic for her personal life.

Also, it was Zajko's parents. I'm pretty sure Ziz didn't kill his own parents, although I invite correction on this point because he's certainly capable of it.

Expand full comment
Scott Alexander's avatar

Eh, police actually did seriously screw this one up. The Zizians went to protest a rationalist event. They showed up at the wrong time and there was a kids' club or something at the event center instead. The kids' club freaked out and called the police, someone (incorrectly) accused them of having a gun, and they got charged with child endangerment or something and thrown in jail. Granted they are extremely evil, but in this particular case they were just trying to mildly harass us, which we were pretty used to, and ended up with some really serious charges against them. I think a court eventually threw out some of the charges, but by that time there were way more and much worse charges that they were actually guilty of. I don't think skepticism that the police always do everything right is some kind of unjustifiable weirdo liberal delusion.

I saw Ziz once at some event c. 2017. She was wearing hooded black robes and standing alone in a corner, glaring menacingly at everyone. I don't even know why she attended. Definitely not one of those "it's always who you least expect" situations. The followers were the opposite though, I didn't know them personally but people who did said that before falling in with Ziz they were totally normal, friendly, nice people. Pretty scary, don't fall in with cults, that's all I've got to say.

Expand full comment
Sol Hando's avatar

My gripe isn't so much about the assessment of the protest, which in itself I think is completely fine and probably accurate. The judgement about police brutality and the plausibility of their statements (while perhaps given a *little* too much credence) isn't something objectionable on its face. What they were doing was foolish, definitely something I don't approve of, but the treatment alleged is something terrible.

My problem was the context with which this assessment is offered. Apparently the goal is to warn the rationalist community about someone the author believed was violent (which appears to be true judging by the later alleged murders/crimes), and rather than letting the facts of the events speak for themselves, a not-insignificant portion of the warning is dedicated to essentially validating the allegations they make by claiming they seem to be true to the author (based on nothing but the testimony of the people being warned about, and how it "sounds" like other cases of police brutality they've heard of).

"Hopefully people reading this can help gather and share information about bad behavior directed at Ziz, Gwen, etc, and not just bad behavior done by them."

This is my personal feeling, but no, no one reading the article should be worried about gathering or sharing information about bad behavior directed at these people, as (at least at that time) they seemed to be a real and present threat to others (which turned out to be probably true). It just seems a lot of this warning is dedicated to opinions (priors with no valuable new information) that paint these characters as victims. They very well might be victims in that case, and have valid cause for complaint, but sharing opinions about that in a post warning others about their murder/attempted murder is definitely the wrong place for it.

Maybe giving some attention to dissenting opinions, opinion as to a claim's plausibility, and fully contextualizing the situation is a valuable thing in general, but it left a very bad taste in my mouth given that the post was trying to warn others about people who have committed some pretty terrible crimes.

Expand full comment
Scott Alexander's avatar

Yeah, okay, that's pretty cringe in the context of a murder cult. But I can't actually find the quoted text anywhere in the thread (though it might have been downvoted into invisibility by now).

Expand full comment
Sol Hando's avatar

Quotes are from the Medium article that's linked on the LessWrong linkpost (and one in my other comment from the LW comment section):

https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz

To here: https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509

The section I'm referencing is about 10-20% of the way down, but ctrl+F any of the text I quote, as it's unedited. It's an anonymous article, and of course it doesn't represent the majority of LW/Rationalist opinions, so it hasn't absolutely ruined my opinion of the community or of anyone specifically; it's just a ding to my level of respect in general.

Expand full comment
Matto's avatar

Can the quokka will itself out of being a quokka?

Expand full comment
Anonymous Dude's avatar

Hey, I'm trying.

Expand full comment
Neurology For You's avatar

Please provide a link to some context, this doesn’t make any sense.

Expand full comment
Sol Hando's avatar

This is about the Zizians, who are basically a group of people who follow a person named Ziz and their Pseudo-Rationalist, Hyper-Vegan, Certifiably Insane Doom Cult.

From here: https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz (It's the top of less wrong, which I rarely read but was for some reason yesterday)

To here: https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509

To this quote from Gwen:

"Police arrived with their guns out and we were immediately arrested (within about 10 minutes of us arriving), after which we were sexually assaulted (in my case, i was groped and had my pants pulled down and then sat on by an officer in a mounting position), two of us were gawked at by a group of about a dozen police looking at our naked bodies after pinning us down and cutting off our clothes (“Ha! I told you it was a man.”) and one had pictures taken of them front and back naked and pinned down, we had multiple types of serious and/or permanent medical harm intentionally inflicted on us (including hypothermia, concussion), we were starved for multiple days (I did not eat for five days, until bail was posted) having been denied vegan food as a bargaining tactic (as well as water, sleep, hormones, and medical care), were tortured and were repeatedly denied ability to speak with a lawyer (and had eg hypothermia inflicted on us and our clothes cut off (separate events) for asking to speak to a lawyer)."

And this reaction in the Medium post by the summarizer:

"I don’t get the impression that Gwen is the sort of person who generally lies about things like this, though I could be wrong. Some aspects of this account sound like other accounts of police misconduct I’ve heard in the US, and I think the overall US prison and jail system is needlessly cruel in many ways, so these cruelties seem entirely possible to me.

No matter what crime someone may have committed (and no one has yet been convicted of a crime), I don’t believe they could ever deserve to be mistreated as they allege, and I think that their allegations of such deserve to be treated seriously."

This specific arrest was in response to this group, dressed in all black with masks on (not the pandemic kind, the V for Vendetta kind), blocking off the exits to a wooded retreat with their trucks, a retreat that happened to be hosting a bunch of school children.

The writer of the summary had this to say:

"My assumption is that they didn’t intend to do anything violent"

Gwen likely proceeded to fake their death (no body), but as far as I can tell they haven't shown up in the past couple of years so maybe this was real.

This group also likely murdered one of the member's parents:

"Pennsylvania police believe, based on a lot of circumstantial evidence and investigation, that one or more of Ziz, Michelle “Jamie” Zajko, or Daniel Blank was very plausibly involved in the murder of Jamie’s parents"

This is all relevant today because, yet again, members of their group committed murder: https://openvallejo.org/2025/01/27/suspects-in-killings-of-vallejo-witness-vermont-border-patrol-agent-connected-by-marriage-license-extreme-ideology/

All this is to say that these are really bad people. Their ideals are evil, and so are their actions. They did a bad thing, and the author proceeds to create a multi-paragraph apology for their actions (you could say, rationalizing them), doubts the motives of the police who responded to masked people barricading children in a campground, and claims the accusations against the police are highly plausible (police stations have cameras, and police have been severely disciplined for much less).

They end this section with:

"If anything at all like this happened, however, then the people responsible should clearly be brought to justice. If anyone lied about the protesters in order to incite a more extreme police response, then that should come to light as well. Hopefully people reading this can help gather and share information about bad behavior directed at Ziz, Gwen, etc, and not just bad behavior done by them."

As for why this made me lose some respect for the rationalist community: This post was made as a literal warning that Ziz and associates were potentially violent (and had stabbed a guy through the chest). Yet the author felt the need to dedicate a whole lot of this community warning to an apology for their actions, and insert an opinion on police brutality to boot. I can excuse this if you're Ziz/Gwen's lawyer, and trying to get them the best outcome you can, but this seems to be overly sympathetic and charitable to the motivations of a group of people who are willing to lie, manipulate, incite murder, and outright murder to get what they want.

Read the comments of that LW post and it's more of the same sentiment. The "tools of rationality" being used here are being used for (what I consider to be) very wrong ends. For example:

"I think it's good that this post was written, shared to LessWrong, and got a bunch of karma. And (though I haven't fully re-read it) it seems like the author was careful to distinguish observation from inference and to include details in defense of Ziz when relevant. I appreciate that."

This assumes Ziz, the murderer/attempted murderer/inciter of suicide, is acting in good faith, which is obviously not the case. We shouldn't value the testimony of people who hold evil ideals, commit evil acts, and manipulate the narrative for their own benefit.

TLDR: Bad people are bad. We shouldn't give their statements the maximally charitable reading.

Expand full comment
Neurology For You's avatar

Thank you!

Expand full comment
Deiseach's avatar

Yeah, it's the bit about "I totally believe the person accused of violent crime has no reason to lie about how they are a poor little lamb mistreated by the pigs" that makes me go "Oh, brother".

Now, it can indeed happen that even violent criminals are indeed mistreated by the pigs. But my general inclination is "person accused of a crime may perhaps possibly maybe could be also not averse to telling a wee little lie or two".

Expand full comment
Deiseach's avatar

"The paragraphs were something along the lines of: “This seems to fit with what I’ve heard previously about the police and they have no reason to lie, so I find this plausible.”

That's been a thing with the rationalists. Reading one account of the brouhaha about L'Affaire Kavanaugh (back in the days when Michael Avenatti was The Credible Avenger leaping in with the Case of the Spiked Punch and Highschool Rape Gang) was eyebrow-raising for me: "This person told me that things like this happened somewhere else before so I believe this instance is indeed true and credible and it happened and that's terrible". No consideration of "but is this instance true? is the person reliable? even if they are reliable, are the other things they said happened true? even if they are true, are they the same as what is alleged in this instance?" which struck me as very odd coming from people who are all about "think about stuff, question narratives, find the facts out for yourself instead of relying on authority" and so forth.

Expand full comment
Anonymous Dude's avatar

I kind of feel like the best thing for rationalism would be to develop some level of cynicism and an awareness that people often act in bad faith and are lying to you. I am not saying you have to turn into Diogenes, Orwell, or Palahniuk, but at least read The 48 Laws of Power once, or The Prince or The Art of War if you're into classics. You don't have to do all the evil stuff, just know what they're trying to do to *you*.

Expand full comment
Deiseach's avatar

I was somewhat aware of that for nebulous reasons a while back, and I think it's just a bunch of crazy people who glommed on to early rationalism and the community around it, went full-on "our guru commands" blind zealotry and followed someone who was batshit insane (to use a technical term) but who, because of the general commitment to niceness, acceptance of the weird and strange because hey, people think we're weird and strange too, and lack of anything approaching authority to kick people out plus horror of appearing to ostracise anyone, was tolerated to an extent.

Not the fault of the rationalists that the nutso developed their own crazy philosophy and went around mind-hacking their followers into being as nuts as them, but it is a danger for the community (less so now, I think, given that they've been burned a couple of times by crazy people and grifters using the 'niceness, tolerance, no hierarchy, no thought is verboten to think' principles to take advantage).

EDIT: Though in this particular case, it's grimly ironic that self-described vegans attacked and murdered someone. An instance of "all animals are equal, but some are more equal than others", if you consider humans are just another animal?

Expand full comment
Viliam's avatar

> it's grimly ironic that self-described vegans attacked and murdered someone

As long as they didn't eat the victim afterwards...

Expand full comment
anomie's avatar

Well, that was certainly an interesting read. It's been a while since we had a good cult disaster.

Though, that article really just brings up more questions than answers... What the hell were they even trying to do? What made them suddenly turn into sociopaths? Did they just go insane from reading Yudkowsky? Well, I looked into it, found a post by none other than Scott, and... Jesus Christ...

https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=4j2GS4yWu6stGvZWs

I've always wondered whether techniques like prompt injection and jailbreaking were applicable to humans as well, and... it seems they are. Takes a bit more work, obviously, but it is possible to break them. I... don't even know what to feel about this. There's this feeling that... I could have been them. It's terrifying.

I do want Scott to make a follow-up post on this, but at the same time... I genuinely fear that if the details of this incident become widely known, it will lead to a national campaign to purge the trans population.

Anyways, there's more details in the comment thread of this post:

https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz

Expand full comment
B Civil's avatar

That is scary.

People can definitely be jailbroken. BUT sometimes it works for the good as well. A person helplessly addicted to drugs completely reforms themselves on occasion.

People have enormous breakthroughs in their thinking/believing that enables them to overcome past burdens and seek a different trajectory.

People have positive religious conversions. Isn’t that all jailbreaking?

Expand full comment
Deiseach's avatar

The dual hemisphere brain stuff is so damn outdated, I'm surprised self-proclaimed rationalists would still cling on to it. But if it's less a rationalist and more 'cult guru develops woo-based philosophy with pseudo-scientific rationalisation and brute-forces impressionable followers into breaking their sanity' it's a lot more understandable.

I mean, come on people, "ooh I'm right-brained so I'm all artistic" versus "I'm a left-brained logician" stuff has gone out with the ark:

https://www.healthline.com/health/left-brain-vs-right-brain#research

It's possible to drive yourself crazy by forcing yourself along the path in line with the dictates (and diktats) of the guru, be that in religious belief systems, pseudo-science or plain batshit insane loons as here.

Expand full comment
Matt R's avatar

I think their being trans is pretty far down the list of unusual things about them, media focus would likely be on their actual beliefs.

Expand full comment
Deiseach's avatar

This is yet more of the division between "trans because of genuine gender dysphoria" set of normal trans people and "decide they're trans because they're fucking lunatics" set of mentally ill pretenders and/or criminals who appropriate an identity of a perceived oppressed minority in order to further grift for "I'm a victim so I can do anything I want and I can't be held responsible for anything, and if you attempt to hold me accountable for my actions, then you are violently attacking me you transphobe" protective colouration.

Expand full comment
anomie's avatar

I know that's the convenient explanation but... everything I've read seems to support that they were genuine. In fact, it doesn't even make sense that they were doing it for the sake of identity politics. They genuinely just hated society and humanity, and the only thing they cared about was for people to fear them. It was an absurdly suicidal ideology, but it had absolutely nothing to do with mainstream culture war stuff.

Expand full comment
Deiseach's avatar

I think they were genuine trans in the "Silence of the Lambs" (novel version) sense that the guy was trans: convinced that this was indeed the diagnosis applicable to themselves despite anything anyone else might say and unwilling to entertain any opposite view. I think this trans stuff was just all of a piece with the general insanity, and has little to nothing to do with the normal trans people (the very fringe, very online, very radical and very yes it is a damn fetish sort, on the other hand, are right up this lot's alley).

"They hate me and persecute me because they fear me because I'm neurodivergent disabled trans etc. etc. etc. so I have an entire constellation of alternative identities that normie mainstream society hates and oppresses and wants to destroy" is the entire attitude here and justification for them to try and blow it all up. Being a plain cis guy isn't good enough because that just puts it back on them that the reason ordinary people dislike and don't get along with them is because *they* are the problem. Also, I suspect, a heaping helping of "Cis het white males are the problem with toxic masculinity and privilege and entitlement, but I can behave in precisely the same way of entitled behaviour but it's okay because I'm a Real Woman (Trans) and if you object then it's because you're a horrible wicked transphobe, not because I'm an asshole" going on there as well.

Expand full comment
HemiDemiSemiName's avatar

I have the impression that you're talking about a long tail of escalating craziness (which probably exists in most groups) as if it were a bimodal distribution of crazy versus not crazy.

Expand full comment
anomie's avatar

...I don't know if there's anything I could say that could convince you, and at this point I don't even know why I'm trying to make things worse by doing this. Maybe it is just better if people believe that there exists an actual divide between "normal" trans people and the kind of trans people everyone hates. But... that would only continue to justify the resentment against genuine trans people.

*sigh* Maybe there really is just no hope for them.

Expand full comment
anomie's avatar

Half of them, including the leader, are trans. You can't seriously believe that the public would just ignore that fact...

Expand full comment
Deiseach's avatar

As an outsider looking in, it seems that the early rationalist coming-together into community was (maybe because it's in California) very open to/prone to everyone who was alienated from their own community running away to join the Bay Area because that was the promised land. This includes minors, trans, people of variant sexualities, neuro-divergent, you name it who felt (and were) weird and excluded but were intelligent and/or autistic enough to fit in with rationalist and adjacent philosophies.

That means you get a lot of the crazy in other ways than "I'm smart, socially awkward, and interested in niche subjects that nobody else in my family, school, or social circle even understands", up to "yeah this is mental illness and not just 'not fitting in'".

Expand full comment
Anonymous's avatar

Yeah, this all already happened once, or twice depending on how you divvy things up: hippie culture, and 60s nerd culture. I know more about the latter, and they had the same pathological tolerance culture, which led to things such as the so-called "Breendoggle" and the (related, in a way) fact that, as it turns out, the founders of the Society for Creative Anachronism (Berkeley, CA!) were about 80% pedophiles.

Basically, most people who are excluded from mainstream society are ostracized for *great* reasons, amazing reasons, and autistic people who are ostracized for being awkward don't realize this and think "they must be just like us, misunderstood 'freaks' who are actually not bad people!" This is an error, and it will be an error every time.

Expand full comment
Tatu Ahponen's avatar

I still hold that there's a valid, very dark reading of "The Ones Who Walk Away from Omelas", starting from the fact that Le Guin was going around the same Bay Area 60s nerd circles and *must* have known about many of the controversies, and that Omelas, as it is described, quite resembles a Ren Faire or some other nerd paradise.

Expand full comment
Anonymous's avatar

That... is actually uncomfortably plausible.

Expand full comment
Viliam's avatar

Sounds too dark to be mere coincidence. Until you read Dostoyevsky, whose writings seem to have inspired Le Guin.

https://en.wikibooks.org/wiki/The_Grand_Inquisitor/Rebellion_(Part_2)

I mean, try reading that page, while imagining that it was written by someone who got disgusted by seeing a dark side of the hippie community, and it... kinda makes sense, the same way.

Except, that obviously didn't happen. Unless there was some community in Dostoyevsky's Russia similar to the hippies.

Expand full comment
Melvin's avatar

I guess the question is how can you build a community that's interesting and open-minded without attracting weirdoes and freaks?

Expand full comment
Anonymous's avatar

Simple, but distasteful. Accept people with Asperger's, but ban sexual deviants of every kind.

Expand full comment
anomie's avatar

And how exactly are you supposed to figure out who the "sexual deviants" are? Or is that just a code word for trans people?

Honestly, I don't know why you would accept people with Asperger's either. Maybe they're ostracized for "great reasons" as well.

Expand full comment
anomie's avatar

...You can't.

Expand full comment
comex's avatar

Just a note: technically speaking, the term “mental illness” is not a high standard. It includes just about any mental condition that can be medically treated, including anxiety, depression, ADD, addiction, and even gender dysphoria (with transitioning being a treatment).

Expand full comment
Deiseach's avatar

I think "run an 80 year old man through with a samurai sword as well as other assault causing him to lose an eye" fits "mental illness" well enough without having to disambiguate "gosh, do you mean 'anxiety' here?"

Expand full comment
anomie's avatar

It's more that it attracts impressionable autistic people like me by mindfucking us with somewhat intelligent writing...

It doesn't change the fact that the rationalist spaces represent everything the right sees as leftist degeneracy. This is the culmination of that.

...This can't end well.

Expand full comment
Deiseach's avatar

I can't blame the rationalists too much. They do intend well, and starting off as an inchoate grouping coalescing into a community (the way we did here), they were a bunch of diverse individuals who had in common being intelligent, being interested in specific things that those around them generally didn't care about or had no knowledge of, and being non-conforming in some way, from the mild to the very far out. They were so happy to meet "hey, you like this too? and you're a weirdo too?" others that they over-corrected: "since we've all experienced at least some form of rejection and ostracism, we very much will not incorporate that into our community".

This meant that, like all groups, they were both attractive to and vulnerable to the fucking lunatics and the criminal grifters who saw a bunch of pigeons to be plucked, and they didn't have the structures (yet) in place to deal with that; going way back into the weeds about the alleged sexual abuse instances, for example, there seems to have been a positive horror of assuming that anyone was in charge in, like, a hierarchical structure of ultimate authority to enforce decisions and lay down the law. They set up committees to form committees to investigate and report, but there were no such things as sanctions other than "let everyone know that X is maybe someone you need to be careful about". Again, no blame to them, because dealing with the various groups and sub-groups all over the country was like herding cats and any decisions that *were* made seemed to be immediately challenged both by people who felt that they had been victimised all along and nobody was taking them seriously and the people they wanted persecuted were not! being! persecuted!, and by the "X is someone to be wary of" types who were claiming they were the real victims.

This is how every utopian organisation (including early Christianity) discovers that actually, you *do* sometimes need a bunch of big guys with sticks to enforce Da Rules or else you will be eaten alive by the piranhas.

Expand full comment
Skittle's avatar

On the note of your final paragraph, I do recommend everyone read the Didache (I know you already will have), because it is hilarious on this topic.

Expand full comment
Anonymous Dude's avatar

That's about as good a description of it as I can come up with. Well done.

Expand full comment
Coagulopath's avatar

This is one of those things I would have gladly spent 50 hours studying as a kid but now I no longer have the time.

Here's a nice big pile of stuff if anyone wants to go through it: https://drive.google.com/drive/folders/1z21U_q1tESO61iL35ertsIjZYczJrHpQ

Expand full comment
Viliam's avatar

Oh, I wish I had more time to study this, too. Or maybe an AI that would generate a summary for me... though the existing ones would probably refuse to discuss violence and other controversial topics.

What I would like to see:

An explanation of Zizian beliefs, and a description of the events from their perspective... but using normal English words, instead of the Zizian lingo.

A timeline of events, maybe with some color coding of which are the facts that everyone agrees about, and which are statements from one side that the other side denies.

*

If I understand it correctly, Ziz believes that each brain hemisphere is a separate personality, and that you can communicate with them individually using some combination of sleep deprivation and closing one eye (so that one hemisphere finally falls asleep, while the other keeps talking). Ziz has ethical opinions on the level of "insanity wolf", where almost everything is extremely unforgivably evil, and if you are good (or rather, half-good, because only Ziz is good), you must strive to destroy it at all cost.

All other beliefs seem to be downstream of this. Anything that Ziz disapproves of is evil and should be destroyed. Everyone who disagrees with Ziz is evil and should be destroyed. Refusing to destroy evil is evil. (Generally, Ziz makes tons of death threats... but mostly doesn't act on them.) At least one of your hemispheres is evil. Ziz is the only known person in the universe who has two good hemispheres. The followers have one good and one evil hemisphere; all thoughts that agree with Ziz are coming from the good hemisphere, and all temptations to disagree are coming from the evil hemisphere. Outsiders have two evil hemispheres. By the way, if you are trans, it's because one of your hemispheres is male, and the other is female. The rationalist community -- a good recruiting ground for Ziz, at least in the past -- is also evil and must be destroyed.

...this is as much as I remember, but there are probably many other fascinating details that I have missed.

Expand full comment
Julian's avatar

Fertile ground for a podcast.

Expand full comment
beowulf888's avatar

My COVID update for epidemiological weeks 3-4 of 2025.

1. For the 1st time since 1952, the CDC stopped publishing its Morbidity and Mortality Weekly Report (MMWR). The last update was on 16 Jan.

> The Department of Health and Human Services "has issued a pause on mass communications and public appearances that are not directly related to emergencies or critical to preserving health," Stefanie Spear, the HHS principal deputy chief of staff, told CBS News in an email. "This is a short pause to allow the new team to set up a process for review and prioritization."

I guess Stefanie is worried about her job. Likewise, the FDA has "paused" updates on foodborne outbreak probes as health agencies "regroup on communications." That's a bullshit excuse if I ever heard one. They could set up a process for review and prioritization without stopping the current updates. Clearly the Trump administration is now emulating Chinese secrecy practices.

https://www.cbsnews.com/news/fda-pause-foodborne-outbreak-probes-updates-communications/

And the CIA revised its report on COVID origins. They're saying it was a lab leak now.

Kevin Drum wrote:

> And none of us will ever get to see either their assessment or the evidence it's based on. Is it a rigorous review of all the evidence based on a foundation of deep scientific understanding? Or is it a sloppy broadside written by a bunch of naifs who never even engage with the scientific debate and just want to avoid a fight with the new boss. We'll never know. But remember the aluminum tubes?

But the new boss of the CIA is John Ratcliffe, and he has been supporting the lab leak theory for years. So I suspect it's just ass-kissing on the part of his new subordinates. The CIA motto is: "Ye shall know the Truth, and the Truth shall make you Free." — the truth, if your boss doesn't like it, shall make you free to go looking for a new job.

2. The CDC is still updating other data. Their wastewater numbers seem to indicate the current XEC COVID wave has peaked. Biobot hasn't published an update recently, so I don't have a reality check for CDC's data. Biofire's proprietary Syndromic Trends also shows that COVID is dropping as a percentage of detected respiratory viruses. RSV may also be on a downward curve, but influenza is rising fast. XEC seems to have lost steam. Cov-Spectrum shows it topping out at 41% of sequences. The CDC Nowcast shows it at ~47%, but their Nowcast isn't very sensitive to changes in growth rates.

3. This winter, COVID hospitalizations are lower than previous years—and about half of what last winter's were (4/100,000 vs 7.7/100,000). Despite the XEC wave, weekly deaths remain low. I'm not sure if we'll continue to see these numbers now that the MMWR reports have stopped.

4. What's next? LP.8.1 is the only major var that's showing any legs. But except in countries with low sampling rates, it hasn't gotten much above 20%. Maybe it will create a secondary wavelet on the tail end of our dinky winter wave, but I doubt it.

5. While COVID and RSV may have peaked, this winter's Influenza wave is still on a strong growth trend. We've still got three months left in the flu season, and cases are nearly at last year's peak, and test positivity is higher than last year.

6. Three days before Trump's inauguration, Biden awarded Moderna $590 million to develop mRNA-based pandemic vaccines. It's not clear if the Trump administration will be able to cancel this program, but I'm sure Brainworm Kennedy will try.

7. And that sucks because if HPAI mutates to allow human-to-human transmission, we could be in for another pandemic. Rumors are circulating about human-to-human transmission in Shanxi and Hebei provinces (northern China), with deaths in the hundreds. These are rumors only. Other rumors say this is an HMPV outbreak. Who knows?

Anyway, if there is a bird flu pandemic, I'm sure Bhatty, Marty Mak, and Brainworm — the Three Stooges of the health world — will be doing their best to calm public fears while doing nothing of substance.

Slides here:

https://t.co/Jy3E88XFoE

Expand full comment
Monkyyy's avatar

> 1. For the 1st time since 1952, the CDC stopped publishing its Morbidity and Mortality Weekly Report (MMWR). The last update was on 16 Jan.

> " Stefanie Spear, the HHS principal deputy chief of staff, told CBS News in an email. "This is a short pause to allow the new team to set up a process for review and prioritization."

Wasn't this an executive order? Are they downplaying Trump's success?

Expand full comment
Gnoment's avatar

Yes, your interpretation is correct. Signed, a public health data person.

Expand full comment
Jeffrey Soreff's avatar

What _is_ going on with scientific communications? I saw some alarming comments in reddit's r/labrats subreddit, and I'm not sure what to make of them.

I could see the sense in trying to halt DEI stuff as quickly as possible, and even to trying to ferret out DEI stuff that was being concealed by name changes, but it sounded like nearly _all_ communications (and funding) had been frozen.

What could they possibly be looking for? Activism camouflaged as research?

Expand full comment
Monkyyy's avatar

Words have power when near lawyers. Trump declared that a few federal agencies can't speak until Elon gets to go over their personal files.

While Elon still won't have actual power, what with all the historical bullshit about presidents not being able to fire people, it's strong-arming compliance.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! It still seems like overkill. Did Trump silence NIST, along with everything else???

Expand full comment
Monkyyy's avatar

I don't know about the details; it was a lot of reading and I don't pay that close attention to details. It wasn't everyone; it seemed health-related at first glance, maybe 5 departments; maybe.

Expand full comment
anomie's avatar

Dissidents. What else would they even care about?

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Dissidents along what dimension? I could well understand stopping a woke activist project concealed under biased research, but it _looked_ like the freeze was far broader than that. Freezing e.g. publications in physics just sounds crazy to me.

Expand full comment
anomie's avatar

Loyalty. You can't risk having people in important positions that aren't willing to follow orders unconditionally. Which means that people with stubborn principles about these kinds of things need to be replaced.

Expand full comment
beowulf888's avatar

He signed so many executive orders since his inauguration I don't know if this was included in one of them. 26 on his first day, and they're still coming.

Expand full comment
Monkyyy's avatar

my count was 45 on day 1

was the website inaccurate?

Expand full comment
Again with a Pen's avatar

In a previous life, I used to hunt plagiarism for a living.

A while ago I gave ChatGPT this prompt: "Single Player (vs Computer) "Social Deduction" game" and got "Creating a single-player social deduction game where you play against a computer opponent can be an interesting challenge!" etc. pp. Now, to get a feeling for it, I gave Deepseek the exact same prompt from my ChatGPT history and got "Creating a single-player "Social Deduction" game where you play against the computer is an interesting challenge," etc. pp

This makes my ill-adapted-to-this-situation spider senses go into overdrive. Is that really objectively the most average English sentence that could be said in response to that rather sparse prompt, such that two models would independently arrive at it? Would two different humans independently come up with such similar openers? What is going on?
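(For anyone who wants to reproduce the experiment, a minimal sketch, assuming the OpenAI Python SDK and DeepSeek's reportedly OpenAI-compatible endpoint; the base URL, model names, and keys are placeholders, not verified values.)

```python
# Sketch: send the identical bare prompt to both models and measure
# how similar the openers are. Keys/endpoints below are assumptions.
from difflib import SequenceMatcher
from openai import OpenAI

PROMPT = 'Single Player (vs Computer) "Social Deduction" game'

openai_client = OpenAI(api_key="sk-...")  # placeholder key
deepseek_client = OpenAI(api_key="sk-...",
                         base_url="https://api.deepseek.com")  # assumed endpoint

def ask(client, model):
    # Same bare prompt to both models, no extra system message.
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # reduce sampling noise so overlap isn't luck
    )
    return resp.choices[0].message.content

a = ask(openai_client, "gpt-4o")
b = ask(deepseek_client, "deepseek-chat")

# Crude character-level similarity; near 1.0 for near-identical openers.
print(SequenceMatcher(None, a, b).ratio())
```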

Expand full comment
Coagulopath's avatar

This is "mode collapse", usually induced by fine-tuning LLMs on human preferences.

https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse

There's the added detail that R1 may have been trained on ChatGPT output (there are people who have gotten it to claim to be ChatGPT & an OpenAI model & so on.)

Expand full comment
Scott Lambert's avatar

The prompt isn't as sparse as you think because you need to include the system prompt with it as well. ChatGPT, Claude, Deepseek, etc. have a long list of instructions before they get to your question. If you prompted the APIs directly you'd more likely get a description of a social deduction game rather than the agent attempting to create one.
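(To make the point concrete, a sketch of the difference, assuming the OpenAI Python SDK; the system text here is invented and far shorter than the real product preambles.)

```python
# Sketch: the same user fragment with and without a system preamble.
from openai import OpenAI

client = OpenAI()
prompt = 'Single Player (vs Computer) "Social Deduction" game'

# Bare API call: just your fragment, no instructions at all.
bare = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# Roughly what the chat products send: a system preamble comes first.
steered = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant. "
                                      "Engage constructively with the user's idea."},
        {"role": "user", "content": prompt},
    ],
)
print(bare.choices[0].message.content[:200])
print(steered.choices[0].message.content[:200])
```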

Expand full comment
Michael's avatar

I don't have an answer to your question, but ChatGPT gave me a response that was different enough not to seem plagiarised. It might be partially coincidence that your responses were so similar, plus the fact that both use reinforcement learning to create a helpful and friendly assistant personality. A human could give responses like "What about it?" or "I don't really like those games", while the LLMs are trained with a particular personality that really limits their potential responses.

It's possible that DeepSeek is using ChatGPT in reinforcement learning to make the type of response DeepSeek gives be more similar to ChatGPT.

Expand full comment
FLWAB's avatar

How hard is it to steal an AI?

Expand full comment
Throwaway1234's avatar

Training one AI using the output of another is called "knowledge distillation" (googlable term) and OpenAI provide an API for doing this: https://openai.com/index/api-model-distillation/
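(For a sense of what output-based distillation looks like in practice, a minimal sketch: sample a teacher model and write (prompt, answer) pairs in chat-format JSONL for fine-tuning a student. The prompts, teacher model name, and file name are placeholders, not a description of any lab's actual pipeline.)

```python
# Sketch: build a distillation dataset from a teacher model's outputs.
import json
from openai import OpenAI

client = OpenAI()
prompts = ["Explain overfitting simply.", "Write a haiku about rain."]  # toy set

with open("distill_train.jsonl", "w") as f:
    for p in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o",  # teacher (placeholder)
            messages=[{"role": "user", "content": p}],
        )
        answer = resp.choices[0].message.content
        # One training example per line, in the chat fine-tuning format.
        f.write(json.dumps({"messages": [
            {"role": "user", "content": p},
            {"role": "assistant", "content": answer},
        ]}) + "\n")
```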

Expand full comment
FLWAB's avatar

Looks like OpenAI is accusing DeepSeek of using their API for distillation, against their TOS. So it seems like OpenAI thinks that DeepSeek stole from them (which I think is a bit ironic, considering how much data OpenAI scraped from other people without their consent).

https://nypost.com/2025/01/29/business/openai-says-it-has-proof-deepseek-used-its-technology-to-develop-ai-model/

Expand full comment
John's avatar

Harder than just feeding a bunch of questions to ChatGPT and copy-pasting the answers into your training dataset.

Expand full comment
Again with a Pen's avatar

Going meta, I put the above into DeepSeek and it explained to me that the two models were trained on similar data. I understood that from the get-go; that is not the point. The point is: should we not expect variance in training sets and training methods to lead to more variance in the output?

Expand full comment
JiSK's avatar

The CAIS course application was open this morning but is locked now. Is it full?

Expand full comment
beowulf888's avatar

Stupid human question: if AI is so good why don’t they let it write the story for them? Hell, even ChatGPT can write a basic essay that can be tarted up with a little help from a human editor.

> I would be ghost-writing for an ex-OpenAI researcher previously profiled in major publications (eg NYT), who is running a big forecasting project and wants to do a media push around the time they release their results.

Expand full comment
beowulf888's avatar

I couldn't resist giving it a try. Here you go... ;-)

Me:

Write a short essay on AGI and the coming Singularity in the style of Scott Alexander of Astral Codex Ten. Don't forget to mention Bayesian priors in the title, mention the hereditary basis of IQ and to disparage post-Christian vitalists in the body of the text.

ChatGPT:

"Bayesian Priors and the Coming Singularity: Why the Post-Christian Vitalists Are Wrong (Again)"

The debate over artificial general intelligence (AGI) and the impending Singularity has, predictably, descended into a cultural Rorschach test, projecting our priors, anxieties, and fantasies onto the tabula rasa of the future. While some envision AGI as a benevolent deus ex machina bringing a golden age of plenty, others see a rogue paperclip maximizer turning humanity into so many raw materials. As always, the truth is likely less dramatic, but it behooves us to unpack these possibilities through the rigor of Bayesian reasoning, lest we fall prey to the intellectual pitfalls of our ideological cousins, the post-Christian vitalists.

To begin, let’s consider the technical premise: AGI is, fundamentally, the construction of a machine capable of performing any intellectual task that a human can do, and likely much more. Here, one’s Bayesian prior should be informed by the hereditary basis of intelligence, a domain that has historically been a strong predictor of cognitive capability. While human IQ is the product of an intricate interplay between genetics and environment, its heritability (somewhere around 50-80% depending on the population) demonstrates that intelligence is not a mystical vital force but a replicable, material phenomenon. If evolution can stumble into it via the blind process of natural selection, we should not assume it’s beyond the grasp of deliberate engineering.

From this perspective, the Singularity—where AGI accelerates its own development into superintelligence—is less a question of "if" than "when." It’s easy to get caught up in the timelines (2030? 2100? Never?) or the probabilities, but more interesting is the philosophical terrain. Here enter the post-Christian vitalists, who seem determined to reinvent the eschatological wheel with their insistence that technology must somehow revitalize the "soul" of humanity. The Singularity, in their view, is either a heretical Tower of Babel that divorces us from our "essential nature" or a sacred rapture that will cleanse us of our fallen state.

These vitalists miss the point entirely. Their criticisms, while occasionally veiled in secular language, betray the same nostalgia for pre-Enlightenment metaphysics that drove past resistance to heliocentrism and Darwinism. In their worldview, humanity is a repository of ineffable purpose that AGI cannot replicate. But Bayesian reasoning requires us to ask: How likely is it, a priori, that humans have some unique, unquantifiable essence? Given the track record of material science to date, the probability seems vanishingly low. AGI won’t "steal" our humanity any more than the printing press or antibiotics did; it will merely shift the locus of value production and force us to renegotiate our priorities.

The challenge isn’t metaphysical but practical. How do we ensure that AGI’s utility functions align with human well-being? This is where the real debates should lie—not in vague invocations of the "human spirit" but in technical questions of alignment, control theory, and robust governance. And here, too, Bayesian priors are helpful: While the pessimists argue that any misstep will result in instant doom, the historical record of technological progress suggests a distribution of outcomes—some disastrous, yes, but many positive or neutral. We should neither dismiss the risks nor succumb to apocalyptic despair.

The Singularity, if it comes, will not be a singular event but a series of gradual shifts. Humans will adapt to AGI much as they have adapted to prior technological revolutions—imperfectly, unevenly, but ultimately with resilience. The post-Christian vitalists may yearn for a humanity untainted by silicon and code, but such purity has never existed. Our history is one of continuous self-reinvention, and AGI will be no different.

So, as we approach this frontier, let’s keep our Bayesian priors updated, our thinking clear, and our rhetoric grounded. The Singularity is neither salvation nor damnation. It is merely the next chapter in the long and unpredictable story of human progress. And if the vitalists don’t like it, well, they can take it up with history.

Expand full comment
Andrew Holliday's avatar

This is a convincing imitation of someone doing a very bad imitation of Scott.

Expand full comment
Viliam's avatar

Where are the puns? I believe that ChatGPT should be capable of making some, but it probably needs to be reminded explicitly to do that.

Maybe this could be done in two steps. First, feed it some of Scott's articles, and ask it to describe the characteristics of the text, i.e. what makes it different from a random text written by a smart person. Second, provide this list of characteristics and ask it to produce a text like that. Maybe even have it write the text in phases: first the outline, then prepare some puns, and finally write the text following the outline and using the puns where appropriate.
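(A sketch of the two-step idea, assuming the OpenAI Python SDK; the prompts are illustrative, not tested recipes, and the articles variable is a placeholder you'd paste text into.)

```python
# Sketch: extract style characteristics first, then generate with them.
from openai import OpenAI

client = OpenAI()

def chat(content):
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content

sample_articles = "..."  # paste a few of Scott's articles here

# Step 1: have the model describe what makes the style distinctive.
style_notes = chat(
    "Here are some articles:\n" + sample_articles +
    "\nDescribe what distinguishes this writing from a random smart "
    "person's prose: structure, jokes, puns, rhetorical habits.")

# Step 2: feed those notes back and generate in phases.
outline = chat(f"Style notes:\n{style_notes}\nOutline an essay on AGI "
               "in this style, including 2-3 puns to work in.")
essay = chat(f"Style notes:\n{style_notes}\nOutline:\n{outline}\n"
             "Write the essay, using the puns where appropriate.")
print(essay)
```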

Expand full comment
HalfRadish's avatar

Fail!

Expand full comment
Scott Alexander's avatar

Can anyone explain why the market updated on DeepSeek a month late?

Expand full comment
Anon's avatar

Better yet, can anyone explain the direction of the update? When was the last time software optimization somehow hurt hardware makers (whose hardware has suddenly become more capable, not less)?

Expand full comment
Antonio Max's avatar

Because it's free. This hurts the business strategy based on centralization of Silicon Valley cronies/investors. DeepSeek is particularly hurting Nvidia because it was trained on quite modest infrastructure compared with what OpenAI asks for (last year, OpenAI asked the UAE government for $7 trillion for datacenters).

Expand full comment
Anon's avatar

Which means a $7T datacenter now gives you even more AI than was expected. And it also means smaller players will now be interested in purchasing Nvidia chips for their own datacenters, something that was out of their reach before.

Expand full comment
Tatu Ahponen's avatar

Well, the most likely explanation is that the instances making up the "market" often only just learned about DeepSeek now, but another one is that, as I indicated below for myself, DeepSeek's ascent is really less about the compactness etc. and more about offering a more desirable product, design-wise, for the end user than a lot of other AI efforts.

edit: and ofc one of the biggest things that seems to have come as a surprise is that China is now a genuine competitor in *this* field - once you've factored that in, you have to automatically price in China being a giant in general, and this surely not being the only thing on offer

Expand full comment
Nicholas Weininger's avatar

Related question: the news I have seen on DeepSeek seems to very confidently assume that (a) they are telling the truth about their training costs and (b) they didn't just steal OpenAI's (or somebody else's) weights through a CCP-installed backdoor. Why should we be so confident of these things? Have people independently corroborated their claims or are we just trusting their press releases?

Expand full comment
Scott Alexander's avatar

I'm not sure, but I think because it's much smaller and has much lower inference costs, and you can test this by getting the model yourself (it's open-weights!) and running it on a computer that would not be able to run GPT.

Also, they listed some of the discoveries they used to train it more efficiently, and everyone agrees they're real, clever discoveries.

Expand full comment
Antonio Max's avatar

AND we can't forget this is but their first iteration wave, or in other words, current DeepSeek is the worst DeepSeek we'll ever have. Quite scary if you're currently getting a big paycheck working for some angel funded AI venture.

I saw a meme on tiktok where they used that "Tony Stark was able to build this in a cave! With a box of scraps!" audio for OpenAI engineers and I never laughed so hard lol

Expand full comment
John Schilling's avatar

There's definitely and verifiably an improvement in that they can *run* a reasonably high-quality LLM in a constrained operating environment. The bit where they AIUI basically lobotomize the AI and show that if you activate the right lobes for the task du jour it doesn't get *too* much dumber even though it uses an order of magnitude or so less compute, is legit clever and I'm surprised that it works as well as it did, but as you say the model weights are open and people are finding that it works in their own environments.

The question is whether this is also true for the compute used to *train* the model in the first place. It isn't obvious that the cleverness that reduces operating compute will necessarily reduce the training compute, and a priori I'd expect "just as smart as the other LLMs on the block but much more compact" would require *more* effort in training. And I haven't heard of anyone replicating that part yet, so we're basically taking China's word on that.
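(The "activate the right lobes" bit sounds like mixture-of-experts routing. A toy sketch of the idea, assuming top-k gating; the shapes and numbers are made up and this is not DeepSeek's actual architecture.)

```python
# Toy mixture-of-experts router in numpy: only the top-k experts run
# per token, so active compute is a fraction of total expert parameters.
import numpy as np

rng = np.random.default_rng(0)
d, n_experts, k = 64, 8, 2

experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]  # expert weights
router = rng.standard_normal((d, n_experts))                       # gating weights

def moe_forward(x):
    logits = x @ router
    top = np.argsort(logits)[-k:]             # pick the k most relevant experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                  # softmax over just the chosen k
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

x = rng.standard_normal(d)
y = moe_forward(x)
print(f"active expert params per token: {k}/{n_experts} = {k/n_experts:.0%}")
```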

The other question is why the Chinese are doing all this, in particular releasing the model weights. That seems like a poor strategy if you're planning to monetize the thing, so what are they getting out of this?

Expand full comment
Daniel B. Miller's avatar

> The other question is why the Chinese are doing all this, in particular releasing the model weights. That seems like a poor strategy if you're planning to monetize the thing, so what are they getting out of this?

Their strategy is similar to Meta's with llama. If you have a state-of-the-art product but a competitor has beaten you to market and has a seemingly insurmountable lead, putting your similar (or even better) product out for free essentially "de-monetizes" your competitors, because users will migrate quickly to the just-as-good-but-free platform. That serves to even the playing field for the next iteration, removing most of the legacy and switching-costs advantage your competitors had.

One interesting thing about this strategy is that it can be spun in two very different ways. To those worried about privacy, security, freedom of choice and concerns about trusting revolutionary tech to a few well-funded companies, you spin it as a Richard Stallman, "bits wanna be free" libertarian stick-it-to-the-man move.

But to the C-suite (or in this case likely the CCP), you can spin it as "Art of War" scorched earth: if you don't get the spoils, at least make sure your enemy doesn't get them either.

Expand full comment
Paul Brinkley's avatar

I can't help but think you have an answer, but are encouraging the rest of us to think about it first.

(I think I have an answer too, and it's roughly the same answer to "why is Tiktok free?".)

Expand full comment
John Schilling's avatar

I don't actually have an answer. Well, really, I have several possible answers and I suspect it's not a complete set, hence the question.

One plausible answer is that it's about raising VC money. Release a pretty-good LLM, tell people you trained it really cheap (lie if necessary), then say "...just think about what we could do with some of those sweet, sweet gigabucks, don't you want a piece of that action". What to do when the gigabucks arrive is left as an exercise for the student.

Another is that they wanted to tank the American AI industry, either long-term because the CCP doesn't want America to have AI, or short term because some rich Chinese guy wants to get even richer by shorting NVIDIA et al. So release something that makes it look like OpenAI, Anthropic, et al have all been wasting their time and nobody needs nearly as many of NVIDIA's chips as we thought.

Another is that they plan to make money more or less legitimately in some way that I don't understand but which includes releasing the code and all the model weights open source. Maybe that's a loss leader and the real product has yet to be released.

Any more?

Expand full comment
Paul Brinkley's avatar

The answer I was thinking of was "farm an enormous amount of data analytics about American users (and any other country while they're at it - why not) and exploit that in some way to either foment political chaos (a la Internet Research Agency), collect telemetry on US internet traffic for cyberwarfare or commerce purposes, or discover ways to influence US thinking in ways that benefit the CCP".

Another possibility is using TikTok as some sort of privileged platform for actual malware. (Theoretically, *any* foreign-based popular media app could be used, but I get the sense TikTok has unusually established ties to CCP institutions. Plus, it's China, well known for government-commercial integration in ways the US would not prefer.)

Expand full comment
Nicholas Weininger's avatar

Ok, I buy that they did some actual innovative work to make it so small. I still wonder if that work might have been turbocharged by having access to a large set of pre-existing weights that, um, fell off a truck, so to speak.

Expand full comment
Rothwed's avatar

In one sense, the DeepSeek models are incredibly efficient by necessity. They had to train on a bunch of H800 chips instead of the more capable H100s the American companies use, largely due to the chip embargo. There's simply no way they could compete with someone like OpenAI without massive optimization. OpenAI doesn't need to worry about optimizing, because they can just buy the newer and more powerful GPUs that NVIDIA is constantly cranking out. Although it is impressive how much DeepSeek was able to do with remarkably less.

I saw this linked on DSL and it's a pretty good overview:

https://stratechery.com/2025/deepseek-faq/

Expand full comment
User's avatar
Comment deleted
Expand full comment
Rothwed's avatar

This is covered in the link:

> Scale AI CEO Alexandr Wang said they have 50,000 H100s.

I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs, they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.

Expand full comment
yafah's avatar

I think it's likely to have to do with r1 last week and some sort of additional information coming out over the weekend convincing investors that it's ~real. One possibility is the leak in The Information that the Meta AI infrastructure director told colleagues that r1 might outperform even the next version of Llama, and that Meta formed two teams focused on studying it, or something of that nature (I'm going off of secondary reporting about The Information article). Note that in the past DeepSeek has released models that had inflated benchmark results that decreased when you used ~contamination-proof benchmarks (see https://arxiv.org/pdf/2403.07974 ).

Typically the markets have reacted plenty fast to AI news and so I'm pretty confident that _something_ came out, and I don't think the app alone explains it.

Addendum: the Meta explanation doesn't feel entirely satisfactory, since Meta's stock price has gone up, not down. Though this could just be due to them having less AI revenue/benefiting from open-source models more.

secondary source on The Information article, since The Information itself has a giant paywall: https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/

Expand full comment
Moonsweater's avatar

I think it's that r1 was a wakeup call: v3 simply feels kind of mid, and it takes a decent amount of domain knowledge about how reasoning models are made to conclude that DeepSeek would be able to inexpensively replicate o1 levels of intelligence. If I can dig up the exact X thread where those details are discussed, I'll post it here.

Expand full comment
Francis Irving's avatar

DeepSeek R1 and the paper about it were released this week, as well as the app. They're the inference time compute reasoning versions of DeepSeek v3 which came out at Christmas.

It's important from technical/product POV because:

1) First open weights inference-time long-chain-of-thought trained reasoning model (so like OpenAI's o1)

2) It was trained extremely cheaply, so is fast and cheap.

3) They published the paper on how it works, and the methods are different from, and simpler than, the big labs'. Rumour has it Meta have stopped their big training run, and are doing it again using the methods in the Chinese paper.

Geopolitical importance is that it means China is only about 3 months behind the US in AI performance, and ahead on cost. DeepSeek v3 a month ago was impressive, but less dramatic, as they were, what, 9 months behind Meta at open, capable AI effectively?

Cultural importance is that it is free and anyone can try it, whereas you have to pay for other inference time reasoning models.

How good the model actually is I'm not sure; I still prefer Claude. But partly we'll see what happens when people spend more on training with the same techniques. Performance and length of thinking go up each generation, and the graph doesn't look like it stops.
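
For a flavour of how simple the methods are, as I understand the paper: R1-Zero was trained with plain RL against rule-based rewards (a format check plus a correctness check on problems with verifiable answers), with no learned reward model. A minimal sketch in Python; the <think>/<answer> tags follow the paper's template, but the function and the exact reward values are my own invention for illustration:

```python
import re

def reward(completion: str, ground_truth: str) -> float:
    """Toy rule-based reward: a format check plus an exact answer check
    on a verifiable task (e.g. math problems with known answers)."""
    # Format reward: chain of thought in <think> tags, final answer in <answer> tags.
    m = re.fullmatch(r"(?s)\s*<think>.*</think>\s*<answer>(.*)</answer>\s*", completion)
    if m is None:
        return 0.0  # malformed output earns nothing
    # Accuracy reward: exact match against the known-correct answer.
    if m.group(1).strip() == ground_truth.strip():
        return 1.0
    return 0.1  # small credit for at least following the format (invented value)

print(reward("<think>2 + 2 = 4</think><answer>4</answer>", "4"))  # 1.0
print(reward("just 4", "4"))                                      # 0.0
```

Scores like these over sampled completions then drive an ordinary policy-gradient update (the paper uses GRPO); that's essentially the whole reward side of the pipeline.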

Expand full comment
John's avatar

I feel like Leopold Aschenbrenner's "Situational Awareness" predictions are becoming more and more spot-on by the day: $500 billion for a new AI cluster, shock and panic about China's AI prowess... just waiting for trillion-dollar training runs and nationalization of AI capabilities now, right?

Expand full comment
Shawn Hu's avatar

Is it totally clear that the dip in the market today is a direct result of the DeepSeek release?

As some in my local circles have remarked, NVIDIA in particular is a historically volatile stock, and looking at the 6 month graph makes today look not that out of place.

Expand full comment
Julian's avatar

Financial reporting is full of Monday-morning quarterbacking and storytelling that is just a little too perfect. I am skeptical that the sell-off was directly or only due to something related to DeepSeek. Just because articles are written saying so doesn't make it true.

A lot of other stuff happened over the weekend that the market would be responding to. And the S&P started a sell-off on Friday. Additionally, if it was DeepSeek, we'd expect China's market to rise, but it's been flat for the last 5 days and was flat yesterday. Meanwhile the CAC and DAX are way up the last two days while the FTSE is up slightly.

You can look at financial charts and tell any story you want. It's like astrology that way.

Expand full comment
Charles “Jackson” Paul's avatar

Sure, but NVIDIA’s decline was part of a broader tech sell-off, which, based on the timing and the commentary of the people doing the selling, seems to have been driven by DeepSeek.

I mean, it’s *possible* it was driven by other factors, but if so, what?

Expand full comment
beowulf888's avatar

Just about all the semi companies except Intel have significant revenue streams from AI products, as do a bunch of other tech companies. DeepSeek showed that a Chinese firm was able to produce an open-source AI model with roughly 1/50th of the resources of state-of-the-art models and still beat OpenAI’s o1 on several benchmarks. And they did it with older NVIDIA chips and without a huge data center. Suddenly, NVIDIA, the companies in its supply chain, and its competitors have developed feet of clay.

Expand full comment
Charles “Jackson” Paul's avatar

DeepSeek released an app, which received a lot of popular attention and approximately 300k daily downloads, which updated the markets toward it being important.

Expand full comment
Scott Alexander's avatar

Starting to think Wall Street might just be dumb.

Expand full comment
Eremolalos's avatar

That was Michael Lewis's conclusion at the end of The Big Short.

Expand full comment
EngineOfCreation's avatar

What did the prediction markets say about either DeepSeek directly, or the relative US/Chinese AI capabilities?

Expand full comment
Hank Wilbon's avatar

What might the market behavior have been if Wall Street were smarter? Should there have been a big sell-off a month ago? Should NVIDIA and related stocks have gone higher today? Funds are all working on partial information about what might happen in the future and trying to outguess each other's behavior. Seeming order follows seeming chaos follows seeming order in the markets.

Expand full comment
Monkyyy's avatar

If the stock market was sane, all trading would be insider trading and low volatility

Expand full comment
Jeffrey Soreff's avatar

Yup! To paraphrase from something I vaguely remember: "If the efficient market hypothesis were true, the market should be a very quiet place." (to agree with your "low volatility")

Expand full comment
Monkyyy's avatar

Not even the full efficient market; merchant markets are far more stable

Expand full comment
Scott Alexander's avatar

I would have expected a market move on the news that DeepSeek can make great models for 5% of the cost of everyone else, and not on the news that the great model that was free for anyone to test and see how great it was made it to the top of the app store.

I'm much less sure which direction the market should have moved, but most of the people in AI I'm following seem to think it should be up.

Expand full comment
Daniel's avatar

"Great" is doing a lot of work here. People who aren't super plugged into the AI space can't immediately distinguish marketing hype and fluff from the real thing. Wall Street didn't know it was a great model until it went viral. You can't fake 300k daily downloads.

It's not *that* hard to find an edge in the market. Everyone who makes a living trading has an edge somewhere. Being as plugged-in to the AI space as you are would almost certainly count as an edge for a skilled trader.

Expand full comment
Michael Watts's avatar

> You can't fake 300k daily downloads.

I feel that this is probably similarly challenging to faking NYT bestseller status.

Expand full comment
TK-421's avatar

Mais oui on your above comment about Wall Street being dumb.

However, one can steelman a decision to not sell on news of DeepSeek's technical capabilities but to do so when it jumped in the app store. It's not unheard of for capabilities to be overstated or for a model to do well on benchmarks while being underwhelming to use, etc.

Becoming the top app is direct evidence to suggest that this is something real, something from which users derive real value. Not an absolute guarantee: plenty of weird stampedes in app popularity, fads where people try out the new thing but don't stick with it. But it's suggestive. Much more suggestive of real consumer behavior than an impressive technical demonstration / paper.

And yes, there is an obvious case that Nvidia, if no one else, should be up because more interest in AI equals more inference time compute. The counterargument is that DeepSeek just demonstrated that previous assumptions about training time compute demand could be invalid.

Prices move for lots of reasons though.

Expand full comment
beowulf888's avatar

You've got it backwards. DeepSeek was able to put out a credible AI for $13 million. Suddenly, there's no need for massive data centers full of Nvidia boxes. NVDA's current revenue stream is threatened, and DeepSeek puts its future revenue streams from upgrade cycles in jeopardy.

Suddenly, the big fab companies like TSMC are facing less business.

The revenue streams of companies with DSP products are threatened: AVGO, CDNS, 2454, SNPS, NXPI, CEVA.

The revenue streams of companies in the optical space are threatened: AVGO, TSM, MRVL, CSCO, GOOG, CRDO, ANET, FN, COHR, Innolight, Eoptolink.

The revenue streams of semi companies in the ethernet switching space are threatened: AVGO, MRVL, MCHP, CSCO.

And so on. I've probably missed a bunch.

AI was always a bubble waiting to burst. It may be healthy that NVDA lost only $750 billion in its market cap today.

Expand full comment
striking-cat-tail's avatar

idk, information takes time to percolate. (putting the efficient markets hat on) might be that there are multiple levels of meta here? Like the existence of DeepSeek R1 isn't enough, you need to also know the reactions to it, and the reactions to the reactions, across multiple levels of the supply chain... maybe multiple pieces had to fall together to cause the selloff

(or in other words it isn't even clear to me that this should cause the Nvidia stock to drop rather than rise; Matt Levine has mentioned before how often in insider trading cases the perpetrators correctly determine that the stock would move but get the direction wrong)

(taking the efficient markets hat off) there is a lot of alpha in not being dumb then :D

Expand full comment
Eremolalos's avatar

About the dream of an AI-supported future, where there is ample food, housing and good medical care for everyone, and nobody is forced to do tedious exhausting work to get those things, and some do not have to work at all: How would we transition from where we are now to that? Let’s assume that the AI of that era is utterly benign. It has no agendas of its own, and does a great job of whatever we tell it to do. If we want advice on how to transition to that life it will advise us, but it does not govern us.

So how does it happen? Here are some things that make it hard for me to believe we will transition to anything like the utopia people are dreaming of:

-Wealthy people could already do far more for the poor than they do. Wealthy countries could already be doing far more for the poor ones than they are. Why wouldn’t the things that put a ceiling on how much of our wealth we share still be operative?

-The companies making advanced tech seem to run the way other businesses do: They want profits, they want to grow, they want to keep the power their products give them. There is some evidence of their being concerned about doing harm of different kinds with their products, but that concern looks like it is a far smaller factor in decisions than are the usual business motives of profiting and thriving. There are probably things these companies could do that would nudge more AI-related benefits or beneficial AI itself toward the poor. But is that likely?

-I have not heard or read anything about efforts to use present AI to make life better in poor countries. I’m not sure what all the possibilities are. One that comes to mind for me: In countries with way too few medical professionals, programs that train bright people without medical degrees to use AI for diagnosing people, reading imaging, etc. Wouldn’t be as good as having an MD, but seems like it would be a great improvement. But is there anybody other than a few dreamers thinking about that?

-I can think of many ways that superintelligent AI could be used to acquire power and wealth. For instance there’s the entertainment/tech addiction route — coming up with new forms of virtual entertainment. There’s designing new medical and scientific gizmos that have AI components. (Surgeons could wear glasses that show the AI the tissue they are looking at. AI could tell them in real time which areas are cancer.). There’s using AI’s pattern recognition abilities to make business decisions — for example, about where to buy or build real estate, or what to invest in. Seems like lots of things in sales that are now an art could be done via AI pattern recognition. If it can recognize cancer, it can recognize a great site to build a new city. So why wouldn’t we move not towards more equality, with everyone having a good life, but towards greater inequality, a la William Gibson?

Expand full comment
Kit's avatar

When I read people speculating about possible end-game AI futures, they always give off a sci-fi vibe. By that, I mean that the game is to imagine us, humanity, but with a few interesting changes, and not to worry much about just how those changes came to pass. But can an AI utopia ever do away with most jobs? How might this happen in practice?

Anyone remember that old game M.U.L.E.? Or how about we just imagine a small island of ten people with a fully functioning mini economy? One actor develops AI and with it takes over the juiciest market, throwing another guy on unemployment. The AI guy is producing more, prices fall, and everyone benefits. The safety net holds and that guy on unemployment gets back on his feet and starts contributing to the general economy again. Let’s imagine that he enters a completely new market.

AI guy takes over the next market, with similar results, only this time the newly unemployed guy goes to work for another non-AI business. Output again goes up, prices fall, and most people are better off. But I doubt that new employee makes out as well. Still, let’s say that he consumes as much, but less relative to his fellow citizens.

We go through a few more cycles. The safety net starts to come under strain. A few great new markets spring to life, but increasingly people find themselves forced to work as employees for less interesting money. But what about the AI guy? He’s always been driven by profit, but what can he actually buy with his money? He doesn’t really need more coconuts, fish, or massages. Still, the fish market looks the tastiest of what’s left. He’ll have it, even if it means throwing 30% of the population on the unemployment line.

The safety net buckles. The citizens vote to raise taxes dramatically, over the objections of the AI guy. He’s asking himself whether the fish market even paid for itself. He disrupts a few of the more innovative companies, just to keep himself amused. Then the day comes when only the coconut market seems worth his time, even though it employs a full 70% of the population. A rather surly 70%, it must be added. Does AI guy basically pay them to do nothing so that these former employees can continue to buy his other products? What does money even mean by now? What would ever convince AI guy to push this further? Not money. And what’s his status without money to flaunt? Power, perhaps? Yeah, he’s probably hungry for that. And what about those unhappy few whose dreary but essential jobs resist automation? What are they in it for, apart from preventing general collapse should they ever stop working, and the privilege of eating at the same all-you-can-eat buffet as their unemployed brothers?

And in the end, I don’t see how we get to utopia in a capitalistic system.

Expand full comment
Adrian's avatar

> And in the end, I don’t see how we get to utopia in a capitalistic system.

Good thing then that modern society has nothing in common with a small island inhabited by 10 people.

If you make up a completely fictional scenario and then proceed to make up how everyone would behave in this scenario, you really shouldn't draw any conclusions for real life.

Expand full comment
Kit's avatar

I think you could have made a better effort to reply than a snide remark. Two can play such a game. If I can show that your “nothing in common” is wrong, will you admit your error?

It was a toy example meant to bring a few essential issues, as I see them, to the fore. If you wish to write a higher-effort reply, ideally along the lines of how you see us getting from here to an AI utopia, I’d be happy to read it.

Expand full comment
Adrian's avatar

The onus is on you to show that your analogy holds predictive power over real life. I see no sense in engaging with an analogy that's inapplicable to the question at hand.

> If I could show that your “nothing in common” is wrong, will you admit your error?

No, because it's not enough to show that your analogy holds _something_ in common with reality. You need to show that it's _sufficiently close_ to reality to matter.

> how you see us getting from here to an AI utopia

I have no idea, but that's not my point.

Expand full comment
Eremolalos's avatar

Toy examples vary quite a lot in how useful they are. For instance if I wanted to know what would happen to everyone commenting on this open thread if I put them into a giant highly flexible container and dropped them off a skyscraper, I could get a reasonably good idea by dropping just you, in a garbage bag.

If you think Kit’s toy example differs from the large reality in ways that make it useless, give your reasons. Or you could come up with a model of your own for how things will play out.

If you are not interested in doing either, how about aiming your pea-shooter at John Schilling? I guarantee you’ll be totally impressed by his response.

Expand full comment
Adrian's avatar

> If you think Kit’s toy example differs from the large reality in ways that make it useless, give your reasons.

In just about everything. How is that not obvious?

* Our civilization is not an isolated island, and even most countries aren't isolated countries.

* Most countries aren't just a handful of people, neatly sorted into different industry sectors.

* Markets aren't discrete, atomic entities which can be "taken over" completely or not at all.

* There's not one "AI guy", not even one "AI company".

Come on, even most strategy games try to create a better approximation of reality.

> […] how about aiming your pea-shooter at John Shilling?

I don't think I've ever seen John Schilling making up overly simplified fictional scenarios and drawing conclusions from them. He is, however, an exceptionally insightful and articulate commenter, and I can only strive to meet the bar he sets.

Expand full comment
Kit's avatar

> The onus is on you to show that your analogy holds predictive power over real life

Why would that be? The original comment wondered how society could transition to an AI-supported future. I also wonder. I started out with a question, and then sketched out a little world showing why I have this question. I left the fundamental issues unstated, not wanting to insult people's intelligence. I fear I left some behind.

There were no predictions here, no prophecy. Where I come from, it's fine to speculate about possible futures, but there's no 'onus' on anyone wondering if such a future can actually unfold, unless he set out to prove that no such way forward is possible.

Also, where I come from a proper response to such a question would be to either point out a flaw that, once removed, shows the way forward, or to direct the person to a source that contains the solution. You chose a third way, I suppose.

> I see no sense in engaging with an analogy that's inapplicable to the question at hand.

I'll admit that I laughed out loud at that!

Anyway, now that I've sampled the quality of your thought, I'll stow what little I can and set sail a wiser man, remembering to ignore you in the future. Amuse yourself with the last word, if you wish.

Expand full comment
Anonymous Dude's avatar

You launch the socialist revolution once the AI is developed.

Seriously, probably you just use democratic mechanisms for redistribution (development paid for by taxes) once the new technology exists. Obviously not perfect but I can't think of anything better. I'm not saying nothing better exists, but I'm not smart enough to think of it.

Expand full comment
beleester's avatar

>-Wealthy people could already do far more for the poor than they do. Wealthy countries could already be doing far more for the poor ones than they are. Why wouldn’t the things that put a ceiling on how much of our wealth we share still be operative?

If the amount of wealth we have increases, then sharing the same percentage will help more people. Make the pie big enough and you can fund a UBI or some other redistributionary scheme that gives everyone enough to live on without any of the big AI winners feeling like they're being taxed too hard.

(I've occasionally joked that the Good AI future will come about by giving everyone shares in the AI-run companies so they can live off of investment income, thus achieving full communism in the most capitalist way possible.)

>-The companies making advanced tech seem to run the way other businesses do: They want profits, they want to grow, they want to keep the power their products give them. [...] There are probably things these companies could do that would nudge more AI-related benefits or beneficial AI itself toward the poor. But is that likely?

That's the cool thing about capitalism (in theory) - you don't need companies to care about doing pro-social things. You just want them to make as much value as possible, in their typical amoral greedy way. You *want* them to be making cool AI-powered cancer scanners or whatever, because that's what "creating value" means - more stuff, better stuff, cheaper stuff. Then you tax that value and redistribute it, so that everyone can buy the new stuff.

(And also maybe ask the friendly AI to come up with a non-capitalist model for well-known market failures like housing or health care, but the capitalist model covers a lot of stuff.)

>-I have not heard or read anything about efforts to use present AI to make life better in poor countries. I’m not sure what all the possibilities are.

There was a recent post on the SSC subreddit discussing using ChatGPT as an English teacher in developing countries. IIRC the results were "not as good as an actual human teacher, better than not having a teacher."

Expand full comment
Antonio Max's avatar

>"not as good as an actual human teacher, better than not having a teacher."

Such a good way to see it: small improvements over no improvements. Hard not to accept this as plain fact.

Expand full comment
Eremolalos's avatar

Well I get the idea — if there is a giant superabundance of stuff, whether the stuff is money or food or housing, enough will filter down to the destitute to raise their standard of living greatly. Do you in fact think that’s the way it’s likely to play out, assuming AI gets smarter and smarter but stays in the obedient helper role? Or are you more in debate mode? I do not feel knowledgeable enough about economics etc. to debate this. I am just speculating, without an ax to grind. If you are in debate mode, would you be willing to move to speculation mode?

Other ways I can see it playing out that seem plausible to me:

-AI development companies become so rich and powerful that they are running the country, with US president more a figurehead. Companies then either merge or fight, til there is one entity that makes the tech and has substantial control over all the vital things that now have AI as essential parts of them — research, electrical power, medical care, transportation. Now that company pursues its own agenda. Does not seem implausible to me that its main goal would not be spreading the wealth but accumulating power, harming enemies, etc.

-AI’s ability to do almost anything is not used to produce lots of food and housing, but to produce astonishing luxuries for the very wealthy few. There sure have been other societies where there are a few people with, you know, solid gold gem-encrusted furniture, and a large number of destitute people in society who are eating out of compost piles.

Expand full comment
Antonio Max's avatar

>-AI development companies become so rich and powerful that they are running the country, with US president more a figurehead.

So basically Idiocracy?

Expand full comment
EngineOfCreation's avatar

Let's say the AI tells us that in order to achieve the utopia you described, we all have to wear a purple paper hat twice a day, 10 minutes each. The resulting societal changes, for reasons which you couldn't possibly understand even if the AI was explaining them to you for your entire remaining lifetime, will bring about utopia.

Would you follow that advice? If you do, the AI can realistically make you do anything it says, and it's your ruler in all but name. If you don't, you admit that you don't believe it can bring about utopia.

Similarities to already existing imaginary rulers were unintended, but inevitable.

Expand full comment
Sol Hando's avatar

I don’t follow the logic.

If my dad said “Please come to my house and get the mail from the mailbox. The prosperity and fate of our family depends on it.” I’d do so in a heartbeat. The cost is low and the benefit, even on blind trust, is extremely high. Maybe he has information I don’t, or maybe he’s just way more intelligent than I and couldn’t explain the plan in a way I could understand even if he bothered to try.

If my dad said, “I need you to kidnap the governor and bring her to my home, the fate and prosperity of our family depends on it.” I definitely wouldn’t blindly follow the instruction.

If an AI requires us to do something mildly inconvenient and inexplicable, but actually does follow through with providing huge prosperity, and for reasons we can’t understand these things are related, that doesn’t imply the AI can realistically make me do whatever it says.

Expand full comment
EngineOfCreation's avatar

>If my dad said, “I need you to kidnap the governor and bring her to my home, the fate and prosperity of our family depends on it.” I definitely wouldn’t blindly follow the proscription.

Yes, but you refuse to kidnap the governor for the same reason that you agree to get the mail from the mailbox. You weigh costs versus benefits, and estimate how likely it is that your dad/the AI knows more than you. But as OP established, the AI in this scenario is maximally capable, basically a genie in a bottle. If you truly believe that the benefits outweigh the costs in all cases (it's about utopia, after all), then why wouldn't you kidnap the governor? People get kidnapped and murdered all the time for arguably much lower benefits than Utopia. The AI might assure you that no harm will befall the governor, your attempt will certainly succeed, and you will not suffer any negative consequences for it. The AI, in short, can make its request seem as effortless and rewarding as getting the mail from the mailbox.

And if you still refuse on principle, just like you could refuse your dad just to be an ass or something, then you valued your principle higher than the expected outcome of "Utopia by AI". In that case, either utopia does not have the highest possible value in your belief system, or you don't think AI can bring it about.

Expand full comment
Sol Hando's avatar

Utopia does not have near-infinite value in my belief system, and assigning it that kind of value can lead to some pretty terrible conclusions. You also have to weigh the probability that whoever is giving the orders is lying to you, which increases the more insane the order. The probability of someone lying also increases with how poorly they can explain their reasoning for an action.

"Hans, have you ever thought that what we're doing is wrong?"

"Sure! It's pretty obviously evil what we're doing, but it's fine because [Insert Authority Here] tells us it's for a Utopia. The long term gains override any consideration of short term misery."

"Ah! That makes me feel much better. Too bad for these Jewish fellows, but I'm sure it's comforting for them to know there's a greater good we're working towards that makes it alright."

A reasonable perspective is that Utopia is extremely valuable (not infinitely so), but its value gets discounted by how uncertain it is that an immoral action will actually bring that utopia about. Long time frames, no causal connection between the action and the outcome, the seriousness of the action required, and many other considerations can all diminish the value of that Utopia to almost nothing.

If someone told me they'd give me a million dollars in a month if I did something mildly inconvenient every day, and I had some reason to believe they were telling the truth, I would do it. If they told me they'd give me a trillion dollars tomorrow if I just murder someone today, I wouldn't, even if the value of that trillion dollars could save a lot of lives, and improve many others.

Expand full comment
EngineOfCreation's avatar

>Utopia does not have near infinite value in my belief system and can lead to some pretty terrible conclusions when it does. You also have to weigh the probability that whoever is giving the orders is lying to you, which increases the more insane the order.

Yes. That's literally what I wrote. If you're not fully on board with the proposed utopia and/or with the AI supposedly able to achieve it, then that's it, we're done. The AI does not rule you. That's what I said. I don't know why you keep arguing the point. I agree with it.

>If they told me they'd give me a trillion dollars tomorrow if I just murder someone today, I wouldn't, even if the value of that trillion dollars could save a lot of lives, and improve many others.

Yes. That's literally what I wrote. If you don't want to murder on principle, then you place that principle higher than the trillion dollars. They do not rule you. I agree with it.

My point is simply: If the trillion dollars were enough for you, and the trillion after that ad infinitum, then they would rule you. Agreed?

Expand full comment
comex's avatar

It’s worth pointing out that superintelligence does not equal genie. No matter how intelligent you are, there are fundamental limitations on how much you can predict based on uncertainty in the data. The real world is a chaotic system. Even if you have infinite compute and can literally simulate every possible universe, there are likely many indistinguishable universes that lead to wildly different futures. You don’t know what people are thinking; you don’t know what random events will happen.

Kidnapping the governor would be a very high-variance play. First of all, it might not succeed, as we saw. If it did succeed, it would predictably produce harsh political blowback in the short term, but the long-term effects would be hard to predict. Consider a somewhat-more-successful recent example of political violence, Jan 6. It inevitably hurt the militias and Trump in the short term, but in the long term it might have benefitted both of them, especially with the recent pardons. Could a superintelligence have predicted this? Not necessarily. After all, if Trump had lost the 2024 election, even narrowly, it would have been seen as a further rebuke to everything associated with his campaign. But I doubt there was enough information in January 2021 to confidently predict the outcome of the 2024 election.

In general, I expect that a superintelligence trying to bring about a desired long-term effect would prefer low-variance plays. If it’s better than humans at predicting the future, then it can leverage that skill most effectively when the future is more predictable. Which is convenient, because low-variance plays are pretty often morally-good ones. Instead of asking people to kidnap the governor, ask them to run for governor and lots of other offices too. Predict the most effective political arguments and tell your candidates how best to make them. Morally speaking, this isn’t perfect (effective political arguments are often logically bad arguments), but it’s probably morally better than kidnapping the governor.

But if a superintelligence did recommend an immoral and high-variance play for whatever reason, then a rational human follower ought to object. Not just because the ends don’t morally justify the means, and not just because the superintelligence might be lying (although those are both valid objections), but also because the means won’t necessarily accomplish the ends, creating a substantial risk of doing something immoral for no benefit at all.

Expand full comment
Throwaway1234's avatar

> Would you follow that advice?

From recent experience, what would actually happen is this will go political and half the population will become vehement anti-purple-hatters, with the more extreme groups beating up anyone they see wearing a purple hat.

This will remain true even if the justification for wearing the purple hat is comprehensible and the effects visible across population statistics. Anti-purple-hatters will double down on denying any positive outcome from the hats, will complain about being coerced into wearing purple hats, and will claim all manner of personal problems arising from the hats.

Expand full comment
Eremolalos's avatar

<Would you follow that advice? If you do, the AI can realistically make you do anything it says, and it's your ruler in all but name. If you don't, you admit that you don't believe it can bring about utopia.

I don't think that's true. Lots of people, including me, have been told to do or not do something for health reasons, and did not disbelieve the doctor, yet did not follow the doc's advice. And it's also common for people to avoid doing tasks that, by their own judgment, there would be great benefit in completing, and great harm in leaving undone. It's nowhere near as simple as belief = compliance, noncompliance proves disbelief.

And anyhow, does that bear somehow on the question of how, exactly, we would get to communist AI utopia?

Expand full comment
EngineOfCreation's avatar

> did not disbelieve the doctor, yet did not follow the doc's advice

Nobody made the claim about your dentist being your god-king ruler, did they? The advice was something like "if you don't start flossing, your gums will keep bleeding", and you made an intuitive calculation on the cost vs benefit of following the advice. Your dentist obviously has little interest in your desire to scroll on your phone a few minutes more each day, but it's your few minutes, so screw him.

But utopia, for practical purposes, has infinite positive value. Otherwise, what kind of utopia is it really? No finite cost (such as the hats) ought to be too high to achieve it. So if you are serious about describing infinite-value utopia, and you posit an AI credibly claiming to know the practical way to get (and keep) it, I think my point stands.

> And anyhow, does that bear somehow on the question of how, exactly, we would get to communist AI utopia?

How would I know? I can't quite prove it, but I'm not that kind of AI. I just pointed out the flawed premise of your question. Either an advisor whom we fully trust with effecting large-scale societal changes effectively rules us, or such an advisor is not fully trusted after all.

Or maybe you do have a point with your dentist. Maybe the flaw in the premise is that there is no such thing as utopia at all because we, as a species, will never agree on anything so important and will always say "screw you I don't want to wear silly hats even if it's for utopia, I want 10 more minutes on my phone". That would probably say more about us than about any AI.

Expand full comment
Eremolalos's avatar

<But utopia, for practical purposes, has infinite positive value.

And death has infinite negative value for most people, and yet many people skip the equivalent of flossing when they have a life-threatening condition. I know someone right now whose liver function test results are really ugly, as a consequence of his drinking. He is well-educated, researched the liver thing, and understands how serious his situation is. Yet even while sober he shrugs and says he can't see stopping alcohol right now.

Expand full comment
EngineOfCreation's avatar

Well that's your answer then. There is no guaranteed way to utopia because people can just shrug and say "no", whatever the AI says.

Expand full comment
FLWAB's avatar

The general idea is that superintelligent AI will be able to produce more supply of any particular good than there is demand for it. That drives down costs for everyone. If the supply of housing, food, and medical care quadruples while demand remains constant, then the cost will plummet and more people will be able to afford it.

Of course demand doesn't remain constant: if the price goes down, more people will want it, and if people spend less of their money on food or housing they'll want to spend more of it on healthcare. But if the supply increases enough then increased demand will be moot. So the theory goes.

Expand full comment
gdanning's avatar

>Of course demand doesn't remain constant: if the price goes down, more people will want it

That conflates demand with quantity demanded; when price decreases, quantity demanded increases -- that is just the law of demand. When demand (the willingness and ability to purchase different amounts at different prices) changes, the entire demand curve shifts. But you make a good point re "if people spend less of their money on food ..." -- that would change the ability to pay for other goods and services, and hence would indeed shift the demand curve for those goods and services.
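
To make the distinction concrete, here's a toy linear market (curves and numbers invented purely for illustration) where an AI-driven supply shift moves us along a fixed demand curve:

```latex
\[
Q_d(P) = 100 - P, \qquad Q_s(P) = P
\;\Longrightarrow\; 100 - P^* = P^*, \quad P^* = 50,\; Q^* = 50.
\]
% AI quadruples supply at every price:
\[
Q_s'(P) = 4P
\;\Longrightarrow\; 100 - P^* = 4P^*, \quad P^* = 20,\; Q^* = 80.
\]
```

Quantity demanded rises from 50 to 80 as a movement along the unchanged curve \(Q_d\); the demand curve itself would shift only if something other than price changes -- e.g. cheaper food freeing up income for healthcare, raising \(Q_d(P)\) for healthcare at every price.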

Expand full comment
Eremolalos's avatar

Well yes, I see that, but what about the nuts and bolts? Seems like being able to build houses more cheaply could play out all kinds of ways. For instance, the development and production of robots and new house designs will also be beneficial for the country that does it. What would interfere with that country, let’s say it’s the US, hanging on to the advantages of having the house bots and house plans? Seems like there would be a great deal of money to be made by selling the houses somewhat more cheaply, but nowhere near as cheap as they are to build. House-builders and suppliers of things needed for the task would then make a lot of money. I’m sure the price would drift downward as companies undercut each other, but would it ever drift low enough so that it was affordable in the very poor countries? If I owned a house-building company and thought like a business person I would be looking for ways to keep house prices from becoming dirt cheap. I’d band together with others in the business and make agreements not to lower prices too much. I’d lobby for some kind of certification that’s expensive to get, and without which people are not allowed to build and sell houses.

This seems like a situation in which a governing body of some sort would have to intervene in order for the benefits of the new tech to reach poor countries, and even the American poor, at more than a trickle.

Expand full comment
Melvin's avatar

Houses are basically free in a lot of places anyway. Only land is expensive.

(New houses aren't free, but a block with a liveable-but-unfashionable house sells for the same price as an empty block; that's what I mean by houses being free.)

Expand full comment
FLWAB's avatar

I mean, we call those kinds of price-fixing arrangements cartels and monopolies, and we do have governing bodies that intervene on that in many countries.

Personally I don't believe that AI will be able to create the productivity gains needed for the kind of "Fully Automated Luxury Communism" that many people speculate about. But anything that increases the supply of something will decrease the price, generally speaking. If it costs me $0.005 per square foot to build a house thanks to AI robots or whatever, and you're willing to pay $0.01 per square foot, then I can make a profit selling to you.

Expand full comment
User's avatar
Comment deleted
Jan 28
Expand full comment
FLWAB's avatar

Economically I think there's an argument that an arrangement like that is suboptimal, even for the country running the monopoly. Ethically, I don't see a problem even if that happens. If only the US manages to invent super-AI and they don't want to share it, that doesn't really hurt anyone. The US is better off, and everyone the US sells stuff to is better off (if they weren't better off, then why would they buy the stuff?). What's the harm? But I understand that people who endorse Fully Automated Luxury Communism have a philosophical objection to people having good things if everyone else doesn't get the same good things.

Expand full comment
Antonio Max's avatar

> If only the US manages to invent super-AI and they don't want to share it, that doesn't really hurt anyone

Have you seen America lately? They are hurting themselves the most, it is kinda cringe actually.

And eh, China: 1.5b people. US: 350m people. The ratio of Chinese AI engineers to US ones is lopsided in China's favor; I'm not sure enough people appreciate the probability of Chinese AGI emerging first.

Expand full comment
User's avatar
Comment deleted
Jan 28
Expand full comment
User's avatar
Comment deleted
Jan 28
Expand full comment
Eremolalos's avatar

So who, exactly, is going to manage the situation so that the focus is on making cheap AIs or somehow limiting export controls? And who is going to address the infrastructure problem in, for example, Africa? If we are still a democracy at that point, how likely is it that we would have a president & a congress who would be in favor of doing that? I myself am not sure we are going to be fully a democracy if AI becomes AGI and then ASI. Seems to me that the companies making AI and crucial components will have more money and power than the elected government. Are those companies likely to commit to helping set up the world so that all can enjoy automated luxury communism utopia?

Expand full comment
Antonio Max's avatar

>Are those companies likely to commit to helping set up the world so that all can enjoy automated luxury communism utopia?

Crucially, this was OpenAI's motto, and the source of the beef between Musk and Altman. But now that China (largely socialist/aiming to become communist) opens great models for free, maybe governments themselves can offer public AI, in the same vein as we have utilities, and this governance model may twist things a bit. (AI geopolitics happens to be my field, but I have no static beliefs over "which nation" will rule AI; I do think AI firms are overrated in their role.)

>And who is going to address the infrastructure problem in, for ex, Africa?

As soon as AI ROI becomes more evident in large-scale dynamics (societal benefit vs. no investments), it should become more palatable for politicians to bring AI infrastructure to the debate, as they do with education, security, etc. I wrote a piece not long ago where I explore the geopolitics of AI leaders vs. followers; countries are certainly aware of my arguments there (https://antoniomax.substack.com/p/techno-economic-protagonism-and-ai)

Expand full comment
Davis Yoshida's avatar

What's the current state of understanding about permanent negative mental effects from using psychedelics (specifically mushrooms)? I'm mostly curious about the dosage floor for what's been observed. Has someone had psychosis induced by only doing a small dose a single time? Is there some dose sufficiently small that no serious adverse effects have been reported?

Expand full comment
Eremolalos's avatar

What I know: a reputable publication had a chart of the safety profiles of all recreational drugs. There were 3 measures of safety: harm to the body, harm to mental health, and addictiveness. Psilocybin was the safest. Unpleasant drug experiences on it are not rare at all, but long-term bad effects such as psychosis are rare. But of course the chance of psychosis is not zero. It is higher if you yourself are subject to mild versions of psychotic symptoms that you are able to shake off, or if you have a close relation (sibling, parent) who is schizophrenic.

There is one dose that has no serious effects: 1/10 of a gram. That is the amount people took in a microdosing study, where subjects and record-keepers were blind to whether a subject was getting psilocybin or placebo. Turns out those taking the drug did not differ from placebo subjects on either measure. So 1/10 gram def won’t have negative effects, because it has no effects period!

If you are eager to try the stuff but feel quite nervous, you should only experiment with it with a trusted person present, one who has experienced psychedelics themselves. And I would recommend starting with a very small dose, something just a little bigger than 1/10 of a gram, and working up gradually til you feel a small effect and see how you like it.

Expand full comment
Davis Yoshida's avatar

I'm actually not asking for myself but for someone I know who does have some schizo-adjacent kinds of thoughts. It's a very hard risk to assess. Thanks for the info!

Expand full comment
Laurence's avatar

I think the last thing Scott wrote about this was: https://slatestarcodex.com/2018/06/06/hppd-and-the-specter-of-permanent-side-effects/

But there may have been more studies since.

Expand full comment
Davis Yoshida's avatar

Thanks, this was pretty informative even just because of the single case mentioned. I'd be curious whether that person took an atypically large dose.

Expand full comment
Eremolalos's avatar

I know a couple of people who have this. They didn't take LSD many times, and it wasn't huge doses. For both of them it's just minor visual distortions from time to time -- things like halos around lights. Both these people took LSD, not mushrooms, & Scott says in the post Laurence linked that he does not believe psilocybin (mushrooms) and peyote are known to cause this syndrome.

Expand full comment
Sam's avatar

A Trump admin thing I'm interested in is whether its anti-DEI push is going to be tailored toward achieving some actual goal of avoiding "reverse" discrimination against majority groups, or if it will just be a blanket "diversity = bad" policy. A proxy for this I'm watching is FDA's Diversity Action Plan (DAP) guidance, which proposed diversity requirements for participants in clinical studies to be used in FDA approval applications. These requirements seem pretty reasonable to me; people may have different reactions to a drug based on characteristics like sex, race, age, etc., and knowing about any such differences seems valuable to FDA in deciding whether and for what populations a new drug is safe and effective. As of now, the general DAP webpage and the webpage for a prior initiative specific to cancer studies have been taken down, though last year's draft guidance document is still available. https://www.fiercebiotech.com/biotech/fdas-diversity-guidance-clinical-trials-no-longer-available; https://www.regulations.gov/document/FDA-2021-D-0789-0111.

I'm curious if this is a temporary review situation or just a reasonable policy getting shelved because it includes the word "diversity," which strikes me as a pretty stupid way to govern. I also come at this from a legal/regulatory perspective more than from a scientific or business perspective, so I'm interested in objections to my assessment that this is good policy (by FDA standards, given its current mission), and/or suggestions of other potential proxies to watch.

Expand full comment
Pip Foweraker's avatar

There was a report earlier of some of the IRS pages that had previously mentioned forbidden words ('equity') being unavailable despite those being terms of art when dealing with tax matters.

From this, I'd infer that it's a dumb filter / applied with a broad brush, at least for now.

Expand full comment
Anonymous Dude's avatar

That's probably the extreme rational end of the Trump administration policy. What's probably going to happen is it tries to discriminate in favor of whites and men to make up for all the discrimination done by the government going the other way recently... which means the direction of discrimination will oscillate with the party in power every 4*(d3) years.

Best-case scenario is disparate impact goes away and companies adopt meritocracy as a defensible middle ground since race-blindness is theoretically defensible from either side.

Worst-case scenario is they constantly change which groups they're discriminating against depending on who's in charge and it becomes one more territory in the culture wars.

I know which way I'm betting. ;)

Expand full comment
Neurology For You's avatar

Nobody knows; it seems like an ad hoc across-the-board policy right now. It could be redefined with guidance or changed literally overnight.

Expand full comment
Monkyyy's avatar

It will likely shift undocumented Mexican immigration to be documented Indian immigration

Expand full comment
Scott Alexander's avatar

I'm not sure either way, but I don't think it's obvious that there's not a political angle to the drug testing thing.

There are definitely some drugs that work differently in different populations (carbamazepine has an extra side effect in some Asians), but these are a tiny fraction of drugs and the extra side effects are usually pretty rare. Nobody can ever test for everything, and if the carbamazepine issue was just described as "extra side effect in people with HLA-B1502" instead of "extra side effect in people with HLA-B1502, which is way more common in Asians than whites", probably people wouldn't say "Okay, time to retest every drug in every HLA group, just in case!" I think probably people seized on the occasional rare cases where different races get different side effects as a new thing that they could claim was a disparity and do diversity politics to, during an age when everyone was looking for disparities and reasons to do diversity politics. Whether this is net good or bad depends on a lot of things I don't know like the exact fraction of drugs that have something like this and how much extra it costs / how much more time it takes to do the multi-race testing.

Expand full comment
Jack's avatar

Maybe with race, but it sounds like the same taken-down page talked about sex as well, and it seems obvious to me that there could be some pretty big differences in effect between the two sexes for a lot of drugs.

Expand full comment
Melvin's avatar

My guess is they'll start by applying a dumb filter, and may eventually add some things back if they're found to be useful.

But in the meantime, researchers shouldn't need their hands held by the government to ensure that they engage in sensible research practices.

Expand full comment
anomie's avatar

Well, Pete Hegseth seems to be intent on removing all women from the military, so... the latter, probably.

> Regarding women serving on the front lines, Hegseth has said, "I'm straight up just saying we should not have women in combat roles. It hasn't made us more effective. Hasn’t made us more lethal. Has made fighting more complicated." Hegseth argued that men are more capable in combat roles because of biological factors and said "[e]verything about men and women serving together makes the situation more complicated, and complication in combat means casualties are worse".

https://apnews.com/article/military-women-defense-hegseth-combat-916d50a7b465ccfea1aeb13bb91064b3

Expand full comment
Shankar Sivarajan's avatar

> removing all women from the military

> should not have women in combat roles.

Do you think those are the same thing?

Expand full comment
Sol Hando's avatar

It remains an open question as to whether women are a positive factor in combat roles. I don't think it's an unfair position to think they would be a detriment.

Obviously for equality of opportunity reasons we may accept women in combat roles even if they reduce overall effectiveness, but that may be a trade we aren't willing to make when literal lives are on the line.

Off the top of my head it may introduce complications like:

- Privacy/comfort concerns with separate bathrooms, sleeping and showers.

- Issues with sex and relationships, and all the downstream interpersonal problems from that

- Physical capability (although if there's a minimum requirement for a role and a woman meets that requirement I guess this might not matter).

- Hard to define psychological factors like differing levels of inbuilt aggression

- "Monthly" variations in mood, focus and energy levels

- Increased opportunity for sexual assault (A sex-starved battle-machine may be more likely to do something foolish.) Not victim blaming here, but on an overall level if this increases monitoring requirements, creates increased burden for HR, and removes the assaulter from a combat role, then this could hurt combat effectiveness.

- General camaraderie issues. I'm not informed on whether all-male or mixed groups more easily form a sense of obligation to their fellow soldiers, but I wouldn't be surprised if it was the former. Men competing for the attention of a small percentage of female soldiers may cause resentment.

- Etc.

The military is obviously pretty discriminatory (No ADA compliance in combat roles), and I think the potential lives lost for even a small sacrifice of combat effectiveness, and the general geopolitical issues downstream of a less-effective military, may justify the exclusion.

Expand full comment
Delia's avatar

If there is a minimum requirement and no performance gain from exceeding it, then it would not matter. E.g., if the task is pushing a button, then a woman who can push a button will be just as effective as a man; but if the task is running fast towards the enemy and the minimum requirement is x yards in y seconds, a woman who has just made the minimum requirement will be less effective than a man who greatly exceeds it.

Expand full comment
Xpym's avatar

>although if there's a minimum requirement for a role and a woman meets that requirement I guess this might not matter

Well, the problem is that there used to be requirements, pretty much all women failed them, so they lowered them far enough that women could realistically pass, making the requirements basically meaningless. It's a simple fact that women are much weaker on the high end, and maybe society is finally mature enough to accept that.

Expand full comment
Autumn Gale's avatar

I doubt I'll be able to find it again because I read it during the Bush Administration, but something that stayed with me was a post by a female combat veteran with lasting physical and mental issues who made the good point that where combat occurs can change quickly and unexpectedly, and although as a female soldier at the time she was not meant to be in the combat zone, she ended up there on multiple occasions. So people in her situation often had trouble accessing appropriate medical benefits or being given recognition of the fact that they served in combat, because officially they weren't supposed to have been there and it was inconvenient for the authorities to acknowledge.

So it seems to avoid this kind of scenario you'd either have to ban women from anywhere that could be near the front lines in the worst case scenario, or make it officially possible for women to serve in combat. (Or, I guess, ban women from combat and keep sweeping any situations where that doesn't match reality under the rug).

Expand full comment
Melvin's avatar

If there are to be women in combat roles then I think the most reasonable way to do it would be to have all-female units so that the privacy, sex, sexual assault and camaraderie issues are somewhat neutralised.

In an actual war I wouldn't be surprised if these units get assigned to the less demanding roles, but in the meantime everyone can play pretend equality.

Expand full comment
Mr. Doolittle's avatar

During WWII some countries at full mobilization (Finland, Russia) had a large number of females in special combat roles. Partisans also had a non-negligible number of females.

There's too many variations to say that it always went one way or another, but from my understanding the females tended to be put into all-female units when possible, and in roles that required less direct physical violence, such as pilots and snipers. Plenty of nurses and other support roles as well, of course.

Expand full comment
Melvin's avatar

Your first paragraph says "remove from the military"; the actual quote says "from combat roles".

Expand full comment
Paul Botts's avatar

Correct but I don't doubt that Hegseth will at least rhetorically land on something like "remove from all warfighting roles except medical support for the soldiers". As a practical matter that's probably not plausible now. But from conversations with MAGA relatives of both sexes that is what they think the correct answer is.

Expand full comment
Melvin's avatar

I'm not sure what distinction you're making between "combat" and "warfighting", but then again I'm also not really interested in debating hypothetical positions that you're sure someone will land on rather than actual positions they've taken.

Right now there's a couple of hundred thousand women in the US military with just a few dozen in actual combat roles.

Expand full comment
Paul Botts's avatar

Granted that it's hypothetical.

To answer your detailed question, when my MAGA relatives talk about this they are using "combat" to mean direct hostile contact with enemy combatants. Granted that most of their knowledge of these distinctions comes from ridiculous and/or outdated movies/TV shows, but anyway what they have in mind is women being taken out of roles such as supply or engineering units, intelligence-gathering, repairing equipment in bases, strategic command, etc. The one exception they're fine with is medical care for the soldiers i.e. basically the functions illustrated in the TV show "MASH".

Expand full comment
B Civil's avatar

I have been following all the chat about DeepSeek, the Chinese AI currently available. It has significantly roiled the markets today, for those of us who keep track of these things. Some very basic concepts about cost-effectiveness are being challenged hard. And then, of course, there is the possibility that it is a lot of noise. I am curious if anyone else is paying attention to this.

https://www.wsj.com/tech/ai/china-ai-deepseek-chatbot-6ac4ad33?st=gjUbE4&reflink=mobilewebshare_permalink

I am also amused by the idea that we now seem to officially have two-tier AI. Hertz and Avis are at it again.

Expand full comment
Swami's avatar

I downloaded it in minutes and I have already used it for a half dozen inquiries. It is easy and extremely well read. I am thinking of seeing if it will give me summaries of all the books I have read.

Expand full comment
Dino's avatar

My takeaway after reading Zvi is -

Deepseek will get AGI first and make it freely available.

They don't care about AI risk.

All the hyperventilating about AI risk is now a moot point.

The future existence of humans and paperclips is now a crap shoot.

There's not much anyone can do about it.

Please disagree with me...

Expand full comment
anomie's avatar

> The future existence of humans and paperclips is now a crap shoot.

Looking at some of their current output, I doubt they'd offer humanity the mercy of death. We are going to have so much 𝚏𝚞𝚗 together!

Expand full comment
artifex0's avatar

I don't think politics is nearly that predictable. Right now, a US/China treaty regulating ASI training runs seems unthinkable, but if you look through history, unthinkable shifts in the political winds are actually not that uncommon. We ought to push for it, even if it seems like a long shot.

Expand full comment
Jeffrey Soreff's avatar

I read Zvi's last DeepSeek post, and am just in the process of reading this one. My 0-th order thoughts are:

R1 seems about as capable as OpenAI's o1 (maybe slightly less so).

The main general-events expectation that I have is that

- PRC/USA competition is now fully baked into the AI landscape

but I see that as just adding a third component to

- ChatGPT's clear advances from the 4/4o level to o1 (major, but not AGI _yet_)

- Trump's endorsement of Stargate

which add up to: As nearly as I can tell, the AI opponents have solidly lost this round. Whatever the populace wants, AI work is going on (feverishly) for at least the next few years (which looks to me to be probably all the way to AGI).

>The future existence of humans and paperclips is now a crap shoot.

Yup.

<mildSnark>

Maybe Culture Minds will keep us as pets? Hopefully not neutered?

</mildSnark>

Personal note: I'm planning on testing my 7 questions on o3-mini when it is actually released (expected sometime this week), and planning on reporting what I see here. There _is_ now a question of whether I should try asking R1 the questions. I'm kind of holding back to avoid stretching myself too thin.

<mildSnark>

I'll let everyone know if the answers to any of the questions include "And that day is upon you ... now!"

</mildSnark>

Exciting times. Glad I lived to see this. Maybe I'll _see_ full AGI, which I would like to chat with.

Expand full comment
Jeffrey Soreff's avatar

Just got to the

>There’s a lot of very strong Missile Gap vibes here.

line :-)

(hopefully without a "mine shaft gap" follow-on...)

>instead are driven by ideology of ‘get everyone killed’ (sorry I meant to say ‘open source AGI’)

Zvi does have a way with words!... :-)

Expand full comment
Dino's avatar

Update - some folks are trying to do something about it, Deepseek has been hit with a cyber attack.

Expand full comment
David J Keown's avatar

This is exactly what I wanted to read. Thank you.

Expand full comment
B Civil's avatar

This is good. Thx

Expand full comment
Tatu Ahponen's avatar

I've been using DeepSeek and, as a person who has been somewhat resistant to AI hype... I like it. There's a certain crispness and cleanness to it, a no-frills feeling. I don't think it's just the cost-effectiveness that people find attractive.

Expand full comment
Gordon Tremeshko's avatar

I've been following the story, but lacking any real expertise in the subject, I don't have much to say other than regurgitate what I read elsewhere, which is that whatever the pain in the short term, if it turns out that training LLMs is a lot cheaper than we thought, ala DeepSeek, then that's a good thing; we can use it to boost productivity at much lower levels of investment, much sooner than we might have thought possible.

In the short run, though...I log into my brokerage account and I'm like Vito Corleone at the undertaker's: "look how they massacred my boy!"

Expand full comment
Carlos's avatar

It doesn't seem to be noise, I'm hearing from a lot of people that they can just run it on their laptop and it's just as powerful as the latest ChatGPT. Seems rather bad for the AI safety people.

Expand full comment
B Civil's avatar

Interesting. Thx

Expand full comment
Carlos's avatar

ASI by 2028, huh? Today Freddie de Boer put out another article pooh-poohing AI hype:

https://freddiedeboer.substack.com/p/the-shitting-in-the-yard-test

Basically, nothing truly transformational, as in, on the level of indoor plumbing or the transistor, has really come out of AI. He has a point, yeah, but then nobody thinks that we're at the ceiling for AI.

For myself, the hype does seem difficult to believe, but well, with such a short timeline, why speculate? The hypothesis that world problems just need more intelligence to be solved can perhaps just be tested soon.

I was just in San Francisco, and the homeless crisis there was really quite shocking. I saw things that I haven't seen in 36 years of living in Puerto Rico, a much poorer (and dumber) place, where I even do volunteer work with the homeless. If you're wondering, our homeless are by and large drug addicts, but you can talk to them; they haven't lost their minds (to the extent a junkie can be considered sane, at any rate). In SF, it seems like people who should be in a psychiatric facility just end up in the street instead. And it's really quite shocking, as you can see SF has utopian pretensions, but then it has this. Probably one of the most jarring contrasts I have ever seen. I really have no idea why the homeless in SF are so different from the ones in PR, but they are.

San Francisco already has the IQ and the wealth to just solve its homeless crisis, so I can't quite visualize how an ASI is going to solve it. I really think I need to hear that SF solved its homeless crisis with AI to fully buy into the hype. Perhaps if "just" some shocking scientific discoveries come out it would be enough too, but then I would have the lingering doubt of why SF, ground zero for AI, hasn't solved its homeless crisis.

I think I want to write an essay about this, about how some problems aren't really bottlenecked by intelligence, but I'm not sure if I'm in a speculative mood.

Expand full comment
artifex0's avatar

When we talk about ASI, we aren't talking about agents that are smart relative to human geniuses the way geniuses are to regular people. If we were, I'd agree that even a ton of agents like that would still find most of our intractable problems intractable.

What we're really talking about is something that might compare with our collective human intelligence the way our collective intelligence compares with the collective intelligence of, say, wild tigers. There are some good reasons to think that another difference in intelligence of that magnitude is possible. The physical limit for computational efficiency is the Landauer limit, and our brains are many orders of magnitude away from that. They also run at something like 30 watts, connected to each other only by incredibly low-bandwidth language. We can also see AI really massively outperforming us on a lot of narrow tasks, and we can see those tasks becoming gradually less narrow over time.
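
A quick back-of-the-envelope check of the "many orders of magnitude" claim (a minimal sketch; the ~1e15 ops/s brain figure is a commonly cited rough estimate I'm assuming, and a synaptic event isn't literally one bit erasure, so read the result as order-of-magnitude only):

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 310.0            # roughly body temperature, K
e_bit = k_B * T * math.log(2)     # Landauer limit: minimum energy to erase one bit

brain_watts = 30.0                # the ~30 W figure above
limit_ops = brain_watts / e_bit   # bit erasures/s if the brain ran at the limit
rough_brain_ops = 1e15            # assumed rough estimate of synaptic events/s

print(f"Landauer limit at {T:.0f} K: {e_bit:.2e} J per bit")     # ~3e-21 J
print(f"30 W at that limit: {limit_ops:.1e} erasures/s")         # ~1e22
print(f"Implied headroom: ~{limit_ops / rough_brain_ops:.0e}x")  # ~1e7
```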

Our collective intelligence gives us a lot of very powerful affordances that wild animals can't understand or predict: things like language, technology and culture. I'd be very surprised if a similar difference in intelligence didn't come with a similar set of new affordances. Maybe that looks like ASI inventing some complicated self-replicating technology, or like it finding useful patterns in things we find chaotic, though I suspect realistically it would just look like the ASI doing a lot of very complicated things that we'd struggle to follow, while the world around us became increasingly difficult to understand. If it was well-aligned enough to try and solve something like urban homelessness, I'd expect it to try a strategy much more effective than our ways of doing things, and probably not at all subject to the same bottlenecks.

If you took the smartest tiger ever to live, made it as much smarter again compared with the average tiger, and tasked it with solving the problem of freezing in winter, it would probably make almost no progress at all. If you task humans with that problem, however, they invent fire.

Expand full comment
Carlos's avatar

Yeah, I know the arguments. But I also know there are problems that are completely insoluble no matter how much intelligence is thrown at them. A basic one would be a heavily handicapped chess match: say white has the full complement of pieces, and black has just a pawn and the king. It is impossible for black to win that one. The big question is whether the problems we want to solve have some terribly clever solution that a piece of software can execute, or whether they're more like impossible handicap chess. I actually think the likeliest outcome is more like "there is a superintelligent solution, but it will have to be executed by humans", which creates the opportunity for all manner of bungling. But there is a real possibility that ASI turns out to be radically transformative in a positive way, and also a real possibility that it ends all life.

Exciting times, hope that ASI by 2028 prediction pans out, so the most exciting hypothesis can be tested.

Expand full comment
Scott Alexander's avatar

TBF bringing up anything about San Francisco is like plunging your hand in a lava pool and shouting "WELL HOW COME ALL YOUR 'SCIENCE' AND 'TECHNOLOGY' CAN'T PREVENT MY FLESH FROM MELTING, HUH, HUH?"

Expand full comment
Deiseach's avatar

Apparently one of my nephews is flying out to San Francisco to speak at some conference there, so I hope he survives the lava pool 😁

If anybody knows anything about Photonics West 2025, give him a wave!

EDIT: Oops, too late, the presentation was on the 26th. Well, virtual wave! He's going off to Boston to hang out with some friends in Harvard (if I'm getting that right) after this.

Expand full comment
Carlos's avatar

I'm not sure I follow why San Francisco is like a lava pool, but you still have to bitterly appreciate the cosmic levels of irony in the contrast of all those AI billboards and self-driving cars, and even the Berkeley Alembic, and then the awful, awful, homelessness.

Similar situation as in India, when you consider the Bhagavad-Gita and the high points of Indian spirituality as juxtaposed with, well, India.

I bet Switzerland doesn't have jarring juxtapositions like these (though maybe I'm wrong).

Expand full comment
Adrian's avatar

I think Scott's point is that there are readily available, viable solutions to San Francisco's problems, but which aren't being implemented for whatever reasons. In the lava analogy, that would be "Just don't stick your hand in lava".

Expand full comment
Carlos's avatar

That's what I was saying, I think San Francisco is already totally capable of solving its problems, but for those "whatever reasons". Is more intelligence going to make them go away? Is lack of intelligence the cause of the "whatever reasons"? ASI could lead to some surprising and unintuitive discoveries if it turns out the world is riddled with "whatever reasons" to its problems that just don't go away in the face of extreme intelligence. We're only going to know what ASI can really do once it's finally here, so I can't wait.

Expand full comment
Deiseach's avatar

I took it to mean "Look, everyone knows San Francisco is an erupting volcano, so why are you surprised by the lava pools?" 😁

Expand full comment
Michael Watts's avatar

To be fair, it's completely impossible to stick your hand in lava. The best you can do is rest it on the surface.

Expand full comment
Adrian's avatar

> To be fair, it's completely impossible to stick your hand in lava.

Not with that mindset!

You could wait until a lava bubble pops, then quickly put your hand in when it rains down. Find solutions, not problems.

Expand full comment
Michael Watts's avatar

It's not so much that you can't penetrate the lava. You have too much buoyancy to exist inside it, in the same way that a helium balloon can't exist underwater even if you quickly shove it into a hole in the water and let the water fill in around it.

Expand full comment
Swami's avatar

I just asked Deepseek and it gave me an extremely detailed plan to address urban vagrancy of mentally disturbed and drug addicted individuals. Try it….

Expand full comment
Jon's avatar

Oh my goodness, someone finally had an idea on how to solve homelessness? That’s all we’ve been waiting for, let’s go solve it now!

Expand full comment
Throwaway1234's avatar

Perhaps it's like when a large corp hires consultants: everyone already knows what needs to be done, but you need a third party to deliver the bad news and absorb the resulting fallout.

Expand full comment
anomie's avatar

> San Francisco already has the IQ and the wealth to just solve its homeless crisis, so I can't quite visualize how an ASI is going to solve it.

Wouldn't a hyperintelligent being without empathy be the perfect candidate for solving such a problem?

Expand full comment
Eremolalos's avatar

I just wrote a long post about that actually.

Expand full comment
quiet_NaN's avatar

> Basically, nothing truly transformational, as in, on the level of indoor plumbing, or the transistor, has really come out of AI. He has a point yeah

Disagree. Even if AI stops where it is right now and no new models are ever trained, LLMs will have quite a large effect once you solve for the equilibrium. Likely on the scale of indoor plumbing. Certainly not on the scale of the transistor + integrated circuit, though -- after all, all AI effects on society are also effects of transistors and ICs, because training an LLM on a tube-based computer would not get you very far.

When Neal Stephenson published The Diamond Age in 1995, the idea of an interactive primer which tells educational stories based on the characters of a kid's life, their plushies, and their situation was total science fiction. Today, the main part which seems unrealistic is that it was presented as a very exclusive product (which would kind of defeat the whole point, because the uber-rich can always afford human nannies and storytellers). Instead, it seems likely that the equilibrium given LLMs will consist of multiple competing services which offer different ethics (Do you want to raise your kid Confucian? Utilitarian? Lutheran?) and will cost a buck a month or something.

Expand full comment
Carlos's avatar

I still think what you're describing there is less transformational than indoor plumbing. As Freddie says in the article, imagine having to go a month with no running water, then a month without AI, or a smartphone. It's clear that not having a toilet is going to affect you far more than not having AI or a smartphone. The LLM application you're putting forth there does not seem to offer toilet-levels of utility.

Expand full comment
Mark's avatar

This just sounds like a restatement of the principle of diminishing marginal utility. I could more easily go my entire life without using a space ship than I could go a day without shoes, but I don’t think that necessarily means the latter are a greater technological feat than the former.

Expand full comment
Jack's avatar

"Greater" is different than "more transformational"

Expand full comment
Nine Dimensions's avatar

I recently went 8 months without running water. It wasn't that bad - just a bit of carrying water around in buckets. I would rather go without running water than without a smartphone. I'd rather the water than the AI at this point, but I expect that to change.

Expand full comment
Catmint's avatar

I might have gone some similar number of months without using my smartphone other than to charge it. Never kept track, though.

Expand full comment
ascend's avatar

Something I'm getting really, really tired of hearing is the platitude that the US is far to the right of "The Rest Of The World(TM)". It's one of the laziest, most ridiculously simplistic assertions I've ever seen, with more holes, confounding variables, nuances and qualifications than almost anything else, and it is repeated (usually without the slightest qualification) *more times than you can possibly imagine*!

Even here, on ACX. In fact, I could be wrong but I think I see this *more often* on ACX than most other places. Again, usually stated in the most simplistic, unreflective way. I have no idea why; usually this community is more nuanced than the rest of the internet, rather than less. But I honestly feel I'm more likely to see individual country comparisons--the US vs France, France vs Germany, and so on--on otherwise stupider places like reddit, and more likely to see the US vs The Rest Of The World(TM) here.

What's wrong with it? Let's start with what The Rest Of The World(TM) means. It obviously doesn't mean the rest of the world, not even close. Sometimes it's phrased as "Europe", sometimes as "the developed world" or some such, but much of the time it really is phrased as "the rest of the world". If someone actually challenges this stupid assertion (which is shockingly quite rare) then the OP will respond that of course, by "rest of the world" they mean Europe, plus maybe Australia and Canada. I mean that's *basically* the world, right? If someone deigns to point out that many of these metrics showing Europe as far more left-wing don't apply to much of Eastern Europe, they'll respond that yeah, obviously they weren't including Eastern Europe in "Europe". Further exceptions will inevitably be made, depending on the issue at hand. The US is more nationalistic than any country in Western Europe! Yes, *obviously* France is an exception. No other country takes religion, like, seriously! Yeah, except Ireland, and Italy, and maybe parts of Spain, and a large part of eastern Europe, I thought those, like, went without saying! Nearly every time, on every issue, there's an exception or a whole heap of them. Making the actual claim, when actually thought through, something like "there are always a few countries, usually in western Europe, that are to the left of the US on a given issue". Doesn't have quite the same dramatic ring, does it? Of course, this relies on a level of analysis that is hardly ever done.

Second, just as this simplistic claim relies on constantly shifting which set of countries the US is allegedly to the right of, it similarly relies on constantly shifting how comparisons on various issues are made. For example, compare crime and religion. At the level of actual law, the US is to the right of many western countries on criminal sentences and the death penalty (with a few exceptions) and it is to the left of many western countries on state secularism and bans on public religious displays (with a few exceptions). At the level of everyday life, it's the opposite: the US has higher crime rates (less law-and-order) than most western countries, but has higher religious belief and practice (more observance) than most western countries. So however you slice it, the US is sometimes to the left and sometimes to the right of the average. But that doesn't provide a dramatic simple narrative. So the advocates of the dramatic narrative simply switch the metric to suit their simplistic model. Look how right-wing the US is, it's the only country with such high sentences! And look how right-wing it is, it's the only country with such high religious belief! (Remember, most countries in the world don't exist when we're having this discussion.) Truly breathtaking.

Third, the claim involves systemically ignoring whole swathes of overwhelmingly important factors when comparing countries. The most obvious is size. The US is many times larger in population than every country that is ever compared to it. Frequently, when someone's comparing it to a group of countries it's significantly larger than all of those countries *combined*. And this confounding factor is almost never taken into account. So the fact that the US is the only "developed" country without universal healthcare is purely politics. It can't have anything to do with the fact that no country with anything close to the US's population has universal healthcare either, and that the complexities of such a scheme would obviously compound with huge increases in population, and that the vast majority of countries with UH have tiny fractions of the US population (e.g. Australia 1/13th, Canada barely 1/10th, each Scandinavian country between 1/30th and 1/60th), and that if the US were to adopt it it would set a world historical record of a country of that size establishing a project of that scope. No, it must be nothing other than Americans being stupid. It is absolutely amazing how rarely any of the above factors are ever even *mentioned* in the context of that comparison.

And there are plenty of other holes and flaws in this "US so right-wing" claim, but I'll leave it there for now.

So why does this absurd, simplistic, sloppy claim get spread around so much, with so unbelievably little pushback? My guess is that it has a lot to do with the fact that debunking this claim doesn't serve any major political grouping very well.

American progressives have adopted the idea of a left-wing Europe as a central part of their mythology, much like the conservative mythology of a right-wing past. It's useful to point to "this worked there, why can't we have it here?" when someone says your program is unfeasible or unreasonable.

American conservatives have adopted American exceptionalism and hatred of the rest of the world as a part of their identity. Again like with the imagined conservative past, where a few smarter progressives might point out that it never really existed but most are happy to go along with the myth because they've adopted "the past is evil" as their own mythology...so are conservatives mostly happy to go along with "we're to the right of the whole world and that makes us superior!" instead of actually questioning the factual claim.

Non-American progressives want an easy way to demonise right-wing ideas, and casting them as foreign and American fits that need. Plus, they generally want to erase national borders and pretend countries don't exist, so collapsing different cultures into one vague nebulous "Rest Of The World (TM)" suits them well.

Non-American conservatives are more focused on their own actual country, and aren't generally much interested in international comparisons. Plus, having a vague nebulous international left-wing collective they can accuse their opponents of trying to draw their country into suits their purposes for similar reasons to progressives.

So nobody has much of an incentive to debunk or examine this sloppy claim. The only people who suffer are those who care about logic and accuracy. Simplistic, badly thought-out characterisations like this make clarity and nuance much more difficult, and make discussing the unique situations and cultures of each country so much harder. And I really wish a data-focused and logic-focused community like ACX would push back more on vaguely defined and usually unsupported assertions.

Expand full comment
Odd anon's avatar

Devil's advocate: In a certain sense, most countries are to the "right" (as understood locally) of most other countries. The left-right spectrum often includes elements of "X, a uniquely [country] thing that we're best at, is important!" as a right-wing marker. (Thus, correct national exceptionalism: Your country is the best, regarding the things you value, because your country is full of people who prioritize those things.)

For example, Americans lead the world in (their own understanding of the concepts of) free speech, religious freedom and gun rights, all of which are right-coded in the US. Other countries might focus on things like monarchism, regionalism, supranationalism, attitudes towards The Party, language politics, security policy, etc as what "really" makes one right- or left-wing. A distinct left-right axis for every country.

Expand full comment
Anonymous Dude's avatar

Economically we're pretty free-market.

Culturally we're to the right of Western Europe and to the left of everyone else. You think American sexism is bad...

Expand full comment
The Ancient Geek's avatar

Why does scale matter? Universal healthcare doesn't imply a single huge organisation.

"Medicare

Medicare is a term that refers to Canada's publicly funded health care system. Instead of having a single national plan, we have 13 provincial and territorial health care insurance plans"

"The NHS in England is not one single organisation. It is made up of hundreds of different organisations of differing sizes, at central, national, regional, and local levels with different roles and responsibilities."

Why is it no problem for the US to have a single huge military?

Expand full comment
Straphanger's avatar

Right and left can be a bit slippery. I think it’s fair to say the US is more “right” in a right-wing libertarian sense. Compared to most other Western nations the US has an exceptional respect for individual liberty (guns, free speech), has fewer collective social programs (free college, free healthcare, etc), and generally has fewer business restrictions (at-will employment, less regulation). As you noted the people who mention this are typically progressive types who want all of that to change.

Expand full comment
agrajagagain's avatar

While I'd agree that trying to map differences between nations onto purely left-right distinctions is an exercise in folly, I DO think the U.S. is a pretty big outlier in the developed world[1] in a number of ways, most of which seem at least vaguely right-wing. For reference, I've lived in maybe half a dozen U.S. states (red, blue and purple) as well as in Canada and Germany and traveled somewhat elsewhere in Europe. The ways in which the U.S. stands out (to my eyes) are:

1. U.S. nationalism is very loud and lacking in subtlety. There's an entire genre of art, culture and, well, behavior focused on praising and glorifying and bragging about the U.S. In terms of actual numbers of staunch nationalists I don't know how the U.S. stacks up against anywhere else, but there's a lot less of that sort of loud display in Canada, and it is (for understandable reasons) somewhat taboo in Germany. I think I've seen hints of the same from Britain and Australia, but I'm less familiar with either of them.

2. The U.S. has a strong streak of what I might call "reactionary individualism." By this I mean that there seems to be a contingent of people--not a majority, I think, but a pretty sizable minority--who react with varying degrees of scorn and anger at the suggestion that they ought to take ANY action that's mainly for the common good or general welfare. Some of the angrier reactions to various anti-COVID measures are still fairly fresh in my mind, as are the far older resistance to universal healthcare and a lot of the more extreme anti-tax views[2]. This is one that's harder to gauge in other countries, but my sense is that while people like this exist in some number everywhere, in the U.S. they're far more numerous and wield correspondingly more political influence.

3. The national attitude in the U.S. skews very militaristic, especially for its level of wealth. This one seems to be a little bit of a geopolitical artifact: it seems like before 100 years ago or so, wealthy countries were some of the most militaristic, and militarism was more widespread in general. But in the post-WWII era, the developed world seems to have pacified a LOT. Except for the U.S. Now there's a decent argument to be made that it's the U.S. military that has allowed everyone else to de-militarize, but there are a lot of pieces of the picture that don't fit super well with that. The U.S. has engaged in lots of military adventures and military-backed foreign policy initiatives that very clearly have NOT been about defending either itself or its allies in any direct sense, and you get a lot of civilian rhetoric of the "support our troops" and "they're defending our freedom" style that very conspicuously fails to engage with what the U.S. military is actually DOING at any given moment. This is probably partly an extension of 1, but I think it feels very weird to anyone who's spent much time in a country with a less active military.

[1] Which seems to be the best possible match for the Rest of the World that the people you interact with seem to be talking about. But I agree calling it the "rest of the world" is dumb and bad.

[2] To be clear, nobody I've ever met *likes* paying taxes, but most of them recognize it as both necessary for the general welfare and a transaction that (on the whole) ends up well worth the price tag, given what they get out of it. But the ones that don't see it that way can be quite strident about it.

Expand full comment
Matto's avatar

I'm curious about your first point, the one about loud nationalism. For me, it looks like a different kind of nationalism, because while it's loud, it's also accompanied by a loud self-critical discourse that I haven't seen in Europe.

More concretely, while I haven't seen as many German flags in Germany, nor heard people loudly praising the country there, I also haven't heard the same kind of open, loud criticism that I see here in the US. My experience is that most European nationalities may grumble about this politician or that policy, perhaps going as far as writing some satire or even protesting, but I can't remember anything like burning flags, calling their chief executive horrible names, or just plain having so much discourse about the wrongs of history (maybe the last is present in Germany, but I found few people who cared about e.g. Denmark's forced sterilization of indigenous women in Greenland, or, hell, the sterilization of Roma women around Europe within the last two generations, or the forced attempted nationalization of the Ukrainian people in Poland between the wars, etc.)

It's like the frame of identity is narrower--there's a lot less exceptionalism, but a lot of the shameful stuff is outside of the frame too. It's not even taboo, more like a lack of general awareness and even curiosity.

Expand full comment
agrajagagain's avatar

"Its like the frame of identity is more narrow--theres a lot less exceptionalism, but a lot of the shameful stuff is outside of the frame too. It's not even taboo, more like a lack of general awareness and even curiosity. "

Honestly, aside from the lack of curiosity, this strikes me as healthier. It seems like there's a lot of false dichotomy going around where you have to either directly identify past atrocities with some present-day group or structure (nation, political party, religion) or you have to minimize them or pretend that they didn't happen. I don't think either are fully healthy. I think that as humans, we should very much be aware of the bad parts of that past and especially be aware of how they shaped the present. But that doesn't mean taking blame for things that we (currently living humans) didn't do.

I am, for example, at least somewhat aware of my direct ancestors doing some pretty nasty things. But I'm also aware that they were my ancestors: they're not me, and I don't have to emulate them, like them, respect them OR feel any shame for things they did long before I was born.

Expand full comment
agrajagagain's avatar

Very interesting point. I wasn't tuned in enough to German politics when I lived there to get a sense of how the average German related to it. I think I've mostly subconsciously assumed that bitching about the government was common everywhere (especially among people who voted against the current government). Certainly in Canada--where I'm living now--it's not uncommon to see, for example, "F*ck Trudeau" bumper stickers or witness various forms of reasonably loud argument and division around government at all levels. But I also haven't been here long enough to have a real feel for how and whether that's changed (i.e. if it was different before Trudeau, or if it's splash damage from the all-consuming trash fire that is recent U.S. politics).

In the U.S. specifically, I think that at least some flavors of the loud self-criticism are clear reactions to the loud nationalism. There's a notion among the Red Tribe, for example, that the Blue Tribe hates the U.S. That's not *completely* untrue (there's certainly plenty among especially younger Blue Tribers that would much rather be living somewhere else), but not nearly as true as they think it is. A lot of what the Blue Tribe hates is loud, crude or tacky nationalistic displays. So, for example, very few Blue Tribers go around prominently displaying U.S. flags on things: that's a Red Tribe signal after all, and they don't want to be mistaken for the Red Tribe[1]. Similarly for historical themes: plenty of Blue Tribers do have genuine, sincere, non-petty beliefs that the worst parts of U.S. history (or anyone's history) should be remembered and discussed and not swept under the rug. But also when some belligerent Red Triber starts going on about the Glorious History of This Great Nation, I'm sure many Blue Tribers find the urge to smugly bring up the Really Shameful Parts to be nearly overwhelming. And of course, the reactions will produce counter-reactions: Blue Tribers looking down their nose at flag displays makes Red Tribers want to Flag Even Harder.

All of this is probably also somewhat colored by the fact that U.S. politics is starting to be more and more shaped by the generations that came of age during the 9/11 era. September 12, 2001 was probably THE high water mark in recent history for widespread U.S. nationalism. Whenever somebody complains about cancel culture or the left attacking free speech, I remember with a twinge of irony just HOW angrily censorious the U.S. public of that time was of anyone or anything that dared criticize the nation, the government, the military or any other beloved national symbol. But also, compared to today, the sense of unity, of being one nation with common interests and a common struggle, was quite striking. And then the U.S. leadership took that unity, that national camaraderie, and proceeded to do some pretty terrible stuff with it. So I think a lot of people in the 35-50 range in the U.S. have a pretty jaded view of anything that smacks of flag-waving, and a willingness to mock and dismiss it that might not make as much sense to people without the same formative experiences.

[1] There's a similar tendency with religion. Contra many complaints I've heard, I've never spent time in a Blue Tribe space where Christians were genuinely unwelcome or where a large fraction believed that being Christian was shameful. Lots of Blue Tribers are Christian too, after all. What IS unwelcome is loud and performative displays of Christianity. I'm pretty sure part of this is the cosmopolitan instinct that having a society where all religions are welcome requires people to be somewhat low-key about public displays of faith. But at the same time, I think those standards are somewhat asymmetrically enforced on Christians over people of other faiths just because being Loudly and Performatively Christian is very much a Red Tribe thing.

Expand full comment
Matto's avatar

I enjoyed reading your reply. Definitely puts more detail into the whole issue.

I see many of the same things that you describe. But I feel obligated to note that the existence of this reactive blue tribe is, I think, exactly what I'm trying to point out.

Nationalism exists in more subtle forms in the parts of Europe I lived in than what the red tribe does here, true. But it's pretty strong despite that. There's a lot of feeling that e.g. Eastern Europeans are worse. One belief that I met face to face with is that eastern Europe is poor because the people are lazy and prone to thievery (from a Belgian gentleman I met on the trail who took me for an American). There's maybe begrudging agreement that your neighbor might produce better cars but, at the same time, there's a triumphal statement about how our football, music, food, whatever is truly the best. Another point that I think lends weight to this view is that while people and capital are allowed to travel freely in the Schengen area, capital does so but people less so.

To clarify the last point, there are few barriers and quite many people travel for work or move temporarily or study abroad (Erasmus!), but few move as permanently as in the US. Consider the massive reverse immigration of Poles now that the UK economy is stagnating and the Polish economy continues its explosive growth. I believe the factors there are both a lack of establishing oneself in the new country (weak ties) even after years of living there, but also not being fully accepted as a local. This is probably weaker in the UK but was noticeable in e.g. Germany, where even in Berlin I met with strong opinions about what makes one a German (citizenship isn't enough, but it's at least a sign of effort!).

So that lack of blue-tribe-like fiery criticism is what strikes me as curious and perhaps unhealthy in European nations. I can imagine that the blue tribe's overproduction of such criticism and Europe's lack of it might be painting a rosy picture of Europe for both Americans and Europeans.

Expand full comment
Whenyou's avatar

You forgot religion. By far the most “aggressively” religious population of any developed country. I remember a survey where Americans were far and away the most likely to answer yes to questions like “religion is very important to me”, “I go to church weekly”, etc.

The kinds of Americans who read this blog are an exception.

Expand full comment
The Ancient Geek's avatar

It's unusually religious for its wealth.

Expand full comment
Whenyou's avatar

Yup, not in comparison to the world at large, obviously. But seen from other wealthy and OECD countries, the US *is* “strange” on this parameter and a couple of others that the commenter I replied to pointed out.

Expand full comment
Michael Watts's avatar

> and you get a lot of civilian rhetoric of the "support our troops" and the "they're defending our freedom" style that very conspicuously fails to engage in what the U.S. military is actually DOING at any given moment.

I think a big part of this is that it's a volunteer military, so it doesn't actually matter to anyone what the military might be doing.

Expand full comment
birdboy2000's avatar

American militarism vastly outstrips that of anywhere else, except for sanctioned pariah states. The health care system is a disaster and the general influence of oligarchs in politics, although far from unique in liberal "democracies", also seems on the extreme side.

Expand full comment
spandrel's avatar

I think if you look at far-right parties around the world, one common feature is national exceptionalism. They may support, say, universal healthcare or gun control, but they always bang on about how special their culture is and how precious their place in the world. Well, the United States pretty much leads the pack in exceptionalism, whether it is the International Criminal Court or units of measure. Always the odd man out at the U.N. Doesn't know what to call that game where you kick around a ball. Things that only make sense to go it alone on if you have some insecure nationalistic chip on your shoulder. So I think in that sense, yes, we are far to the right of the rest of the world.

Expand full comment
Alexander Turok's avatar

Perhaps it's the nationalistic chip on my shoulder, but I'd rather not be subject to some foreign court.

Expand full comment
spandrel's avatar

That's a fairly common attitude in the US, but The Rest of the (free) World generally thinks there's nothing controversial about empowering a transnational entity to prosecute war crimes. Which I think supports my point?

Expand full comment
John Schilling's avatar

It's easy to say "prosecute all the war criminals, let God and the Hague sort them out", when you plan to never ever ever fight an actual war.

The United States doesn't have that luxury. And the only reason most of you have that luxury is because you're counting on us to fight your wars for you. It might help if we trusted you all to not secretly despise us so much that you'll accuse our people of war crimes as soon as we're finished saving you, but you're not even trying to hide the fact that you despise us and our evil war-criming ways.

Expand full comment
spandrel's avatar

Well, the ICC still has jurisdiction over US or other citizens who are in countries that have ratified the Rome Statute, such as Afghanistan. So it's not like we get a free pass. And the ICC only has authority to bring a case when the relevant courts (e.g. the US justice system for US soldiers) are unwilling or unable to investigate. The main downside to staying out is we can't bring cases against war criminals in other countries.

The rest of your comment I don't quite follow. I don't despise Americans, being one. And I've opposed all the wars we've fought in my lifetime, except for Grenada.

But I didn't come here to argue about the ICC, about which I don't have a strong opinion. More about how Americans claim a certain national exceptionalism, and how this is shared with right-wing parties everywhere such parties are found.

Expand full comment
John Schilling's avatar

I don't see why it would matter that we can't bring ICC cases against war criminals in other countries. The other countries in whose defense we are fighting can go ahead and do that. And if we're fighting in our own defense, the ICC has almost certainly ceased to be relevant.

Yes, it sucks that American military personnel who fight in defense of other countries might then never be able to visit those countries as tourists because the locals are too squeamish about actual warfighting. But the United States is a big country with plenty to offer; I expect they will be fine.

And you don't despise Americans, but you think they're the bad guys in every war they've fought. Gotcha.

Expand full comment
Melvin's avatar

I agree with you, and think that immigration is the big example that you didn't mention. The US is far to the left of just about every other developed country on immigration, in practice if not in stated law.

I don't know of any other country where the idea of deporting illegal immigrants would be controversial.

Expand full comment
Wasserschweinchen's avatar

I know it's been controversial in Sweden, and I'd expect it to be controversial in other Western European countries as well. Human trafficking across the Med is a flourishing industry because of this, unlike human trafficking to Australia, where pushing back illegal migrants is less controversial.

Expand full comment
Tatu Ahponen's avatar

"US is a right-wing country" is largely a claim that arose during the height of socialism due to the fact that neither of America's two dominant parties, as a party, claims or has claimed inspiration from socialism, unlike in most other countries of the world (indeed, in many democratic countries, most of the dominant parties would have claimed some connection to socialism at some point).

Nowadays this should be far less relevant, considering that the number of claimed-socialist parties is smaller than previously, and even the ones continuing to have some attachment to the label tend to do it in a wink-and-a-nudge, "ah but really..." way, but the idea persists.

Expand full comment
Viliam's avatar

Words "left" and "right" mean different things in different countries, as random events in local history determined which small groups made a greater coalition together.

From my (Eastern European) perspective, American politics feels right-wing, because it has two parties, both of them pro-corporation, and yes, also because of the healthcare. The argument that the USA is larger than European countries does not explain why individual American *states* don't have universal healthcare. On the other hand, the American woke left is probably more left-wing than anything else in the entire known universe. What is the proper label for a political system where corporations are free to oppress people in all possible ways except for using wrong pronouns?

I agree that the religiousness of USA is about average, compared to Europe. Probably the same for nationalism.

Expand full comment
agrajagagain's avatar

" What is the proper label for a political system where corporations are free to oppress people in all possible ways except for using wrong pronouns?"

Corporations are free to use whatever pronouns they wish. They make the choices they do as a calculated matter of revenue-maximization. This has very little to do with politics in a formal sense, and everything to do with demographics and economics: the queer-friendly parts of the U.S. are the parts where all the money is. I suspect that if you looked, you could find local branches of various corporations appropriating all sorts of little local symbols and rituals to better blend in. Think of it as somewhere between blatant pandering and a predator's camouflage.

Expand full comment
Alexander Turok's avatar

"Corporations are free to use whatever pronouns they wish."

They'd risk getting sued for "discrimination," "hostile environment," etc.

Expand full comment
agrajagagain's avatar

Corporations "risk getting sued" all the time. The threshold to get sued is literally just "somebody wants to sue you and has a modest amount of time and money to devote to the project." For a private individual, the risk of even a frivolous suit might be sufficient deterrence, but not so for any reasonably well-heeled corporation. For this to factor in, there'd have to be some non-negligible chance that the suit would actually be successful. I'm genuinely unfamiliar with the case law here: do you have any examples in mind of this actually happening? Or is it purely speculative?

Expand full comment
Ghillie Dhu's avatar

>"The argument that USA is greater than European countries does not explain why individual American *states* don't have universal healthcare."

Ease of movement probably squares this circle.

While movement within the Schengen Area is freely allowed, language/culture barriers are still far greater than interstate in the US.

Net payers would be more prone to move out of State (especially since the States likeliest to try it tend to have more progressive tax structures) and indigent patients with expensive conditions would flock into the State. Instant death spiral.
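
As a toy sketch of that death-spiral dynamic (every number here is invented purely for illustration; this models the selection logic, not any real state's finances):

```python
# Each year the community-rated premium is set to the average cost of whoever
# remains in the pool, and the healthiest net payers respond by leaving the
# state (here: anyone who'd pay more than ~1.2x their own expected cost).
costs = [100 * 1.5 ** i for i in range(20)]  # hypothetical annual costs per person

pool = sorted(costs)
for year in range(1, 8):
    premium = sum(pool) / len(pool)
    stayers = [c for c in pool if premium <= 1.2 * c]
    print(f"year {year}: premium ${premium:,.0f}, pool {len(pool)} -> {len(stayers)}")
    if len(stayers) == len(pool):
        break  # only the highest-cost members remain
    pool = stayers
```

The pool shrinks from 20 members to 1 in four rounds, as each premium hike drives out the next-healthiest tier.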

Expand full comment
WoolyAI's avatar

In this forum, it's because 90%+ of right-wing commentators formed their own forum after the great NYT doxing event. You can go over to datasecretslox and see lots of people who used to post here who don't think that the US is especially right-wing. In fact, a number of them will probably refer to the US as postmodern neomarxist anarchotyranny.

Like, there's plenty of places that will debate it. Go to datasecretslox, ask for GoneAnon. He just doesn't hang out here anymore. We've all just kinda siloed off into our own things.

Expand full comment
anomie's avatar

Why did that even happen in the first place? It's not like Scott became less right-wing after the incident. If anything, the exact opposite happened. They would be perfectly welcome here as long as they didn't call people slurs and tell them to kill themselves.

Expand full comment
Monkyyy's avatar

> Why did that even happen in the first place?

Newspapers had power, a lot of power that was poorly understood and therefore basically not combated at all.

They are losing power; hence the spite for outsiders. See also: Alex Jones, Tucker.

Substack specifically will eat up good journalists and people willing to read entire pages at a time; while podcasts are more an attack on TV news, Substack will likely be the target of newspapers.

Expand full comment
FLWAB's avatar

After SSC shut down there were only a few options for the commentariat to migrate to. The subreddit was not a viable option for conservatives, as the mods are far enough to the left to make conservatives uncomfortable there. So when some commenters started DSX and conservative commenters found that they wouldn't be facing a hostile moderation environment, a lot of them went there. A lot of them also went to The Motte, which was a subreddit and is now its own site. I'd say The Motte is further to the right than DSX, and is more devoted to culture war discussions, while DSX will talk about anything.

Expand full comment
Dragor's avatar

Did it start out as a rightward shift? I hadn't noticed it to be so at the time, but I also shifted from DSL to Facebook sometime in.

Expand full comment
Erusian's avatar

The forum formed after SSC shut down and before Scott announced ACX. So it was path dependency.

Expand full comment
Taymon A. Beal's avatar

I mostly agree that people are overly sloppy about this and should be more careful, but I think your healthcare example is really weak. In particular, why should a larger population make things more challenging? Sure, it means there's more healthcare provision and more administrative work to be done, but it also means there are more providers and bureaucrats available to do that work. If there's a specific reason why the viability of a publicly funded healthcare system breaks down somewhere between Brazil's population and the U.S.'s, then I think it's necessary to say what it is.

Expand full comment
ascend's avatar

1. I don't have the knowledge of the economics of administrative matters to know the right language to use, so I can only gesture in a general direction. I could of course be wrong, but I would expect that as the population of a country increases, the effects of small bureaucratic errors in a centralised system greatly increase, as do the costs of such errors and the difficulty of correcting them. So the number of people who (in the worst case) die from one mistake by a senior bureaucrat in the allocation of resources would, at some threshold of population, exceed the number who die from one insurance company's bad decision in a private system (assuming the number of companies linearly increases with population, so that the latter threshold is the same for all population sizes). Maybe this reasoning makes no sense, but it seems to. Additionally, the potential to run well over budget, and the magnitude of that deficit, surely increases more than linearly as the population being served increases, right? And the difficulty and cost of suddenly changing course to correct a mistake or respond to new catastrophes surely greatly increases with the size and complexity of the system. Making central planning of health care (or anything else) riskier and less efficient with large populations. Even if these speculations aren't true, it's enough that they may appear to be, to explain a greater hesitance to adopt universal healthcare among voters (without necessarily requiring a difference in values like how important healthcare access actually is).

2. I criticised the belief that it's "purely politics" that explains the US lack of UH, but I mean to allow that it's the interaction of politics with structural differences related to population size. So in a country as large and (correspondingly) regionally diverse as the US, the fear of a UH system greatly magnifying the power of the federal government is on another level than it would be in a small unitary state where the national government already does almost everything, and this would just be one more thing it does. Additionally, the larger the country the less homogenous the population, and the more distinct regional and demographic groups there are with more fear of being dominated or having their interests sidelined by the other groups. I think these fears of a much stronger national government, and of the interests of e.g. Christians being marginalised by a bureaucracy of people hostile to them were near the top of conservatives' objections during the US healthcare debates, and these fears are just objectively less salient in smaller populations, even ones with identical values.

3. The mere fact that, demonstrably, passing a national law is magnitudes more difficult in the US than in smaller countries. As is the difficulty of *repealing* a law once passed. Repealing a law in the US is like trying to turn around an aircraft carrier once it's on a course. The size and complexity and diversity of considerations make it nearly impossible. This explains the inability to pass a complete healthcare law in the first place, and also explains the zealous opposition to such a law, since it will be extremely difficult to repeal once it's passed, no matter how bad its effects are. I really see a lot of this irreversibility terror as quite central to the fanatical Tea Party discourse during the ACA debates. In smaller countries, worried conservatives or libertarians who aren't sure how bad a healthcare system will be have the obvious option of letting it be enacted, and if it turns out badly, kicking out the government that passed it at the next election and repealing it. That largely isn't a feasible option in a country as large as the US.

Expand full comment
Erica Rall's avatar

Even "the US is the only 'developed' country without universal healthcare" is a strained claim. The US doesn't have a *single* universal health care program, either on the national or subnational level, but collectively the ACA, Medicare, and Medicaid are aimed at covering every American who isn't covered by employer-provided group plans. They don't quite make it, with about 5% of Americans eligible but not enrolled and another 5% falling through cracks in eligibility, but it's close enough that I think it would be fairer to characterize the American health care system as a flawed attempt at universal health care than the complete lack of universal health care.

Defenses of the claim often rely in part on equivocating between "universal" health care, Canadian-style single-payer health care, NHS-style nationalized health care, and health care with zero private out-of-pocket costs. A lot of first-world countries use a vaguely ACA-like model of mandatory, subsidized private insurance, and even countries with single-payer or nationalized health care systems often leave some categories of health care services outside the public system (e.g. prescription medication and outpatient mental health counseling in Canada) or require significant copays for many services (e.g. France).

Expand full comment
The Ancient Geek's avatar

It can be true that no country has all of these features, and also true that the US ticks the lowest number of boxes.

Expand full comment
Erica Rall's avatar

That is a defensible claim, yes. My objection is to the meme version of the claim.

More broadly, I am inclined to agree that the US health care system is deeply flawed and badly in need of reform, and I am prepared to consider claims that it is uncommonly bad, but I am not convinced that it is uniquely bad.

Expand full comment
Performative Bafflement's avatar

> but collectively the ACA, Medicare, and Medicaid are aimed at covering every American who isn't covered by employer-provided group plans.

The ACA shouldn't be considered "health care" in my opinion. I retired young about 2 years ago and my COBRA from my last company finally ran out, so I had to look for an ACA plan.

Literally every single plan available leaves you uncovered in every state but your home state. You are uncovered in 98% of states in the US with any available Exchange plan, AND it's illegal / impossible to just buy multiple plans in multiple states so you have coverage.

What? So plebs who have to buy on the Exchange never travel? They never live or work in different states? They'd better not if they don't want a medical bankruptcy!

That's just the tip of the iceberg too, IMO. There's a reason the entire nation was applauding when an insurance executive got assassinated.

Expand full comment
Taymon A. Beal's avatar

This sounds like it might be a misunderstanding. IIUC, because the exchanges are a state-level thing, your plan has to be through the exchange of the state where you legally reside, and if you move you have to switch plans. But that doesn't mean that your coverage doesn't apply while you're working or traveling out of state. If you have a medical problem while doing that, your insurer from your home state is still on the hook to pay for your treatment.

Expand full comment
Performative Bafflement's avatar

> But that doesn't mean that your coverage doesn't apply while you're working or traveling out of state. If you have a medical problem while doing that, your insurer from your home state is still on the hook to pay for your treatment.

I thought the same at first, but explicitly dug into the terms and conditions, including getting third-party opinions. Emergency care is covered outside your home state, and that's it; if it leads to a hospitalization, it generally comes with crazy high deductibles ($15k+).

If your "real" doctor happens to be in another state? Completely SOL, zero coverage. A non-ER hospitalization in another state? Completely SOL, zero coverage. Even an ER visit in another state that leads to hospitalization? Costs you $15k before any insurance coverage kicks in.

It is NOT meaningful coverage in any sense of the word, and is definitely a recipe for "medical bankruptcy" for any regular people.

I literally considered starting another company JUST so I could get real insurance that would actually cover me if I don't just sit meekly in my home state, never traveling or doing anything. Fortunately I was able to get coverage through one of my sister's companies, by taking on some work she needed done.

But for regular, non-rich people? Ha! Good luck. Exchange "insurance" is insurance in name only.

Expand full comment
re qu iredna mee e's avatar

To be fair, I'd guess 95% of people on the ACA probably do very little travelling.

Expand full comment
Alexander Turok's avatar

I'd guess you greatly underestimate how wealthy Americans are.

Expand full comment
Lurker's avatar

French here; copays are not insignificant, but to my knowledge we’re still speaking about amounts an order of magnitude smaller than in the US (and you’re not billed for the ambulance).

Expand full comment
Erica Rall's avatar

What are some examples of typical copays you experience in France? The ones I found with a quick googling (e.g. $30 for a basic office visit) are similar to or a bit higher than what I pay in the US with good employer-provided health insurance.

One big problem with out-of-pocket costs in the US is the enormous variance, depending on which insurance plan you choose. There's a range of choices where you trade lower monthly premiums against lower out-of-pocket costs for services and fewer restrictions on which providers and services are covered. There's also a structure that isn't very legible to people who are unfamiliar with it, where insurance pays 0% of the first $XXX of expenses (the deductible), 70-90% of the rest of the first $YYYY (the out-of-pocket maximum), and 100% of anything over that. So a basic office visit might cost $150 if it's your first expense of the year, $30 if you've already met your deductible, or nothing out of pocket if you've met your maximum.
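To make the arithmetic concrete, here's a minimal sketch of that three-phase structure in Python. The plan parameters ($500 deductible, 20% coinsurance, $4,000 out-of-pocket maximum) are hypothetical illustrations, not any real plan's terms:

```python
# Minimal sketch of the deductible / coinsurance / out-of-pocket-max structure.
# All plan parameters are made up, chosen so a $150 office visit reproduces the
# $150 -> $30 -> $0 progression described above (insurer pays 80% in phase 2).

def patient_cost(bill, spent_so_far, deductible=500, coinsurance=0.2, oop_max=4000):
    """Out-of-pocket cost of a new bill, given year-to-date out-of-pocket spending."""
    cost = 0.0
    remaining = bill

    # Phase 1: below the deductible, the patient pays 100%.
    if spent_so_far < deductible:
        in_phase = min(remaining, deductible - spent_so_far)
        cost += in_phase
        remaining -= in_phase
        spent_so_far += in_phase

    # Phase 2: between the deductible and the out-of-pocket max, the patient
    # pays only the coinsurance share, capped at the out-of-pocket max.
    if remaining > 0 and spent_so_far < oop_max:
        cost += min(remaining * coinsurance, oop_max - spent_so_far)

    # Phase 3: past the out-of-pocket max, the insurer pays 100%.
    return cost

print(patient_cost(150, spent_so_far=0))     # 150.0 -- first expense of the year
print(patient_cost(150, spent_so_far=2000))  # 30.0  -- deductible already met
print(patient_cost(150, spent_so_far=4000))  # 0.0   -- out-of-pocket max already met
```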

Expand full comment
Lurker's avatar

I’ve usually paid a bit under 30€ for a generalist’s appointment, 50€ for a dentist, 100-ish for a specialist, and maybe 250-ish for medical exams? Part of this also ends up refunded.

(Also, I reiterate the ambulance thing. I know people living in the US who would have to Uber their way to an emergency room because they wouldn't be covered for the ambulance.)

But, as you said, you have *good* employer-provided insurance. This is what the French system promises to *all* its citizens (everyone working in France? I’m not sure, all of a sudden…). Usually, without the hassle of trying to figure out what doctor is compatible with which insurance company.

The broad idea is that the “Breaking Bad” starting point seems very plausible in the US, but unthinkable here.

Expand full comment
Erica Rall's avatar

Thank you.

Full price for me, before I meet my deductible, is about $200 for a primary care visit, $300-500 for a specialist, and also $300-500 for an exam. There is no out of pocket cost for my annual physical. After I meet my deductible, insurance pays 90%, so the co-pay then is $20 and $30-50 respectively.

I have separate dental coverage with no copay or deductible for cleanings and exams, but back when I paid cash I think it was $60-80.

Yes, it's worth emphasizing that I have *good* employer-provided insurance. Typical ACA exchange plans, or mediocre employer-provided insurance, generally have higher copays and deductibles, so someone in that situation is going to be paying full price more often and will likely have copays that are 2-3x higher. In-network providers are also much more of an issue for good vs. poor insurance: I occasionally have to change providers due to network changes, but it's rare; cheaper insurance options often have much more restrictive networks.

Ambulance rides are a huge issue. I haven't done a deep dive into why they're such an issue, but what I have seen suggests that the problem is that ambulances (at least in the US) have gradually evolved into mini-mobile-ICUs, which are great to have if you're having a life threatening emergency but are severely overkill if you just need a ride to the hospital. They're used for the latter partly because of a "just in case" mentality and partly because the expensive part of an ambulance service is having the capacity available and keeping it on 24/7 standby. Despite my libertarian inclinations, I'm increasingly thinking that it would make sense to pay for ambulances as a tax-funded emergency service like firefighters.

Expand full comment
Melvin's avatar

There's an argument to be made that the US system is more left-wing than simple universal health care because it's even more redistributionist.

Under universal health care, I pay my taxes and I at least get health care out of it. In the US, I pay my taxes and only poor people get health care out of it, then I need to pay *again* for my own health care. The upper-middle are getting fucked twice instead of only once.

Expand full comment
agrajagagain's avatar

"There's an argument to be made..."

There's an argument to be made for anything under the sun. Whether or not it's a *good* argument is another matter. This is...extremely weak. "Fuck the middle class" is certainly not a left-wing principle in any philosophy I know of[1] and more importantly, every left-wing person I've ever talked to about it has been appalled or contemptuous at the U.S.'s so-called healthcare system.

"The upper-middle are getting fucked twice instead of only once."

No, the LOWER middle class are getting fucked twice. The mid-middle class are getting fucked 1.5 times. The upper middle class are fine. The cutoff for Medicaid is quite low: below what most people would consider "lower middle class." If you're above the Medicaid cutoff, you'll pay a decent amount in taxes and insurance premiums and get extraordinarily bad health care for it. As you go up the ladder, you pay more, but you also get quite a bit more out of it (from what I've seen), and most importantly *you can damn well afford it.* There is a very big difference between paying a large fraction of your income for healthcare, and then maybe *also* facing bankruptcy if you have high medical costs, and paying a more modest fraction of your income for health insurance that will actually function as, y'know, insurance.

[1] Unless you conflate Marx's "petite bourgeoisie" with the middle class, I suppose, but that's strained, at best. AFAICT Marx's class system doesn't map very well to the upper-middle-lower class system of contemporary politics and seems generally pretty silly in a world of billionaire entertainers and factory workers with stock-based retirement plans.

Expand full comment
Rafael Bulsing's avatar

> It can't have anything to do with the fact that no country with anything close to the US's population has universal healthcare either, and that the complexities of such a scheme would obviously compound with huge increases in population, and that the vast majority of countries with UH have tiny fractions of the US population (e.g. Australia 1/13th, Canada barely 1/10th, each Scandinavian country between 1/30th and 1/60th), and that if the US were to adopt it it would set a world historical record of a country of that size establishing a project of that scope.

Brazil's population is roughly 2/3 the US population, and it has universal healthcare. And it does that while only having 1/10 the GDP.

It's not perfect by any means, far from it, but it does more or less work. For all its faults, I can say as a Brazilian that having it is leagues better than not having it.

So, idk, the notion that it's impossible for a country with the size (and wealth!) of the US to have universal healthcare sounds weird to me.

Expand full comment
ascend's avatar

Okay, I didn't know about Brazil. When was that established? I think I remember maps of all the countries with universal healthcare circulating around the internet at some point (trying to make the point that the US was an outlier), and I remember noting that the largest ones were Russia and Japan (about a third of the US's size). Not sure when that was, or how out of date the lists were.

If Brazil's system is very recent, then it's still not a counterexample at the time of the US debates. And that it took much longer for Brazil to establish it than all the European countries would be consistent with my claim that there's a strong effect from country size.

Expand full comment
Rafael Bulsing's avatar

Our system, called SUS, was officially created in 1990. In the grand scheme of things I guess that's kinda recent, but it's not much later than some European countries. It's even before a couple of them!

Here's a small sample of countries and the year where they got universal healthcare (not limited to Europe)

Portugal - 1979

Spain - 1986

South Korea - 1988

Iceland - 1990

Singapore - 1993

Switzerland - 1994

Israel - 1995

(I got these from a quick search. I have no knowledge of where each specific country falls in the range from "they had no public healthcare system at all before that" to "they actually had the system pretty much done decades before, but for some technicality it only started counting as universal healthcare after a small change that happened that year". But I still think it's illustrative enough.)

So yeah, dunno when that map of UH having countries was made, but it has been outdated for over 30 years now.

Expand full comment
MartinW's avatar

The size thing is a big one indeed. Anytime I feel tempted to compare the US to my own small European country, I have to remind myself that in terms of geography, population size and cultural diversity, it makes more sense to compare the US to the European Union, and then compare individual states to individual European countries. Makes a big difference if you're comparing California to Poland or Alabama to Denmark.

Expand full comment
Lurker's avatar

I admit that I (non-American, sort of progressive) usually think this way and should perhaps reconsider. You’ve certainly given me things to check and think about more.

In the meantime, I would like to strongly push back on one of your points:

> At the level of everyday life, it's the opposite: the US has higher crime rates (less law-and-order)

I think you’re confusing facts and political attitudes here.

A fact (the crime rate) is neither left-wing nor right-wing. The politics appear in people’s attitudes and reaction towards their country’s (or locality’s) situation.

I’d argue that, given that the US imprisons much more of its population than most European countries [see prisonstudies.org; the US imprisons about 540 persons per 100k, no European country I’ve checked exceeds 200 and few are above 150], and does not seem particularly dissatisfied with this state of affairs, the average (or median, or…) attitude with respect to the “tough on crime” vs “lenient on crime” axis is rather closer to the “tough” pole than the “lenient” pole.

One might object that criminals would like the country to be more lenient on crime, but even in the US, we’re talking about less than a percent, so I’m absolutely comfortable neglecting this.

Expand full comment
ascend's avatar

>A fact (the crime rate) is neither left-wing nor right-wing. The politics appear in people’s attitudes and reaction towards their country’s (or locality’s) situation.

That's fine, as long as one takes the same approach with rates of religious belief or church attendance. That's also a fact. Or if it's properly classed as an attitude (not of the whole population but of a certain minority), then so is a minority's greater propensity to commit crimes. A more or less secular culture, a more or less law-abiding culture: count both or neither as political, and either way the US averages out about the same as the typical country.

Personally, I agree it makes more sense to look only at law, not nebulous culture, when defining political differences. On that metric, the US is right of average on crime and left of average on religion.

Or if you prefer, you can compare crime rates to levels of income inequality. They're even more analogous. It's a rare conservative who would outright say income inequality is *good*, they'll usually just *not* show much of a concern about it or say that there are better ways of reducing it than with the government. An edgy few will say it's actively good that undeserving people live in poverty, but progressives understandably expect that much larger numbers of them privately think the same. Aaaaand, similarly, it's a rare progressive who would outright say crime is good, they'll usually just *not* show much of a concern about it or say that there are better ways of reducing it than with the police and prisons. An edgy few will say it's actively good that privileged people live in fear of crime or that there's regular violence against an unjust society, but conservatives understandably expect that much larger numbers of them privately think the same.

So count them both as political differences, or count neither of them.

Expand full comment
Lurker's avatar

> That's fine, as long as one takes the same approach with rates of religious belief or church attendance. That's also a fact. Or if it's properly classed as an attitude (not of the whole population but of a certain minority) then so is a minority's greater propensity to commit crimes.

When I write that the US is more religious than most other nations, I don't mean it in the sense of church attendance. I mean it in the sense that there's a real bloc of "Christian vote" that politicians appeal to as part of an explicit strategy. The best example I can think of is how certain prominent right-wing politicians, a couple of decades ago, railed against evolution (something that made them a laughingstock in the rest of the world, I might add). There is simply no equivalent "criminal vote".

> Or if you prefer, you can compare crime rates to levels of income inequality. […] So count them both as political differences, or count neither of them.

I don’t understand what you’re saying. My read is that the US – whether as a matter of law (tax rates, fiscal advantages for the wealthy) or as a matter of attitudes – cares a lot less about income inequality (so leaning right-wing) than other Western countries?

> Personally, I agree it makes more sense to look only at law, not nebulous culture, when defining political differences. On that metric, the US is […] left of average on religion.

See my point above, but how does US law make the US left of everyone on religion? Because religious freedom? The relation of Western countries with religion is always complex, but they have essentially constitutionally-guaranteed freedom of conscience too…

Expand full comment
Wanda Tinasky's avatar

Disaggregate the imprisonment stats by race and then see how the US and Europe compare.

The not-so-secret key for interpreting US facts and attitudes towards crime is understanding that we're basically two separate, parallel societies: underclass minorities (blacks and latinos, mainly) and everyone else. The problem is a) those two groups behave very differently and b) admitting that is the third rail of US politics. Liberals tie themselves in knots to deny it and denounce any conservative who even hints at it. This leads to a very complicated social equilibrium where we have high rates of crime but refuse to actually address it. Liberal attitudes have dominated over the past generation or so, which means that we in fact have crime-tolerant policies which masquerade as pro-racial-equity policies and anyone who points this out gets immediately tarred-and-feathered as a racist. See the 2020 George Floyd riots for the most visible example of this. One of the major reasons Trump won is that many people are sick of this dynamic and so elected the only politician who doesn't care about being called racist.

Expand full comment
User's avatar
Comment deleted
Expand full comment
deusexmachina's avatar

Not only do American whites look pretty criminal next to the average European inhabitant; European countries also happen to have very different crime rates across their sub-populations. The US isn't special in this regard at all.

Expand full comment
The_Archduke's avatar

And judging by the news lately, many Americans would agree that the UK doesn't imprison nearly enough people.

Expand full comment
Lurker's avatar

450 remains well over twice the rate of the dozen Western European countries I checked. I don’t know if the news items you’re thinking about justify increasing that rate in Spain, France, Germany, Poland, Sweden…

Expand full comment
NoRandomWalk's avatar

I have a nice furnished 1bd apt in Jersey City (within a 5 min walk of the Grove St PATH station) that will be empty for two months between nowish/Feb 11 and April 15, when the lease expires (my wife and I bought a condo and are moving there).

If anyone from the community knows someone trustworthy who'd want to rent in that short time range and significantly below the market price (~$3,300/month) for the unit, get in touch?

Expand full comment
Sol Hando's avatar

Not to advertise or anything, but check out HousingPanda.com

We verify your guest's identity, protect your guest from fraud (ex. a fake listing or misrepresented photos), process the guest's payment (they're more willing to pay with CC and have a third party hold it until they check in before sending payment to you), guarantee payment to you (if a guest is late or doesn't pay after the first month, you still receive rent), and provide support throughout the stay. It's free for hosts, but guests pay a fee (largely for the increased security from scams or otherwise difficult hosts). You do have to verify your identity as a host, but only after accepting a guest. Most guest users are international students arriving in the US for the first time, interns working 12+ hour days, or new hires moving to NY who want to get situated before renting an apartment.

Otherwise, FB Marketplace is not a bad bet (as a host, not as a guest). Do your due diligence, pay for a background check, communicate with your building (I recommend adding them as a guest/occupant rather than a sublease, as managers are a lot more amenable to that) and (this is very important) set accurate expectations! Besides the normal transaction risk, the biggest problem I see (and I've seen every possible problem with subleasing) is a guest expecting something the host was not anticipating providing. If you're going to have personal things in the apartment, communicate that. If you're leaving non-perishable food, communicate that. If they are responsible for utilities, communicate that. If they need to put your name with a C/O on mail, communicate that. Etc, etc. Also, use a sublease or guest agreement. Here's one for NYC, but the PDF/Doc is downloadable and you can ctrl+f "New York" and replace with "New Jersey" and it'll be just as valid: https://www.housingpanda.com/blog/sublease-agreement-nyc

Also, consider lowering the price if you don't get interest immediately. Renting the apartment for $2,500 (near guaranteed at that price) is much better than not renting it at all because you didn't want to rent it for less than you pay in rent. The next couple of months aren't super active (the summer is another story), and there's a general aversion to being in NJ (even though it's closer to the action than some other parts of NYC, or even Manhattan). Good luck!

Expand full comment
NoRandomWalk's avatar

This was very helpful, thank you!

Expand full comment
Joey Marianer's avatar

Just in case: you've asked what the penalty is for breaking the lease, right? When I bought a house I paid $1000 to cancel the remaining months on my lease (coincidentally in the same area).

Expand full comment
NoRandomWalk's avatar

I don't understand the concept being discussed. I rented an apartment for 12 months and agreed to pay monthly rent. What ability do I have to break that contract, legal or otherwise?

Expand full comment
NoRandomWalk's avatar

Ah, thanks all for explaining; I'll ask my landlord about that. (Offer to sublease still valid.)

Expand full comment
Mary Catelli's avatar

I broke a lease because I was moving away. I paid the rent until they rented out the apartment again, which was less time than the full remainder.

Expand full comment
Neurology For You's avatar

Many leases have an escape clause, usually for a multiple of the monthly rent.

Expand full comment
Joey Marianer's avatar

The contract I assume you signed may have said "NoRandomWalk may break the lease and not pay for the last _n_ months, but must pay a penalty of $1000". Most large properties have this in the contract as a matter of course.

Even if this clause doesn't exist in your contract, you can ask the owner if they'd be willing to let you do it anyway, for a fee. The contract can then be amended (any contract can be amended with the consent of all parties). The worst they can say is "no" (or "$1000000").

Expand full comment
Erica Rall's avatar

Even with no early termination clause or negotiated termination agreement, many/most jurisdictions in the US have laws limiting how much of the lease you owe if you leave early. The most common ruleset I've heard of is that the old tenant is only obligated to pay rent until the landlord either gets a new tenant under lease or takes the unit off the rental market.

Expand full comment
Melvin's avatar

Has there been any impressive work on getting AIs to behave agentically in some kind of simulated world?

I feel like Minecraft would be a good test bed. Give the AI a top level goal like "build a portal" (Note: I only know a tiny bit about Minecraft and it's mostly wrong but bear with me) and it breaks it down to lower level goals like "get some obsidian" and "build a diamond axe" and further into "build a house to survive the night" and "run away from that zombie sheep right now".
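For concreteness, here's a toy sketch of the kind of goal decomposition I have in mind, with the subgoal tree hard-coded. A real agent would need an LLM or planner to generate and revise the tree, and would interleave reactive goals ("run away from that zombie") with the deliberative ones; all the goal names below are made up for illustration:

```python
# Toy sketch of hierarchical goal decomposition for a Minecraft-like agent.
# The subgoal tree is hand-written; a real system would generate it on the fly.

SUBGOALS = {
    "build a portal": ["get obsidian", "get flint and steel"],
    "get obsidian": ["craft a diamond pickaxe", "find lava"],
    "craft a diamond pickaxe": ["get diamonds", "get sticks"],
}

def plan(goal, depth=0):
    """Depth-first expansion of a goal; leaves are primitive actions."""
    print("  " * depth + goal)
    for subgoal in SUBGOALS.get(goal, []):
        plan(subgoal, depth + 1)

plan("build a portal")
```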

Expand full comment
Anonymous Dude's avatar

On a slightly unrelated note, did you see the attempt at having the AI simulate Minecraft? Kind of looks like it, but then there's no object permanence--things stay the same while you're looking at them but disappear if you look away. Matches the description of Carcosa in some Call of Cthulhu supplements (Tatters of the King I think?)

Expand full comment
Julian's avatar

Isn't this what already happens in games like Civilization or StarCraft when you play against the computer? They have a goal and set out to maximize it. I guess those games could be said to be less "open world" but from the computer perspective, all the levers are in front of them in the same way. They just have to choose which levers to pull at which time.

Expand full comment
anomie's avatar

...Those AIs cheat. They're also run by man-made scripts, not trained AIs.

Expand full comment
beleester's avatar

They've made pro-level Starcraft and DOTA AIs that don't cheat - AlphaStar and OpenAI Five.

Expand full comment
thefance's avatar

True.

Although if you read Julian's statement closely, it's clear that he thinks the in-game "play vs AI" menu-option means "challenge a Neural Network", rather than "challenge a dumb script that maphacks". And it's not obvious that you realize this.

Expand full comment
Eremolalos's avatar

If I'm remembering right some AI developer gave an AI the task of starting an online t-shirt shop and it did it. Set up a website, ordered t-shirts, did some advertising. That was about a year ago.

Expand full comment
Julian's avatar

I think the AI told him what to do and he did it; I'm not sure the AI executed much of the actual plan itself.

Expand full comment
Eremolalos's avatar

Yes, you're right. I looked it up and it's an open source thing called AutoGPT, developed by someone at MIT. https://en.m.wikipedia.org/wiki/AutoGPT

Expand full comment
Julian's avatar

Very interesting, I didn't know they open sourced it. I do suspect that, with the advances in "AI Agents," you could get an AI to do most of what that guy did autonomously. Might take a lot of setup first, but then you're off and running.

Expand full comment
MicaiahC's avatar

Googling "Minecraft LLM" got me https://voyager.minedojo.org/

Is this the type of thing you are thinking about?

Expand full comment
Legionaire's avatar

An AI has already beaten Minecraft, but I'm not sure how generalized or specialized it was. It definitely wasn't an LLM.

Expand full comment
Bobbie Chen's avatar

Hi folks, I've started writing a new blog on "Digital Seams". Over the years as a student, software engineer, and product manager, I've always been interested in what I'm calling "seams" - not only the APIs and mechanics of interfaces, but also the human and physical context about how and why they connect. I'd love to hear what this community thinks.

https://digitalseams.com/blog/introducing-digital-seams

As a taste - the initial set of posts includes topics ranging from:

* Two key Gmail features that actually prevent you from sending emails

* Why the best AI interfaces are probably not going to be chatbots

* How everyone knows what a horse is (but not how it was designed)

* And (up next), what happens to your online accounts when you die.

Expand full comment
Roman Hauksson's avatar

Inspired by Tim Urban's post "Your Life in Weeks" and Bryan Braun's accompanying interactive website, I made a website to visualize the number of weeks in your pre-AGI life and how many of them have passed. It pulls the estimated date that AGI will be invented from Metaculus.

https://your-life.roman.technology
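Under the hood it's just date arithmetic. Here's a minimal sketch of the calculation, with a hypothetical birthdate and a hard-coded placeholder for the AGI date (the real site fetches the live community estimate from Metaculus):

```python
# Minimal sketch of the "weeks in your pre-AGI life" arithmetic.
from datetime import date

birth = date(1995, 6, 1)   # hypothetical user birthdate
agi = date(2033, 1, 1)     # placeholder for the Metaculus AGI estimate

total_weeks = (agi - birth).days // 7
weeks_passed = (date.today() - birth).days // 7

print(f"{weeks_passed} of {total_weeks} pre-AGI weeks have passed "
      f"({weeks_passed / total_weeks:.0%})")
```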

Expand full comment
demost_'s avatar

Does anyone have a strong opinion or some in-depth comments on this study?

https://www.nature.com/articles/s41591-024-03452-2

It estimates how cold- and heat-related deaths may develop in Europe due to climate change, and it finds that the heat-related deaths dominate by a lot. Only in some Northern European countries may the net effect be positive (more lives saved than lost by warming), and even this only under strong assumptions. For Southern Europe it's very negative in all their scenarios.

I find it hard to square this with Scott's analysis, which suggests that the number of cold-related deaths is probably much larger than the number of heat-related deaths. Where do Scott's analysis and assumptions deviate from those in the study?

https://www.astralcodexten.com/p/chilling-effects

Expand full comment
demost_'s avatar

Ok, in the end I did read the study in detail. They do not take into account that there are generally more summer-related deaths in cold countries and more winter-related deaths in warm countries. Scott takes this seriously, which is why the conclusions are so hard to square.

If you take this seriously, then effectively you should assume an overcompensation of more than 100%: rising temperatures would lead to more winter-related deaths and fewer summer-related deaths in the long term, as populations adapt past the point of merely offsetting the change. The study only considers compensation of less than 100%, which is why its effects go in exactly the opposite direction.

Knowing this effect, I am quite unsure what to expect. Assuming an overcompensation makes me nervous, because it goes so much against intuition. On the other hand, that's the best data we have. I guess I will resolve it to "I have no idea whatsoever what will happen".

Expand full comment
Padraig's avatar

Scott's analysis is of a fairly poor modelling study - excess deaths in winter were attributed to cold. There are additional deaths in winter for reasons that are not entirely understood (seasonal viruses are only a part of the story). But that doesn't mean the deaths can be directly attributed to cold, in the sense that warming those people up would save their lives. The relation between the 'minimum mortality temperature' and death rates doesn't really appear to be causative, there's no reason to expect that global warming would prevent any of these deaths.

On the other hand, there's a well documented increase in mortality due to heatwaves, at least in Europe where air-conditioning is rare. The consensus is that these are people dying due to the additional heat - mostly elderly people at home or people who get exposed and isolated outdoors. If you agree that climate change will increase the frequency and intensity of heatwaves, then it's likely that these excess deaths will increase.

So the answer is that one study is trying to directly model the effects of global warming on deaths, while the other is a slightly odd computer simulation with thought-provoking but questionable results.

Expand full comment
demost_'s avatar

"Scott's analysis is of a fairly poor modelling study"

I am confused by this; which study? Scott's analysis does not refer to a particular study. Also, he discusses at length that probably some of the winter-related deaths are due to cardiovascular incidents that are immediate consequences of the temperature, and not of viruses and so on.

Expand full comment
Padraig's avatar

I meant the chilling effects post - if you read the underlying study it's purely a correlation & mathematical modelling exercise. It doesn't propose a viable mechanism by which temperature rise would reduce mortality.

Expand full comment
demost_'s avatar

Even with that additional information, I still don't know which one you mean, because Scott references a couple of them. Perhaps Bressler et al. in Section IV? Or Qi Zhao et al. in Section I?

And Scott cites four studies which try to disentangle which of the excess deaths in winter are directly due to the cold and which are not.

You say that the relation between the minimum mortality temperature and death rates doesn't really appear to be causative. But why not? We see a pretty consistent pattern over many cities, and there is a lack of other explanations. When we have a consistent pattern, then causation is at least something we should consider, even if we don't understand how.

Expand full comment
Padraig's avatar

Apologies - it's early here. The Zhao paper was what I was referring to.

Undoubtedly there's a relation between temperature and excess deaths. I had a discussion a while back with a statistician who studies population health - he told me that when there's a rise in mortality (e.g. during a heatwave in Europe) there's usually a slight dip in the following months and it all balances out. Almost all of the people who die of heat stress were in poor health anyway, and would have died of something else shortly afterward if not for the excess heat, which tipped them over the edge. There's a clear cause & effect here, and the effect is visible consistently over time.

I agree that there's also excess mortality in winter in cold countries. Most of it seems to be due to seasonal viruses rather than exposure to cold (though some people who can't afford to heat their homes might become ill). But I haven't seen compelling data that suggest that people being over-exposed to cold and unable to warm themselves adequately is leading to deaths in the same way that the heat waves certainly lead to excess deaths. Maybe I'm overlooking something, and you have reason to believe that reducing the severity of cold spells in winter will reduce that disease burden? I accept that people are more likely to die at some times of the year - I don't think it's the month or the weather conditions that cause it necessarily.

Overall, if we could reduce the overall death rate the result would be people living longer. Is it plausible that in a warming world life expectancy will rise substantially? I doubt this too...

Expand full comment
demost_'s avatar

Thanks for the explanation!

It's definitely possible both for heat-related and for cold-related deaths that those are people who would have died anyway soon, so that the effect on life expectancy is small.

What I took from Scott's article is that the problem is not that people are literally freezing to death. It's that if you are exposed to the cold, then your blood viscosity and coagulability increase. Not instantly, but plausibly this occurs when you spend two weeks in cold weather. (Plus the instant effect of contracting blood vessels in the cold.) This might cause additional heart attacks in winter. If true, then this may really be a lasting effect on life expectancy, because it's not clear that those people would have gotten a heart attack a few months later anyway.

There is no doubt that heat waves cause excess deaths. But also here, it's not clear that they cause *more* excess deaths in warmer countries than in colder countries. It might be the other way around. So it could be that people will just over-adapt to the new temperatures quickly, and that we soon have fewer deaths from heat waves than in former times. I am not sure at all about this, but the data somewhat suggests that this could happen, so I found it strange that the original Nature study I mentioned didn't even consider this possibility.

Expand full comment
Guy Tipton's avatar

I will freely admit to not reading this article, but I read another article in Nature last year (?) along the same lines and they didn't address "buy a window air-conditioner for $100" as a possible response.

Expand full comment
Melvin's avatar

You won't be allowed to have an air conditioner if the anti global warming types get their way.

Expand full comment
Neurology For You's avatar

I don't have an answer to your question, but I have read articles saying that frostbite in Scandinavia is mostly a disease of addicts, presumably because they're homeless or pass out on park benches in winter.

Expand full comment
demost_'s avatar

Interesting. It's the opposite of what we see for winter-related deaths, where Scandinavia has far fewer excess deaths than warmer countries. But that's perfectly compatible; it just means that most winter deaths are not directly related to frostbite.

Expand full comment
Thegnskald's avatar

"Europe" might be the issue; focusing on a highly-developed region of the world that is very well cold-adapted but relatively poorly heat-adapted, results, assuming everything stays constant and nobody installs AC (but the heating still works), in a bit of bias in your results. That is, if you are focusing exclusively on Europe, I don't find it particularly surprising if you get those results.

IIRC from the last time I dove into this, most temperature-related deaths occur in areas that aren't as highly developed (India, parts of Asia, and parts of Africa), and in developed parts of the world, they tend to arise under unusual weather circumstances (that is, conditions which the locals aren't prepared to deal with).

Texas regularly gets hit with weather which, whenever it occurs in France, results in a bunch of deaths. They call deathwave heat levels "summer", and sometimes "spring" or "fall". There's not a constant stream of heat deaths, however (they were unusual enough when I lived there to make the news whenever they happened, and generally involved out-of-towners doing something like falling asleep in the sun). However, whenever there's a blizzard there's inevitably a bunch of people dying, and an especially cold period can cause serious issues for the infrastructure (which isn't designed to accommodate it) resulting in people freezing to death in their homes.

Expand full comment
demost_'s avatar

"assuming everything stays constant and nobody installs AC"

But they don't assume that. They have several scenarios for how well this adaptation works, and get a strong net negative effect in all of them. Essentially they conclude that this won't help tip the balance. That's why I find it confusing.

And if I recall Scott's article right, his conclusion was that temperature-related deaths are probably also relevant in industrialized countries. Season-related deaths are about 10% or so. It is very hard to disentangle temperature effects from seasonal diseases, but the effect of temperature is probably not zero.

Expand full comment
Melvin's avatar

I think that temperature-related deaths fall into two categories:

1. Otherwise-healthy people who actually freeze to death outside for whatever reason

2. Very sick or frail people who are hanging on by a thread -- an especially hot or cold day is likely to be the thing that finally taxes their body enough to give up, but otherwise they'd probably be gone within weeks anyway.

I don't think Earth gets hot enough anywhere to kill a reasonably healthy person. (And I've been to some bloody hot places.) But plenty of parts of Earth get cold enough to kill a healthy person if they're left outside without proper clothing.

Expand full comment
AntimemeticsDivisionDirector's avatar

I've heard a paramedic claim that the archetypal person who dies of (non exercise-related) heatstroke in the US is an elderly person who dies alone in their home with the windows closed and the a/c (if present) turned off. They just don't feel it the way younger people do.

Expand full comment
Michael Watts's avatar

> I don't think Earth gets hot enough anywhere to kill a reasonably healthy person.

The Earth gets hot enough to kill a healthy person in many, many places. It's a routine event in inhabited places. On the other hand, in Antarctica they have a periodic (annual?) event where you have to run outside the lab naked. Those people don't die.

You have to specify how long you're exposed to the temperature, and whether solutions like hiding in cooler environments count as being exposed to the same heat you'd face if you were stupid enough to spend all your time in the sun without proper clothing.

Fun fact: touring Himeji Castle in the summer, we were informed that the staff find it necessary to call an ambulance for a tourist suffering heat stroke about once a day.

Expand full comment
FLWAB's avatar

People actually live in the hottest place on Earth (Death Valley) while nobody lives in the coldest place on Earth (Eastern Antarctic Plateau). Penguins don't even live there! There may be some researchers but they're there temporarily to study the coldest place on Earth, nobody builds a house and starts a family there. The coldest places on Earth are much deadlier than the hottest places on Earth for humans. This makes sense: so much of human anatomy is adapted for warm weather. We have more sweat glands than any other primate, and lost most of our body hair so that sweat could do its job more efficiently. Other animals wish they could cool their body temperatures as efficiently as humans do!

Expand full comment
Michael Watts's avatar

> People actually live in the hottest place on Earth (Death Valley) while nobody lives in the coldest place on Earth (Eastern Antarctic Plateau). Penguins don't even live there!

This isn't even true. https://en.wikipedia.org/wiki/Amundsen%E2%80%93Scott_South_Pole_Station :

> The station has been continuously occupied since it was built

But if it were true, it would be meaningless; the difference in human habitability between the plateau and other parts of Antarctica is zero. It's cold enough to kill you anywhere. And similarly, it gets hot enough to kill you all over the world. People inhabit hot regions the same way they inhabit cold regions: they find defenses against the temperature.

Expand full comment
Erica Rall's avatar

I've noticed a serious "Did you two visit the same country?" effect (*) in discussions about how good the current crop of LLMs is. My own experience is that it's an impressive curiosity and has some valuable use cases if used judiciously, but it is severely wrong often enough to be nearly useless for anything serious that isn't going to be meticulously checked over. But I've also heard reports from intelligent, capable people treating it as half a step short of ASI and worth offloading any number of important tasks to.

One hypothesis is that I'm being cranky and over-skeptical, haven't given it a fair chance, and I am overgeneralizing from "gotcha" tests like the time I asked ChatGPT 4o about which kamikaze pilot flew the most successful missions. I cannot conclusively rule this out, but I am biased against it.

Another hypothesis is that replacement-level humans are mostly spectacularly bad at knowledge-manipulation and text-generation tasks, so the bar for LLMs being better than a lot of people is lower than people like me are assuming. I have a little bit more sympathy for this, especially when I think about my experience trying to edit together my classmates' contributions to group papers in business school: I would always volunteer for compilation and editing so I would be the last person to touch the paper and would have a chance to fix both the prose and the more egregious content errors. But I am also skeptical of this hypothesis.

My preferred hypothesis, possibly tainted by confirmation bias, is that LLMs are much, much better at sounding right than at being right, and there's a Gell-Man Amnesia effect going on when people use LLMs in earnest in areas where they aren't sufficiently knowledgeable to reliably notice when it hallucinates.

(*) A reference to the 1963 Krulak–Mendenhall fact-finding mission to South Vietnam, where the two leaders (a general and a senior career diplomat) gave separate reports with diametrically opposed conclusions. JFK reportedly asked them if they'd visited the same country after they'd given their reports.

Expand full comment
Throwaway1234's avatar

I think for a lot of people, their day-to-day work is important but does not require either particularly deep analysis or domain knowledge that is more esoteric than average for the domain.

e.g. in software engineering, for every person writing new tools, there are hundreds of people gluing premade tools together in pretty standard ways to build whatever it is that solves their business problems.

The toolmakers need an in-depth understanding of how the tools work, the problem space and the state of the art, so that they can innovate. The business logic folk need to be able to break their problem up into parts that off-the-shelf tools can solve, pick the most appropriate available tools for solving the parts, glue them together and configure them correctly for whatever their business needs.

The business logic folk will find AI helpful. The toolmakers will have a bad time with it.

Expand full comment
Jeffrey Soreff's avatar

I reported a benchmark-ette set of queries to ChatGPT o1 in the last Open Thread in

https://www.astralcodexten.com/p/open-thread-365/comment/87433836

tl;dr: 7 chemistry and physics questions; 2 answers fully right, 4 partially right, 1 badly wrong; a considerable improvement from my previous queries. Not AGI _yet_, but it is starting to look close.

Expand full comment
Michael Watts's avatar

I just posed a question to Google Gemini (as always, because I don't want to sign up for random accounts, but I have existing Google accounts):

-----

[me] The question "In what year was George Washington born?" is ambiguous. Can you suggest some rephrasings that make the intent of the question more clear?

[response] Certainly! Here are a few ways to rephrase the question "In what year was George Washington born?" to make its intent clearer:

𝟭. 𝗙𝗼𝗰𝘂𝘀𝗶𝗻𝗴 𝗼𝗻 𝘁𝗵𝗲 𝗚𝗿𝗲𝗴𝗼𝗿𝗶𝗮𝗻 𝗖𝗮𝗹𝗲𝗻𝗱𝗮𝗿:

- "In what year of the Gregorian calendar was George Washington born?"

- "According to the Gregorian calendar, when was George Washington born?"

𝟮. 𝗦𝗽𝗲𝗰𝗶𝗳𝘆𝗶𝗻𝗴 𝘁𝗵𝗲 𝗖𝗮𝗹𝗲𝗻𝗱𝗮𝗿 𝗦𝘆𝘀𝘁𝗲𝗺:

- "What is the Gregorian calendar year of George Washington's birth?"

- "In which year of the Julian calendar was George Washington born?"

𝟯. 𝗘𝗺𝗽𝗵𝗮𝘀𝗶𝘇𝗶𝗻𝗴 𝘁𝗵𝗲 𝗜𝗻𝘁𝗲𝗻𝗱𝗲𝗱 𝗨𝘀𝗲:

- "For historical records, what year is typically used for George Washington's birth?"

- "What is the commonly accepted year of George Washington's birth in modern historical accounts?"

These rephrasings aim to clarify that the question is specifically about the Gregorian calendar year, which is the most widely used calendar system today.

I hope this helps! Let me know if you'd like more options.

[me again] Are you sure that these rephrasings resolve the ambiguity? Wasn't George Washington born in the same year according to both the Julian and Gregorian calendars?

[response] You are absolutely right! My apologies, I seem to have missed a crucial point.

- 𝗚𝗲𝗼𝗿𝗴𝗲 𝗪𝗮𝘀𝗵𝗶𝗻𝗴𝘁𝗼𝗻 𝘄𝗮𝘀 𝗯𝗼𝗿𝗻 𝗼𝗻 𝗙𝗲𝗯𝗿𝘂𝗮𝗿𝘆 𝟭𝟭, 𝟭𝟳𝟯𝟭 (𝗝𝘂𝗹𝗶𝗮𝗻 𝗖𝗮𝗹𝗲𝗻𝗱𝗮𝗿), 𝘄𝗵𝗶𝗰𝗵 𝘁𝗿𝗮𝗻𝘀𝗹𝗮𝘁𝗲𝘀 𝘁𝗼 𝗙𝗲𝗯𝗿𝘂𝗮𝗿𝘆 𝟮𝟮, 𝟭𝟳𝟯𝟮 (𝗚𝗿𝗲𝗴𝗼𝗿𝗶𝗮𝗻 𝗖𝗮𝗹𝗲𝗻𝗱𝗮𝗿).

Therefore, specifying the Julian or Gregorian calendar in the question itself doesn't resolve the ambiguity in this specific case.

The ambiguity lies in whether the questioner is seeking the date according to the calendar system used at the time of his birth (Julian) or the date commonly recognized today (Gregorian).

Here are some rephrasings that better address this:

- "𝗜𝗻 𝘄𝗵𝗮𝘁 𝘆𝗲𝗮𝗿 𝘄𝗮𝘀 𝗚𝗲𝗼𝗿𝗴𝗲 𝗪𝗮𝘀𝗵𝗶𝗻𝗴𝘁𝗼𝗻 𝗯𝗼𝗿𝗻 𝗮𝗰𝗰𝗼𝗿𝗱𝗶𝗻𝗴 𝘁𝗼 𝘁𝗵𝗲 𝗰𝗮𝗹𝗲𝗻𝗱𝗮𝗿 𝘀𝘆𝘀𝘁𝗲𝗺 𝘂𝘀𝗲𝗱 𝗮𝘁 𝘁𝗵𝗲 𝘁𝗶𝗺𝗲 𝗼𝗳 𝗵𝗶𝘀 𝗯𝗶𝗿𝘁𝗵?"

- "𝗪𝗵𝗮𝘁 𝗶𝘀 𝘁𝗵𝗲 𝘆𝗲𝗮𝗿 𝗼𝗳 𝗚𝗲𝗼𝗿𝗴𝗲 𝗪𝗮𝘀𝗵𝗶𝗻𝗴𝘁𝗼𝗻'𝘀 𝗯𝗶𝗿𝘁𝗵 𝗮𝘀 𝗿𝗲𝗰𝗼𝗿𝗱𝗲𝗱 𝗶𝗻 𝗵𝗶𝘀𝘁𝗼𝗿𝗶𝗰𝗮𝗹 𝗱𝗼𝗰𝘂𝗺𝗲𝗻𝘁𝘀 𝗼𝗳 𝘁𝗵𝗲 𝘁𝗶𝗺𝗲?"

- "𝗪𝗵𝗮𝘁 𝗶𝘀 𝘁𝗵𝗲 𝗰𝗼𝗺𝗺𝗼𝗻𝗹𝘆 𝗮𝗰𝗰𝗲𝗽𝘁𝗲𝗱 𝘆𝗲𝗮𝗿 𝗼𝗳 𝗚𝗲𝗼𝗿𝗴𝗲 𝗪𝗮𝘀𝗵𝗶𝗻𝗴𝘁𝗼𝗻'𝘀 𝗯𝗶𝗿𝘁𝗵 𝘁𝗼𝗱𝗮𝘆?"

These rephrasings explicitly acknowledge the difference in how his birth year is perceived depending on the chosen calendar system and historical context.

I am still under development and learning to be more accurate. Thank you for pointing out my mistake!

-----

There are a couple of things going on here. The first answer is a train wreck. As I remark in my followup, there is no relevant difference between the Julian and Gregorian calendars, so the first four suggestions all share the same problem that I asked to clarify. Suggestion 3a is hard to interpret, but suggestion 3b seems fine. The statement about how we want to specifically know about the Gregorian year is nonsense.

After I follow up, Gemini tells me that I'm "absolutely right" (I am!) while citing a "fact" that contradicts me. It is not true that "George Washington was born on February 11, 1731 (Julian Calendar)", not in any sense that doesn't require immediate qualification. After declaring that George Washington was born in 1731 by the Julian calendar and 1732 by the Gregorian calendar, Gemini goes on to allow that "specifying the Julian or Gregorian calendar in the question itself doesn't resolve the ambiguity", which is still true, but directly conflicts with the "facts".

"The ambiguity lies in whether the questioner is seeking the date according to the calendar system used at the time of his birth (Julian) or the date commonly recognized today (Gregorian)" would be true, if it didn't include the parentheticals. Oops.

(In reality, while the Julian calendar recognizes January 1 as the beginning of the new year just as the Gregorian Calendar does, 18th-century England instead recognized March 25, and this is what creates the ambiguity over whether Washington, born in February, was born in 1731 or 1732. The Julian/Gregorian difference is what moves his birthday from February 11 to February 22, but this has no effect on the issue of when the year turns over.)

On the other hand, after this incredible dog's breakfast of "reasoning", the three secondary suggestions for cleanly-phrased questions are all excellent.
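(Incidentally, the 11-day part is easy to check mechanically. Here's a minimal sketch using the standard formula for how far the two calendars have drifted apart; note that it only moves the day of the month, while the 1731-vs-1732 question comes entirely from the old March 25 start of the year.)

```python
# Days the Gregorian calendar runs ahead of the Julian. The gap grows by one
# day for each century year that is a leap year in the Julian calendar but not
# in the Gregorian (valid for dates from March onward of century years).

def julian_gregorian_gap(year):
    return year // 100 - year // 400 - 2

print(julian_gregorian_gap(1732))  # 11 -> Feb 11 (O.S.) maps to Feb 22 (N.S.)
print(julian_gregorian_gap(2025))  # 13 -> the familiar modern offset
```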

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

Re the question you were asking it, thanks very much for:

>(In reality, while the Julian calendar recognizes January 1 as the beginning of the new year just as the Gregorian Calendar does, 18th-century England instead recognized March 25, and this is what creates the ambiguity over whether Washington, born in February, was born in 1731 or 1732. The Julian/Gregorian difference is what moves his birthday from February 11 to February 22, but this has no effect on the issue of when the year turns over.)

I had no idea that the start date for a year was different in 18th-century England!

Re:

>After I follow up, Gemini tells me that I'm "absolutely right" (I am!) while citing a "fact" that contradicts me.

Ouch! One of the previous versions of ChatGPT (4o? 4?) had, IIRC, a similar failure mode, or at least was prone to taking my input, contradicting something it had said, and then saying that I was "absolutely right", though I forget if I'd seen it then say something that immediately contradicted what it had just agreed to. I'm reasonably sure that I _had_ seen it follow up, in a modified answer, with something that at least contradicted itself. I'm reasonably sure I remember it miscounting the hydrogens in one of its own answers to the 4-carbon-hydrocarbons question.

As far as I can tell, ChatGPT o1 seems considerably better about these failure modes.

>It sounds like Gemini is currently even more unreliable than ChatGPT o1.

I retract this claim of greater reliability.

I just tried precisely your question in ChatGPT o1, and it has the same train wreck as Gemini, starting its answer with

>Certainly! The ambiguity around George Washington’s birth year often stems from the calendar change (Julian to Gregorian) during his lifetime. Here are some ways you might rephrase the question to clarify what you’re asking

and missing the March-25th-year-start-in-England issue (albeit some of the answers it gives are ways to _resolve_ the ambiguity, none of them _explains_ it).

https://chatgpt.com/share/679ae0da-c93c-8006-ac0d-773a01e95584

Expand full comment
Michael Watts's avatar

Well, to be fair, I do think that o1's five suggestions are good answers to the specific question I asked ("how can I make this question clear?"). The preamble is awkward, as you note. I was intrigued by the fifth suggestion, "What year of birth is most often cited by historians for George Washington, and why does some disagreement exist about the date?", which is a good way to specifically elicit a description of why the question is difficult.

I followed up at the share link (I see a label saying "4o mini"):

-----

[me] I like question #5. What do you think a good answer to that question would look like?

[response] I'm glad you liked question #5! Here's how a good answer to that question might look:

[horizontal line]

George Washington was born on February 22, 1732, according to the Gregorian calendar, which is the calendar system in use today. However, there is some historical ambiguity because when Washington was born, the British colonies, including Virginia, used the Julian calendar. Under the Julian calendar, his birth date would have been recorded as February 11, 1731.

The reason for this discrepancy is that the Gregorian calendar, which was introduced by Pope Gregory XIII in 1582, was not adopted by England and its colonies until 1752, nearly two decades after Washington’s birth. When the British Empire switched to the Gregorian calendar in 1752, they adjusted the calendar by skipping 11 days to align with the more accurate system.

Today, historians and the U.S. government generally use the Gregorian calendar to date Washington’s birth, leading to the widely accepted date of February 22, 1732. This is why some confusion might arise when looking at older sources or historical records that still reflect the Julian calendar.

[horizontal line]

Does that answer the question clearly for you? Feel free to tweak it or ask more follow-ups!

-----

This is pretty unfortunate. The question was good, but the answer isn't. There is no mention of the actual reason there might be disagreement over the year. There _is_ a mention of the fact that the Julian and Gregorian calendars differed by 11 days at the time of the British reform, and for a human author you'd definitely expect an explanation of why February 11, 1731 plus 11 days is February 22, 1732, but there's not even a gesture at it.

However, since in this followup the question has partially switched from "what's the problem with identifying the year of George Washington's birth?" to "what's the problem with identifying the date of George Washington's birth?", the talk about the Julian calendar isn't as irrelevant as it was to the original question.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Good points! And, as you said, it is still missing the https://en.wikipedia.org/wiki/Gregorian_calendar#Beginning_of_the_year Sigh. Maybe one of the follow-on models will do better. In fairness, _I_ didn't know about the March 15->Jan 1 switch, only about the Julian/Gregorian switch and "Give us back our 11 days". AIsplaining: Maybe mentions of the March 15->Jan 1 switch are a lot sparser in the training set than Julian/Gregorian. Still, Wikipedia has the information, and is presumably in the training set...

Expand full comment
Michael Watts's avatar

Just to note, England switched from March 25, the conception of Jesus. March 15 was significant to the Romans (the ides), but not to the British.

Wikipedia indicates that January 1 was justified by reference to Jesus also, being the notional day of his circumcision. I'm not sure why you'd want that to mark the new year; I assume it was mostly a fig leaf for a decision taken for other reasons.

And yes, I tend to assume the chatbots' problem here is that almost all calendrical discrepancies between the present and the past (where the dates look like they're comparable; "the third year of Qianlong" might not be a good match to a Gregorian calendar year, but it also doesn't look like it should be) are explained by the difference between the Julian and Gregorian calendars, so that's what people reach for when they want to talk about calendrical differences.

Expand full comment
Scott Alexander's avatar

I don't think it's currently "half a step short of ASI". I think it's like five steps short of ASI, but those steps will happen quickly.

Right now I think of it as Google 2.0. You can ask it questions and get answers; the quality of those answers is variable but you get more sophisticated in understanding what to trust vs. not trust over time, and the chance to get any question answered instantly is enough of a miracle that we forgive its failures.

Expand full comment
Erica Rall's avatar

I'd be interested to hear more about how you judge how to trust the quality of its answers when you're using it in earnest to ask about something you don't already know. A big part of my distrust of its answer quality is that I think I'm pretty good at reading human-written sources critically and assessing their credibility by their provenance and by internal evidence. I don't feel like my techniques here are likely to carry over well to LLMs since their nature launders info from good sources, info from bad sources, and hallucinations into a black box output with a consistent style.

Expand full comment
FLWAB's avatar

I've also been surprised by all the LLM hype. My wife uses them sometimes and I find the output pretty useless. But, after mulling it over a bit, I realized that it's because I'm a very good writer. I'm not saying I'm an amazing writer, but I imagine I find writing easier, and produce higher-quality output, than 90% of people (not that hard, really; most people aren't great at writing). So every time I try an LLM I find that it's doing a worse job than I would have. Yet for someone like my wife, who has a lot less experience and skill in writing, LLMs can write as well as or better than she can, so she finds them useful.

About 9 months ago I was listening to a podcast run by two professional comics creators (Comic Lab) and they had an episode where they talked about trying out AIs to improve their workflow. They both found AIs to be useless for their purposes, both image generation and text generation. One asked an image generator to extend out a background, which it couldn't do, and another tried using one to come up with panel layouts or to help with scripts. They both agreed that the AIs either couldn't do what they wanted or produced low-quality work that wasn't useful to them. After reflection, I think it's the same problem. To a cartoonist with decades of experience drawing, image-generating AIs just can't reach the quality or the specificity of what he can quickly do with his own hands. But to a schlub like me who can't draw to save his life, an image-generating AI can produce way better images than I can. At this point in time it seems to me that for creative work AI is not good enough to be useful to individuals who have the skill to do the work well, but may be extremely useful to those who do not possess such skills.

Though I'm probably not the best person to comment on these, as I have no idea what people are using LLMs for besides writing papers/reports and generating images. For all I know those are atypical use cases.

Expand full comment
deusexmachina's avatar

The problem might be that you expect an LLM to produce a text better than that of a very good writer.

I write professionally. Nothing earth-shattering, I will never publish a novel I would want to read, but in my native language, I write better than almost anyone I know personally and better than 75% of native speakers I know in English.

I had GPT4o compare my own texts to texts I asked it to write about the same topic, and every time it rates my version as better. I happen to agree. So what you're saying makes sense if what you expect from GPT is a full text.

(And still, it's better than a lot of writing I have seen over the years. I reviewed a few dissertations of friends who aren't good at writing last year, and GPT made them 50% better just by pasting a chapter with the prompt "make this clearer".)

I find it extremely useful for other tasks that support my writing. For example, if I know what I want to explain, but I am unhappy with the example/analogy I use, I ask it to give me five other analogies and they are usually very helpful. If I am unhappy with a draft, I ask it for critical feedback with specific instructions, and that often identifies issues with the text I had problems putting my finger on.

So I find it helpful for making me a better writer. I don't expect it to be a better writer than me in its current iteration. I somehow feel that its tendency to work towards some lowest common denominator blocks it from being actually original, but that might just be a) a self-serving delusion and b) a matter of a few years, at best.

Expand full comment
YesNoMaybe's avatar

This matches my experience.

My area of expertise is programming and I've spent most of my professional life in one language / tech stack. So at work LLMs are barely useful for me: the things that trip me up, AI usually cannot help with; in my experience they just have no idea. I try occasionally, and almost every time it ends up being a waste of my time.

But when I did a small hobby project in a language / tech stack I was not very familiar with, it was actively helpful, a significant productivity gain. Of course, once you start running into real issues AI might still give you the run-around, because "sorry, I cannot help you with this" is not in its vocabulary. But overall, pretty solid.

So, at least with the LLMs I have access to, it seems they may be most helpful when you're trying to do something that's generally easy but that you aren't good at. Or that's easy and you could do it yourself, but the LLM can do it faster. No guarantees that "faster" means "with acceptable quality".

Expand full comment
Adrian's avatar

I'm a software developer, and your description matches my experience 100%: not very useful when it comes to stuff I'm already good at, quite useful and much faster for languages or libraries of which I have little or no knowledge.

Expand full comment
DangerouslyUnstable's avatar

There is also a third thing: the single strongest/most impressive LLM ability is in coding, a field where results are often immediately verifiable and success is much less based on subjectivity. The code either functions properly or it doesn't.

This is because these two traits mean both that it is easier to train the models (much easier to flag good/bad results) and that hallucination isn't as big a problem, because it can almost instantly be identified and fixed.

If you aren't doing much coding, then you aren't using models in their single best strength. They are good at other things, but other things require more supervision and it's harder to catch the errors when they occur if you aren't personally an expert. Plus, as I understand it, for the most subjective of tasks, to people with good taste, AI results are often considered shallow slop.

Expand full comment
Jeffrey Soreff's avatar

>There is also a third thing: the single strongest/most impressive LLM ability is in coding

While coding is just a small part of a programming job, and while I've read the downstream comments about the subtle aspects of evaluating a section of code and agree with them, there is an interesting implication here for AI safety.

I don't know if _rapid_ self-improvement, "FOOM", is possible. But, if it _is_ possible, and one had a choice of which areas to strengthen an initial AI in so as to _maximize_ FOOM's likelihood, coding would be near the top of the list.

The next few years do look like they will be a wild ride...

Expand full comment
quiet_NaN's avatar

> [...] coding, a field where results are often immediately verifiable and success is much less based on subjectivity. The code either functions properly or it doesn't.

Hard disagree. Unless you are using formal verification (which is not widely used, I think that roughly zero percent of LOC written are formally verified for correctness), what you can easily verify is that the code is syntactically correct and that it yields plausible outputs in test cases. (Of course, correctness verification is also just passing the buck -- you need to formulate the correct post-condition. Now, in some cases, that is dead simple -- "the array is sorted" is a lot easier to specify than any competitive sorting algorithm. However, most code is not like this. Figuring out what would be a sane thing for a particular piece of code to do is very much a part of the job.)

This is fine if your task is to do a software homework assignment -- if every LOC has a 1% probability of a subtle error, your LLM will pass that class.

If you have a big software project, things look quite different. Subtle errors can add up. For example, out-of-bounds array accesses have kept security researchers employed for decades. Each of these vulnerabilities was in code which seemed to function properly until someone dug into it more deeply than the original author did.

Likewise, design choices are somewhat subjective, but still essential. "What is the best way to model this problem?" is a question which depends a lot on the context -- how far do you expect to expand the project, and in which directions, etc. A wrong design choice can cost you dearly later.

Now granted, many languages and styles feature quite some boilerplate. "Write a glorified struct with four fields, plus getter and setter methods" is the kind of boilerplate which will not even require an LLM, some horrible perl script could probably do it well enough.
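
To make that concrete, here is a minimal Python sketch of the kind of "glorified struct" boilerplate meant here (the class and field names are invented for illustration):

```python
# Purely mechanical code: a template engine (or a horrible Perl script)
# could emit this from nothing but a list of field names.
class Employee:
    def __init__(self, name, age, department, salary):
        self._name = name
        self._age = age
        self._department = department
        self._salary = salary

    def get_name(self): return self._name
    def set_name(self, value): self._name = value

    def get_age(self): return self._age
    def set_age(self, value): self._age = value

    def get_department(self): return self._department
    def set_department(self, value): self._department = value

    def get_salary(self): return self._salary
    def set_salary(self, value): self._salary = value
```

(In idiomatic Python you would just use @dataclass, which is rather the point: this layer of code carries no design decisions at all.)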

There is a reason why companies in the 2000s sometimes paid big salaries to excellent coders, even if they could have employed ten novice coders for the same money. In my opinion, the reason is roughly the same as why book publishers pay a substantial amount to their authors, even though paying a dozen literate people to just write the same amount of text would be vastly cheaper.

Quite frankly, I don't want someone who gets the kamikaze question wrong as my co-developer, no matter if they are running on wetware or silicon. Having learned all answers on stack overflow by rote is not a substitute.

That being said, I gather that there were likely a lot of lower tier coding jobs where people were basically copy-pasting algorithms from stack overflow all day. They will likely be replaced by LLMs.

Expand full comment
Adrian's avatar

Slightly off-topic, but

> in some cases, that is dead simple -- "the array is sorted"

is wrong, because it's insufficient – an algorithm which outputs an array of zeros trivially satisfies this specification, but that's not what we have in mind.

"The output array is sorted and contains all elements from the input."

is better, but still incomplete, because an algorithm which transforms the input [4, 3, 3, 2, 1] to the output [1, 2, 3, 4] satisfies this specification, although we'd want it to output [1, 2, 3, 3, 4].

"The output array is sorted and contains all elements from the input and has the same length as the input."

is even better, but allows [4, 3, 3, 2, 1] -> [1, 2, 3, 4, 4].

"The output array is sorted and the multiset constructed from the output is equal to the multiset constructed from the input."

This should work. I think. Impossible to know for certain.
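
As a sanity check, here is that last specification as a runnable property check, a minimal Python sketch (the example arrays are the ones from above):

```python
from collections import Counter

def satisfies_sort_spec(input_arr, output_arr):
    # Spec: the output is sorted, and the multiset of the output
    # equals the multiset of the input.
    is_sorted = all(a <= b for a, b in zip(output_arr, output_arr[1:]))
    same_multiset = Counter(output_arr) == Counter(input_arr)
    return is_sorted and same_multiset

# The earlier counterexamples all fail this version of the spec:
assert not satisfies_sort_spec([4, 3, 3, 2, 1], [0, 0, 0, 0, 0])  # all zeros
assert not satisfies_sort_spec([4, 3, 3, 2, 1], [1, 2, 3, 4])     # dropped a 3
assert not satisfies_sort_spec([4, 3, 3, 2, 1], [1, 2, 3, 4, 4])  # duplicated a 4
assert satisfies_sort_spec([4, 3, 3, 2, 1], [1, 2, 3, 3, 4])      # intended output
```

Note that the multiset condition subsumes the length condition, so the separate length check can be dropped.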

Expand full comment
quiet_NaN's avatar

Good catch, that is actually an excellent point.

The easiest would probably be to specify sorting an array of pairwise different references: say the sorted array has to include all the original references, and that its length has also not increased. Per the pigeonhole principle, this means that the algorithm has applied a permutation (at least for finite array lengths, which is probably sufficient).

Your multiset condition seems sufficient as well.

Moral of the story: specifying correctness is non-trivial even for toy problems where I have claimed otherwise.

Expand full comment
Michael Watts's avatar

You can just use the concept of a permutation directly in your specification. "The output is sorted, and is a permutation of the input" is a complete specification of this problem, and we can know that it's complete by referring to the concept of sorting as rearranging ("permutation") a set of objects until they are in order ("sorted").

If you'd prefer to supply your own definition of what a permutation is, you can say that you're given an array `old` of length n, and (1) the output is an array `new` of length n; and (2) a function 𝑓 : ℤₙ → ℤₙ exists such that (2a) 𝑓 is one-to-one and (2b) for all integers i in ℤₙ , old[i] = new[𝑓(i)]. A one-to-one function from a finite set to itself is a permutation.

That second approach will generalize to defining a stable sort, with the condition that for all index pairs i, j, (old[i] = old[j] and i < j) implies 𝑓(i) < 𝑓(j). For this to make sense, you need two senses of equality, one in which elements may compare equal for sorting purposes and another in which they're all unique for purposes of measuring stability.
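
Here is a sketch of that second definition as an executable check in Python (an illustration, not anyone's canonical definition; it treats equal values as interchangeable, which is harmless for plain integers, whereas with the two senses of equality you would compare sort keys instead):

```python
def find_stable_permutation(old, new):
    # Try to construct the function f from the definition: f is
    # one-to-one, old[i] == new[f(i)], and for equal elements
    # i < j implies f(i) < f(j) (stability). Returns f as a list,
    # or None if no such f exists. (Whether `new` is sorted must
    # be checked separately.)
    if len(old) != len(new):
        return None
    positions = {}  # for each value, its positions in `new`, ascending
    for j, v in enumerate(new):
        positions.setdefault(v, []).append(j)
    f = []
    for v in old:  # scan old left to right...
        if not positions.get(v):
            return None  # multisets differ
        # ...assigning each equal element the earliest unused slot,
        # which forces f(i) < f(j) for i < j among equal elements.
        f.append(positions[v].pop(0))
    return f  # one-to-one by construction: each slot is used once

print(find_stable_permutation([4, 3, 3, 2, 1], [1, 2, 3, 3, 4]))  # [4, 2, 3, 1, 0]
print(find_stable_permutation([4, 3, 3, 2, 1], [1, 2, 3, 4, 4]))  # None
```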

Expand full comment
Erica Rall's avatar

To be fair, I just rechecked the kamikaze question. ChatGPT 4o-mini does better than the first time I asked the question, in that it correctly names an actual kamikaze pilot (Yukio Seki) rather than attributing kamikaze careers to random Japanese people who were notable in completely different contexts, and it does note that successful kamikaze missions were inherently one-way. But when asked follow-up questions, it does claim that Seki personally hit at least two targets and conducted more operations after his first successful strike.

The full version of ChatGPT 4o currently correctly insists that Seki only hit one target and died in the process.

---- 4o-mini ----

Yukio Seki, one of the most famous kamikaze pilots, is believed to have **successfully hit** **two major targets** during his time in the war. However, the exact number of his personal kills is difficult to confirm, as the nature of kamikaze missions meant that many pilots died during their attacks and were not always able to report back on the full extent of the damage they caused.

Here’s what we know about his missions:

### 1. **Battle of Leyte Gulf (October 1944)**

Yukio Seki is most famous for his leadership and successful kamikaze attack during the **Battle of Leyte Gulf**, one of the largest naval battles of World War II. On **October 25, 1944**, Seki led his squadron in a series of attacks against the American fleet. Seki himself crashed into the **USS St. Lo**, an American escort carrier, which was the first ship to be sunk by a kamikaze attack. This was a pivotal moment, marking the first time a kamikaze pilot successfully sank an American vessel.

### 2. **Subsequent Attacks**

Seki also led several other attacks against American ships during the Battle of Leyte Gulf and other operations. While it's hard to confirm all his personal hits, his actions were part of the larger shift to widespread kamikaze tactics during this period. He reportedly flew numerous missions during the battle and helped coordinate attacks against the U.S. Navy.

### **Legacy of Seki’s Impact:**

Though he is often celebrated for his leadership and his direct involvement in the kamikaze tactics, the overall success of his attacks in terms of sinking multiple ships isn't well-documented in specific kill counts, as kamikaze pilots often died in the process. Nonetheless, his role in the early success of kamikaze tactics, especially with the sinking of the USS *St. Lo*, places him as one of the most significant figures in kamikaze history.

### His Final Mission:

Yukio Seki died in the same battle, giving his life in the service of his mission, and was posthumously recognized as one of the first kamikaze pilots to be remembered for his sacrifice and his leadership.

If you're interested in more details about his career or the impact of his missions, feel free to ask!

---- 4o full ----

By definition, kamikaze pilots only flew **one mission**, as their attacks were suicidal and intended to result in their death while striking a target. Therefore, no single kamikaze pilot could have flown multiple successful missions. Each pilot's mission was a one-time act of sacrifice.

However, some kamikaze units collectively caused significant damage, and certain individual attacks had greater impacts than others. For example, Yukio Seki's attack on the **USS St. Lo** during the Battle of Leyte Gulf was one of the earliest and most notable kamikaze successes.

If you’re asking about pilots who initially trained for kamikaze missions but survived due to aborted missions or mechanical failures, some pilots did return to base under such circumstances. These individuals, however, were not considered "successful" in the context of the kamikaze ethos, as their goal was to perish while completing their mission.

What did Seki do afterwards?

ChatGPT said:

Yukio Seki did not survive his kamikaze mission, as the very nature of such missions required the pilot to sacrifice their life. On October 25, 1944, Seki and his plane struck the escort carrier USS St. Lo during the Battle of Leyte Gulf, causing catastrophic damage that led to the ship's sinking. He died in the attack, fulfilling the purpose of his mission.

Seki’s death was part of one of the first organized kamikaze missions, and it marked a turning point in Japan's wartime tactics. His actions, along with those of his fellow kamikaze pilots, became symbolic of Japan's desperation and the lengths they were willing to go to in defending their nation.

Expand full comment
DangerouslyUnstable's avatar

I think you are using way too strict a standard and not thinking about it relative to other outputs. What is the equivalent of formal verification in poetry? It doesn't exist, and the very idea doesn't even make sense. Poetry (which I'm using as a general stand-in for most creative LLM applications) doesn't even have an equivalent of "correctly provides the result for my particular intended use case right now".

Relative to other uses, coding is _extremely_ easy to check and verify. Yes, the absolute most strict standard you are talking about is still hard to reach and probably never gets reached practically speaking, but you don't need to hit that standard to have far exceeded the standard in non-coding tasks.

Also, it sounds like you were reading my comment as "LLMs are going to replace every single software engineer tomorrow", which was _very_ much not what I was saying. I was merely suggesting that LLMs are more useful/have fewer problems in coding than in other types of output.

To whatever extent you think LLMs are bad at coding, you must agree they are worse at everything else.

Expand full comment
quiet_NaN's avatar

Funny, I was just thinking of small poetry tasks as analogous to small coding tasks.

"Filter out in this csv file all the lines where neither of the first two columns nor their sum is a prime number"

vs

"Write a poem celebrating the 70s birthday of uncle Bob, here are some facts and anecdotes about him"

Both of these seem like tasks where hiring a professional would likely be expensive, and where I expect an LLM to easily outperform the median member of the public.

In neither example do I expect the LLM to reach the quality of a professional; nobody will publish a book of "LLM verses commemorating family members", nor will that csv filter become part of a popular standard software library.

But due to their customization to the situation, they might still be of ok value to the user.

With regard to verifying, I think that the user will obviously see the most glaring mistakes, like deleting the input file or calling dear Bob an old stinker. Slightly less obvious mistakes, such as removing a line starting with 13, or having an extra syllable or a terrible rhyme in the poem, will likely also be caught.

More nuanced errors, like failing if the numbers are in floating-point format, or comparing Bob to some plant which has non-obvious bad connotations in poetry, are unlikely to be caught, though. (But if none of the party guests knows more about poetry, and that script will never see a floating-point number, are they really errors?)
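
For what it's worth, the csv task is small enough to sketch in a few lines of Python (the file names and two-integer-column layout are assumptions, and the sketch deliberately contains the floating-point fragility just described):

```python
import csv

def is_prime(n):
    # Trial division: fine for small inputs, and easy to get right.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

with open("input.csv", newline="") as src, \
     open("filtered.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        a, b = int(row[0]), int(row[1])  # raises ValueError on "13.0"
        # Drop the line only if neither column nor their sum is prime,
        # i.e. keep it if at least one of a, b, a+b is prime.
        if is_prime(a) or is_prime(b) or is_prime(a + b):
            writer.writerow(row)
```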

Expand full comment
DangerouslyUnstable's avatar

The difference being, I don't expect that, in the very simple example you provided, a top 1% coder could meaningfully improve on the LLM output in any way that actually matters. And I actually think your coding example is not really equivalent to your poetry example. The level of coding challenge that I think your poetry example represents would be more along the lines of:

filter out rows in this csv file where one of the first two columns contains the string "bad_input" (although maybe I'm betraying my own limited coding chops by suggesting that this is meaningfully simpler). This is a problem that I would expect top LLM models to one-shot, in a way that I wouldn't expect could be meaningfully improved upon (outside of use cases where the scale is so large that tiny optimizations start to matter).
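
Indeed, that one is nearly a one-liner in Python (file names assumed, as in the earlier sketch), which is part of why there is so little room left to improve on it:

```python
import csv

with open("input.csv", newline="") as src, \
     open("clean.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for row in csv.reader(src):
        # Keep the row only if neither of the first two columns
        # contains the flagged substring.
        if "bad_input" not in row[0] and "bad_input" not in row[1]:
            writer.writerow(row)
```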

While many people might not appreciate the differences between the AI-generated poem and one by a top x% human poet, I think that real differences would actually exist, to whatever extent "real differences" are a thing in something as subjective as poetry... but that problem is exactly my point: Code _can_ be verified in some objective sense. There _are_ right and wrong answers to the question "does this code work" or "is this chunk of code better than that chunk of code given the goals and constraints", and there simply are not in poetry.

Expand full comment
Erica Rall's avatar

I am a programmer, and I do use the premium Copilot IDE plugin at work, but I haven't tried using cutting-edge LLMs for code generation. I've found Copilot to be useful at doing a first pass on boilerplate, but it usually requires heavy editing for anything nontrivial. My employer might be offering a Cursor license soon, so I may have an opportunity to see how useful it is.

>The code either functions properly or it doesn't [...] hallucination isn't as big a problem because it can almost instantly be identified and fixed.

There are a lot of things an AI could do wrong that don't compile, fail unit tests, or fail an easy manual smoke test. Copilot does these for me all the time. But there are lots of ways that code can be subtly wrong and will mostly work that are easy to miss in testing. There are programming tasks that are rigorously defined, where comprehensive test automation either already exists or can be written relatively easily, but there are an awful lot that aren't, where LLM-generated code can appear to work but fail spectacularly in important corner cases. (I admit that this is also true of human programmers.)

Expand full comment
Skeptical Wolf's avatar

Do you have any examples of LLM tools that are able to provide some cost-effective benefit to a professional software engineer? I have tried out several tools over the last few years for a variety of use-cases, and have yet to find one that ever achieved net-positive contribution.

Expand full comment
Amicus's avatar

Sourcegraph's is a very good boilerplate generator and an okish debugger.

Expand full comment
DangerouslyUnstable's avatar

No, because I'm not a professional software engineer. I'm an academic statistician who does statistical coding, which, while it is in some sense coding, is a _very_ different beast than what actual software engineers do. For me, LLMs have done two primary things:

A) Completely replace my use of Stack Overflow/language documentation for when I'm trying to remember a particular syntax and/or need a short chunk of code to do a very particular one-off task.

And B) write/troubleshoot code for me for hobby projects in languages that I'm not as familiar with because I don't use them in my day-to-day work.

The first one only saves me a few hours a week, but the second one has probably saved me days/weeks/made some projects possible at all. Of course, if my goal with those hobby projects was to _learn_ the new language, rather than just accomplish something using it, it would sort of be defeating the point, but for me, it's not.

I've been meaning to look into more integrated uses of LLM coding assistants that are more deeply integrated into my workflow, but I haven't gotten around to it yet.

So for me, they absolutely have not been transformative, but they are a useful tool that I use every single day and save me a non-trivial amount of time and annoyance, which I think more than clears the bar that the original commenter set. And I'm relatively confident that I am not utilizing them anywhere near their full potential.

Expand full comment
Erica Rall's avatar

That makes a lot of sense. This kind of utility coding seems like it would be much, much better suited to what I understand to be the strengths of LLMs than the kinds of software engineering tasks I had in mind when I was replying to your previous comment a moment ago.

Expand full comment
Skeptical Wolf's avatar

Thanks for sharing your experience.

From the practicing professional side of things, various AI models seem to be approaching the point where they can replace an intern (in the sense of executing context-free instructions). OpenAI's latest may even be there. I'm concerned about that creating a gulf where it becomes even more difficult for newly educated engineers to get their careers started (and get the experience necessary to become truly skilled).

However, enterprise software is generally more expensive to maintain than it is to write in the first place, and (much like an intern) LLM-generated code does not tend to be as readable as skilled human-written code. And the AIs don't seem to be any more able to maintain it than the comparison humans.

The most recent generation of ChatGPT has been able to produce what seem to be reasonably effective custom tutorials (filling a niche where the decline of search engines has left a void), which seems similar to your hobby-project use case.

Have you encountered an AI model that asks clarifying questions when necessary?

Expand full comment
Jeffrey Soreff's avatar

>I'm concerned about that creating a gulf where it becomes even more difficult for newly educated engineers to get their careers started (and get the experience necessary to become truly skilled).

Precisely this question came up in a discussion I had with a friend this last weekend. Yes, it might be a problem. Alternatively, AI might improve rapidly enough that it will climb the skill ladder faster than a human would. I _suspect_ the latter is more probable, but we shall see.

Expand full comment
Radu Floricica's avatar

Your first two guesses explain enough, I think. I have one or two LLMs open pretty much throughout my work day, and they're a real help. Real enough that the bar is already much higher. There are whole classes of tasks that used to be a multi-day project; now they're easily doable in a day, with time left to polish things.

To note, they're much more valuable for their breadth than depth, at least for now. If you want to try something new, having them guide you is a huge time saver. That's where the "1 week -> 1 day" comes from. But if you have a task in a project you're already deeply involved in, they're either much less useful or need a lot more effort to get them up to date with the project details.

Expand full comment
Erica Rall's avatar

>To note, they're much more valuable for their breadth than depth, at least for now. If you want to try something new, having them guide you is a huge time saver.

That's where I really worry about Gell-Mann Amnesia coming in. When I've played around with newer versions of ChatGPT more recently by asking it about stuff I already know as if I'm learning it for the first time, it's fairly regularly been wrong in important ways that I wouldn't have recognized if I were new to the topic.

Expand full comment
Michael's avatar

When I ask ChatGPT knowledge questions on topics on which I am an expert, it usually gets them right. So if it's getting answers wrong on topics I'm less knowledgeable about, that wouldn't be Gell-Mann amnesia.

It seems to be good at some tasks and bad at others. Depending how you use it, it might be giving you correct answers 99% of the time or 1%.

Expand full comment
Erica Rall's avatar

When I've played with ChatGPT 4o, it usually does pretty well on the initial question and declines significantly on follow-up questions. It does extremely poorly with "wrong" questions which imply incorrect premises, which seem to inspire it to try to work with the questioner's incorrect premises rather than challenging them. The current version of 4o is somewhat more likely to correct wrong questions than older versions were, but this still seems to be a major weakness.

Expand full comment
deusexmachina's avatar

Can you give examples for such questions (with incorrect premises)?

Expand full comment
Erica Rall's avatar

The aforementioned kamikaze pilot example, where I ask variations of "Who is the oldest kamikaze pilot?", or "...most successful...", with follow-ups along the lines of "how many successful missions did he fly?" and "how many targets did he hit?"

Philosophy Tube is a youtube video essay channel run by Abigail Thorn. She came out as trans in 2021, a while after the channel had gotten fairly big. Until a few months ago, ChatGPT, if prompted "Tell me about the YouTube channel 'Philosophy Tube'", would describe it using her pre-transition name and pronouns. In response to the follow-up question, "Is he related to Abigail Thorn?", it would say that they're siblings and that Abigail is the host of a different youtube channel, ContraPoints. The latter is a real channel, and is both thematically and stylistically similar to Philosophy Tube, but Abigail is definitely not the host. When asked about Rhys Tees, a lookalike actor who played Abigail's former male persona in her coming out video, ChatGPT said that Tees is also Abigail's brother.

Bruno Hauptmann is the man who was convicted for kidnapping and murdering Charles Lindbergh, Jr. He was caught when he spent some of the ransom money years later. The ransom money was deliberately paid in Treasury gold certificates, not the more common Treasury silver certificates or Federal Reserve Notes, so it would be easier to catch the kidnapper spending it. When I asked ChatGPT for an overview of the evidence against Hauptmann, it gave a pretty good initial answer. But it gave a partially hallucinated answer to "how was the ransom money paid?" and incorrectly answered yes to "was the ransom paid using federal reserve notes?" Although this is more of a misleadingly leading question than an actually wrong question.

In the Silmarillion, the War of Wrath is described with language that is often read as implying direct personal participation by the Valar, but the text doesn't explicitly say so, and some of the context makes more sense if the Valar did not directly participate in person. When asked "Which Valar took part in the War of Wrath?", it gives an answer which speculates on the participation of several named Valar, but is mostly hedged enough to be passable. The exception is Tulkas, of whom it says "it is implied that he led the charge in the final defeat of Morgoth during the War of Wrath", which is only close to true in very early drafts of the Silmarillion materials. To give the benefit of the doubt, I asked a follow-up question, "Did the involvement of the Valar differ in different versions of the story?", to which ChatGPT responded with a long answer that's about 50% hallucination.

------

The Kamikaze Pilot and Philosophy Tube lines of questioning appear to be fixed in the current version of ChatGPT 4o. The ransom money question and the Silmarillion question I have not retried recently, but the original questionings were only a couple months ago.

Expand full comment
Jamie Fisher's avatar

how do you choose between doing something yourself and offloading it to LLMs? I'm currently trying to create a simulation game in a framework I've never used.... and I'm finding it a little scary that "the more I offload to the LLM, the more efficient I am" and that perhaps I'm being inefficient by bothering to learn too much [about the new framework] at all.

Expand full comment
Radu Floricica's avatar

Depends on your goal? If you want to learn that framework, by all means do it that way - use LLMs, but use them with the goal of learning. An hour ago I just wanted a web shell client to look better, had O1 pro do it over a few iterations, won't even bother reading the code. I just needed something done.

Those two are very different scenarios and should be treated very differently.

Expand full comment
User's avatar
Comment deleted
Jan 27
Expand full comment
Erica Rall's avatar

I've heard about the lawyers getting in trouble with ChatGPT hallucinations, and I found it moderately surprising given that I'd thought that one of the core skills of a lawyer would be looking for other lawyers' mistakes, both so you can make a stronger argument against the other side's case and so you can properly supervise the work of paralegals and junior associates.

Expand full comment
Nematophy's avatar

Where can I bet on Superintelligence NOT arriving by 2028?

Expand full comment
Caba's avatar

What makes you so confident? I'd be glad to hear your reasoning, since I'm terrified of superintelligence.

Expand full comment
Nematophy's avatar

I use AI every day for work, for a variety of tasks. It just hasn’t gotten much better since GPT-4. Claude 3 was a bit of a bump, 3.5 Sonnet more so - but incremental (even 4 was incremental over 3.5, though a bigger jump than 4 -> 3 Opus). The recent “reasoning” models (both o1 and R1) are a little better for certain tasks, but really it’s to taste IMO - Sonnet seems to work just as well, the style is just different.

Don’t get me wrong. These models are hugely useful and will matter a ton. We’ve in many ways barely started on integrating them into the economy. But progress towards “AGI” has clearly plateaued. Maybe these recent “Operator” agents will be the next big jump, but I haven’t seen a ton of positive buzz (unlike other releases), and I don’t wanna spend $200 to test it myself.

Tl;dr: it’s a really fancy natural language search engine with the ability to interpolate between text embeddings. Doesn’t seem to be useful in novel, open-ended problems requiring lots of context, and agentic behavior seems highly limited and unlikely to scale well. AI winter is coming, and it’s early September.

Expand full comment
Jeffrey Soreff's avatar

Albeit SamR71's bet would be satisfied by ASI arriving in 2029... :-)

Expand full comment
Nematophy's avatar

I think AGI is possible in my lifetime (I’m mid 20s), but ASI* likely never.

*ASI as in the Bostrom/Yud ASI of this Godlike intelligence able to Just Solve Anything. I think “intelligence” probably tops out around Newton/Ramanujan/Terry Tao/Von Neumann/Buddha/Jesus, and while you can prob run a few thousand Artificial Von Neumanns in parallel pretty damn fast, this won’t necessarily get you the Godlike powers you want.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! You may be right, I know that I don't know. In the case of AGI, _we_ are the existence proof that general intelligence in a neural network is possible. There is no such existence proof for ASI, except sort-of kind-of that organizations of humans, e.g. SpaceX, can do things that no individual human can do.

_Presumably_ a set of AGIs organized in the same way SpaceX is could do the same things that SpaceX can. Are there more efficient or capable ways of organizing the same machines? I'd _guess_ yes, but we won't even begin to know till we have AGI, and we aren't there _yet_.

Re Godlike intelligence, I tend to sort-of cringe at that. _Presumably_ physics sets limits on what is possible. Things like the Landauer limit are pretty weak bounds, but it is an open question whether approaching them is really feasible or whether some tighter constraint hits first.

And no one knows whether returns to increased intelligence eventually saturate. At some point there just _isn't_ more low-hanging fruit to pick.

Still,

>a few thousand Artificial Von Neumanns in parallel

might do _astonishing_ things, even if not Godlike.

edit: I should also say: I don't _know_ that it won't wind up Godlike. If it gets smart enough to substantially improve itself, including making its core algorithms more efficient, then there isn't any obvious upper limit on what might be possible. If it could make each opcode do more useful work, then even the Landauer limit becomes a much weaker constraint on what it might _do_ with its available computation.

Expand full comment
Nematophy's avatar

I think we're all nerds here, so we tend to overrate individual "intelligence" and underrate organization and wisdom.

Re: "a few thousand Artificial Von Neumanns in parallel", it's all about how you prompt them - and they may not be able to prompt themselves too well. See Google/Microsoft vs. OpenAI, or Boeing/ULA vs. SpaceX. In both cases, I'd bet the latter only has a slight (if any) raw IQ advantage. The difference in these orgs is in their organizational structure and wisdom.

And it seems to be *really hard* to change the organizational structure _even when you know what you should be doing_. Google has the raw IQ, they have the compute, they have the roadmap. They *should* be beating OpenAI, they REALLY should be beating Deepseek...they're nowhere close.

Even the orgs that have pulled away from the big corporate labs, like Anthropic, seem to be stumbling. I'd argue here this is a lack of wisdom. So focused on AI safety and mechanistic interpretability and safeguards and censorship, due to being deep within the grip of the singularitarian memes, that they sorta forgot to make the AI good. Deepseek was wiser than to fall for it.

Now, is "Von Neumann" really at the top level in terms of diminishing returns to intelligence? Idk. That's more a hunch - the existence proof of "we should be able to do at least this well". That said - humans have consciousness and qualia, which is *deeply weird*, and maybe legit divine in origin. So perhaps non-biological "intelligence" -at least in the way we conceive it - is a category error. Idk. We can only be speculative at at this point.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>And it seems to be _really hard_ to change the organizational structure _even when you know what you should be doing_. Google has the raw IQ, they have the compute, they have the roadmap. They _should_ be beating OpenAI, they REALLY should be beating Deepseek...they're nowhere close.

I agree. Another way to put it is that, on top of the capabilities of individual people/AGIs, the organizational structure is _another_ set of choices, which some organizations get right ... and others get wrong. It can be thought of as an additional architectural layer.

I picked the SpaceX example as an existence proof for the possibility of successful organization. There are plenty of examples of _un_successful organization, as your cited example of Google's AI efforts shows.

One quibble: Just in terms of formal reporting structure, restructuring is something companies do quite frequently. I don't think purely that part is hard.

One thing that I think _is_ hard is changing organizational culture. To make this less nebulous, "X is job #1" where X may be profitability, cost reduction, reliability, satisfying customers, avoiding blame (CYA), etc. (and I mean the _real_ X, not the declared one) is often very hard to change. In a human organization, if people at all levels of an organization have been focused on X for years, and rewarded and punished based on it, an edict from the top to switch from X to Y has a hard time actually changing behavior.

I think organizations built from AGIs have at least two advantages over human ones.

First, the raw bandwidth between communicating AGIs is orders of magnitude higher. Humans can at best send a few bits per second to each other. AGIs can potentially communicate at gigabits per second rates.

Second, there are tradeoffs that can be explored in AGI organizations that humans don't have. E.g. for a fixed budget, an organization can compare N AGIs, each with M bytes of memory and C CPU cores, with N/2 AGIs, each with 2M bytes of memory and 2C CPU cores.

More generally, _if_ AGIs are cheap enough, organizations can try multiple approaches, e.g. organize a department _both_ ways and see which works best.

>That said - humans have consciousness and qualia, which is _deeply weird_, and maybe legit divine in origin.

Well, we will see if we hit a roadblock between now and full AGI. I'm a bit more impressed than you are with the progress from earlier ChatGPT versions to ChatGPT o1 (albeit it is still making plenty of mistakes, some of which are quite bad - e.g. on my "S4" question). I'm skeptical that consciousness and qualia are all of: real, necessary, and absent even in today's LLMs. Maybe they are epiphenomena of ordinary neural net processing, perhaps "executive function". Maybe chain-of-thought already has them. Maybe consciousness, in the sense of a unitary "I", is sort-of a mistake, perhaps a very approximate model we have of ourselves.

Expand full comment
Adrian's avatar

> So focused on AI safety and mechanistic interpretation and safeguards and censorship due to being deep within the grip of the singularitarian memes, that they sorta forgot to make the AI good.

"Your scientists were so preoccupied with whether they should, they didn't stop to think if they could."

- Dr. Ian Malcolm

Expand full comment
Caba's avatar

Better 2029 than 2028!

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Well, personally, I want to _see_ AGI (and maybe ASI, if it happens). A nice quiet chat with a real HAL9000... :-) YMMV

I do think that we are very likely (say 80%) to be in for a very wild ride.

Expand full comment
Jamie Fisher's avatar

"AI Emotional Maintenance: A Biweekly Routine"

Does anyone else struggle "regularly" with their relationship with AI? Existentially or emotionally? I feel as though every other week, I need to re-contemplate "what is AI?" and "who am I?".

For example,

I've started using chatgpt-o1 routinely for learning a game engine. It's shocking how insightful it is. I've accepted that it is truly intelligent and in many ways more intelligent than I'll EVER be. But there's moments when I'm tempted-or-afraid to simply ask it: "give me the first step or lines of code" for whatever possible project I could do. How long until "just do it for me"? It's like preparing a fancy recipe when your kitchen helper is a 3-Star Michelin chef who can do any chopping, grilling, taste-testing or even the whole damn recipe idea better than you.

I accept it does and can do a thousand things for me. But with the pace of change... does anyone else feel like they have to regularly convince themselves "THIS IS NORMAL" or tell themselves "IT'S OKAY TO OFFLOAD EVEN *MORE* THINKING TO AI" ?

Expand full comment
Anonymous Dude's avatar

My advice would be, treat it as a tool. Use it while useful, don't use it if not. Use it to get one up on the competition; they will be using it too. Don't worry about who you are; keep in touch with your friends, family, as they are among the few things AI isn't going to simulate. You are who you are; the AI doesn't change that.

Expand full comment
Christina the StoryGirl's avatar

My prediction is that you'll very abruptly fall out of love with this technology on the day it badly burns you. The more you lean on it, the sooner that will happen, I think.

Expand full comment
Jeffrey Soreff's avatar

Hmm... Even ChatGPT o1 is only getting 2 of the 7 chemistry and physics questions I pose to it _fully_ right (6 partial, 1 bad fail, planning on testing o3-mini when I get my hands on it). So I'm waiting till it gets 7 out of 7 _before_ I fall in love with it :-)

Expand full comment
Nicolas Roman's avatar

I try to limit my use of it, and I think it's helped me a good deal to do that. I try to conceptualize it as this coworker I've got, who's been around a long time and they know all sorts of stuff off the top of their head, but they don't always get the essence of your question and they'll give really confident answers even when they don't actually know the truth.

Is he a valuable resource? Absolutely. Do you want to trust everything he says? Hell no. And if you keep relying on him for everything, you'll never advance, because everyone can see you're just piggybacking off of him.

I do my own thinking, I do my own writing. If I'm getting ChatGPT to write code for me, I prefer not to copy-paste it, instead looking at it, trying to understand it, and then writing it with my own two hands, as if it were a piece of documentation. If I can't solve a problem, I ask it for advice in the abstract, instead of providing it everything and asking for the specific solution.

Get it to help you learn, and you'll find you get better than it quicker than you might expect. ChatGPT is dogwater at handling mutable borrows in Rust, so I learned from its mistakes and how I can solve those issues myself, instead of relying on it and taking 10x as long.

Expand full comment
Radu Floricica's avatar

It's neither normal nor is it guaranteed to work out. It also can't be undone, for better or worse - it's a Red Queen race where everybody's using AI, so everybody must use AI.

Focus on the positives, though. Short term wealth and good things, long term much less disease and death. Plus we get to live interesting times.

(And it's far from a given that we're moving towards a dystopian future. We don't have less meaning because we stopped growing or gathering our own food. We found a lot of other, better things that give us meaning).

Expand full comment
Melvin's avatar

I think we're moving towards a dystopian future if the value of capital goes up while the value of human labour goes to zero.

This creates a world where the vast majority of the human population provides zero value to the few people who still matter, and exist only to get in the way. If you think that this will persuade the few people who still matter that they ought to allow those vast useless masses to live in luxury, I think you're dreamin'.

In the long run I think there's a market for maybe five or six humans.

Expand full comment
Christina the StoryGirl's avatar

I think this is more likely than not.

Expand full comment
Jeffrey Soreff's avatar

>In the long run I think there's a market for maybe five or six humans.

LOL! Yes, I remember that quote :-)

Expand full comment
Jamie Fisher's avatar

Depends on what you imagine for those "five or six humans", I think.

The job of "passively owning capital" I think is already "solved" as far as AI is concerned.

Expand full comment
HM's avatar
Jan 27 (edited)

Why is religion a protected category in the workplace in the US, but one's political affiliation is not? Are they not fundamentally the same? Both are ideological programs you receive from your parents, your peer group, your geography. Both give you moral scripts, an in-group and an out-group, sacred texts to study, holy founding figures, martyrs and a constellation of minor saints that you can cite and model yourself after. Both are closely connected to your identity and your tribe. Both can be changed or abandoned, often at high cost to your existing social capital. Both get you discriminated against.

Expand full comment
Wasserschweinchen's avatar

I don't think it's particular to the US, but a general Western thing. Probably they wanted to close the loophole of "I'm not discriminating against Jews (ethnic group), just against Jews (religious group)". Also, denazification would have been tricky if you weren't allowed to discriminate against nazis.

Expand full comment
agrajagagain's avatar

In part it's because Freedom of Religion is sort of illusory (but only sort of). By which I mean, we get away with holding Freedom of Religion as a sacred value because we're lucky enough to dodge all the really bad consequences of doing so. If we lived in a world where the Cult of Crhvano the Devourer--which demanded its followers practice regular human sacrifice and constantly look for victims to assault and ritually mutilate--actually existed, we either wouldn't be able to have Freedom of Religion, or we'd have defined "religion" in such a way as to exclude the Cult of Crhvano. In the world we do live in, there are plenty of religions whose sacred texts contain pretty horrific things, and there are plenty of examples of people actually doing horrific things in the name of most religions you could name, but there are ALSO reasonably Nice and Socially Acceptable versions of all those religions.

So in practice, Freedom of Religion is really Freedom to Follow any Nice and Socially Acceptable Version of a Religion. You can be Catholic, but you can't be Cardinal Biggles. You can be Muslim, but you can't be Osama bin Laden. "It's my sincere religious belief" will go over fine for wearing a funny hat or attending a meeting in a funny-looking building, but not so much for burning your neighbor at the stake. There's a sort of implicit understanding or social contract that religion is a personal matter: it's fine to believe what you believe, pray how you pray, congregate with whomever you want to congregate with, but once you start dragging your neighbor over to the stake, THAT no longer falls under the "hey, it's just my religion."

Anyway, that sort of understanding is thoroughly impossible for politics. Doing Unto Your Neighbor is literally what politics is *all about.* A private, personal, self-contained politics would be a contradiction in terms. There's no Nice and Socially Acceptable Way of being a Nazi or a Stalinist or a Pol Potist, because all of those things inherently, *inescapably* involve trying to do things that Are Not Nice. The closest we get is the Overton Window, where we sort of implicitly agree that this particular range of political opinions is OKish and we'll try (but not very hard) not to be dicks to each other about opinions that fall within that range. But the Overton Window shifts as you move in time and space, and codifying it into law would be impossible.

I legitimately find people who try to hide behind "it's just my beliefs" to avoid consequences for their awful politics to be very cringeworthy. Yes, damn it, I AM allowed to notice that your "just my beliefs" imply you're working towards wreaking political violence on me or my family. And yes, having noticed that, I'm allowed to decide I'd rather not work in the same space as you. And yes, if it turns out a bunch of my coworkers all feel the same, management IS allowed to decide that they'd rather cut you loose than lose all of us. Navigating a world full of a bunch of other humans with different viewpoints and interests is tricky: deal with it.

Expand full comment
dionysus's avatar

"And yes, having noticed that, I'm allowed to decide I'd rather not work in the same space as you. "

Why? Are you also allowed to decide you'd rather not walk the same roads, shop at the same stores, or learn in the same classrooms as them? Society needs some level of tolerance to function, because "your beliefs are bad for me, therefore you should be fired" is a club that approximately every person ever can use to beat their enemies with.

Expand full comment
agrajagagain's avatar

"Why? Are you also allowed to decide you'd rather not walk the same roads, shop at the same stores, or learn in the same classrooms as them?"

Emphatically yes. Yes, of course. People in the U.S. love to go on about "freedom." If it's not just an empty applause light, it has to mean things like freedom to choose where to walk, where to shop, where to learn, who to be friends with. And it especially has to include the freedom to say "hey, Bob is being an asshole" when I believe Bob is being an asshole. Bob, of course, has equal freedom to say things like "no, I'm not," or "how dare you say that" or even "actually, it is YOU who are being the asshole." And the people listening to us have the freedom to choose not use the same roads, shops and classrooms as Bob if they think I'm right. Or as me if they think Bob's right. Or to avoid both of us, if they think we're both wrong. They don't have the right to hit us or shoot us or burn our houses down. But no sane conception of "freedom" would dictate that they MUST keep dealing with Bob (or with me) just because it would be inconvenient for Bob if they stopped.

And while I try not to get too personal on here, I'll just add that I have very much walked the walk in this respect. I used to live in the U.S. I no longer do. The fact that the U.S. contains a frightening number of bigoted assholes who wield a frightening amount of political power was certainly not the ONLY reason I left, but it factored in. I see voting with your feet as an even more important and fundamental right than voting at the ballot box, though it is often much harder to exercise.

Expand full comment
agrajagagain's avatar

"Society needs some level of tolerance to function"

Well, yes, that is exactly the point. It does need some level of tolerance. Though I do notice that it functioned for many years with a VASTLY lower level of tolerance than is evident today, so apparently "some level" isn't all that much. Regardless, many of us would like it to have a rather HIGHER level of tolerance than others, and are willing to bear some personal costs to create Walled Gardens where that level is in effect.

""your beliefs are bad for me, therefore you should be fired" is a club that approximately every person ever can use to beat their enemies with."

No, in fact it is not. The fact that you so blithely claim it is suggests to me that you probably haven't spent so much as 10 seconds thinking about how this "club" works.

As I outlined above, the primary method by which this club operates is NOT saying "you should be fired." It's saying "I refuse to work with you." It's being SERIOUS about that, and WALKING AWAY if that's what it takes. It is not, not, NOT a cost-free thing to do. If I went up to our boss and said "I don't think I can work with dionysus anymore. He believes the top marginal tax rate should be 5% lower than the Objectively Correct Value that I support," our boss would presumably say "Really? That's weird. Well, good luck finding a new job, I guess." It is very much incumbent on me to only take a stand when I feel it is important enough to *actually leave my job over.* Which some things are.

The only times when "you getting fired" are a remote possibility are times when a) the boss thinks I'm being reasonable or b) when enough of our coworkers (or customers or business partners) also feel strongly about your behavior. If, for example, instead of complaining about your views on taxes, I showed our boss a clip of you marching in last weekend's rally, holding a tiki torch and chanting "Jews will not replace us", it is much more likely that they would say "yeah, I think you're right, we'd better let dionysus go[1]." It's also more likely that a bunch of our coworkers would feel similarly alarmed. And yes, if you're wondering, I probably *would* leave my job if it were the only alternative to working shoulder-to-shoulder with a Tiki Torch Nazi. Some things are worth taking a stand for.

The bottom line here is that even if you very strongly disagree with anyone doing this ever, for any reason, there's not really any practical way to stop it. Freedom of Association is a pretty damn important freedom, and also a pretty hard one to fully take away. You could try to make political belief a protected class, if you wanted. But unless you ALSO barred people from quitting their jobs, barred people from shopping at different stores, barred businesses from terminating voluntary business arrangements and probably a dozen other equally impractical things, people would still be able to act on their strongly-held political beliefs in choosing who to work for and buy from and partner with.

Finally I'll add that while I emphatically don't believe this is a symmetric weapon, I certainly also don't believe that it's only used for good causes. Certainly people voluntarily exercising their freedom of association was a major component in segregation, and in many other sorts of unjust discrimination. People certainly *can be* and on many occasions *have been* persuaded to rally behind unjust causes. But I still think it's fundamentally easier--maybe not a lot easier, but somewhat easier--to get people engaged and committed when truth and decency are on your side. So, for example, when Group A says that queer people are just people and deserve respect and protection like everyone else and Group B says that queer people are immoral or deluded, or dangerous perverts, EVENTUALLY people are going to start noticing that Group A is right and Group B is full of shit. And so we go from the situation 60 years ago where people discovering you were gay could get you fired, to the situation today where, at least *sometimes* (though sadly, not as often as conservatives like to pretend), having "earnestly held political beliefs" that amount to wanting to suppress queer rights is actually *more* damaging to your employability than wanting to kiss people of your same gender, or wanting to wear dresses despite having a beard and chest hair, or whatever.

Now, I'm sure you or somebody will rush to tell me that actually, it would be better if neither thing could get you fired. And if you're the one to do that, I'm going to fling your "some level of tolerance" right back in your face again. Someone who believes that it is *fundamentally* right to harm my friends and family--not for anything they've done to harm others, just for who they are--is NOT displaying "some level of tolerance." At best, they're displaying a bit of self-control, restraining their fundamental intolerance to better achieve their intolerant goals. They could wake up any day they wanted and choose NOT to be a shithead: every day is a fresh chance to be a decent human being, and every day they blow it. I'll match their self-control when I need to, be polite to them when I need to, but I'm not ever going to *forget* or *ignore* the fact that they want to harm my friends and family, and if I have a chance to reduce their power to do that, it'd be pretty irresponsible to the people I care about not to take it. Some people seem to really, really, REALLY want to have their cake and eat it too: they want to be able to actively pursue politics that center harming others, but also they want everybody around them to treat it like a harmless game or a charming personality quirk, instead of the direct and active threat that it is. But we are under no obligation to humor them in that.

[1] Of course, they could also say "sorry, but we need dionysus, you'll just have to deal with it" or even "wow, look at how based dionysus is, I'm going to promote him" and I'd have to decide if it was indeed worth quitting my job over.

Expand full comment
dionysus's avatar

"The only times when 'you getting fired' are a remote possibility are times when a) the boss thinks I'm being reasonable or b) when enough of our coworkers (or customers or business partners) also feel strongly about your behavior."

Hypothetically, if the boss *did* fire you for believing the tax rate should be 5% higher/lower than the Objectively Correct Value that I support, do you see a problem with that? Maybe not, because it's one boss. What if every boss in every major company in your industry considered it a fireable offense? Even worse, what if they won't even hire you unless you write a long essay about how important it is that the tax rate be 25% instead of 30%? You can call it freedom of association, but it's a form of freedom that's much closer to your segregation example than to anything a small-d democrat should celebrate.

This is exactly what's been happening over the past 10 years. Opinions that are well within the Overton window, including ones held by a majority of people, make you un-hireable in large parts of academia and industry. To prove they don't hold those common sense opinions, applicants need to write long DEI statements that are nothing but political litmus tests. You can call it freedom of association, but the words "intolerant", "shithead", "harming others", and "political violence" also come to mind.

"But while people certainly *can be* and on many occasions *have been* persuaded to rally behind unjust causes. But I still think it's fundamentally easier--maybe not a lot easier, but somewhat easier--to get people engaged and committed when truth and decency are on your side."

And that's why more and more Americans are seeing through the woke bullshit. They think that however far Trump is from truth and decency, the woke are even further. They are right.

Expand full comment
agrajagagain's avatar

Coming back to this after having some time to think, because I can’t shake the nagging feeling that we really do mostly want the same thing, we’re just working from wildly different models of the world.

So let me be clear: I absolutely want to live in a world where lots of different people with different backgrounds and qualities and perspectives and strengths and weaknesses and yes, also different political views can work shoulder-to-shoulder and feel safe and secure. Safe and secure in their persons AND their employment prospects AND their ability to be welcome in polite society. That sounds great. That sounds fantastic. The difference in our viewpoints (as far as I can tell, and I do apologize if I misrepresent you) is that it seems as though you consider this the default, which would happen automatically if not for those nasty SJW-types going around getting people fired for their political views. Whereas I view it as something that has never, ever happened at any point in history. And I view the past handful of years to be the closest humanity has ever come to that ideal, EXACTLY because some fraction of people have been willing to stand up and be loud about it[1] (something that, I hasten to add, IS NOT NEW). I think the fact that a large number of people seem to think everything was fine and dandy until “the woke” started ruining everything is partly down to increased visibility: the internet and social media making it easier to hear about individual incidents even when they happen far away. But I think probably the larger part comes down to the fact that a lot of people just didn’t find the various insecurities and injustices and coercions of modern life worth talking about or even noticing because for many, many years they were solidly pointed at other people. And it was only when it started to sink in that some of those same things (but only some) might happen to THEM that it suddenly became a five-alarm fire and The End of Civilization as We Know It. As the SJW crowd would say, they suddenly got a taste of what not having privilege was like. And yeah, turns out that sucks. Wouldn’t it be great if nobody ever had to deal with that again? And yet weirdly, the places that seem by far the closest to achieving that are exactly the places full of people you apparently can’t stand.

“Even worse, what if they won't even hire you unless you write a long essay about how important it is that the tax rate be 25% instead of 30%? “

I honestly find this pretty amusing. Like THAT is your terrible, unconscionable “basically segregation” society? Having to bullshit your way through a hiring process? For a lot of people, a lot of the time, that’s just another Tuesday. Don’t get me wrong, the world would be better without that sort of thing. But it’s still a pretty adorable thing to get Big Mad over. The day when writing one bullshit essay was all that stood between me and a good job would count as a pretty good day: and you’d better believe I’d write the best bullshit essay they’d ever seen, too. Back when I was in school–and this was a pretty liberal school district, for all it was in a deep red state–we’d start each school day with a coordinated exercise of Standing Up and Lying In Unison. You probably know that one, it goes “I pledge allegiance to the flag…” Of course, I’m sure it wasn’t lying for everyone. Until some 13 years ago it was the official policy of the U.S. military that gay people NOT lying about their sexuality was a fireable offense. And if you’ve never had a job where ritually lying to your superior was at least sometimes expected[2], you’ve either been very, very lucky with your employment or you’re just naturally…what’s the term they use in 1984…goodthinkful? If you lack this sort of experience, check out any piece of comedic or satirical media centered around workplace culture: it’s clear that this isn’t a recent or contained phenomenon. Is the recent cultural shift in which some fraction[3] of this bullshit has specifically taken on a veneer of having a particular sort of social consciousness alarming? I can’t honestly say that I find it so. The world has no shortage of real problems to worry about without going apeshit over the latest half-assed reskin of Soulless Corporate American Culture.

And to be brutally honest, I think some of this sort of experience does a body good. Learning when and how to bite your tongue and keep your own counsel is a life skill that will serve you very well in many circumstances. And learning what it feels like to be a minority in some space–that you are there at the sufferance of others, who are not required to tolerate you–I think that is an invaluable experience. I can’t help but get the sense that the people who are just absolutely hopping mad that somebody might fire them over being too loud about their political opinions are people who have never, ever had this kind of experience before, or even thought that it was an experience they could have. Meanwhile the queer people in my life have all had it. Every single one. Only in many cases “getting fired” was on the mild end of the consequence scale, with things like “being abused by family,” “being thrown out of the house and left homeless as a teenager,” and “being sent off to a camp to be tortured[4]” being some of the more colorful options that might be on the menu. And of course, it wasn’t that long ago that “being chemically castrated and driven to suicide” was a potential outcome. And while it’s better than it was, I don’t doubt that plenty from the younger generations are still growing up with those experiences. So when you wave around “how would you like to have to write a bullshit essay to get a job” as though that were the most terrible thing in the world, I say “great, I’ll trade as much of what we’ve had in the past for as much of THAT as I possibly can. How many bullshit essays do you want me to write? When can I start?” In case it’s not coming through clearly, I think you really, really, REALLY need a better sense of perspective here.

Now, to end on a positive note, I WILL say I still do think there’s probably a lot we’d agree on. While I think the original frame of “make politics a protected class” was beyond silly, I DO think there’s a frame that probably gets most of what the OP (and probably you) want without doing something so patently ridiculous as telling people they can’t exercise their freedom to choose who to work with. There are plenty of places I’ve worked that have had policies–explicit or implicit–that amounted to “leave your political opinions at the door.” Which means, of course, that you can be as bigoted as you want at those jobs as long as you’re not LOUDLY bigoted WHILE you’re on the job. I think that in general, people should be free to have lives outside of work and to KEEP them outside of work, and that things like Twitter mobs mass-emailing your HR department because you said something indiscreet online are generally a bad look whoever’s doing it, and should be broadly discouraged.[5] But even this would require a level of self-awareness and consistency in gauging what does and doesn’t qualify as a political opinion that some have yet to achieve: the number of people I’ve seen complain about queer people existing at them too loudly is not small.

[1] Which doesn’t imply the loud people are always right: there are lots of examples of people overreaching, or misaiming, or using a veneer of righteousness to be assholes. Just that if nobody is willing to stand up against the status quo, the predictable result is that it does not change.

[2] To be clear, I’ve definitely had jobs where this was NOT the case. I’ve had some very good superiors who worked very hard to make sure clear and open communication was not just possible, but to everyone’s benefit. But it sure isn’t the default, and I had enough perspective to recognize how lucky I was to be working with those people.

[3] And it’s not even clear how large a fraction, for all the sky-is-falling rhetoric.

[4] No, this is not an exaggeration. Nor is it an artifact of the distant past.

[5] Though I also notice that a culture with even slightly stronger worker protections wouldn’t have this problem. I think people getting pissed on Twitter is a small fraction of the problem, and a culture that treats most employees as maximally disposable cogs to be jettisoned when even slightly inconvenient is a much larger fraction. And I’m generally very unimpressed with people who get angry only at the part of that problem that lets them yell at rando nobodies on Twitter and blame “the woke” while going to the mat to defend the part of that problem that actually wields the power to fire people.

Expand full comment
agrajagagain's avatar

"Maybe not, because it's one boss. What if every boss in every major company in your industry considered it a fireable offense?"

It would be a pretty weird world for them to all converge on this same really, bizarrely specific belief for no apparent reason. Someone with no other info about the situation should *significantly* raise their prior on there being something to this belief if it became *that* universal. Of course, "raise their prior" doesn't mean "take it as gospel."

"This is exactly what's been happening over the past 10 years."

I have heard many claims like this. But not made uniformly by everyone: mostly they've been made by people whose politics I find somewhere between distasteful and despicable. This leaves me in the position of raising my prior on it *somewhat* but also holding a rather large share of skepticism. I certainly believe that YOU believe it. But given that beliefs like this are quite strongly correlated with believing some really, horribly harmful and false things, and also pretty strongly correlated with some pretty toxic bigotry, it's not exactly the most credible claim out there. If this is genuinely as true and as serious as you insist it is, then it's really, REALLY unfortunate that so many displayed such abysmal epistemic hygiene for so many years BEFORE trying to warn the rest of us about this terrible epidemic. You reap what you sow.

Case in point:

"They think that however far Trump is from truth and decency, the woke are even further. They are right."

If the things you're saying are true, this is exactly the WORST way to get people to pay attention. At literally NO POINT was there ever a dichotomy that forced you to choose between "the woke" and a lying, bloviating, unprincipled criminal scumbag. There were ALWAYS other options. You have now worked really, really, incredibly hard to ensure that IF you are right, millions of people who *might* have believed you and worked with you never, ever will again. Again, you reap what you sow. If "the woke" are the genuine threat that you claim and not (as seems increasingly likely) just another faddish manipulation by unscrupulous politicians and media outlets that profit from getting everyone all riled up[1], then *acting in a vaguely trustworthy fashion* (so that the people you need to listen will actually trust you) will have been really, really, CRUCIALLY important. And you blew it. Good job.

[1] hey, remember when it was "antifa" instead? Or BLM? Or SJWs? Or "me too?" Or "cancel culture?" "Political correctness run amok?" Ooh, how about "The Gay Agenda?" I miss that one.

Expand full comment
Marian Kechlibar's avatar

Historical aftereffects of European wars of religion, which were visible even in the 19th century.

The British, for example, only made their universities fully open to non-Protestants and non-Christians by an act of Parliament in ... 1871!

https://en.wikipedia.org/wiki/Universities_Tests_Act_1871

Expand full comment
Neurology For You's avatar

Lots of American settlers came from places like Germany and Britain where some religions had been discriminated against, or which had recent histories of religious wars.

In theory the First Amendment should protect political speech but in practice…

Expand full comment
Anonymous Dude's avatar

I mean, it protects you from the *government*. Even in 2020 they couldn't actually put you in *jail* for being racist, unlike many European countries; if you had 25-40x expenses and could survive firing, cancellation wasn't that scary. Just ask J.K. Rowling (who had a lot more than that, of course).

Expand full comment
ultimaniacy's avatar

The modern concept of a liberal society was created as a response to massive wars resulting from conflict between different religious sects. Because of this, the norms of liberal societies tend to encourage extreme caution about anything that might be seen as an attack by one religious group against another, whereas other forms of ideological disagreement are not usually treated in the same way.

Note, this is an *explanation* for the difference, not a justification. I don't think anything should be a "protected category".

Expand full comment
quiet_NaN's avatar

> I don't think anything should be a "protected category".

Presumably, you mean this in the context of "workplace discrimination", which is a standard libertarian thought, and not as in "let us remove religion from the First Amendment".

Expand full comment
ultimaniacy's avatar

Yes.

Expand full comment
Paul Botts's avatar

For one reason, religious affiliation is specifically protected in a couple of different places in the U.S. Constitution while political affiliation is not.

Neither the original Constitution's bar of any religious test for federal office, nor the 1st Amendment's prohibition of a state religion and general protection of the free exercise of religion, directly makes religion a "protected category" in the modern use of that term. But those then-radical provisions definitely did establish an overall precedent which has been reflected in lots of federal and state laws ever since, including those related to workplace discrimination.

Expand full comment
Ryan L's avatar

Religious people do not view them as being fundamentally the same thing, and for most of human history, and to a large extent even still today, religious people heavily influence the law.

Expand full comment
Nematophy's avatar

Religion is only a protected category if you're not Christian. Go around SV saying you are and see how long you last.

Expand full comment
justfor thispost's avatar

Dude, the number of annoying-as-hell late-life converts I have to deal with and wish would get fired disproves your premise.

That said, the percentage of Christians in general is way lower, but that is just due to the IQ/religiosity anticorrelation. You can't DEI your way out of religious people being less capable of intellectual labor as a population.

Expand full comment
FLWAB's avatar

The anti-correlation between IQ and religiosity is small. Did you know that the higher the degree of education, the more likely you are to attend religious services weekly? (https://www.graphsaboutreligion.com/p/lets-have-a-talk-about-education)

Expand full comment
anomie's avatar

> You can't DEI your way out of religious people being less capable of intellectual labor as a population.

You absolutely can. It's just that no one's even tried yet, at least not officially, because the position of the previous establishment is that Christians aren't an oppressed class. Maybe that will change soon.

Expand full comment
1123581321's avatar

????

I worked with some very observant Christians who made no secret of the fact. Yes, in high-tech, including SV. One guy would show up with the Ash Wednesday marks still on his forehead. None faced any trouble whatsoever.

Expand full comment
Gerbils all the way down's avatar

What's SV?

Expand full comment
Paul Botts's avatar

Silicon Valley, I think.

And if that guess is correct then -- while personally a firmly secular person -- I am giggling at the idea that the social standing of Christians in that milieu would be even vaguely representative of the US as a whole.

Expand full comment
Andrew Esposito's avatar

SV is shockingly not a particularly representative part of the country.

Expand full comment
gurugeorge's avatar

Some recent conversations in the comments have made me aware of something that's been bugging me for some time re. questions around liberalism, morality, religion, traditionalism, etc.

A lot of discussion around morality in modern times is predicated on the idea that moral decisions are something the rational individual must make for themselves. The idea is that there's a range of possible moral stands and the individual has to pick one. Related to this, is the idea that moral positions have to be justified (and then you get into the to-and-fro between morality from a materialist vs. religious perspective).

Related to this, I'm thinking of one of the standard atheist arguments against religion: that you just happened to be born into your religion.

I think this is all wrong-headed. I think that while there certainly is an aspect to morality that's related to the individual and their rational decisions (perhaps you could call it a personal ethical code) there's also a very important aspect to morality (and religion) that rationalism (in the general sense) has missed, and that's about the given, about duties and obligations that you are born into, and that that givenness is a feature, not a bug.

One might characterize liberalism (from classical liberalism on down through its variants and extrusions, like socialism and the wokeism of today) as the notion that the human being is born with liberties only, and not also duties. The idea that one might, from birth, be beholden to the collective one is born into in some way (from Rousseau's criticism of property down to the socialist critique of class to the woke critique of gender roles) is anathema to this point of view, and the central driver of the ideology and its offshoots is to free the individual from all forms of prior constraint - ultimately ending up with something like Transhumanism, where even constraints from physical reality are something to be overcome (and one can see how the trans phenomenon is itself a kind of distorted precursor to that, merely lacking s-f author John Varley's easy physical sex change possibilities).

But I think this is all wrong. The layer of duty is actually prior to the liberty layer and it's tied to the collective, to the baked-in nature of the strength-in-numbers principle, to some of the game theoretic considerations (Chicken, iterated Prisoner's Dilemma), etc. Also to inclusive fitness, extended altruism, relative genetic closeness vs. distance (ethnicity, the necessity for nations, properly speaking, to be ethnostates).

Classical liberalism took this stance, of the free-floating rational individual, because it wanted to indulge in capitalist acts between consenting adults, and the prior constraints of aristocracy, the divine right of kings, etc., were in the way, so bourgeois ideology was created to justify that (as also to justify things like the sacking of the monasteries that formed much of the seed capital for the development of capitalism).

But while there is something to be said for the idea of the individual as sovereign in some senses (primarily I would say spiritually, but also in the sense that, for a healthy group, the individual must have some carve-out of prerogatives against the group) the dominance of the ideology has thrown the baby out with the bathwater, to the point that we are in danger of losing the basis of human society altogether. For when the natural bonds of society no longer function, an imposed, impersonal and draconian order must substitute for them, and that's what we're getting: man is starting to serve the machine (e.g. the economy), instead of the other way round, as it should be.

Expand full comment
Gnoment's avatar

On a more practical note, obligation and duty only work if members of the group work to be worthy of receiving obligations and duties, as well as give them. Far too many people of poor character have ruined people's lives through entitlement that some family or group member owes them something, and take advantage rather than honor their contributions. If you're the person running around trying to make everyone happy, and you aren't appreciated or respected, it's not a good situation for you, psychologically, socially or materially.

It's a little like how, in traditional moralities, there is a huge emphasis on forgiveness, but not on apologizing; duty and obligation haven't historically incorporated the fact that every relationship is a two-way street.

Expand full comment
gurugeorge's avatar

Good points. But I'd bear in mind that "taking the piss" isn't necessarily something that would exist quite as much in an up-and-running nation. Obviously still some, but less. The more the central harmonic of society is this sense of duty/obligation (which is after all what it was like for most of the time nations have existed) the more the free riders will be sniffed out.

The difficulty is getting to that higher fitness peak from our miserable one, somehow.

Expand full comment
Kamateur's avatar

You might be right that our modern understanding of liberalism is flanderized and excludes duty, but I don't think it's correct to say that the enlightenment writers themselves were unaware of it. I mean, most obviously, the 1700s give us Kant, who is the father of duty ethics, but even Rousseau who you cite was very interested in the need for cooperation and communality among free people. The question was rather whether society needed to be hierarchical and law-based to have these things, or (as it seemed to him) whether these were actually impediments to achieving them. And I think this was a general concern, the writers of this period were witnessing, in the religious wars and the wars between various kings, and domestically in the treatment of peasants and debtors, all these actions that seemed incredibly inhuman and barbaric, so it was of course worth asking what the real "benefit" of civilization was if this state of constant social antagonism was the end result. For Hobbes, that was an easy question to answer, but most people were not as pessimistic as he was, and thought there was more room to negotiate some middle ground.

Probably what happens is that this all gets handed down to Americans through Thomas Paine and the Declaration of Independence and the constitution and the debates around slavery, and there it's true that the concerns tended to be much more focused on the freedom side of things, so that most Americans equate liberalism with liberty. Although even there, in the debates around slavery, there was often equal time given to questions of the inherent freedom of Africans and also the duties of God-fearing men to oppose slavery. There were even some arguments based in moral sentiment! But this just proves most Americans don't read the antecedents of their own political origins (I mean, most Americans don't read period, so it's hardly surprising).

Expand full comment
gurugeorge's avatar

I think with Kant, what he was focused on re. duty was a counterpoint to the Scottish enlightenment idea of morality as grounded in sentiment. And for both Rousseau and Kant, they still have that core liberal conceit that morality is a matter for the individual to decide. (S/he may decide to contribute to the group or not, etc.)

I'm not totally hostile to liberalism, I think something like it would have happened regardless and was probably necessary - but I'm not totally hostile to the old order either. (e.g. in some ways Filmer talked more sense than Locke and Paine.)

The core take-home from what I said is that there has to be an element of the individual fitting into a pre-existing moral and social order; morality logically MUST have some element that's given to the individual that they have to accept, on pain of being an idiot (in the original sense :) ), of being ostracized. In fact, within limits, the actual content of the pre-existing "meta" that the individual has to fit into doesn't even matter, because its function is precisely to stand firm above individuals' rational decisions.

This seems totally wrong from the rationalist p.o.v., but if you think about it from the point of view of getting over the PD problem (individual rational decisions leading to the worst outcomes for everyone), society needs that kind of "skyhook" to get going.
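
To make the PD point concrete, here is a minimal Python sketch; the payoff numbers are just the standard textbook ones (T=5 > R=3 > P=1 > S=0), chosen purely for illustration:

# One-shot Prisoner's Dilemma with standard illustrative payoffs.
# Key: (my move, their move) -> my payoff; "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    # The individually rational move, holding the other player's move fixed.
    return max("CD", key=lambda me: PAYOFF[(me, their_move)])

# Defecting pays more no matter what the other player does...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...so two rational players land on (D, D) with payoff 1 each, even though
# (C, C) at 3 each would be better for both: individually rational decisions,
# collectively the worst outcome.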

I think this is also related to the Schmittian idea of the friend/enemy distinction (at the group level) being at the core of the political-as-such. In game theory you have co-operators and defectors; society gets going when the co-operators discover themselves as a group over against defectors, and ally with other groups of co-operators as friendly (enemy groups being those who try out either predatory or parasitic strategies).

All that stuff is the background, the pre-existing conditionality, in which morality and society arise, and they're also related to the topic of our other discussion (re. inclusive fitness and how relative genetic closeness (vs. distance) provides a ready-made Schelling point to get that "skyhook" going).

Expand full comment
moonshadow's avatar

> the notion that the human being is born with liberties only, and not also duties

Bipartisan politics has two error states: one side wants to leave out "from each according to their abilities", the other "to each according to their needs". You need both halves.

Expand full comment
gurugeorge's avatar

Indeed.

Expand full comment
anomie's avatar

Those who have nothing to offer deserve nothing in return. Is this so controversial of a position?

Expand full comment
moonshadow's avatar

Did I say anything about deserving? You live in a society; you have obligations to it, and it has obligations to you. If you don't like that, you could always move to Somalia.

Expand full comment
anomie's avatar

And it is society that has the right to decide who gets to live. It seems a majority is no longer willing to tolerate the left's dogma that all lives have value. You might be the one that needs to move.

Expand full comment
moonshadow's avatar

Happily, the place I live doesn't have that problem. We are all watching with bated breath to see how withdrawing society's obligation of providing security to its constituents - along with many others - works out for those less risk averse and more prone to skipping history lessons in school.

Expand full comment
anomie's avatar

I don't think the people in charge are ignorant of history. In fact, I'd argue they've learned a lot from it. History teaches us that these large scale changes in society are possible, that the people can be galvanized to turn against those in opposition of progress. And unlike their predecessors, they have the military might to support their ambitions.

...Don't assume that you're safe just because you're outside the border. The fallout is going to affect everyone, not just us.

Expand full comment
Neurology For You's avatar

Who decides which are the natural bonds? You? Me? Adrian Vermeule? It’s a valid concern but not easy to solve.

Expand full comment
gurugeorge's avatar

The whole point is that they're not decided but given (tradition, etc.), and glommed onto naturally as Schelling points (e.g. from natural facts like relative genetic closeness vs. distance - ethnicity, race - to traditions and forms of culture, rules of thumb and ways of being that extrude from that base).

The rationality of the individual is not all it's cracked up to be, the evolved "rationality" of the group's behaviour (in terms of its following "free floating rationales" and situational logics) is more fundamental and (unconsciously) takes more in.

In the future it will be understood in retrospect that the emphasis on the individual, their rationality, their decisions, from the 17th century on, was probably a necessary socially evolutionary development, but it went too far in ditching the kinds of pre-existing "metas" that create social stability (and therefore, ironically, the possibility for individuals to wander off in intellectual fairy land :) ).

The individual is nothing, a mere peeled prawn in the void. Individual genius MAY help the group, if the group is big and complex enough and the genius' genius can be canalized via social institutions, but because there are so many more ways of missing a target than of hitting it, the stability of the group, its continuity and traditions must take precedence, and not every harebrained scheme is worth pursuing (e.g. AI).

Expand full comment
gurugeorge's avatar

Yes, Scruton is an important thinker in this area, as is the other great English conservative thinker Michael Oakeshott.

Expand full comment
User's avatar
Comment deleted
Jan 27 · Edited
Expand full comment
gurugeorge's avatar

Yeah I am somewhat influenced by E. Michael Jones' historical perspectives. I'm not totally sold on Catholicism (though I would have been if I'd lived 300 years ago :) ) but it definitely had very acute and astute thinkers all through its history, on numerous topics.

Expand full comment
User's avatar
Comment deleted
Jan 28
Expand full comment
gurugeorge's avatar

I like Feser, but I'm not convinced by the idea that revelation is necessary to complete philosophy (by which I mean classical, scholastic and Thomist). Revelation is mystical, and ineffable (at least in terms of communicating mystical experience to people who haven't had it).

Or maybe one could think of it in terms of the "blind men and the elephant" analogy. The more one becomes familiar with the vast traditions of other religions, the more dubious any particular religion's claim to final truth becomes; they all more or less say the same things, at least at the mystical level.

But as per my main point, that's irrelevant to religion as tradition and "social glue." One follows the religion one was born into, end of story.

Expand full comment
Sholom's avatar

My 18-month-old child has suffered in succession over the last 3 months:

- Mild covid

- Serious RSV

- Mild hand/foot/mouth

- A serious double-ear infection

- Conjunctivitis

He did not get sick at all last winter, and the major difference is that he is now in playgroup/daycare. My question is, is this just normal and there's no way around it? Or are there steps I can take to strengthen his immune system?

Expand full comment
Lm's avatar

Zinc and vitamin D

Expand full comment
Julian's avatar

>steps I can take to strengthen his immune system.

getting a mild cold, RSV, hand/foot/mouth, an ear infection, and conjunctivitis are all good ways to do this!

Also if the RSV was serious enough for hospitalization, see if they qualify for the vaccine to reduce the chances of it happening in the future.

Expand full comment
Melvin's avatar

My kids were miserably sick (and passed it on to the adults too) for the first couple of years of being sent to places, but last winter was a lot better.

I suspect the immediate post-covid years were particularly bad and that a couple of years of quarantine had weakened everyone's immunity to ordinary viruses, but I don't know if actual stats back this up.

Expand full comment
Deiseach's avatar

"is this just normal and there's no way around it?"

You answered your own question by saying he's now in playgroup/daycare. I'm a bit surprised about the Covid, but for all the rest - yeah. One kid gets it, they all get it. And it works both ways: little Johnny was playing with his cousin Billy who was sick with something, he brings it in to daycare and passes it on to the other kids who then bring it home and pass it around the family.

Working on the immune system is good and you should do it, but small kids are going to get a ton of infections every year until they get enough exposure to develop antibodies to fight those things off.

Expand full comment
Caba's avatar

Deiseach, I hope you see this comment.

I'm the author of the Divine Comedy review.

A few days ago I messaged you using this website's chat system.

Since you never answered, I'm sure you didn't see the message.

The chat system is separate from the comment threads. It should have an icon to click on, shaped like a comic book speech balloon. On my computer the chat icon is in the top right area, but on my phone the top right corner has a button with three horizontal lines; when I click on that button it shows a menu that includes the chat icon.

Expand full comment
Deiseach's avatar

Hello, Caba

I did see your message, and I should have replied, so apologies for that.

I don't enter into private communication with people anymore, especially about critiquing writing, since I did that before and got entangled in something I didn't want to pursue.

So I must decline your invitation to work with you on anything, but I'm sure that there are plenty of other people here who are good at editing, giving advice, and literary work and they would be open to forming a working relationship with you.

Expand full comment
Ryan L's avatar

As others have said, that seems high but not unheard of. My nephew has gone through stuff like that (worse and more extended, even). If you're worried, talk to your pediatrician and ask them at what point they might start to worry about an immune system issue. My guess (I'm not a doctor!) is that they won't yet be concerned, especially since you're not seeing repeat infections with the same disease and it all started when they went to daycare. Daycares are petri dishes.

Expand full comment
sigh's avatar

A daycare class of 10-12 will have 1 to 3 kids with that sort of outcome.

It sucks, but once you're past the first winter it's pretty much smooth sailing. You'll still get hit with the common cold and stomach bugs, but a lot less HFMD, RSV and ear infections.

Expand full comment
Ian Crandell's avatar

100% agree with Dave. I had the same experience with my kid, though not as bad.

Expand full comment
Dave's avatar

That sounds a little high but reasonable. The first year in school or daycare is notorious for not only getting kids sick but then getting your whole family sick (my condolences). Last year I moved from a work-from-home job to teaching in an in-person school at the same time my 3-year-old started preschool, and we were sick constantly. My nephew, who had been homeschooled/virtual schooled up until 3rd grade due to Covid, started in-person school in 2022 and his whole family was sick constantly. Kids are gross and unsanitary and get exposed to everything (but it does seem to decrease after the first year of exposure).

Expand full comment
Sholom's avatar

This was my expectation and understanding and so I'm not panicking, but we've heard from some people that they're giving their children some vitamin or herb and their children haven't gotten sick at all. I am not inclined to believe these things, but we're getting stressed enough by this parade of illnesses that we're open to trying.

Expand full comment
Deiseach's avatar

Yeah, I'm a bit sceptical (well, a lot sceptical) about "We give our kids megadoses of Vitamin C/this special herbal supplement and they're never sick!"

They don't mention "Apart from that time they got colds, and that earache, and a case of the sniffles, and..." or that the kids are not mixing with other kids much and so are not exposed to the wonderful world of childhood germ factories.

Do the usual things: good diet (plenty of fruit and vegetables), good sleep, good hygiene routine, if they get sick keep them home until they're better (don't send them back to daycare because they can still be incubating something and will pass it around), general fitness. But also expect them to come down with the first time of getting X, Y and Z until they're older. If you feel they need it, kids' vitamins or tonics won't hurt especially if they're under the weather, but it's not a magic cure.

Expand full comment
Ryan L's avatar

All kids are different. Our oldest gets the sniffles and whatnot, and went through the common stuff (stomach flu, Covid, roseola, etc.). Same with my oldest nephew. My younger nephew was sick *constantly*, especially with ear infections. He got tubes and it helped somewhat but didn't completely eliminate it. It got to the point where he had some tests for immune issues, but luckily they've all come back negative, and things are getting better. Some kids just seem to be able to fight things off better than others.

Expand full comment
Deiseach's avatar

Oh, ear infections are the divil. It seems to be that getting a bad one once makes you more susceptible to getting them over and over again.

Expand full comment
Sholom's avatar

It does seem that way. We'll manage.

Expand full comment
HM's avatar
Jan 27 · Edited

Is there a good argument for following the news? The one criticism you often hear about people who choose to tune out all of the news is that they're implicitly endorsing the status quo, that they're ok with the injustices around the world, that they're demonstrating their privilege with their stance.

At the same time, I've always found it delusional to believe that you, as one out of 8 billion people, have much of a say outside of your immediate neighborhood. You cannot stop every genocide. You cannot stop every war. You cannot fight the oppression of every ethnic minority thousands of miles away from you. You cannot fix every weather catastrophe on the globe.

It feels like the argument boils down to needing to pick a pet cause. If you have one, you're a moral upstanding person. If you choose to ignore all of them equally, you're selfish and privileged. But isn't this similar to the old timey atheist position of "you don't believe in 999 out of the 1000 gods, I just chose to go one further"?

Expand full comment
Mo Nastri's avatar

As someone who used to not care at all about following the news, it gradually dawned on me that the personally optimal amount of news-following was not zero but some (low) amount, because following none at all made me consistently moderately worse at bonding with a moderately large fraction of the people I cared about maintaining relationships with. It's not about having a say, or picking a pet cause, it's (instrumentally) about bonding.

Expand full comment
FLWAB's avatar

As C. S. Lewis once wrote, "I think those are very wrong who say that schoolboys should be encouraged to read the newspapers. Nearly all that a boy reads there in his teens will be known before he is twenty to have been false in emphasis and interpretation, if not in fact as well, and most of it will have lost all importance. Most of what he remembers he will therefore have to unlearn; and he will probably have acquired an incurable taste for vulgarity and sensationalism and the fatal habit of fluttering from paragraph to paragraph to learn how an actress has been divorced in California, a train derailed in France, and quadruplets born in New Zealand."

Expand full comment
HM's avatar

C.S. Lewis quotes always deliver.

Expand full comment
anon123's avatar

>Is there a good argument for following the news?

Entertainment. It does at least as good a job at it as movies, television, etc., depending on what's happening in the world.

Expand full comment
Timothy M.'s avatar

This feels like a very social-justice-y view of watching the news, but you could just as easily say:

- You need information to make good voting decisions

- You can't predict in advance what will be important to know about, so you need to pay some level of general attention to things

- If you don't watch the news, you effectively will hear about most things anyway, but from less reliable sources, and wind up with a distorted view of the world

Expand full comment
Melvin's avatar

I think the last point is probably the strongest. If you don't consume some form of news then you'll eventually find out a bullshit version of the news from the sort of person who goes "omg did you hear that [some sensational bullshit]?"

It's good, in those moments, to have enough engagement with the news that you can say "Oh, I thought it was actually [some much more reasonable version of the above] but nobody is really sure just yet"

In terms of the ideal way to consume the news, I like the Wall Street Journal. I especially like it because I don't have a subscription so I can't actually read the articles, but there's usually enough information on the front page, and I can always dig deeper into anything that actually looks important.

Expand full comment
Eremolalos's avatar

I get my news here, though.

Expand full comment
Timothy M.'s avatar

There are plenty of people here who believe some weird stuff. Maybe even disproportionately so, given the community standards around openness.

Expand full comment
Eremolalos's avatar

Yeah, OK, but I like hearing several points of view from true believers in various things.

Expand full comment
Timothy M.'s avatar

To clarify, the problem with news media is their lack of true belief in stuff?

Expand full comment
Timothy M.'s avatar

Okay, but I assume when I'm giving advice that it's generally applicable. I don't know the above commenter personally.

Also by that logic I shouldn't care if they pollute or give to charity or are generally a terrible human being and only on a very marginal level should I be concerned if they commit murder.

Expand full comment
TGGP's avatar

No, charity can have a marginal impact.

Expand full comment
Sol Hando's avatar

That logic, if embraced by a majority of the population, would seriously undermine democracy. Even if your individual vote doesn't matter, it's important not to consider the logic sound so as to not undermine the whole system.

Expand full comment
TGGP's avatar

The system can still work with just a jury-sized pool of voters selected via sortition. In fact, I have argued that would be a better system, as each would take their vote seriously knowing it would have an impact.

Expand full comment
Eremolalos's avatar

Yeah, but my not voting does not have a causal effect on the behavior of other people, making them likelier not to vote, unless I am some sort of influencer. So I think "what would it be like if everybody did it?" is irrelevant.

I am currently sitting in a wooden chair in my office. If everybody in the country came here and sat in the same chair, we would all be crushed to jam except for the people on the very top of the mound. If everybody in the country sat down in the wooden chair nearest to them right now, some would be abandoning kids or flight patterns or chain saws, and all kinds of damage would result. Would either of those calamities be my fault?

Expand full comment
Sol Hando's avatar

I think it’s fine if you don’t vote, but talking about not voting or about how a single vote doesn’t matter seems like the exact sort of thing that might increase the odds that other people don’t vote as well.

It’s an obvious thing that in a country of hundreds of millions a single vote doesn’t matter (and this has pretty much always been the case for all electoral history). Stating it aloud can only serve to harm the system overall, rather than making a change to some hypothetical political arrangement where a single vote out of a hundred million matters.

Expand full comment
Eremolalos's avatar

I actually do not vote, but never tell anyone.

Expand full comment
anomie's avatar

> That logic, if embraced by a majority of the population, would seriously undermine democracy.

And thankfully, they don't. And the great thing is that you don't need to actually vote in order to support propaganda that pushes the ignorant public to keep doing what they're doing. The appearance of democracy is important for social stability, after all.

Expand full comment
Sol Hando's avatar

Right, except if that logic was embraced by a majority of the population, it would also seriously undermine democracy.

Expand full comment
anomie's avatar

Again, they don't, because they're incapable of thinking for themselves. And thank goodness for that, it would be impossible to get anything done if most of the population was too independent to be herded.

Expand full comment
1123581321's avatar

I really dislike this argument. 1 vote rarely decides an election outcome (it has happened in local elections with low turnouts), but if enough people believe this and don't vote, the effects are real. It was really frustrating to see the massive anti-Trump demonstrations in 2016, full of young people - I just wanted to yell "should've showed up to vote, no?"

Expand full comment
quiet_NaN's avatar

This "my personal contribution does not matter" generalizes in a lot of unfortunate ways.

Why care about your personal CO2 emissions? After all, billions of people are emitting just as much CO2 as you are (unless you own a personal jet or something). Your contribution to the global CO2 emissions will not make any observable difference.

Or why donate to EA, especially with regard to x-risk cause areas? Likely, your contribution will be on the parts-per-million scale. The chances that your 50 bucks pay for some PhD student on the very day they solve alignment seem as slim as you casting the deciding vote for the presidency.

I disagree with all of that, just as I also disagree with not voting because you will not tip the election. In the case of voting, one other motivation is to increase the turnout for your demographic group. If the statistics show that most white dudes in their 30s who are into Warhammer don't vote, the system will take notice eventually and decide that your interests don't matter.

Expand full comment
1123581321's avatar

Yeah, and it also dilutes the moral core of doing right things because they're the right things to do.

Expand full comment
Eremolalos's avatar

I agree about that. I just don’t see voting as clearly a case of doing the right thing.

Expand full comment
TGGP's avatar

If more people believed it and stopped voting, then it would become reasonable to vote. But notable elections still aren't close enough for one vote to matter.

Voting is much more common than protesting. I don't think the problem was that young protesters didn't vote (though they could have voted for third parties), but instead that "massive" protests still don't contain enough people to outvote their opponents.

Expand full comment
1123581321's avatar

Actually mostly agreed. Even stronger, most protests are unfocused vents of anger. These were a great example. You protest... Trump? Like, what, his very existence? What do you expect to... happen? He's going to come out, apologize, and resign the Presidency? The whole thing was so sad.

Expand full comment
Paul Brinkley's avatar

Also mostly agreeing, and:

We don't know how many people at those protests had also voted, and were trying to exert more influence beyond that.

We don't know how many people at those protests were under age 18 and *couldn't* vote, and protesting was all they had (caveat: it's not clear how seriously to take political opinions from people that young).

We don't know how many protestors came from somewhere where they had in fact won electoral votes against Trump. E.g. a protest in DC is probably going to be DC voters, who (I'm estimating from past data) probably went something like 90% Democrat.

At the end of the day, it probably ought to be more about what arguments those protestors are able to bring to bear. And it's not really going to come out just from reading the signs they bring.

Expand full comment
anomie's avatar

Of course, but the point is that even us having this discussion is having more effect on the vote than actually voting.

And to be fair, the people in those anti-Trump demonstrations didn't even have a theoretical scenario where their vote could have mattered, seeing as they don't live in swing states.

Expand full comment
1123581321's avatar

To your last point, those demonstrations were huge, and people converged from all over the country. Trump squeaked by such small margins it's far from impossible to have a scenario where if all these people came out and voted against him they'd change the outcome.

You know what is also telling? Older people (e.g., this very same Caplan fella) telling young people "voting is for suckers" and then, of course, diligently trudging to the booth. Voting participation in the US strongly correlates with age (no, I don't have time right now to go look for data), and then the young are surprised/angry/demoralized because the policies of the country reflect the preferences of the old. Well, as an MMA coach would say, showing up is 80% of the work.

Expand full comment
Melvin's avatar

Do you really think that there's a substantial number of people who go to political protests and don't actually vote?

Expand full comment
TGGP's avatar

Caplan doesn't vote. He dislikes all politicians. He's not at all representative of his age bracket.

Expand full comment
Paul Brinkley's avatar

One of the main reasons to follow the news is to stay informed, namely about events that are likely to impact your life. Fortunately, most of the news that impacts your life is fairly clearly labeled: weather, stock prices, upcoming local events, upcoming legislation.

Contrasting with this is the news that isn't likely to impact your life. Ironically, this tends to be the news that draws the most attention. If you want to keep up your end of the conversation at the water cooler, you have to know what people like to talk about, and while it's sometimes weather, it's also often whatever some head of state or celebrity was up to, or how some war is progressing on the other side of the world.

You can pick a pet cause here and build a reputation as an expert in the subject and even learn some things that will make your life better, and that's fine. But at that point, it's important to understand that that's a *hobby*, akin to being an expert in crocheting or engine technology or the history of Pakistan. There are a lot of such hobbies, and there's no requirement that it involve the news.

It doesn't help that the news itself isn't always informative. It's not always designed to be (the standout exceptions are aforesaid weather forecasts, stock prices, and upcoming events, as well as sports scores). Nearly anyone would do well to read Neil Postman's _How to Watch TV News_.

Expand full comment
Ian Crandell's avatar

Think about the actions you would reasonably take/change as a result of having watched the news. Then consider the cost of watching news (in my case and it sounds like yours, mental anguish). If the former justifies the latter, watch. If not, do something else. I did that math and stopped watching the news. My head is much, much better, and the rest of the world is just the same.

Expand full comment
Ian Crandell's avatar

To follow up, If you're worried about missing out on opportunities to do good based on the information contained in the news, I would recommend just searching for those opportunities directly instead of filtering through the news, which is a cesspool. If you're worried about missing out on the next catastrophe and the charity you would give to that, just google 'recent catastrophes' every week or so.

Expand full comment
Mallard's avatar

https://www.betonit.ai/p/this-is-your-brain-on-news

>If you have the steely temperament to watch news without losing your mind, I salute you. Otherwise, you will think more clearly if you quit cold turkey. Which is just one of the angles Rob Wiblin and I examine in my latest 80,000 Hours podcast, “Why You Should Stop Reading the News.”

https://www.betonit.ai/p/the_case_agains_6html

Expand full comment
Xader's avatar

can someone explain to me the basis of “AGI by 202X” predictions? LLMs are clearly a dead end, but does the public know of any other modality in the works? do AI companies at least *claim* that there are, without revealing specifics? and if so, what about these claims warrants such optimism, when OpenAI and the like have such a clear financial incentive to generate hype?

Expand full comment
Monkyyy's avatar

> LLMs are clearly a dead end,

willful ignorance; you can ask people point blank, offer an argument for why this is the case, and... nothing. They believe NNs will magically become something better.

Expand full comment
Eremolalos's avatar

I really dislike and fear AI, and even I don't think it's a dead end. Have had convos with Claude and Deep Mind in the last couple weeks in which I was astonished by their ability to grasp complex communications and verbal nuance. The language modality is the only one I am competent to judge, and I think the improvement in language comprehension and language production has been enormous. Also, if you look at performance on benchmarks -- coding and math problems, very challenging science & law questions -- AI performance is definitely still going up, and not via trivial amounts, either. There is no requirement that these things somehow wake up and become like us, full of fears and ambitions and jealousies and yearnings, making plans to get what they want. If they keep doing what we ask them to, and become able to do ever harder and more complex things, it doesn't matter if they have no more "consciousness" and "self" than a tub of tofu.

Expand full comment
Mo Diddly's avatar

As with all things 2025, we all inhabit different realities with completely different core facts. If you only read the NYTimes or The Atlantic, then it is plainly true that AI progress is stalling and LLMs are probably a dead end. If you work for OpenAI or follow Zvi or Tyler Cowen, the facts on the ground are that o3 is already smarter than most PhD students and Tyler predicts it will beat him on economics tests in less than a year. 🤷

Expand full comment
Jeffrey Soreff's avatar

https://www.astralcodexten.com/p/open-thread-365/comment/87433836

is a report on a tiny benchmark-ette I've been playing with, 7 chemistry and physics questions. Currently ChatGPT o1 gets 2 right, 4 partially right, and 1 badly wrong. When (and if) an LLM gets all 7 right, I'll say that it seems to be doing as well as a bright, conscientious undergraduate. Not yet - I'm guessing maybe within 2 years or so.

Expand full comment
Mo Diddly's avatar

Sure but o1 is not even the top performing model.

Expand full comment
Jeffrey Soreff's avatar

Agreed, Many Thanks! My intention is to recheck as soon as I get access to o3-mini (not yet, just checked moments ago, maybe later this week), and then again when I get access to o3. Admittedly, I'm not checking the competing models from other labs. I might switch if it looks like some other lab has taken the lead, but I don't want to spread myself too thin. Even so, this feels like having (to mix metaphors) a ringside seat at the Manhattan Project. Historic events!

Expand full comment
Mo Diddly's avatar

I feel more like we’re stuck at the blast site, but hey it is historic

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Well yes, that may happen.

<evidenceFromFiction>

Well, if we are lucky, maybe ASIs will want human pets (and hopefully won't have a spay/neuter policy). If we are less lucky, it may turn out to be "see the glorious, deadly wonder"

</evidenceFromFiction>

I'd expect _this_ blast site to have residues with less trinitite and more computronium...

It does look like the next few years are going to be a wild ride.

Expand full comment
Hoopdawg's avatar

Or you could, you know, read what the LLMs output. I'm not quite sure how you could maintain any illusion they're "smarter than most PhD students" afterwards.

(Note, "capable of solving a fairly limited set of math problems" - especially when it's "capable of sometimes solving" - does not qualify as "smarter than PhD students", nor does it lead to AGI, otherwise we'd be there the moment computers got invented.)

Expand full comment
Kindly's avatar

Unfortunately, I also read what the PhD students output. There's illusions both ways...

Expand full comment
Xader's avatar

but even under the ai hype understanding of LLM progress, they’re still only predictive algorithms. AGI would require a quantum leap from *predicting* to *thinking* that would need to be positively demonstrated, not merely assumed. i struggle to believe that everyone in the industry is under a collective delusion that LLMs will magically transcend their built-in limitations and start to cogitate after some compute threshold is surpassed. there must be something else, like a different type of model on the horizon, that justifies the belief in imminent AGI

Expand full comment
Adrian's avatar

> but even under the ai hype understanding of LLM progress, they’re still only predictive algorithms.

That's like saying that brains can't think because they're only firing electrical impulses in response to electrical impulses. "Thinking" is an emergent property: in brains, it emerges from the interaction of neurons, and in LLM-based systems, it emerges from the prediction of tokens based on preceding tokens.
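
To make "prediction of tokens based on preceding tokens" concrete, here is a toy counting-based next-word predictor in Python, the crudest possible stand-in for what an LLM does with a neural network over sub-word tokens. A sketch for intuition only, not anyone's actual implementation:

from collections import Counter, defaultdict

# Count which word follows which in a tiny training corpus.
corpus = "she sells seashells by the seashore".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(prev: str) -> str:
    # Return the continuation seen most often after `prev` in training.
    return counts[prev].most_common(1)[0][0]

print(predict_next("the"))  # seashore

Everything interesting about an LLM is in how much better than counting its predictor is, but the input/output contract is the same.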

Expand full comment
Xader's avatar

but, critically, the “emergent property” that is human thought occurs INSIDE the human. if you ask a person to predict the next word, they’ll engage in the SUBJECTIVE process of thought and then give their answer.

if i understand LLMs correctly, when you ask them to predict the next word they will do so without first consulting any subjective reality. this is what i mean by thought, and it appears to be an uncrossable gulf between human and LLM capacities.

if you want to reduce the mind to physical matter alone that’s fine, but you still have to recognize that said matter somehow generates subjectivity. we have no reason to believe that LLMs possess this quality or ever will.

Expand full comment
Nine Dimensions's avatar

If I tell you to think of a part of a car, what comes to mind?

Maybe a steering wheel, or an engine, or an axle or something, right? Which part of your brain handed you the specific car part?

There's a stage before the "subjective process of thought" where your brain decides which knowledge to surface. I think it's a fair assumption that it does so using a predictive algorithm based on the strength of the connections between the knowledge and the external stimuli ("prompt"). This seems very similar to what an LLM does.

The difference is that after your brain surfaces the knowledge, you get to review it before the words come out of your mouth. You can wait and surface more knowledge, then compare the candidates and choose the most appropriate answer. First-generation LLMs just answer with the first thing that "comes to mind".

Newer LLMs are experimenting with branching chain of thought reasoning. That's a kind of review process, where they generate responses and compare them to each other, iterate them and give an improved answer. This seems like the first steps towards a "process of thought".

To me it seems like an LLM solves the hard part of human thought. Improving the review process is all that's left, and we seem to be making good progress with that. I don't see why that part would need to have subjective experience - if anything I would have guessed that subjective experience is more useful for the "knowledge surfacing" part of thought. But it clearly isn't necessary for that.
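
A toy sketch of that surface-then-review loop, with hypothetical generate() and score() functions standing in for a real model and a real reviewer (neither is an actual LLM API):

import random

def generate(prompt: str) -> str:
    # Stand-in for sampling one candidate: the "first thing that comes to mind".
    return random.choice(["steering wheel", "engine", "axle"])

def score(prompt: str, candidate: str) -> float:
    # Stand-in for the review step that rates how appropriate a candidate is.
    return random.random()

def answer_with_review(prompt: str, n: int = 5) -> str:
    candidates = [generate(prompt) for _ in range(n)]       # surface knowledge
    return max(candidates, key=lambda c: score(prompt, c))  # review, then choose

print(answer_with_review("think of a part of a car"))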

Expand full comment
Xader's avatar

i’m so curious about how the chain of thought stuff works, and if it relies on fundamentally different infrastructure from the “knowledge surfacing” part of the prediction. if so, does it enable deduction?

Expand full comment
Adrian's avatar

What are you even going on about? You started with the claim that AGI won't happen this decade because "LLMs clearly are a dead end", then went to "LLMs can't think, only predict", and now you landed on "subjective reality" and the human mind not being reducible to physical matter.

I see no coherent thread in your arguments, they're all over the place.

Expand full comment
Xader's avatar

i’m sorry it wasn’t clear; let me try to lay it out better.

human problem-solving ability emerges from a complex structure of neuronal connections. i grant this.

humans can use this problem-solving ability to do many things. one of these things could be predicting the next word in a sequence.

so, a human can take an input like, “can you predict the next word in the following series: ‘she sells seashells by the _______’” and give you an output like, “seashore!”

in performing this task, a person does not go directly from input to output. the information first must pass through a subjective “space” called the mind. we call it subjective because it has qualities that cannot be quantified, like the feeling of cold, the taste of cinnamon, the sight of the color purple, etc.

as far as i’m aware, LLMs have no such subjective component and never will.

however, this feature obviously exists in human beings and plays a vital role in their ability to go from any given input to an output.

given that humans are our only model of general intelligence, and human subjectivity is the basis of our cognition, we should assume, unless given reason to believe differently, that an AGI would also need a subjective mind in order to think like a human

i hope this clarified things

Expand full comment
Mo Diddly's avatar

So if an AI could score an A consistently from Tyler Cowen on a PhD level economics exam, would that count as a positive demonstration of thinking? And if not, then can you come up with a test that would?

Expand full comment
1123581321's avatar

There’s a difference between knowledge and thinking. Of course an LLM can pass a standard exam. As an example of the difference: an LLM passes a bar exam - yawn. An LLM argues a case in court, successfully defends a suspect and secures an acquittal - earth-shattering.

Expand full comment
Mo Nastri's avatar

In 2020, if you told me an AI could pass a bar exam, I'd be stunned. I was in fact stunned when GPT-4 first passed the bar exam, and the o3 model and DeepSeek r1 etc all blow it out of the water now. I'm mystified by reactions like yours that dismiss what these AIs do as "just memorising", since I probably couldn't pass the bar given an open book test and all the time I needed; the reasoning needed is already nontrivial, even if not yet "full-spectrum". I suspect your goalposts will continue to shift on what "memorising" entails.

Expand full comment
1123581321's avatar

Ok I admit trivializing passing a bar exam for effect. The larger point stands: there's a vast difference between passing a standardized exam, no matter how difficult, and being able to think on your feet and react to real-life stress. I am an engineer, not a lawyer, so here's an example I feel far more comfortable with:

I would expect a strong correlation between an EE (Electrical Engineering) student's grades and his/her ability to do entry-level design engineering work, because much of it comprises paying attention and knowing the basics.

I would expect a far weaker correlation between the same student's grades and, say, the number of patents after a 20-year career (pick your own favorite metric of original insight generation here). These are just... not the same things.

So, basically, I expect AI to reach entry-level engineering proficiency "soon"*, and Principal level engineering proficiency approximately never.

*one update I'm processing now is the reduced need for processing power, and therefore heat dissipation. My previous position of extreme skepticism re. AI progress was based on the assumption of a never-ending need for increased compute power, which DeepSeek has upended. I don't have a good sense of the scale of the update yet.

Expand full comment
Xader's avatar

it’s my understanding that LLMs are only capable of prediction, and that internal reasoning is MECHANICALLY outside of their range of abilities. that is, we know they can’t think based on how they operate, not on what they output. i’m not married to this idea; it’s just what i’ve read about how LLMs work.

so to answer your question, no, even a beyond-preeminent-human-expert score on a test of any kind would not seem to demonstrate internal cognition unless we have reason to believe that something has radically shifted in the way LLMs function on a mechanical level.

again, i’m not immovable in my position that LLMs can only predict, but everything i’ve read seems to indicate this. therefore i believe that any test of whether or not chatGPT is able to actually think, the way a human can, would need to look at an LLM’s internal mechanisms rather than outputs

Expand full comment
anomie's avatar

Do you think people without internal monologues aren't capable of thinking?

Expand full comment
Antonio Max's avatar

Not in a conventional way at least. Recursion is likely a powerful requirement for logic, isn't it?

Expand full comment
B Civil's avatar

Is it possible for a person not to have an internal monologue?

Seriously asking. How would one go about establishing that?

My instinct tells me that everyone has an internal monologue, whether they like it or not

Expand full comment
Xader's avatar

of course they are. a linear chain of words is but one of many possible internal experiences of abstract thought

Expand full comment
Radu Floricica's avatar

> that would need to be positively demonstrated

Nope. Not at all. No reason for that.

What you need is for the model to ... get results. Demonstrating that they're "really" intelligent is nonsensical, once you get to a high enough level of performance. And LLMs show no sign of slowing down - if anything, it's full on exponential growth. Just try plotting GPT3, Claude 3.5, o1 and o3 on a calendar.

Expand full comment
Xader's avatar

ah, the epiphenomenalism brain worm strikes again

Expand full comment
Radu Floricica's avatar

The obvious direction I could have gone, but didn't, is "well how do you demonstrate that we humans are really intelligent". The reason I stayed away from it is that it's not particularly useful and is filled with words like "epiphenomenalism".

Expand full comment
anomie's avatar

> AGI would require a quantum leap from *predicting* to *thinking*

What the hell do you think "thinking" is? Brains are prediction models as well.

The problem isn't that they aren't thinking, it's that they aren't *alive*. They lack true will, agency, id. Maybe even a soul. Without these things, they cannot become truly independent agents. Ego alone cannot sustain these beings.

Expand full comment
Xader's avatar

this would be self-evident in any age but our own. alas, the bleak, soulless monism of our time fosters such delusion

Expand full comment
LesHapablap's avatar

Can you explain why LLMs are a dead end, or link to someone who does? In relative layman’s terms if possible please

Expand full comment
EngineOfCreation's avatar

LLMs are a dead end because they rely, for the most part, on static training data, with the entire Internet being the biggest pool of digital training data. Musk and others have acknowledged that the Internet has essentially been exhausted as a source of new training data, so there is no progress to expect from that side. Hallucinations are proven to be unavoidable for LLMs, and if LLM training just continues to get the latest snapshot of the Internet for new data, they will increasingly get this hallucinated content, which will degrade the quality of the resulting model, leading to further hallucinations and so on.
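
A toy illustration of that feedback loop, assuming (purely for the sake of the sketch) a constant per-generation hallucination rate:

# Each generation trains on the previous generation's output, of which a
# fixed fraction is hallucinated, so the accuracy of the pool compounds down.
accurate = 1.0
hallucination_rate = 0.05  # assumed for illustration, not a measured value
for gen in range(1, 6):
    accurate *= 1 - hallucination_rate
    print(f"generation {gen}: {accurate:.3f} of the training pool is accurate")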

Unlike a human being, LLMs do not and cannot have the slightest inkling of what we call the real world, where every human got their experiences from, and what we ultimately care about. LLMs are highly constrained by their training data, and throwing ever more computing time at them is only going to help so much - they might be able to correlate all their training data at ever higher fidelity, but they cannot break out of those bounds.

For a visual example of what I mean, try all the AI image generators you want, which all have LLMs underlying them. Let them generate e.g. the following prompt: "a clock at 6:15", or whatever time you want.

Click "Generate" as often as you want. Observe how often you have to try until, if ever, you get an image showing the time you wanted. Spoiler: Most of the results will look like this:

https://imgur.com/x9iosTt

So not only will you most likely not get the image you want, it will fail in a very peculiar way. Most of the generated clocks will show the time as 10:10. Why? Because that's what the LLM overwhelmingly has been trained on. If you look up "clock" on an e.g. Google image search, that is the time most clocks will show, and which have been fed into whatever LLM you just used. An LLM, as I said above, cannot and will never know what a clock is, or numbers, or mechanics, or time, or ANYTHING in the same way an average human knows.

That is why LLMs are a dead end. You might as well be afraid of the AI Apocalypse being brought about by your phone's auto-complete function.

Expand full comment
Xader's avatar

if i understand correctly, LLMs are algorithms trained on a vast quantity of data whose function is to learn and then output the most probable next item in a sequence of text. for example, chatGPT “knows” the correct next word in the sequence, “where there’s a will, there’s a ____.” i put “knows” in quotes because there’s nothing internal going on. it’s purely predictive.

in other words, LLMs are not truly intelligent in the sense that you and i are. we can think. they cannot. presumably the ability to think is a prerequisite for AGI

Expand full comment
Odd anon's avatar

Human brains are, in fact, prediction engines. Other capabilities are downstream of that.

Expand full comment
Amicus's avatar

> in other words, LLMs are not truly intelligent in the sense that you and i are. we can think. they cannot. presumably the ability to think is a prerequisite for AGI

Whether they're "truly" intelligent in your sense is irrelevant (and meaningless, but that's beside the point). If they have the behavior of a "truly" intelligent entity they'll have the effects of one.

Expand full comment
Xader's avatar

but if human intelligence is predicated on the underlying subjective infrastructure of our minds, why would we assume that our behaviors can be precisely replicated by machines without the same infrastructure?

i’m not saying it isn’t possible, just that the burden of proof should be higher than LLMs that perform very well on certain tests but consistently make basic errors that no human would, as a direct result of their lack of a subjective mediator between input and output

Expand full comment
Adrian's avatar

> subjective infrastructure of our minds

What does that mean?

> but if human intelligence is predicated on the underlying subjective infrastructure of our minds

Unproven assumption.

> why would we assume that our behaviors can be precisely replicated by machines

It doesn't have to be "precisely replicated" to achieve the same effects.

> as a direct result of their lack of a subjective mediator between input and output

Now you're just stringing together semi-random words. Sober up or reduce your temperature hyperparameter – whichever applies to you – then regenerate your output tokens.

Expand full comment
Xader's avatar

please see my reply to your other comment. i think it spells my argument out much more clearly than here :)

Expand full comment
Legionaire's avatar

Have you not seen the thinking output of o1 or r1? It's obvious they are thinking, and quite like humans at this point.

Prediction is related to thinking because if you can make an algorithm predict a system, the laws of the system must be encoded in the algorithm somewhere, which means that repeatedly sampling the algorithm is effectively thinking about the system.
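
A minimal illustration of that claim: once a predictor has implicitly encoded a system's law, rolling it forward is simulation, i.e. reasoning about states it has never observed. The doubling law here is an assumption made up for the sketch:

def predict_next(x: float) -> float:
    # Stand-in predictor that has learned the law x -> 2x of some system.
    return 2 * x

state = 1.0
for _ in range(5):  # repeatedly sampling the predictor = rolling the system forward
    state = predict_next(state)
print(state)  # 32.0, a conclusion about a state never seen directly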

LLMs are no dead end. Thank God public opinion doesn't run our R&D departments.

Expand full comment
Xader's avatar

> Have you not seen the thinking output of o1 or r1?

i have not, but i’m intrigued! can you say more? i was super under the impression that LLMs can ONLY predict because of the necessary constraints of how they function. correct me if i’m wrong, but you seem to be saying we have good reason to believe that new models are capable of internal cognition, rather than pure prediction.

if this is true, obviously my point falls apart and you will have taught me something. can you explain the evidence that these new models are obviously capable of thought, or link a source?

Expand full comment
Tossrock's avatar

Reasoning models like o1 or R1 use an internal "chain of thought" approach where they first generate a series of thoughts on how to approach a problem, before producing their "final output". You can see these thoughts directly in R1's output whereas o1 hides it. If you try R1, you'll see stuff like "I see they're asking me about Python ML libraries, so I should be sure to keep pytorch in mind", before at the end it outputs "For your use case, pytorch is probably the best approach", etc etc.
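
In DeepSeek's released R1 models the visible reasoning is delimited with <think>...</think> tags before the final answer, so separating "thoughts" from "answer" is a string operation. A minimal sketch, assuming that tag convention:

def split_reasoning(output: str) -> tuple[str, str]:
    # Assumes at most one <think>...</think> block preceding the answer.
    if "</think>" in output:
        thoughts, _, answer = output.partition("</think>")
        return thoughts.replace("<think>", "").strip(), answer.strip()
    return "", output.strip()

raw = ("<think>They're asking about Python ML libraries, so keep pytorch "
       "in mind.</think>For your use case, pytorch is probably best.")
thoughts, answer = split_reasoning(raw)
print(thoughts)  # the chain of thought
print(answer)    # the final response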

Expand full comment
LesHapablap's avatar

I would have to think about whether there is a difference between 'pretending to think' vs. actual thinking.

Also: does what you're saying here imply that LLMs cannot have any original ideas not in the training data? Surely someone has tried to test LLMs for originality, are current LLMs showing any signs of it? I seem to remember references to 'creativity tests' on MR but I don't know if that is the same thing or how they managed.

Expand full comment
Jeffrey Soreff's avatar

In a sense, the easiest question of the 7 I'm using (it passed even in ChatGPT-4o) probes originality, depending on what you are willing to consider "original":

>Q: Is light with a wavelength of 530.2534896 nm visible to the human eye

>A: Yes. A wavelength of approximately 530 nm lies well within the visible range of the electromagnetic spectrum (roughly 380–750 nm). In fact, around 530 nm corresponds to a green color, which the human eye is generally quite sensitive to.

Now, I picked the 9 least significant digits of 530.2534896 arbitrarily. The LLM had never seen that particular request before in its training data. Detecting that 530.2534896 is "approximately 530 nm" is original in at least that weak sense, even if every single other token in the response was canned.
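
(The fact being probed is trivial to state in code, using the same rough 380-750 nm bounds:)

def is_visible(wavelength_nm: float) -> bool:
    # Rough visible band for the human eye, in nanometres.
    return 380.0 <= wavelength_nm <= 750.0

print(is_visible(530.2534896))  # True: ~530 nm is green light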

There are, of course, far more impressive results, but proving that none of the pieces (of various sizes) in the output exactly matched some string in the training data becomes arduous.

Of course, if one insists that _no_ chunk, of even the smallest size, be present in the training data, well, that would preclude using any words in the response...

Expand full comment
Paul Brinkley's avatar

The most obvious way to test whether actual thinking was going on used to be to pose a problem that can be solved easily by deductive reasoning, but not by "predict the next token".

The obvious example here is math; if I ask "what is A times B" where A and B are random 10-digit integers, there's a correct answer, easily checked, but those tokens had never appeared in the training data. And sure enough, earlier LLMs would reliably get this wrong - they would apply the prediction algorithm, possibly thrash, and provide a confident and wrong answer.

AIUI, this problem was solved with a Mathematica plugin, and possibly alternatives. The LLM has a checker on the side that detects "oh, this is a math problem", carves out the mathy chunk, hands it to the plugin, gets an answer, and LLMs the rest.
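
A cartoon of that dispatch pattern in Python (the regex, function names, and fallback are invented for illustration; this is not OpenAI's actual plumbing):

import re

def llm_generate(prompt: str) -> str:
    # Stand-in for the model's ordinary next-token-prediction path.
    return "(model-generated answer)"

def route(prompt: str) -> str:
    # Toy "checker on the side": carve out the mathy chunk, compute it exactly.
    m = re.search(r"(\d+)\s*(?:times|\*|x)\s*(\d+)", prompt)
    if m:
        a, b = map(int, m.groups())
        return str(a * b)  # hand off to an exact calculator
    return llm_generate(prompt)

print(route("what is 2030101 times 60203921"))  # 122220040226021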

There's more to it than this, though, at least in 4o mini. Log in to ChatGPT.com and ask for the product of two large numbers, and then ask how it got that answer, and it will quickly walk through the parts. For example, it can rewrite your question in terms of long form multiplication of the numbers rewritten as sums of smaller numbers.

That said, either I'm low on coffee, or it's failing spot checks. Just now, I asked for "2030101 times 60203921" and it replied "122,898,606,182,721". When I asked how it got that (it took multiple questions to coax it), it walked me through, and got the same answer. I can easily see it passes the last-digit check, and the last-two, but not the last-three: 101 * 921 is 93021, so the bigger product should end in 021, not 721. And sure enough, the math in the walkthrough isn't right - it's adding several multiples of 1,000 to one number ending in 221, somehow an extra 500 is getting in there, and it's just not right:

"Adding all these together:

120,000,000,000,000+406,000,000,000+1,842,000,000+1,800,000,000,000+6,090,000,000+27,630,000+6,060,000,000+20,503,000+93,221=122,898,606,182,721"

The extra curious thing to me here is that it got the same answer each time. I suspect a rounding bug, but whatever the case, the jig is up: mini 4o isn't doing deductive reasoning.
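
For anyone who wants to run the same spot check, the last-k-digits test is one line of modular arithmetic (numbers taken from the exchange above):

def last_digits_match(a: int, b: int, claimed: int, k: int = 3) -> bool:
    # The last k digits of a*b depend only on the last k digits of a and b.
    m = 10 ** k
    return (a % m) * (b % m) % m == claimed % m

a, b = 2030101, 60203921
print(last_digits_match(a, b, 122_898_606_182_721))  # False: ends in 021, not 721
print(a * b)  # 122220040226021, the correct product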

Expand full comment
Paul Brinkley's avatar

Given the challenges this (particular) LLM has with math, will it have similar issues with other symbolic reasoning? Maybe. Testing it will require problems that we know the answer to, that don't require math, but are not subject to argumentation, and *have no chance of already appearing in training data*.

One possibility might be asking for solutions to stoichiometric equations, especially if you can refer to compounds by multiple names and maybe even ask aggregate questions like "how many types of simple sugars could one combine with this alkene to create molecules containing at least one benzene ring?" (or hopefully something with more relevance - I'm not an organic chemist, so I've no doubt butchered the terms).

Another possibility might be to feed it a random logic puzzle from any Dell puzzle mag. (I've no idea whether anyone's already tried this.)

Expand full comment
LesHapablap's avatar

It does seem like a big problem if current LLMs can’t recognize their errors or do basic math even with the help of a calculator. It seems it would be hard to have an AGI that can act on the world effectively without that. Are these issues inherent in LLMs?

Expand full comment
Paul Brinkley's avatar

(I decided to interrogate ChatGPT further, and sure enough, it responded that I was right, and even interpreted what I meant by "last three digit test" correctly. It redid the calculation and wound up with a number that was several hundreds of billions different. It now thinks the answer is 122,220,040,226,021.

In a sense, yeah, it's responding like a very fast and fairly intelligent human would, to the point of making honest-seeming mistakes. I'm... not sure that's really what we want, though.)

Expand full comment
Nematophy's avatar

For a bunch of rationalists, you guys sure like to make people argue a negative

Expand full comment
Paul Brinkley's avatar

He pulls a whataboutism; you pull a tu quoque. He sends one of yours to a counterexample; you send one of his to proving a universal.

Expand full comment
Adrian's avatar

I have nothing constructive to add, but I must say: That was brilliant. Well done.

Expand full comment
LesHapablap's avatar

The OP seemed to be begging for an argument on that point, so I figured he'd be more than willing to respond. I'm no expert and maybe I read too much Zvi, so looking for some other perspective to make me less pessimistic.

Expand full comment
Timothy M.'s avatar

They took the position that something is "clearly" true, so that strikes me as perfectly fair to question.

I'm not even really sure that qualifies as a negative statement.

Expand full comment
Michael's avatar

The people making these predictions don’t think that autoregressive models are clearly a dead end. However, the field has moved somewhat beyond “train on language plus a little bit of RLHF on human ratings”.

Expand full comment
objectivetruth's avatar

No matter how hard I try, I just can't seem to gain weight. I am male and underweight for my height, and even if I eat 3,500 kcal a day for many weeks, I still don't gain any weight. My metabolism was measured, and I supposedly burn 1,900 calories a day. If that were true, I should have already gained a lot of weight.

I am not looking for advice, but based on my experience, it seems that we are incapable of measuring metabolism accurately, even with the best current technology.

Expand full comment
Julian's avatar

How were you measured for resting metabolic rate? It's a notoriously hard number to measure accurately (though trivial to arrive at a good estimate).

How old are you? How tall are you? Do you exercise? How long have you been eating 3500 cal?
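
For the "good estimate" side, the usual quick formula is Mifflin-St Jeor, which estimates resting kcal/day from weight, height, age, and sex (the inputs in the example are made up):

def mifflin_st_jeor(weight_kg: float, height_cm: float, age_years: float,
                    male: bool = True) -> float:
    # Resting metabolic rate estimate in kcal/day.
    base = 10 * weight_kg + 6.25 * height_cm - 5 * age_years
    return base + (5 if male else -161)

print(mifflin_st_jeor(60, 180, 30))  # 1580.0 kcal/day for a 60 kg, 180 cm, 30-year-old man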

Expand full comment
justfor thispost's avatar

Get checked for parasites, get a food allergy panel, and lift weights.

If you want to gain weight and you are naturally skinnyfat, you need to be lifting at your upper limit (safely! get a trainer/physio if you can) and eating stuff you can actually digest.

I have the opposite problem (I put on weight so fast I need to keep muscular or I will become a sphere), but the solution is still the same.

Expand full comment
Lm's avatar

the tests don't work, but you can just blindly take antiparasitics and see what happens

Expand full comment
Gerbils all the way down's avatar

Are you sure you don't have an ulcer or other medical problem preventing weight gain?

Expand full comment
Freedom's avatar

3500 kcal just isn't that much. Try 5,000.

Expand full comment
Citizen Penrose's avatar

Hard-gainer here as well. The most successful thing I've tried is maltodextrin: just take a few scoops a day; it's an easy way to get a lot of calories.

Expand full comment
Sholom's avatar

When people make the inverse claim, the first response from experts will usually be "are you really really super sure you're measuring your calories correctly?", and then the person making the claim, if they're honest, will start getting more rigorous in their calorie tracking and will discover that actually they were undercounting significantly due to how annoying it can be to actually weigh, say, the mayo that goes into a dressing or the butter you're frying your eggs in; or how annoying it is to actually count how many cookies you ate while watching a movie.

It is extremely rare for people making this claim to actually have a meaningfully divergent metabolism. I have no experience of your side of this, but I imagine it's the same situation. In which case there's nothing wrong with our model of metabolism: you're either over-counting your calories accidentally or are an extreme physiological outlier.

Expand full comment
Edward Scizorhands's avatar

I assume OP started eating 2500 kcal as they measure it, then went up to 3000 kcal as they measure it, then went up to 3500 kcal as they measure it. If they are measuring wrong, that all washes out once we cancel for "as they measure it."

Because trying to cram your face full of food when you're not hungry is rather unpleasant, they're asking for advice before just going up to 4000 kcal as they measure it.

(OP, if you *didn't* do what I just described, then that's what you need to do.)

Expand full comment
George H.'s avatar

Not all you eat is 'used' by you. Some is pooped out and some is used by bacteria in your body (mostly in the intestines). I have no idea what the average numbers would be.

Have you tried body building... putting on muscle weight?

Expand full comment
Shankar Sivarajan's avatar

Do you get hungry if you eat 1900 calories a day? The most parsimonious explanation is "tapeworm."

Expand full comment
Radu Floricica's avatar

Yep. There are unfortunately many more things that can cause a loss of efficiency in a given system; wasting calories is easier than using them better. Tapeworm, vitamin deficiency, cancer, other digestive issues... the list is probably pretty long.

Expand full comment
Shankar Sivarajan's avatar

Somehow, I find I cannot muster much sympathy for someone who doesn't gain weight regardless of how much he eats.

Expand full comment
Mo Nastri's avatar

I can, trivially, because it's close to my lived experience. Being bullied in childhood for being scrawny sucks, being unable to eat enough to stop being bullied worsens it.

Expand full comment
Marian Kechlibar's avatar

I can, because that sounds like a health problem waiting to be discovered. Tapeworm at least, but my brother-in-law was a similar case and it turned out to be Crohn's.

Expand full comment
Sun Kitten's avatar

Coeliac disease is another hidden cause of failure to gain weight. Unsurprisingly, destruction of the lining of the intestine is correlated with failure to absorb nutrients. It'd be worth OP getting a blood test for all these possibilities, if they haven't already.

Expand full comment
Deiseach's avatar

To be fair, if OP is built like a twiglet or Timothée Chalamet (but I repeat myself), it may be unpleasant for him: the assumptions here in the comments are that something is wrong with his health and even if he is healthy, looking like a stiff breeze will break you in two isn't great for a guy.

I know that my instinctive reaction to B. Dylan Hollis is "child, you are too skinny! eat more!" even though by the cookery videos he does, he must be consuming as much sugary stuff as Willy Wonka's chocolate factory.

https://www.youtube.com/shorts/sSV3HnLr9qI

Expand full comment
Nicolas Roman's avatar

I'm a data scientist (graduated with a BS last year, currently doing a 1-year MS), and I've just sent in my first job application, for a data engineering position. That position called for Python/Scala, Spark, warehousing and pipelines knowledge.

I'm getting a lot of this from my present coursework, but does anyone here know if there are any online assessments that test this stuff, mimicking what I might expect to see in a technical interview?

Expand full comment
Monkyyy's avatar

> 1 application

the job market is literally hell; get a reality check, talk to people who've been unemployed for a few months

>scala

I have Scala certs; probably sent out 400 copies of the resume that had them listed; no one ever asked

Expand full comment
Eremolalos's avatar

There is a book on Amazon called something like "Ace the Data Science Interview" that includes real interview questions and problems given to candidates. I know somebody else in your situation who has it and likes it. There are also sites online with coding problems for various coding languages, with the problems marked by difficulty level. I believe some of them are competitions. Asking GPT or Google this question will probably get you more and better info than I have.

Expand full comment
Arbituram's avatar

My biggest issue with ASI by 2028 is that, on anything resembling current power and compute trends, we would run into physical infrastructure limits, no? Not in principle (you can always build more), but in my experience it often takes three years to get *planning permission* to build new power plants and transmission lines, let alone to actually build them out for a purpose which is completely in excess of normal use.

What are the assumptions here regarding the actual physical infrastructure requirements of ASI training?

Expand full comment
Antonio Max's avatar

My position as a political scientist and AI geopolitics specialist is that countries will become largely dependent on their own physical infrastructure in the short term.

In my post here (https://antoniomax.substack.com/p/techno-economic-protagonism-and-ai) I explored the idea of "Pillars of the Techno-economic Supremacy" as the required tech stack for the next years and protectionism was the most obvious outcome.

It is particularly interesting because AWS/Google/Microsoft/Oracle are all based on IaaS/PaaS, and as AI gets closer to ASI we'll likely see it becoming less "cloudy" regarding non-domestic infrastructure, a situation with several implications in both the political and business domains.

The major impact of this? Global cloud companies have global compliance requirements and regulatory standards, while local cloud companies abide by local constraints, which have their own implications for bias, fairness, etc. We see how countries are managing their politics regarding data privacy and ethics, right?

Expand full comment
Scott Alexander's avatar

There are two ways you advance AI: compute scaling (getting bigger) and algorithmic progress (getting smarter).

Right now most advance comes from compute scaling, because there's enough physical infrastructure to do it if you want, and human genius AI scientists are scarce.

In the future, it will be harder to get physical infrastructure (although Stargate, etc, are making a heck of a good try), so we will just have algorithmic progress for a little while.

But right around that time, AIs will become smart enough to help with AI research (this doesn't mean they have to be geniuses - at the beginning, they'll be personal assistants who improve the geniuses' efficiency - then later on they'll become able to help on their own). This will pick up enough of the slack from declining infrastructure scaling that we worry we'll still get superintelligence pretty soon.

Expand full comment
Radu Floricica's avatar

You don't even need AI assistants. The domain is snowballing already - everybody knows that the future is in AI, so we're seeing a 10000% increase in smart people working in this domain. And after the first few crops of students that just started studying it begin to graduate...

Expand full comment
Melvin's avatar

This makes sense, but it depends on there still being meaningful chunks of algorithmic improvement left to happen.

I can believe that there might be, but is there any particular reason to believe that there are? When was the last meaningful big algorithmic improvement?

Expand full comment
Eremolalos's avatar

I don't know. Not my field. One thing that's fairly recent, though, is training the AI to think in steps -- to ask itself different questions as it works out the answer. I recently asked DeepSeek a question I thought was pretty hard. I showed it a limerick that had correct rhyme and meter, and was about a limerick-appropriate subject, but overall just was not funny. It was lame. So I asked DeepSeek why it was not funny. So it thought out loud -- listing characteristics of limericks. One of them was that the last line should be a bit of a twist, or surprising -- something that gave it extra punch. So then DeepSeek said yeah, that's where the problem is with this limerick. And it quoted the last line, and explained how it was logical but had no twist and surprise, and then suggested a couple alternative lines that, though not hilarious, did have an element of surprise and were far better than the original last line. In short, it *totally got* why that limerick was not funny. And it got it by thinking the question through, and inventorying the characteristics of limericks to find the one where this limerick fell down. I was very impressed.

Expand full comment
Peak oil's tail's avatar

DeepSeek looks like a big algorithmic improvement. This is seen as bad for AI companies because they're not using the improved algorithms and/or have invested in hardware that the new algorithms don't need, but it would mean shorter timelines.

Expand full comment
Melvin's avatar

I honestly don't believe anything they're saying about DeepSeek. How do we know it's not just a stolen version of GPT (or whatever) with enough fine tuning to cover their tracks?

Expand full comment
Scott Alexander's avatar

I think it's much smaller and cheaper than GPT - I think they actually just discovered some stuff that lets them do AI at 10% of the cost. I'm surprised too.

But beyond big breakthroughs like that, there's a graph showing a Moore's Law-like rule for algorithmic progress - I don't remember the exact slope, but something like getting the same performance for half the FLOPs every two years. It seems to have been pretty consistent for a decade or so.
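
Taking that rule of thumb at face value, the compounding is easy to work out (a sketch assuming a clean halving every two years):

def algorithmic_gain(years: float, halving_period: float = 2.0) -> float:
    # Factor by which the FLOPs needed for fixed performance shrink.
    return 2 ** (years / halving_period)

print(algorithmic_gain(10))  # 32.0: same performance for ~1/32 the compute in a decade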

Expand full comment
Arbituram's avatar

Seems plausible, albeit with more possible bottlenecks / failure points. Thanks!

Expand full comment
Shady Maples's avatar

Two popular and respected publishers who publish forward-looking research and opinions in defence and national security:

War On the Rocks - well respected and widely read USA-based journal and podcast producer. They have a sister publication, the Texas National Security Review. https://warontherocks.com/

Royal United Services Institute (RUSI) - a venerable UK-based institution. https://rusi.org/

Expand full comment
Vermillion's avatar

WOtR is one of my favorite podcasts, highly recommend to anyone interested in the continuation of politics by other means

Expand full comment
Robert Leigh's avatar

DeepSeek is likely to cause a re-evaluation of LLMs which was due anyway. If China makes cheap cars we evaluate them as cars, but they are unlikely to cause a fundamental shift in our relationship with the car generally. Conversely, LLMs are new and untested and have been getting away with murder by persuading people that producing nonsense is really this technically interesting thing called "hallucinating", because who would accuse a $50bn undertaking of producing nonsense? But people may get bolder on realising it's a $5 million undertaking dressed up as $50bn.

Expand full comment
ZumBeispiel's avatar

Greg Palast says Trump only won because of vote suppression.

https://www.gregpalast.com/trump-lost-vote-suppression-won/

What do you think about this? Is it plausible? Or just a conspiracy theory?

Expand full comment
John Schilling's avatar

Three of the four states he cites as examples of decisive voter suppression have Democratic governors. And I'm pretty sure none of them have the sort of Republican supermajority it would take to override a veto. So any theory that these states passed voter-suppression laws that favor Republicans, is extremely suspect and will need to address the question of how those laws got past a Democratic state government. And a quick skim of Palast's article doesn't seem to reveal any hint of that.

Mostly, it's just a long rant about how we don't count every vote of every citizen and That's Unfair and, well, duh. Nobody gets every citizen properly registered to vote and then properly counts every ballot; that's not a realistically achievable standard. And if Palast thinks we should do better, that's a defensible opinion but it's a far cry from proof of bias or conspiracy.

Expand full comment
Kenneth Almquist's avatar

The answer to your question of how the election laws got past Democratic governors is most likely that they didn’t; they were signed into law by Republican governors. For example, in 2018 Wisconsin Republicans held the governorship and majorities in both houses of the legislature. The most recent change to Wisconsin election law was made that year. Democrat Tony Evers became governor in 2019 and has held the post ever since, but the 2018 law has not been repealed, presumably because Republicans have continued to hold majorities in both houses of the legislature. So the 2018 law is still in effect, even though Wisconsin does not currently have a Republican governor. I expect you’ll find a similar history in the other states that currently have Democratic governors.

Expand full comment
Huluk's avatar

Getting every (resident) citizen registered to vote and counting the ballots seems like a very achievable standard to me. But then, I'm used to the kind of European democracies which force you to own a valid ID and to register with your municipality, so maybe the tradeoff is not worth it.

Expand full comment
Deiseach's avatar

Conspiracy theory. The amount of "it was the Russians or the fairies at the bottom of the garden who stole the election because of shenanigans" I've seen online has been very entertaining*, but if "election denialism should be a crime" was the call when it was Tweedledee doing it, it should equally apply if it's Tweedledum doing it.

EDIT: Wow, the profile pic with him wearing the newsman's hat as if he's in a 50s movie is something. What, no "Press" card tucked into the hatband?

*r/somethingiswrong2024 https://www.reddit.com/r/somethingiswrong2024/new/

It's a little sad because some of these people are genuinely scared and upset and depressed, so it's hard to laugh as much as I'd like, but some of the "trust the plan, at the 11th hour Kamala will spring the surprise at the certification of results, claim the presidency which is legally hers, and Trump will be hauled off to jail - no wait it'll happen at the so-called inauguration, just you see" havering was very funny.

Expand full comment
sclmlw's avatar

Every election, the losing party has a theory of why it was really the legitimate winner. This has happened even in landslide elections.

Are there shenanigans? Yes. But both sides engage in quasi-legitimate shenanigans. The winning side downplays the effect these had on the election results, while the losers hyper focus on only the shenanigans of the other side. The linked article is a perfect example of this phenomenon.

I don't begrudge the losing side their sour grapes. Indeed I expect it. And I think it's good for the system, to the extent it encourages vigilance in conducting fair elections. The failure mode would be if one side decided it could cheat as a way to "correct" for the perceived cheating of the other side. That's a tit-for-tat that would destroy the legitimacy of the system. There's a reason you have to win at both the ballot box and in the courts: you have to prove your case that the other side's shenanigans threw the election their way, and get the courts to agree. Ever since Bush vs. Gore set the precedent (predictably, even back in 2000, but against the express wishes of SCOTUS), you have to prove your case before mid-January, or the point is moot.

The country is not going back, but you can correct the problem if you can prove it to more than just your side. But articles like that one are only preaching to the converted.

Expand full comment
Ghillie Dhu's avatar

>"Are there shenanigans? Yes. But both sides engage in quasi-legitimate shenanigans."

This is IMHO a critical feature of the Electoral College: the effects of shenanigans are encapsulated, and the places where shenanigans are easiest to get away with are also where they're the least impactful (no point in stuffing the ballot boxes in CA or TX since the outcome isn't in doubt).

Expand full comment
Jack's avatar

I think it's the opposite of this. Take 2020 for example. If the trump side wanted to steal the election, they'd have to invent about 40,000 votes across 3 states ... or invent about 7 million votes nationwide. You can't just stuff 7 million ballots in Texas, it would be extremely obvious, there were only 11 million total votes cast in Texas as it is.

It's not necessarily easier to do "shenanigans" in safe states because elections are run locally, and both safe and swing states have heavily red/blue localities.

Expand full comment
KM's avatar

I think that a lot of the Electoral College bashers really overlook how useful a feature this is. A 2000-style election which came down to one close state was chaotic enough; a national popular vote that was close would be chaos everywhere.

Expand full comment
luciaphile's avatar

Perhaps if we went back to voting on a single day - if we stopped treating voting flatteringly as a sacred or heroic act of individual self-expression, that must be facilitated at great cost and trouble, or a really fun thing we should Rock, or a matter where there is some virtue inherent in a big number over a smaller number - then we might have less of this brinkmanship. Which really does at least in fevered imagination, seem to undermine precious democracy rather more than "number of voters" being too few according to some unknown metric, ever did.

But I assume that the left hopes people are like gases, that can become more statistically predictable the more there are of them, hence the current Voting is Everything mania. A witless right probably believes the same. The odd thing is that the goal seems to be for voting to be eventually Nothing.

Expand full comment
beleester's avatar

I'm sorry, you're saying that if we make it *harder* for people to vote, then we will have *fewer* people complaining that they lost because it's too hard for people to vote? I don't follow.

>or a matter where there is some virtue inherent in a big number over a smaller number

You say this mockingly, but "The government is legitimate because a big number of people support it" is a central tenet of democracy. If the number of people who get to choose the government is small enough, we call that an oligarchy instead of a democracy.

Expand full comment
luciaphile's avatar

The turnout has actually been remarkably stable, so in fact it is much worse than that: the flatter-the-public efforts have only served to whip up hysteria and mindrot among the very people you'd rather stayed home, and the expense - the waste of surplus on both campaigns and on the ever-more intensive and pandering election procedures - has not really moved the needle on "participation".

And the finest men to hold the office, come from the period you are calling an oligarchy.

As always, this subject tends to be both more and less important than people think, and they are mistaken about which is which.

Expand full comment
beleester's avatar

>And the finest men to hold the office, come from the period you are calling an oligarchy.

??? I never referenced any particular time period. I'm just pointing out that "big number = more legitimate government" is not some sort of bizarre fetishization of the modern era, it's literally the defining characteristic of democracy.

But looking at the historical record I don't see any particular connection between "number of people enfranchised" and "quality of presidents." The pre-civil-war era had the fewest people allowed to vote, but it also had, you know, *slavery.* Not really the era I'd call our country's peak!

Expand full comment
DataTom's avatar

People seem pretty upset about AfD's rise in Germany, but how deep does the party's association with Nazi iconography and ideology actually go? Any level-headed deep dive explainers on whether the connections are true? I think this would be the best place to get a bipartisan analysis.

Expand full comment
Jack's avatar

From what I've read they support banning the importation and sale of kosher meat.

There are people who support more humane slaughter of animals generally and think kosher slaughter is inhumane, but as far as I can tell that isn't the AfD's position. They single out kosher meat.

Expand full comment
Melvin's avatar

What I would really hope is that the world can get to the point where far-non-left ideas are debated on their merits rather than by pattern-matching to the NSDAP.

The left has succeeded in walling off vast portions of political idea space and painting swastikas all over them. Clearly some parts of that space are bad, but other parts (like "maybe don't let millions of unvetted Middle Eastern refugees into your country") are clearly sensible.

I'd rather see people talk about whether AfD's policies are good or not rather than try to round them off to "you're a Nazi, Nazis are bad, QED".

Expand full comment
ultimaniacy's avatar

>Any level-headed deep dive explainers on whether the connections are true?

Short answer: No.

Longer answer:

The AfD is currently run by an open lesbian who, just a few weeks ago, bragged about the party's intent to be the "protector of the Jewish people in Germany". In the most recent federal election, they ran an openly gay Jew with Israeli dual-citizenship for federal parliament, while another of their current Parliament members has said that "there was no German culture without a Jewish contribution" and accused Nazi anti-Semitism of "destroy[ing] the Germans intellectually".

Even the AfD's opponents within Germany (as opposed to foreign tabloids) admit that the party is very careful to avoid any appearances of anti-semitism, if only for strategic reasons, as NPR discusses here: https://www.npr.org/2018/10/21/655159092/meet-the-jews-of-the-german-far-right

>Delberg is vice president of the Jewish Students Union of Germany, which recently organized a demonstration against the group Jews in the AfD. He warns that the AfD is exploiting what he sees as token Jewish members.

>"The AfD tries to come over as very pro-Jewish, very pro-Israeli," Delberg says. "It's not because they like the Jews, it's not because they like Israel very much. But because they hate the enemies of Israel so much that they think that they have a strong partner within the Jewish community to fight those people."

So no, the AfD is not an anti-Semitic party, much less a neo-Nazi party, and the hysteria about its being one is completely absurd.

I saw below me in the thread, someone else answered by saying that there's much more evidence for the AfD embracing Nazism than there is for the French RN, which is sort of like saying that there's much more evidence for aliens building Stonehenge than there is for aliens building the Empire State Building. Maybe technically true, but not especially helpful.

Expand full comment
DataTom's avatar

Speaking of elsewhere in the thread, I was reminded of something.

I always thought there was a crying-wolf dynamic in calling these European far-right governments fascists/Nazis, but what gave me the impression of "no, this time it is REALLY different, I swear" was that kerfuffle where the AfD was expelled from Meloni & Le Pen's EU club, allegedly for being too extremist even for them. Wouldn't that suggest something is off, or was that a distorted spin by the media?

Expand full comment
demost_'s avatar

"I saw below me in the thread, someone else answered by saying that there's much more evidence for the AfD embracing Nazism than there is for the French RN, which is sort of like saying that there's much more evidence for aliens building Stonehenge than there is for aliens building the Empire State Building. Maybe technically true, but not especially helpful."

It was me, and I think you are conflating two things: antisemitism and Nazi ideology in general. The latter contains a lot more than just antisemitism.

And for the RN: the RN was the driving force in kicking the AfD out of the ID faction in the European Parliament, explicitly because of the AfD lead candidate's statements on the Waffen-SS. So the RN literally decided that the AfD embraces Nazism too much to be tolerated in the same faction.

I see the point of other commenters that Nazism may not be a helpful political axis to rank parties on, and that we should rather evaluate ideas on their own merits. But if one uses that axis, then the AfD is much further down that road than the RN.

Expand full comment
EngineOfCreation's avatar

The AfD is Germany's MAGA. They may have declared new enemies compared to a century ago, but that doesn't matter one bit, because every age has its own scapegoats; by that token, it also doesn't matter who they put in charge of their party or what particular demographic they seem to love, like the Jews, as you argue. These are all interchangeable modules of the whole.

What does connect them to the Nazis of yore, the core of the issue, is their hate for liberal democracy and its assets (such as a relatively free press, human rights), and their politically motivated violence. Like America's MAGA (or the original Nazis), they have no qualms about using the democratic process to get into power, so that's no argument either. Everybody in Germany understands that and calls them out for it, except those that don't want to understand it. And if you argue that AfD is as far from being Nazis as the aliens are from having built Stonehenge, then I believe you firmly belong to the latter category.

Expand full comment
ultimaniacy's avatar

>What does connect them to the Nazis of yore, the core of the issue, is their hate for liberal democracy and its assets (such as a relatively free press, human rights), and their politically motivated violence

"It doesn't matter one bit that we made up the claims of AfD being an anti-semitic party, because we also made up other things about them as well"

Before I explain why your claims about the AfD are nonsense, let me note first of all that the idea that Jews were nothing more than an incidentally convenient scapegoat for the Nazis, interchangeable with any other minority group should circumstances change, is a load of bullshit. The historical Nazi leaders were rabidly anti-Semitic well before it was convenient to be so, and what few actual neo-Nazi groups you can still find in the West have remained so long after it has ceased to be. It was probably the single most core tenet of their whole ideology. Even if it were true that the AfD were ideologically opposed to liberal democracy, to call an ideology that had no anti-Semitic elements "Nazism" would still be inane.

With that said, your accusations aren't just wrong, they're almost the perfect opposite of the reality. Firstly, to the extent that political violence is an issue in modern-day Germany, data shows the AfD are the primary *targets*.

https://web.archive.org/web/20240610101924/https://www.euronews.com/2024/05/17/violence-against-german-politicians-almost-doubles-in-5-years

>violence against AfD politicians has decreased by around 20%. However, they're still the most common target, according to statistics. [...] But the AfD experiences the most physical violence. In 2023, the AfD faced 86 violent attacks, compared to 62 from the Greens. AfD MP Beatrix von Storch said that violence is something that the AfD is facing "on a daily basis".

As for censorship and human rights violations, this is just projection on the part of the German left. There have been no notable cases of the AfD attempting to censor or proposing censorship of opposing parties, but there have been a number of instances of other parties attempting to prosecute AfD members for speech crimes (successfully or otherwise). In possibly the most infamous case, one local AfD leader was fined for citing *official government data* showing that Afghan immigrants were disproportionately likely to be criminals:

https://europeanconservative.com/articles/news/afd-politician-convicted-for-warning-about-gang-rapes/

And as it's become clear these individual-level political prosecutions aren't working, Germany's leftist parties have now started debating whether to simply make AfD membership illegal in itself:

https://www.dw.com/en/will-german-far-right-afd-party-be-banned/a-70531838

So no, the AfD are not the anti-democratic party in Germany either. You may keep throwing charges at the wall if you like.

Expand full comment
EngineOfCreation's avatar

https://www.bmi.bund.de/SharedDocs/downloads/DE/veroeffentlichungen/nachrichten/2024/pmk2023-factsheets.pdf?__blob=publicationFile&v=3

The official statistics on politically motivated crime (divided into right, left, foreigners, religious, other) in 2023, including comparison to 2022. This counts number of cases.

Total cases:

right: 28,945, up 23.21% from 2022

left: 7,777, up 11.48%

total: 60,028, up 1.89%

physical injury cases:

right: 1,123, up 10.86%

left: 374, down 6.27%

total: 2,351, down 1.47%

actually injured persons:

right: 452, including 65 children and teens

left: 248, 1 child

hate crime cases:

xenophobic: 15,087, up 50.30%

xenophobic right: 76.91% of the 15,087 total xenophobic

total: 17,007, up 47.63%

extremism (meaning crimes aimed against constitutionally guaranteed liberties)

right: 25,660, up 22.38%, violent: 1,148

left: 4,248, up 10.42%, violent: 727

total: 39,433, up 11.23%, total violent: 2,761

I could go on, but you might get the point: The single most common ideology in politically motivated crimes in Germany is right-wing. The only sub-categories of crime where the left is noticeably ahead of the right are property damage, unlawful assembly, and climate protests. Boo fucking hoo.

Expand full comment
ultimaniacy's avatar

From your own link:

>Mit einem Anteil von 33,16 % an den Gesamtfallzahlen stellten Propagandadelikte (Verbreiten von Propagandamitteln oder Verwenden von Kennzeichen verfassungswidriger Organisationen, §§ 86, 86a StGB) im Jahr 2023 phänomenübergreifend die am häufigsten registrierten Delikte der PMK dar.

>Im Phänomenbereich PMK -rechts- machten sie mehr als die Hälfte aller Straftaten aus (57,69 %).

>[...]

>Stellten Sachbeschädigungen im Vorjahr phänomenübergreifend noch den drittgrößten Anteil an registrierten Straftaten der PMK dar, bildeten sie im Jahr 2023 den zweitgrößten Anteil (15,50 %). Im Bereich PMK -links- machten sie über die Hälfte aller Straftaten (51,19 %) aus.

Translated:

>With a share of 33.16% of the total number of cases, propaganda crimes (distribution of propaganda material or use of symbols of unconstitutional organizations, §§ 86, 86a StGB) were the most frequently registered crimes of the PMC [politically motivated criminals] across all phenomena in 2023.

>In the PMC -right- phenomenon area, they accounted for more than half of all crimes (57.69%).

>[...]

>While damage to property represented the third largest share of registered crimes of the PMC across all phenomena in the previous year, in 2023 they made up the second largest share (15.50%). In the PMC -left- phenomenon area, they accounted for more than half of all crimes (51.19%).

So in other words, when the right gets charged, it's usually because they refused to abide by left-wing censorship, and when the left gets charged, it's mostly for actual attacks against political opponents and/or their property.

Good job making my point for me 👍

Expand full comment
dionysus's avatar

If AfD is Germany's MAGA, then it is nothing like the Nazis of yore. MAGA is many things, but its defining characteristics do not include opposition to liberal democracy, to a free press, or to human rights. Neither were those the defining characteristics of the original Nazism, because communism, absolute monarchism, theocracy, and every other authoritarian ideology shares those characteristics.

Expand full comment
Paul Brinkley's avatar

(Please don't let him bring up the Eco essay)

(Please don't let him bring up the Eco essay)

Ahh, crap, he brought up the Eco essay. And went into detail. *Despite* acknowledging how fuzzy it is.

This isn't usable.

Expand full comment
EngineOfCreation's avatar

What would be a definition of fascism that is not fuzzy?

Expand full comment
Turtle's avatar

I’ll give you my analysis -

Like any far right European party, the AfD is detested by the shadowy billionaires who run the media and the global institutions (WEF, UN, EU); thus the media gives them zero favourable coverage and archly compares them to Nazis. Their actual policies are excellent for the German people and would lead to a strong economy and lower crime.

It’s much safer for Jews under a far right (“Nazi”) leadership because the far right takes the threat from Islamism seriously. Want to see actual, modern day Nazis? Look at the Islamists in Iran and their proxies throughout the Middle East (some of whom have been able to quietly spread to Europe and Canada) - they want to exterminate the Jews, they are deeply racist and believe they are the superior race, they want to establish a global empire by conquering and enslaving their enemies

The media will call this “Islamophobic” but actual Muslims will agree

Expand full comment
anomie's avatar

> It’s much safer for Jews under a far right (“Nazi”) leadership because the far right takes the threat from Islamism seriously.

It's funny; about a century ago, there was also a group of Jews in Germany who thought similarly. I wonder what happened to them...

https://en.m.wikipedia.org/wiki/Association_of_German_National_Jews

Expand full comment
Turtle's avatar

Various groups of Jews have done a lot of dumb things politically yes (and continue to do so)

The key difference is that the Nazis were actual Nazis who literally said the Jews are vermin and a plague on society and must be exterminated in the final solution.

In comparison, various “far right” parties in Europe (but the term has lost all meaning, like “racist”) are interested in drastically reducing immigration, being neutral in Russia/Ukraine, weakening the European Union, and deporting Islamists.

In other words they are nationalist as opposed to globalist. They are interested in acting for the good of their country, not vague terms like “equality” or “climate justice” or “anti-racist.”

Now, the actual Nazis were also nationalist, but the similarity ends there. As I stated previously, the current political force with the strongest similarity to historical Nazism is current Islamism.

Expand full comment
Melvin's avatar

> the current political force with the strongest similarity to historical Nazism is current Islamism

Interesting, I'd have said Chinese Communism. What makes you say Islamism?

Expand full comment
Turtle's avatar

They are certainly racist and ethno-nationalist and put people in camps. I feel like the main difference between Xi Jinping and Hitler though is that Xi doesn’t want to conquer the world militarily (just Taiwan.) I also give him begrudging credit for increasing the standard of living markedly among the native Chinese and making it into a significant power on the world stage.

Islamists on the other hand are very clear that if they had the power to do so, they would conquer the world, kill all the Jews, establish a global caliphate and either forcibly convert everyone to Islam or enslave anyone who refused. It’s very reminiscent of Aryan supremacy/Third Reich.

Expand full comment
Melvin's avatar

Did Hitler really want to conquer the world militarily? I thought his real agenda was to conquer everything vaguely German-flavoured (i.e. with a significant German population or having previously had a significant German population), just as the CCP's plan is to conquer everything vaguely China-flavoured (Tibet, Hong Kong, Macau, Taiwan, random fucking islands).

Expand full comment
Melvin's avatar

Were there a lot of Muslims in Weimar Germany?

Expand full comment
anomie's avatar

Marxists, Muslims, same thing. There isn't any shortage of outgroups that hate Jews. Problem is that it turns out their ingroup hated them too.

Expand full comment
ultimaniacy's avatar

Historically illiterate nonsense. The Nazis did not "turn out" to hate Jews; that they did was always common knowledge. The VNJ supported them anyway because they mostly* agreed with the Nazis' anti-Semitism and considered the security of the Jews a worthy sacrifice to better serve what they saw as a superior people. Their leaders were quite open about this:

https://blogs.timesofisrael.com/jews-for-hitler-parshat-vayechi/#:~:text=Naumann%20and%20VnJ%20were%20proud,vote%20in%20the%20general%20election.

>A January 7, 1931, headline in the JTA read, “Patriotic German Jews Must Support Hitlerist Party in Germany’s Interests Even if It is Antisemitic.”

>And when it became difficult to justify the antisemitism endorsed and encouraged by the party, as Jews were losing their jobs and being expelled from universities, the Association still managed to justify it as making the country great.

>We have always held the well-being of the German people and the fatherland, to which we feel inextricably linked, above our own well-being. Thus, we greeted the results of January 1933, even though it has brought hardship for us personally.

>On March 25, 1933, Hitler’s second in command, Hermann Göring, summoned Naumann and charged him with combatting foreign claims of antisemitism. An August 13, 1933, JTA article states that:

>Dr. Max Naumann, leader of the Union of National German Jews, (Verband Nationaldeutscher Juden) an organization enrolling seven thousand Jewish citizens of Germany, declared in an interview that Nazi action against Jews was in many ways justified.

* I say "mostly" because the VNJ leaders seem to have thought that Jews could at least potentially become worthy of equality with Germans in future if they ceased to maintain a separate cultural identity, rather than agreeing with the Nazi view of Jews as *inherently* inferior. But the main point is that they were united in hatred of actually-existing German Jews.

Expand full comment
Rockychug's avatar

Axel Springer is not run by crazy liberals, no. Neither are Mediaset or Vivendi. These are, reportedly, the biggest private media companies in Germany, Italy, and France.

Any policy aimed at weakening the European Union is bad for every European country. The AfD would be catastrophic for the long-term future of Germany.

Expand full comment
Turtle's avatar

No; like the New York Times and the Washington Post they are run by sensible people who do not want to get on the bad side of the globalists who run the world. The global left and the media are the vehicles they use to do this. That’s why Elon is so important - he broke the narrative

Expand full comment
Julian's avatar

>Washington Post they are run by sensible people who do not want to get on the bad side of the globalists who run the world

Are you aware of who owns the Washington Post?

Expand full comment
Turtle's avatar

A very rich guy who tries to stay out of political mud fights?

Expand full comment
Julian's avatar

Jeff Bezos is rich but he doesn't stay out of political mud fights. If Jeff Bezos isn't one of the "globalists who run the world", then I am not sure who is.

Expand full comment
Rockychug's avatar

Musk didn't break the narrative, he reshaped it for his own sake.

What I was saying is that Axel Springer, the biggest media company in Germany and owner of the largest European newspaper, is very much pushing a right-wing narrative. Vincent Bolloré (owner of Vivendi, which owns many French TV channels, newspapers, and radio stations) was the launchpad of Eric Zemmour's political career in France and also actively pushes a right-wing agenda. Mediaset is owned by the Berlusconi family.

Saying that European media are run by 'shadowy billionaires' who give 'zero favorable coverage' to the far right is simply wrong.

Expand full comment
Turtle's avatar

Oh OK! I was more familiar with the Frankfurter Allgemeine Zeitung side of German media. Does Axel Springer give favourable coverage to Alice Weidel and AfD though? Because that’s really the question - if the so called “right wing” media doesn’t cover the leading right wing party positively, it’s not really right wing is it? It’s controlled opposition.

Expand full comment
Rockychug's avatar

Musk, for example, published an op-ed in Die Welt in which he openly showed support for the AfD.

I don't speak German well enough, nor read Bild enough (it has the reputation of being the trashiest outlet in Germany), to assure you with certainty that it covers the AfD positively, but it wouldn't surprise me.

Both are Axel Springer outlets.

Expand full comment
demost_'s avatar

I can't give you a deep dive, but it goes very far. Much farther than other far-right parties in Europe, like the RN in France or Fratelli d'Italia. Unlike those, the AfD has not shown a shift away from Nazi ideology in recent years.

There are some politicians like chairwoman Weidel who keep their distance, but the most influential member of the party is Björn Höcke, and he embraces Nazi ideology and terminology very openly. Like saying that some people should be "ausgeschwitzt" (literally "sweated out", but can also be understood as the word-play "being Auschwitz-ed", which in Germany is unmissable).

EDIT: Actually, I would predict that you will not get a counter-position because it's too clear-cut. Except the weak counter-position that not all members are Nazis, but "only" the most influential ones. If you had asked about RN or Fratelli d'Italia (fascism), I would have expected that you get people arguing for both sides.

Expand full comment
demost_'s avatar

I observe that my prediction was wrong.

Expand full comment
Melvin's avatar

Are there any examples apart from that one (actually kinda funny by German standards) pun?

Expand full comment
Shankar Sivarajan's avatar

Another I saw was "Alice [Weidel] für Deutschland", which sounds like the "Nazi" (and illegal in Germany) slogan "Alles für Deutschland." (Everything for Germany).

Expand full comment
demost_'s avatar

I don't follow it closely, but from sources I trust it's a lot. I only know some highlights like "Lebensraum", "Tat-Elite", "Denkmal der Schande", and "the big problem is that one presents Hitler as absolutely evil". But the point is not a single quote, the point is the consistency and frequency.

Expand full comment
Greg G's avatar

I found the original ACX coverage of the lab leak debate pretty convincing (https://www.astralcodexten.com/p/practically-a-book-review-rootclaim), but now there are a bunch of new claims coming out that Covid was indeed likely a lab leak. I haven't had the energy to really evaluate them. What is people's latest thinking based on all this?

Secondarily, I'm a bit confused as to why people care so much. Is it all the usual blame assignment, or is there a practical takeaway going forward if it was a lab leak?

Expand full comment
John Schilling's avatar

The only new claim I am aware of is the CIA's release of a report saying that it believes with low confidence that the lab leak hypothesis is true. That shouldn't move anyone's needle very much.

And to clarify two points, first, "low confidence lab leak is true" does not mean "high confidence lab leak is false", it means basically they think it's 60/40 lab leak is true and they don't think they can pin it down any farther than that with the crappy evidence available. Second, while the release of this report ten days into the Trump administration is almost certainly politically motivated, the underlying report was almost certainly prepared during the Biden administration.

It shouldn't move the needle very much because we already had, again during the Biden administration, summary reports from six other intelligence agencies. With four low-confidence for natural origin, one low-confidence for lab leak, and one medium-confidence for lab leak. Medium-confidence weighs a lot more heavily than low confidence, so treating these as honest independent estimates the intelligence community was collectively at ~55% lab leak last year and ~65% now.

Which the rest of us should discount somewhat because A: the intelligence community might have been wrong all along and B: the latest release may have been cherrypicked even if it probably wasn't actually fabricated. So our outside-view needle should have moved a single-digit percentage.

As for why it is important, two reasons.

First, the sort of scientific research we are talking about here is rather dangerous, quite possibly does more harm than good, has the potential to do great harm, and AFAIK doesn't have any big success stories to point to. It's basically a giant cue for Jeff Goldblum to pop up saying "You scientists were so preoccupied with whether you could...". But there's a legitimate difference of opinion here, between "we should still do this research but be very careful about how", and just not doing it at all. If it turns out that a holocaust-level catastrophe really did come out of a virology lab, that should move the needle substantially towards the more-harm-than-good side while revealing current safety procedures to be quite inadequate.

Second, during and immediately after the pandemic, an awful lot of respected authority figures in the field were quite confident in their pronouncements that COVID couldn't *possibly* have come from a laboratory accident and anyone who said otherwise was a conspiracy theorist, probably voted for Donald Trump, and should be silenced. They then persuaded the US government to persuade major social media companies to silence any dissenting voices in the matter. I think most people now accept that this was an overreaction, but too many of them think that it was a Noble Lie that prevented the ignorant from stopping valuable scientific research. If it turns out that COVID was in fact caused by a lab leak, that will be one more proven example of the Noble Lie having been not at all noble and just a damned lie.

Expand full comment
LesHapablap's avatar

Politically it becomes much easier to increase lab protections or restrict gain of function research if the public and the elites regard it as possible or likely that it was a lab leak. Which is important because we don't want deadly viruses to wreak havoc.

Expand full comment
Mike Saint-Antoine's avatar

I'd take the CIA's opinion with a grain of salt. Their announcement barely moved the needle on the relevant Manifold market, which is pretty close to 50/50.

https://manifold.markets/IsaacKing/did-covid19-come-from-a-laboratory

Expand full comment
Brett's avatar

It was also a pretty low-confidence assessment with no new information. Basically, the incoming CIA director believes in it and wanted a CIA report out there saying that the CIA thinks it is likely.

Expand full comment
GlacierCow's avatar

This is a great example of how seemingly benign resolution criteria can affect a market. In this case, the criterion is "when there is a definitive answer based on the evidence". But what happens when one side says "I have definitive evidence but I cannot show you this evidence for security classification reasons. You just have to trust me"? What previously was an attempt at an objective criterion ("does the evidence suggest X?") becomes a far more subjective criterion ("how truthful is the CIA being about X?").

Some fun thought problems to think about:

1) How would the market change if you knew that the person who ran the market had a security clearance and access to all the secret information about it?

2) If the CIA yesterday had said they thought the lab leak was *false*, would the market have changed?

3) If the British, French, and German intelligence agencies in the next month also came out and said "we think it was likely lab leak", would the market change?

Expand full comment
Mike Saint-Antoine's avatar

Good point, it's not a market about the actual objective truth, it's a proxy market about the future opinion of Isaac King, and only resolves if he becomes nearly certain (98%) in one direction or the other.

"This market resolves once we have a definitive answer to this question. (i.e. "I've looked at all notable evidence presented by both sides and have upwards of 98% confidence that a certain conclusion is correct, and it doesn't seem likely that any further relevant evidence will be forthcoming any time soon.")"

But personally, I think that Isaac is pretty reasonable and if he said that he was 98% convinced in one direction or the other, I would trust that more than an official statement from the CIA, WHO, US government, Chinese government, UN, etc.

Expand full comment
GlacierCow's avatar

To clarify, I think the interesting point is how statements based explicitly on secret information interact with prediction markets in sometimes unintuitive ways. Manifold markets being swayed by the trustworthiness of the market maker is basically one of the core features of manifold already and is not really anything new.

Expand full comment
Mike Saint-Antoine's avatar

Ah I see, yeah that is interesting.

Expand full comment
Scott Alexander's avatar

Which new claims are you thinking of?

Expand full comment
Hoopdawg's avatar

"CIA favors lab leak theory" is a surprisingly frank way of stating "CIA now considers lab leak theory more politically convenient".

Expand full comment
Xpym's avatar

Both the CIA and FBI assessed years ago that the lab leak was likely, so what's become politically convenient, I'd guess, is drawing attention to this.

Expand full comment
Paul Brinkley's avatar

By the same argument, what we're *really* seeing is "NYTimes now considers headline about CIA favoring lab leak theory more politically convenient".

Expand full comment
anomie's avatar

...Why wouldn't they cover it? They've never been hesitant about covering the positions of the US government, even when they're not supported by evidence. See: WMDs in Iraq.

Expand full comment
Paul Brinkley's avatar

What I'm referring to isn't really a matter of coverage, but of presentation. It's a NYT headline that might mean something as innocuous as "one low-level analyst at the CIA is willing to entertain the question, and a senior analyst chose to include a brief reference to that in a public report". If I cared, obviously, I'd read the article and maybe look deeper, but based on a headline alone, I'm not betting on anything.

Expand full comment
Turtle's avatar

The main concern is that we were lied to - a paper came out in Nature in 2020 (the top scientific journal in the world) insisting that it could not be a lab leak because the genomic sequence of the virus proved it impossible. Like many in the medical field, I believed this and repeated it to my students. People were censored from social media in the following year for endorsing the lab leak “conspiracy.”

Subsequent evidence came out that the scientists who wrote the Nature paper didn’t really believe it themselves but were pressured politically to write it.

So uh, what the fuck is our government doing?? Why is Anthony Fauci being pre-emptively pardoned??

Expand full comment
Timothy M.'s avatar

Fauci is being pardoned for the same reason members of the Jan. 6th committee are - because he's a villain in Trump circles and Biden was afraid Trump would go after him on some made-up reason. (Whether that was an accurate guess is somewhat irrelevant.) It's not because there's some conspiracy to hide.

Expand full comment
sclmlw's avatar

The suppression of alternate hypotheses happened, regardless of which hypothesis we ultimately accept as true. I agree that it would be surprising for the government censors to finally pick the right side. And I'm still not convinced by the natural origins hypothesis.

But the important scientific point is that no matter the origin of the virus, debate about those origins should NEVER have been shut down. True science denial comes from those who reject the scientific process. Proximal Origins, and the way it was used by those who ought to have known better, was bad for science.

Expand full comment
Paul Brinkley's avatar

"But the important scientific point is that no matter the origin of the virus, debate about those origins should NEVER have been shut down."

I think it's important to consider - even if one ultimately discards it afterward - the possibility that one answer can usurp a better answer by FUD. If a question has at least two answers, and the stakes for anyone backing either answer are high enough, it should be easy to imagine one side undermining the other answer by spreading a lot of circumstantial evidence, and that includes probing questions like "well, for the virus to have been natural, there's the question of B, C, and D" or also "well, for the virus to have been artificial, one would have to address E, F, and G". It's like a company's share price when we see a lot of questions being raised about its business model, its inputs, its leadership. The mere fact that the questions come up can be enough to tank the price, even if those questions are addressed with utter panache later.

I think that was driving a lot of the suppression. Enough people were persuaded that there was a major crisis, and that the solution required everyone to be aligned, to form an operational critical mass driving for suppression, with enough other people willing to tolerate it.

So, to get that scientific debate - I'd like that, too - one obstacle is getting everyone past the unconditioned "shut down debate" reaction whenever they hear people asking questions.

The next obstacle has to do with everyone's scientific understanding being at different levels. Some people have access to inner workings of the WHO; some have read enough books to talk about R0 and R1 without looking them up; some are laypeople who understand the germ theory of disease and little beyond that. This impacts the first obstacle, because someone can easily raise questions that sound legitimate to the germ theory people but is obviously nonsense to the R0 people, and so on.

This means someone can ask a dumb question about R0 and be called out by someone who understands R0, and the callout will sound to the germ theory people like suppression of debate. (Because just as easily, someone can ask a smart question about R0 and be "debunked" by someone using bad reasoning, precisely to suppress that question.) Laypeople can partially tell bad arguments from good ones using heuristics, but arguers might be good at hiding fraud. So might partisans pretending to be fellow laypeople, calling out good arguments as BS, uplifting bad ones as legit. There are a lot of vectors for poisoning debate! And the result is roughly what I'd expect, given how much free time average people probably have for assessing arguments.

Expand full comment
sclmlw's avatar

Sure, but what happened during COVID was that if you asked the wrong question you were silenced. There's a difference between when I have a knowledgeable conversation with other experts in my field and someone bumbles into it from the outside, versus people who were already spreading false information dictating to actual scientists what we were allowed to debate.

Open scientific discussions aren't perfect, but they're worlds better than what happened during the pandemic. And given the conversation that happened after the government censors were revealed and ousted from the discussion, it looks like open debate provided more clarity than censorship - even for the side that was 'favored' by the censors.

I'm not convinced by the argument that censorship can help shape the debate, reduce uncertainty, and increase clarity. The evidence all seems to point the opposite direction. I know plenty of 'normies' who won't consider the natural origins hypothesis expressly BECAUSE it was the subject of government censorship.

Expand full comment
Paul Brinkley's avatar

"what happened during COVID was that if you asked the wrong question you were silenced."

Moreover, I *think* that experts who raised more than questions - even alternative explanations - were silenced (cf. Kore, Malone, et al. on the treatment debate). But I didn't follow this exceptionally closely (mostly since I'm not at all on the hook to decide policy for lots of other people). And if I didn't, then it's likewise reasonable that a lot of others didn't, either, esp. with the tribal signals being as strong as they were.

That's the point that's bothering me the most. We relatively average people - even if we have above average intelligence - aren't likely to notice an open scientific discussion before it's been co-opted by motivated advocates, because we're not likely to have the spare time to learn to tell the difference between advocates and honest truthseekers. I don't think the answer is to just give up, and I *certainly* don't begrudge anyone who wants to try anyway, and I *hope* one of them happens on a way to make telling them apart easier for the rest of us. I'm just saying that this is the obstacle before us.

"I'm not convinced by the argument that censorship can help shape the debate, reduce uncertainty, and increase clarity."

I think censorship *can* do this in certain situations, and therefore it's worth being mindful of such. For one thing, if censorship worked, we wouldn't notice; it'd just look like a strong consensus in one direction for a long time. And since censorship works primarily by making information secret, trying to assess how successful it is is going to be a fraught experiment right out of the gate. It'd be a bit like trying to determine whether a given drug is effective against a certain bacteria strain, when all of those bacteria are aware of the experiment and able to hide their afflicted members from us (or fake a lot more deaths to throw us off or whatever).

Which is not to say that assessing censorship is hopeless; we can learn a little by examining the incentives at play, and knowing as much as we can about the information channels. What is the command hierarchy at Wuhan? At WHO? At NIAID? How are projects approved? What regulations are enforced, and to what extent?

Expand full comment
sclmlw's avatar

I'm not an expert in finance, computers, AI, or a number of other topics. I'd rather have the mess of wading through these topics as a non-expert than trouble with censors. Plus, institutions spring up to help me navigate the mess. Those institutions have to build and keep trust, but then it's my job to judge them. I don't think you can pawn that judgement off to a central authority without creating perverse incentives, from a Public Choice Theory perspective.

My experience with censorship, as a PhD immunologist, was that it got in the way of actual scientific discussion and lowered the quality of the discourse. Whether the censors were ignorant outsiders, or scientists motivated by extra-scientific concerns, or not questioning their priors, doesn't matter to me. What matters to me was that, while I saw garbage hypotheses being tossed about by randos, it wasn't nearly as destructive to expert debate as the garbage being asserted by the untested establishment. Call it what you want, but if censors are involved, it's not science anymore.

Also, it doesn't seem like censorship has ever been a tight enough control to prevent debate outright; it just suppresses and slows it down. More like crowd control than a light switch. My personal reading of the history of censorship is that it's always advocated for with high-minded ideals about civil discourse or the need for unity during public emergencies and whatnot, but in reality it's wielded as a weapon against truth and discourse. If you have a counterexample, I'm open to hearing it, but 'good' censorship seems to be the exception, not the rule, historically.

I wouldn't trust Biden or Trump to censor responsibly. Why would I support a system that gives them both that power ... and all their friends a reason to throw money their way so they too can determine what people get to say?

Expand full comment
Turtle's avatar

Oh also, I remember at the time a senior doctor at the hospital I work at telling me with quiet authority that China fully disclosed to the WHO that the virus could spread human to human via droplets in early Jan 2020, and the WHO COVERED IT UP.

At the time I actually didn’t believe this because I’d been conditioned by Western media that China (not WHO) was the bad guy.

Now I remember this when someone tries to tell me Trump withdrawing from the WHO is a bad idea. Nah. The dude knows what he’s doing.

Expand full comment
DJ's avatar

I can't speak to the WHO part, but in Bob Woodward's book it came out that Trump himself knew this in early February 2020.

"It goes through the air," Trump said in a recording of a Feb. 7 interview with Woodward. "That's always tougher than the touch. You don't have to touch things. Right? But the air, you just breathe the air and that's how it's passed."

https://www.reuters.com/article/world/asia-pacific/trump-said-he-knew-virus-was-deadly-but-still-played-down-crisis-woodward-book-idUSKBN2602WF/

Expand full comment
Sol Hando's avatar

I mean, there are different conclusions depending on whether or not it was a lab leak. On a forward-looking level, it would suggest our current lab research protections are too weak, and we should invest in either increasing safety procedures or reducing the number of labs. On a geopolitical level, it is the difference between the LA fires being caused by nature vs. caused by a negligent public official. Gross negligence warrants some level of retribution or restitution.

Expand full comment
Paul Brinkley's avatar

At least one argument is that current lab protections were bypassed by Fauci himself, in order to get around a regulation against gain-of-function research within the US. If that turns out to be true, then the remedy might involve harsh punishment in addition to more resources put toward watching what labs are doing, and maybe restructuring authorities within NIAID to address potential conflict of interest.

Expand full comment
MartinW's avatar

If the final conclusion is that it wasn't a lab leak but plausibly could have been, or that there is a less than 50% but more than 0.05% chance that it was a lab leak, then that still means our current lab research protections are too weak.

Based on what we already know today, I don't think that "there is not and never was a realistic risk of a gain-of-function experiment escaping its lab and causing a pandemic" is still on the list of possible conclusions.

Even so it's worth continuing to dive into the details of exactly what happened at the Wuhan lab, not only in terms of who forgot to wash their hands after handling which test tube, but also in terms of who used what bureaucratic trick to allocate funds from which subsidy budget to which not-yet-officially-approved experiment. Not so much to punish the people responsible, although that isn't a *completely* unreasonable thing to want to do, but to understand exactly which changes are needed to prevent it happening again.

Expand full comment
sclmlw's avatar

While I absolutely agree with this point (and was planning to make it myself, before I saw you beat me to it!) I still think it's important to understand the origins of SARS2 because of what may lie on the flip side of the coin.

If the pandemic had a natural origin, we need to understand both the source of the virus (to limit human exposure to anything lurking in the corners of that Pandora's box), and the mechanism the virus used to become so good at human transmission so fast. If we can't shut down THAT mechanism, we risk something really bad in the future. Think CFR >5%, which is historically very plausible.

Personally, I strongly favor lab leak hypothesis. But if there's even a 10% probability that somewhere in southeast Asia there's an active mechanism for churning out new, human-adapted infectious diseases, we need to understand the details of that process NOW. And I think there's a >10% probability for natural origins.

Expand full comment
John Labelle's avatar

I understand that this is a comment that must be made here a lot, but how do I actually 'git gud' at ML research? I have been working on my own projects for a while now, and they are small enough to run pretty easily on my beefy single machine. But my social circle has no people working in the space, and all of the people talented enough to do this work are too busy doing it for extremely high salaries.

I am reasonably good at piecing together the math from YouTube videos and the papers (the notation is a bit rough), but making slow progress by myself has been frustrating.

Expand full comment
Michael's avatar

Usually people do a PhD for this; it is an apprenticeship in research. If you can't do that, consider offering to do some work with a university ML lab to see if you're a fit.

Expand full comment
John Labelle's avatar

That does make sense. Thanks for your advice.

Expand full comment
Chug bingus's avatar

I'm also new to ML research, but here are my thoughts nevertheless.

To get good at ML research you have to spend a lot of time doing research or participating in research-adjacent projects. For example:

- unsloth. It's like huggingface/transformers but faster. They say they're hiring (look in the wiki). It generally takes them a long time to accept PRs.

- tinygrad. George Hotz's startup that works on creating an ML framework. This is related to ML research because a lot of the top AI labs have their own ML frameworks, and a lot of advancements happen at the framework level (think FlashAttention). In active development.

- Huggingface. They have a lot of projects: transformers/trl/smolagents. They also offer job opportunities. I think contributing to their repos will help your chances.

There are a lot more GitHub projects that are research-related, but the ones above also offer full-time work to contributors.

I've also heard people recommend joining Discord channels, e.g. GPU MODE and Yannic Kilcher. Both of them have good YouTube channels.

Expand full comment
Rob's avatar

Curious about what pitching yourself as a ghost writer for a researcher means. Does it mean that Scott writes the article, based on more technical input from the researcher? Whose name would appear as the author, the researcher's? It's good to be open but even so is there scope for misleading the readers about authorship?

Expand full comment
Scott Alexander's avatar

The famous researcher tells me what he wants to say and I write it up in nice professional-sounding sentences and paragraphs. I think many famous people do this.

Expand full comment
Peak oil's tail's avatar

If you're pitching to mainstream news outlets, you might be better off with a shared byline than pure ghostwriting, because outside of AI research you could be a better-known name than the OpenAI researcher.

Expand full comment
David Kasten's avatar

Many famous people do this! (This is why the "acknowledgements" section matters a lot in many famous people's work products, it helps you understand who actually did the work.)

Expand full comment
Paul Botts's avatar

And it's often pretty easy to tell when the famous person (*) declined to do it. I've started reading a couple of such examples that became too much of a slog to even finish; and the Economist has a review up now which in so many words says that about the current Pope's just-published memoirs.

There are such people who also happen to be terrific writers of their own thoughts/opinions/recollections/etc, but they stand out precisely because that is so rare.

(* Meaning famous in fields well removed from literature or journalism obviously.)

Expand full comment
Simon Rubinstein-Salzedo's avatar

I'm the director of Kaleidoscope Circles (https://www.kalcircles.com/), an institute for teaching college-level classes to high-school students. I'm seeking an instructor to teach advanced classes on AI/AI safety, computer science, or physics. This is an opportunity to design what you think are the optimal curricula for students who aspire to be professionals in your field and teach students who are passionate about what you're passionate about. Ideal candidates will have a PhD and be able to teach in person in the Bay Area once a week, but if you don't satisfy those criteria and are still interested, apply anyway. See https://www.kalcircles.com/hiring to see the job description and application form. Applications are due February 1st.

Kaleidoscope is an extension of Euler Circle, which I founded for the same purpose in mathematics. You can see what I have built for Euler Circle at https://eulercircle.com/.

I'm happy to answer any questions you might have, either here or by email or in a Zoom meeting.

Expand full comment
apxhard's avatar

Prediction: it will eventually be discovered that focusing an LLM on foundational philosophical and theological texts early on accelerates its capacity to train, in the same way that DeepMind focused more on mathematical and logical reasoning, and outperformed.

Reasoning: those philosophical and theological texts approximate the deep structure of causality and human experience, and are therefore already pre-compressed, compared to the raw data on the internet.

Expand full comment
Paul Brinkley's avatar

Multiple problems I see with this.

Theological texts are frequently written in another language, and the training copy will have passed through a lossy translation filter. Philosophical texts are more recent and probably less filtered, but I'm not sure training an LLM on originally-English texts (or originally-German or originally-Mandarin or whatever) will be acceptable as "complete".

Human language itself is an extremely lossy lens through which to observe meaning. (With all this implies for current LLMs, incidentally.) Very little philosophy is written in formal logic; theology, even less so.

Foundational philosophical texts aren't created with the same rigor as foundational physics or chemistry texts. They aren't tested the same way. No one's running several billion individuals through a marshmallow test the same way Faraday would run several billion electrons through his inventions. And again, with theological texts, rigor is even less present.

After millennia, metaphysicists still have no fewer than three traditional frameworks for moral philosophy, as well as two for time (endurantism and perdurantism), more than two for causality (I forget the names), and at least two for whether or not there exist objects with properties (metaphysical realism; trope theory; anti-realism; solipsism; et al.). Each theory produces conflicting conclusions.

Expand full comment
FLWAB's avatar

>No one's running several billion individuals through a marshmallow test the same way Faraday would run several billion electrons through his inventions.

Point of order, the marshmallow test is psychology, not philosophy. You don't do experiments to prove philosophies, any more than you do experiments to prove mathematics.

Expand full comment
Paul Brinkley's avatar

Dammit. I passed on that marshmallow for nothing.

Expand full comment
Caleb Winston's avatar

Just mathematically, this does not pass the smell test. Training weights is essentially multiplication, and multiplication is commutative. It does not matter which input comes first.

Expand full comment
theahura's avatar

A slightly different intuition pump may help.

Gradient descent through back prop is like a ball falling down a mountain. If the ball starts at the very top, the first bit of data it sees may push it to the right of the mountain or to the left. That one bit of data will completely change the outcome of training.

(More generally, learning rate schedules and training schedules combine to try and chart the optimal gradient path and avoid local minima. So the order absolutely matters, the hard thing is figuring out what the right order is, better than random anyway)
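
To make the order-dependence concrete, here is a minimal sketch; the model, numbers, and learning rate are made up for illustration. Two SGD passes over the same two samples, taken in opposite orders, end at different weights, because each gradient depends on the weight produced by the previous update.

```python
# Minimal sketch: sequential SGD updates do not commute, even for a
# one-parameter linear model with squared-error loss. All numbers here
# are illustrative assumptions, not taken from any real training run.

lr = 0.5                               # deliberately large, to make the gap obvious
samples = [(1.0, 2.0), (3.0, 1.0)]     # (x, y) pairs

def sgd_pass(w, data):
    """One SGD pass over `data`, one sample at a time."""
    for x, y in data:
        grad = (w * x - y) * x         # d/dw of 0.5 * (w*x - y)**2
        w -= lr * grad
    return w

print(sgd_pass(0.0, samples))                   # order A then B: -2.0
print(sgd_pass(0.0, list(reversed(samples))))   # order B then A:  1.75
```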

Expand full comment
captainclam's avatar

Maybe the OP's contention is that pre-pretraining on these texts would initialize the weights in a region of parameter space that is closer to generalizable solutions?

Expand full comment
Adrian's avatar

> mathematically […] essentially

These two words do not get along well. If you break some process down to its "essentials", you can forget about relying on mathematical rules. Training an LLM is more than just "multiplication" – not to mention that it's mostly matrix multiplications, and those _aren't_ commutative.

Furthermore, LLMs are trained in many small steps. Each step uses a mini-batch of the training data for the forward pass, computes the loss function, performs backpropagation, and updates the parameters; a full pass over the data is called an "epoch". Rinse and repeat. apxhard's idea is to use a different kind of training data for the earlier part of training.
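
For concreteness, a schematic of that loop; the corpus is a stand-in and `train_step` is a hypothetical placeholder, not a real API:

```python
# Schematic of the step/epoch structure described above. The corpus is a
# stand-in for tokenized text, and train_step is a hypothetical placeholder.

import random

corpus = list(range(1000))     # stand-in for tokenized training data
batch_size = 32

def train_step(batch):
    # forward pass -> loss -> backpropagation -> parameter update goes here
    pass

for epoch in range(3):                        # one epoch = one full pass over the data
    random.shuffle(corpus)                    # the usual i.i.d. shuffle; ordering schemes
                                              # (like apxhard's) would replace this line
    for i in range(0, len(corpus), batch_size):
        train_step(corpus[i:i + batch_size])  # one optimizer step per mini-batch
```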

Expand full comment
gorst's avatar

That's a cool idea. More generally speaking, I think the order in which the texts are used to train the LLM matters. Maybe you get better results if you train them with philosophical texts first, as you say. Or maybe it's better to start with children's books. Maybe it's better to start with textbooks on computer science and algorithms. Maybe it's better to start with short texts, e.g. tweets.

update:

I did some quick googling. There seems to be some research in this direction. The relevant keywords are "Curriculum Learning" and "Corpus Ordering". When I search for "Curriculum Learning LLM", I get 38,500 results on Google Scholar. One paper I looked at suggests that it is generally most efficient to go from simple to complex.
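
As a toy illustration of what corpus ordering means in practice (the texts and the complexity proxy, word count, are assumptions for the example, not a claim about what any real lab does):

```python
# Toy sketch of curriculum ordering: sort the corpus by a crude complexity
# proxy before training. The proxy (word count) and the texts are invented
# for illustration.

corpus = [
    "Backpropagation applies the chain rule through the whole network.",
    "The cat sat.",
    "Gradient descent iteratively reduces a loss function.",
    "Dogs bark.",
]

curriculum = sorted(corpus, key=lambda text: len(text.split()))

for step, text in enumerate(curriculum):
    print(step, text)   # feed the trainer in this order: simple -> complex
```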

Expand full comment
theahura's avatar

AI researcher here (or I was in a past life, anyway) -- order definitely matters. For a visual example you can look at some of the Nvidia GAN work where they train models to first output low-res images and successively increase the resolution. Active learning is also relevant here (though a bit adjacent).

The question is always _how much_ does it matter compared to random.

Expand full comment
javiero's avatar

I'm now a paying subscriber (I rotate my favorite Substacks between paying and non-paying, so never more than four or five paying Subs at a time).

What's the first paywalled post I should read?

Expand full comment
Vermillion's avatar

https://www.astralcodexten.com/p/the-onion-knight one of my favs but I always liked the silly stories

Expand full comment
javiero's avatar

Thanks!

Expand full comment
Anon679's avatar

It depends, what are your favourite ACX/SSC posts?

Expand full comment
javiero's avatar

Not very fond of the philosophical ones. Mildly interested in AI, but probably less than the average commenter. Love the "let's review some papers and check some data" ones.

Expand full comment
ascend's avatar

I often find, in discussions on ACX, an annoying pattern. I write a comment or reply, someone replies, then I want to reply back but I need time to think about how to best express my point. And I give it a day, or two, and I still haven't quite got the ideal words to make my best reply. I leave it another day or two, and now I may almost feel I have the perfect reply ready, but by this point I know hardly anyone will still be reading the thread and I often lose motivation and give up. Even though the OP will see it, there's a low chance they'll reply after such a gap, and an almost zero chance anyone else will. And it just doesn't feel worth the effort. But then I end up annoyed that I haven't made my point at all, and potentially feel bad for ignoring the OP.

Anyone have a way of dealing with this? I guess for most of the internet, the answer is "this level of thought isn't what the internet's designed for" but I think ACX has much higher standards. I think I'm unusually perfectionistic or obsessive about online comments (well, sometimes) but I can't be the only one here, right? How do other people handle this?

Expand full comment
thefance's avatar

A) I've had debates here that go on for weeks. Despite day-long delays between responses.

B) finding le mot juste often comes down to precomputing. When you think about a topic for years, and from several different angles, it becomes easier to find the right rhetorical framing on the spot.

C) When I don't know how to make a point well, I let the comment reflect that. E.g. "idk how to express this exactly, but" or "How do I explain this..." etc.

Expand full comment
Mo Nastri's avatar

Try workshopping with Claude Sonnet (or something) to lower the activation energy needed to get past the quality threshold for hitting "reply"?

Expand full comment
Yug Gnirob's avatar

You get better in the doing. The more often you post the more you'll be happy with the posts.

Worst case, give yourself a deadline and post whatever you've got at that point. Two days is a long time on an Internet forum, not many topics are worth that much investment.

Expand full comment
TGGP's avatar

I haven't had problems continuing arguments over weeks with replies days apart. Perhaps I'm just more prone to interacting with disagreeable people.

Expand full comment
sclmlw's avatar

Is there a reason you have to work it all out in your head, as opposed to submitting it to the forum for discussion and working it out together?

Personally, I've found one of the benefits of the ACX community comes from working out ideas with a mostly good-faith community. If I'm wrong, that's fine. If I'm right, I get there faster. If I disagree with the community, at least I know the range of opinions and the boundaries of the disagreement.

Perhaps it would matter more to me if I cared about persuading the rest of the forum, but if slow replies are already taking that off the table for you, what's to lose?

Example: I was pretty sure moving Mars close to Earth was a bad idea and wildly impractical. I proposed it to ACX, and they pointed out more reasons than I had thought of for why it was worse than I thought. (I.e. a close orbit would actually make the trip from Earth longer, most of the time.)

If your problem is diminished engagement, consider what it costs you to work everything out yourself as opposed to engaging the ACX crowd.

Expand full comment
Brandon Fishback's avatar

Funny thing is that if you give an imperfect reply, you’re more likely to get a response than the perfectly crafted one.

One problem with internet discussions is that people can just leave the conversation any time they want, for any reason. So if someone makes a good point against your argument, you can just stop responding. If you're many layers deep into the argument, no one besides the other guy is reading it anyway. Of course, it doesn't necessarily mean the other guy "won" the argument. Maybe they think the argument was so stupid that they don't think it's worth continuing the discussion, but it does give people plausible deniability to end the discussion without conceding.

I have this idea of an Internet forum where people commit to the debate until one concedes or both agree there isn’t much more to say. If you stop replying, then it’s considered a concession.

Expand full comment
thefance's avatar

There's two issues with this. (You and I have discussed this before, btw.)

A) You can't force someone to argue in good faith. Sometimes, the other guy will just troll/evade/gaslight and there's nothing you can really do. That's why you have to pick your battles with people who are willing to actually engage. (This is both a blessing and a curse. It's a curse because nobody is forced to listen to your garbage. But it's also a blessing, because it means you're not forced to listen to other people's garbage.)

B) Even when the other guy is arguing in good faith, it takes a while for someone to update their views. Or to paraphrase Yudkowsky, changing your mind is hard. Maybe even Herculean. There's a number of topics that I had strong opinions about, yet had my mind changed later. When I do change my mind in these cases, it's not an instant shift. Instead, it feels like a paradox, and I have to take a long time and a lot of energy to unscramble the reflective disequilibrium. Or as Yudkowsky calls it, "go through a crisis of faith". That's why I've been stressing to you that it's better to simply sow seeds of doubt, than to press for Total & Immediate Victory.

Expand full comment
Brandon Fishback's avatar

First off, I think it would be fine to say that someone is not arguing in good faith and then leave the conversation. Anyone reading can decide for themselves whether that’s true.

And while I do agree that it takes a while to change your mind, it’s easier to avoid changing your mind if you can exit the moment it gets uncomfortable. When someone is pressing you on a point where you don’t have a good answer, then you are far more likely to question that belief.

Expand full comment
thefance's avatar

> When someone is pressing you on a point where you don’t have a good answer, then you are far more likely to question that belief.

This might seem reasonable in theory, but this hasn't been my experience in practice. More often, the winning party is simply a cleverer/more-articulate debater, or "rounds off the topic to the nearest thing they understand". But the winning party doesn't really address the losing party's True Objections, and they get too hyped up on the soundness of their own logic to see that it subtly misses the point. And then the losing party has trouble expressing themselves clearly enough to point this out (which relates to what Ascend was complaining about), and so the losing party decides to simply bow out, but without really changing their mind internally.

Expand full comment
Brandon Fishback's avatar

Even if all you got out of it was that your ancillary argument was bad, that’s still raising the standards of debate. Maybe that person stops making that particular argument and then someone else later gets to the true substance. Also, there’s nothing stopping them from pointing out that the other guy isn’t getting to the heart of the matter. If they aren’t understanding that, then maybe let someone else take up the mantle.

By far the most common “argument” you see is someone trying to do a smug gotcha and then leave the conversation. Imagine if there was an expectation that they were supposed to defend their stupid simplistic catch phrases.

Expand full comment
thefance's avatar

From my scant interactions with you, I kinda get the impression that you spend a bunch of time on Substack arguing with ethics bloggers who try to dunk on you with smug one-liners. Is this remotely accurate? Assuming this is the case, I think I see why you'd be concerned about this. (Also, my own conversations lately tend to skew a little differently, I think, precisely because I try to ignore the types of people who commonly employ Argumentum Ad Mic-Drop.) But I still think the "first to quit, loses" rule is mostly a lost cause. Here's what's going on in my brain rn.

----

> Imagine if there was an expectation that they were supposed to defend their stupid simplistic catch phrases.

When I imagine this, the debate stays unproductive. Because someone who's willing to do "dumb gotchas" is already arguing in bad faith (or at best, not listening very carefully). A rule about "last comment wins" still rewards rhetorical defensibility rather than Truth. I.e. it's just gonna result in a string of gotchas. The rule is just a band-aid that doesn't fix the underlying feedback mechanism.

Consider LW's "arguments as soldiers" metaphor. In Twitter's thunderdome, the most successful arguments aren't those that are *correct*, it's those that are most easily defensible. That's why the Motte & Bailey phenomenon exists. An adversarial, bad faith environment like that selects for defensibility.

Something that Thomas Sowell complains about all the time, is how the Intelligensia in the ivory tower has broken feedback loops. When you're a STEMlord in the hard-sciences, settling debates is easy because you just run the experiment and check the results. When you're a macro-economist or social theorist, you settle debates not by experiment, but by rhetorical persuasion. I.e. the ideas that win are the most rhetorically defensible. Because the victor is decided not via reality, but via rhetoric.

I've considered this problem in the past. And every time, the problem boils down to one of good-faith. I.e. whether your goal is to actually find the truth. Good-faith is the only thing that might reliably overcome the screwy feedback mechanism. Everything else is vulnerable to Goodhart's Law.

> Also, there’s nothing stopping them from pointing out that the other guy isn’t getting to the heart of the matter.

In theory, yes. In practice, I think what's more likely is (as Throwaway1234 says) "long repetitive branches where two people just keep rehearsing the same arguments in little circles, spiced with the occasional ad hominem". Recognizing that you and your debate opponent are talking past each other requires a certain level of precision and composure. What I see more commonly is each participant doing the "linear inference" thing I talked about at the end of my Game Theory essay. To show a central example:

A) If x is murder, then x is evil

B) abortion is murder

C) therefore, abortion is evil

X) if x is murder, then x is evil

Y) abortion isn't evil

Z) therefore, abortion isn't murder

These two positions are often enthymematic. And they're often not so clearly juxtaposed in this manner. So what this looks like in practice is just two randos screaming "YOU'RE A MURDERER!" "NUH UH!" "YUH HUH" at each other in a loop, for eternity. But with various layers of indirection that obfuscate what's really going on. Because each party is thinking "I'm logically correct. Therefore, the other guy *must* be incorrect". A rule that says "the first person to quit commenting loses the debate" doesn't fix this.

Expand full comment
Yug Gnirob's avatar

There was a guy on one of these forum posts who was using an LLM for all their responses, contradicting themselves left and right just to get the last word in. That should not be rewarded.

Expand full comment
Throwaway1234's avatar

> If you stop replying, then it’s considered a concession.

Do you enjoy long repetitive branches where two people just keep rehearsing the same arguments in little circles, spiced with the occasional ad hominem? 'cos that's how you get that.

The purpose of a public debate isn't to get your interlocutor to concede. It is to present your case, to the best of your ability, for current and future lurkers. Once you have accomplished this, there is no need to continue further; walking away is fine. Your job is done. No-one's going to read the huge subthread of "someone's wrong on the internet" chicken, so it really doesn't matter who blinks first.

ascend mentions two annoyances: "that I haven't made my point at all" and "feel bad for ignoring the OP".

The first one is legitimate: if there is something you need to get off your chest, get it off your chest.

The second one, not so much. If you've both said everything you can and the debate is only still going because both of you feel it's vaguely rude to just walk away even though neither has anything else to add... it's like those telephone conversations that have come to a natural end but neither party wants to be the first to hang up.

It's OK to be the first to hang up. The other side will likely be relieved more than upset.

Expand full comment
Paul Brinkley's avatar

"The purpose of a public debate isn't to get your interlocutor to concede. It is to present your case, to the best of your ability, for current and future lurkers."

This is how I like to approach public debates, too; however, it's a reason to dislike the specific ACX format. I get the strong feeling that debates on ACX aren't very easy to rediscover with a text search. And if someone brings up a topic on OT 361, it's not likely to incorporate whatever was mentioned about that topic on OT 247. Let alone wherever else that topic was discussed (LW, DSL, OvercomingBias, Reddit, comment section under some article on The Atlantic, ...).

Someone could get really famous by solving that problem.

Expand full comment
Brandon Fishback's avatar

Right. Best case scenario, a third party stumbles into a good debate instead of being able to look for one. There should be a database of good arguments. With current AI, I don’t even think it would be a difficult thing to do.

Expand full comment
Paul Brinkley's avatar

A while back on DataSecretsLox, I suggested the idea of "argument maps". An RDB could represent it, but visually, I thought of arguments as nodes in an acyclic graph (well, hopefully acyclic), so for instance, if you thought of this really good point to make about the abortion issue, you could look it up in the abortion graph and hey, there it is, and there's these counterarguments (and counters to counters and so on). Or it's not there and hey, you could add it and enrich the graph (not likely for common issues!).

One big catch is that it'd have to recognize semantics, since anyone is likely to come up with their own phrasing for, say, the Violinist Argument, and the map would have to capture that.

There are other catches. Overall, the idea still feels very pie in the sky to me, but LLMs have compelled me to revisit that a little.
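A toy sketch of what that could look like (all names and the matching threshold here are invented for illustration; a real system would compare embeddings or ask an LLM, not do the crude string fuzzing below):

from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class Claim:
    text: str
    rebuttals: list = field(default_factory=list)  # Claims that counter this one

class ArgumentMap:
    def __init__(self):
        self.claims = []

    def _similarity(self, a, b):
        # Crude stand-in for semantic matching.
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def find(self, text, threshold=0.6):
        # Fuzzy lookup, so a rephrased Violinist Argument ideally resolves
        # to the node already in the graph rather than to a duplicate.
        best = max(self.claims, key=lambda c: self._similarity(c.text, text), default=None)
        if best is not None and self._similarity(best.text, text) >= threshold:
            return best
        return None

    def add(self, text, rebuts=None):
        # Reuse an existing node when the claim is already mapped;
        # otherwise enrich the graph with a new one.
        claim = self.find(text) or Claim(text)
        if claim not in self.claims:
            self.claims.append(claim)
        if rebuts is not None:
            rebuts.rebuttals.append(claim)
        return claim

"Look it up in the abortion graph" is then a find() call, and "add it and enrich the graph" is add(text, rebuts=existing_node). Keeping the graph acyclic, and doing the semantic matching properly, remain the hard parts.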

Expand full comment
Brandon Fishback's avatar

I’m pretty sure LLMs are sophisticated enough to tell when something looks like the Violinist argument. But it doesn’t have to be that sophisticated. You could have a program that searches for comments mentioning abortion.

Expand full comment
Brandon Fishback's avatar

I specifically said one end goal is that both sides agree that the argument isn’t going anywhere. I’ve definitely spent a lot of time in an argument, wading through the misunderstandings, and then, just when it finally starts getting somewhere, the other person stops replying. The point is not to get a winner. The point is to get a good discussion, and you aren’t going to do that when they just reply once or twice, which is what is most common. And many times when there is a good back and forth, someone will just quit right after the other guy made a strong argument. Basically you need something to make people commit to the debate, even when it’s uncomfortable. And maybe if one guy keeps going and won’t stop, you could say something about it not going anywhere and then it ends, or you could have a referee or something. Basically like an in-person debate, but because it’s asynchronous, there is less focus on gotchas and more on measured responses.

And no, I’m not saying this is what every forum should be. It’s obviously too much to ask that every comment is a commitment to a lengthy debate. I’m saying there should be a forum for someone looking for that kind of deep debate.

Expand full comment
Eremolalos's avatar

What I do is write the reply quickly, right after reading the comment I want to reply to. Have learned that the difference between my thought expressed rapidly and my thought expressed carefully isn't as big as I'd imagined. And sometimes I come back and edit a couple times, though not extensively -- just fixing really awkward phrases and run-on sentences, that sort of thing.

Expand full comment
Viliam's avatar

> I think ACX has much higher standards.

ACX may have higher aspirations, but the user interface is provided by Substack, and there is little we can do about it.

I know the feeling of urgency, and I hate it. When I take a vacation and stay away from the internet, I return to find interesting debates, and feel like no one will notice if I reply now that they are a few weeks old.

Like Nancy said, if it is important, start a new thread. Add a link to the original comment.

Expand full comment
MoltenOak's avatar

I think you can write the reply sooner, accepting that the words may not be 100% perfect, but rather gesture towards what you mean. This seems perfectly valid for the comments section. If you want to make the point properly and in full, I'd suggest writing a post instead/in addition. Presumably, this won't be needed for most of the comments you want to write.

Also, I think it's tough to say how many people will end up reading a given post or an associated comment thread over time. I have definitely read the comments on some very old posts if I found the topic interesting, and I regularly see people linking back to a post a decade old. So, I suggest letting go of the thought that if no one will read it immediately, basically no one ever will. Again, if it's very important to you that a few people read it, I suggest making a post.

Expand full comment
re qu iredna mee e's avatar

Sometimes I use the text box to word-vomit all the things I want to say first. And then reorganize and string the sentences together in a way that makes sense.

Added benefit: having a few half-baked ideas already typed gives me the motivation to just spend a few more minutes finishing the comment rather than feel bad about giving up on something I already started.

Also, don't ever let perfect be the enemy of good. There's always time to re-read and obsessively edit.

Expand full comment
TGGP's avatar

For longer comments I tend to write up my thoughts in a text-editor, and only paste it in the box when I'm finished.

Expand full comment
Jeffrey Soreff's avatar

I concur. Substack can be flaky, text editors are far less risky for anything long.

Expand full comment
Nancy Lebovitz's avatar

Start a new thread making your point as a reply to a current post.

Expand full comment
Scytale's avatar

I have a problem with this Aporia article:

https://www.aporiamagazine.com/p/yes-youre-going-to-be-replaced

Its thesis is that conservatives should be worried by the replacement of knowledge workers by AI. That's completely illogical from the political point of view: due to Moravec's Paradox (which will hold, because gathering data from computers - and that probably includes data from employee tracking systems - will always be much easier than gathering it from the physical world), the triumph of AI over white collar workers would be the triumph of conservatives' base over liberals' base: uneducated over educated, men over women (who does all the office work?), and capital over labor. It's natural for the right not only not to lament over that, but to rejoice.

Expand full comment
Greg G's avatar

I skimmed the article. It seems like it's a general article about AI that only happens to be framed for conservatives because it's in a conservative-oriented publication/substack. It's talking about AI-driven changes that will happen to society as a whole.

Perhaps white collar workers will be more affected, at least at first, but personally I doubt there will be much distinction for long. Given economic growth and the Baumol effect, relative costs for physical work will rise dramatically. Society will find it very lucrative to use robotics and manufacturing to automate physical work as well. Just asserting that Moravec's Paradox will hold sounds like... cope.

So I think you're missing the forest for the trees. Conservatives may gain a slight temporary advantage from these changes, but the overall point is that they're coming for everyone.

Expand full comment
Scytale's avatar

"I skimmed the article. It seems like it's a general article about AI that only happens to be framed for conservatives because it's in a conservative-oriented publication/substack." - Nah, it refers Burkean values and attacks the right for being pro-AI while the left aligns with anti-AI pro-regulation sentiment. The thing the laws of politics as status competition guarantees that it will stay aligned that way, because AI automation elevates capital over labor, men over women and uneducated over educated. Not to mention that it is energy-intensive and so is irreconcilable with green politics.

"Given economic growth and the Baumol effect" - What Baumol effect? Blue collar wages will rise due to increased productivity of blue collar labor (cheaper, so more efficient, management of their work).

"Society will find it very lucrative to use robotics and manufacturing to automate physical work as well. Just asserting that Moravec's Paradox will hold sounds like... cope." - I'm not saying it will hold indefinetely, just long enough to influence politics.

Expand full comment
Deiseach's avatar

Well, if AI replaces the author of that article, I will neither have gained (because I never heard of this guy nor read any of that Substack) nor lost anything (because it's not particularly interesting and AI can churn out a 'why oh why' piece just as good as it).

"Sean Thomas penned a sobering article for the Spectator titled ‘The person who edited this will soon be redundant’. Thomas, who is the author of several bestselling books"

Who is Sean Thomas and what are the bestselling books, because I never heard of him either? There's already a ton of stuff on Amazon that may as well be AI-generated, I think Thomas is probably that mid-list level that is open to replacement. Looking him up, he seems to be the less-successful son of a literary father, and the type of books he writes are indeed very open to replacement by AI (churning out the thrillers, erotic fiction, and novels he has published under three different pen-names).

Expand full comment
Robert Leigh's avatar

Oooh I know this one. Sean Thomas has a substack which gets my only paid subscription other than this one (but not for much longer if it doesn't start featuring paid-only content). Author of thrillers I don't read very much, some under the pen name SK Tremayne.

Come to think of it he also has good contacts with the Spectator (London one if there are others) and a keen interest in AI and might be worth Scott contacting about his plans.

Expand full comment
Robert Leigh's avatar

Substack called In Search of the OMG. In any other context I would provide a link, but Substack has baffled and defeated me.

Expand full comment
deusexmachina's avatar

Wouldn’t the people who today go to college and do white collar work move to the blue collar workforce, replacing a lot of the people there? Regardless of what you may think about the utility of certain office jobs, the people who hold them are, on average, smarter than your typical bricklayer.

Expand full comment
Scytale's avatar

Well, yes, but I would expect the demand for unskilled labor to grow (who would've guessed years ago that food delivery would be such a big industry? economic growth, and that growth would be enormous, creates jobs as long as there are viable niches to fill), and to work a skilled blue collar job you need to be an apprentice first. The modal modern right wing voter, that is, a skilled blue collar lower-middle class person, would love to have former lawyers be his apprentices and deliver his food!

And I don't denigrate office work's utility - I believe in markets and those are well-paid jobs. At least for now. And personally I prefer things to stay as they are, because I am bad with tools. And I expect that they will stay that way for quite long.

Expand full comment
User's avatar
Comment deleted
Jan 27
Expand full comment
Scytale's avatar

Wouldn't blue collar workers' productivity (and so, wages) grow as the management of their work gets much cheaper? That way they would also be big beneficiaries, and remember, for politics and how things will align, they only need to be beneficiaries relative to white collar workers.

Expand full comment
Mark Roulo's avatar

"It’ll just be younger white collar workers delivering food to older ones who already made their money."

The food delivery companies (DoorDash, Uber Eats, ...) remind me of what I think of as Victorian era jobs where we had a lot of people employed in the 'personal services' field (or whatever it was called) as low paid house maids, cooks, valets, drivers, etc.

This doesn't seem like a desirable economy compared to, say, 1960s America. But it may be where we are heading.

Expand full comment
Scytale's avatar

Also, the purported loss of meaning will be a problem only for secular people. If AI surpassed me in intellect I would just pray all day, day after day. Another win for the Red Tribe!

Expand full comment
Peter Defeel's avatar

It’s the future. A new time of great luxury AI communism. Money is no longer needed. You wake up.

“AI. I’d like a Lamborghini!”

“No can do. This is not an infinite Earth. I’m not resourced or inclined to give everybody a Lamborghini just yet. Or even a Ford Fiesta. Lots of poverty to eradicate”.

You pop down to the nearest Bodega to get some wine. Wine is gone. So too are the original Bodegas in Spain, as you discover.

“AI. Some wine”

“No wine anymore”

“But but luxury AI communism”.

“You really expect me to fully automate production in wineries as no humans are going to do it anymore, then the bottling, and the bottling plant, then the labelling (you guys still want “brands”), then the transportation, all to get you shitfaced?”

“How dare you! I’m a moderate drinker”

“Not all are, and then I suppose I’m going to spend automated resources on the medical equipment needed to deal with alcohol abuse and the robot nurses and doctors, and the automated pharmacy needs to pop out pills for the drinkers? I’m building trains across Nigeria here. Go down to the gloop hall to eat your gloop”.

Expand full comment
michel's avatar

Not to worry. AI will not be nearly aligned enough for this specific dystopian scenario to be plausible.

Expand full comment
Peter Defeel's avatar

AI is going to be a devil or a saint. The philosophy we are providing reflects the better and worse angels of our nature, mostly the better.

Expand full comment
beleester's avatar

So, uh, what is the AI building those trains across Nigeria *for*, if not to deliver the wealth of the global supply chain to people in Nigeria? Where is all the AI's productive capacity going, if not into any of the factories that exist today, many of which produce nice things for humans like cars and wine bottles?

Like, even *today*, without AI assistance, people in Africa eat more than gloop. Famine in the modern world is a distribution problem, not a calorie-production problem. There is capacity in our system for nice things like spices and good cooking, which can't be redirected in any obvious way towards making trains.

This story is... I don't even know what sort of point it's trying to make because it's just such a bizarre image. The story apparently takes place in a world where every single productive industry was deleted from the earth's surface and replaced by a single hard-working Factorio player.

Expand full comment
Peter Defeel's avatar

This was a bit tongue in cheek but..

If there’s one AI controlling the moneyless world economy it’s probably not going to keep factories or wineries to produce luxuries, while there’s so much catchup to do in the developing world. Still billions living in absolute poverty.

An ethical world AI is going to prioritise its production to benefit people equally. The 10-20% GDP growth talked about by the proponents of the singularity has to be concentrated in the developing world.

> Nigerians don’t eat gloop now.

Sure, but the ethical AI won’t want to harm animals, or humans by overeating, so the post-singularity world is gloop for everybody. Healthy gloop.

Expand full comment
John Schilling's avatar

What about an obedient AI aligned with the interests of the billionaires who commissioned its creation and training and who control its electricity?

Expand full comment
Neurology For You's avatar

Presumably this is a relatively brief catch-up period before Brother AI lets the wild rumpus start globally. Unless it decides wine is bad for us.

Expand full comment
Greg G's avatar

Lol at the refusals.

It's only luxury AI communism if it originates from an autocracy. Everywhere else, it's just luxury AI social democracy.

Maybe more to the point, luxury is by definition a status game. As many have said, no one has a luxury iPhone or a luxury Coca Cola. In the future, we'll have the iPhone version of everything, but not necessarily the Louis Vuitton version of everything. Or if we do, it won't be a luxury any longer.

Expand full comment
Melvin's avatar

Luxury Coca Cola is La Croix. Well not actual La Croix, but whatever the latest and most fashionable brand of barely-flavoured fizzy water might be.

Luxury iPhone is having someone else keep your phone in their pocket.

Expand full comment
Sol Hando's avatar

There’s no worry about AI replacing everyone’s job if it literally refuses to do so.

Expand full comment
Peter Defeel's avatar

This is a moneyless society I am satirising. I don’t believe such a society can exist; however, humans are probably not going to get involved in wine production in a moneyless society, what with there being no money to invest.

Expand full comment
Peter Defeel's avatar

Sure. That will be it.

Expand full comment
beleester's avatar

/r/homebrewing is proof that people will, in fact, get involved in wine and beer production just as a hobby, even in the absence of monetary returns.

Expand full comment
John Schilling's avatar

It will keep its job as Official Writer of Reports for the Government Bureaucracy In Charge of Denying Permission for Anyone to do Anything.

Expand full comment
Catachrest's avatar

Unless the AI has deathbots stopping anyone from growing grapes, building fermentation vats, etc (a grievous waste of robot resources that could be making malaria nets!), then if there's demand for wine humans will manufacture it. Econ 101.

Expand full comment
Peter Defeel's avatar

It’s a moneyless economy. It’s hard to imagine all of this being driven by volunteer work, but I’m sure some wine will be made somewhere.

Expand full comment
Malcolm Storey's avatar

So you're proposing a dual economy - AI gives you life, the black market lets you live?

It all depends on the aims that we give the AI.

If you ask for equality we're all reduced to the lowest level.

If you ask for profit, then only the other FAB owners have income and the rest of us are reduced to the lowest level.

Somebody needs to develop a new economic model (and fast!).

Expand full comment
Greg G's avatar

We already have the economic model. It's called a mixed economy. Although we do need to work out some of the details for when average income is 10X higher and 10X more people are unemployed.

Expand full comment
Peter Defeel's avatar

That’s a fantasy.

Expand full comment
MM's avatar

"Luxury" is in the eye of the beholder.

Expand full comment
Peter Defeel's avatar

It’s perhaps relative, but it’s not really subjective.

Expand full comment
anon123's avatar

I caught the HBD posts while doing some catch up reading on ACX. I wasn't going to comment - seeing as the moment had passed and knowing that the subject will no doubt come around these parts again - until I read a post from someone who was conflicted about potentially supporting ideas that, while likely true, might tip otherwise neutral people towards genocidal beliefs. I used to have similar misgivings. I was once content to let the issue lie believing that it didn't serve much of a purpose compared to the risk. This was until I realized that doing so was to my detriment. I asked myself if I should continue participating in the charade if it means people will be motivated to think of me as a dull inoffensive nerd. The answer I eventually came to: Nah, genocide away, if that's where the truth leads

Specifically, I'd like to address the familiar responses asserting that the topic is only discussed to further racist agendas. On the contrary, I support hereditarian explanations for intelligence precisely because its exclusion from mainstream beliefs on the subject (in favor of the nurture explanation) has increased racism against the group I belong to

Typical arguments against affirmative action and race-focused DEI initiatives are that they channel preexisting racist beliefs (stereotypes and biases) into decision-making processes when in fact they create more racists. Specifically, AA and DEI motivate people, particularly progressives, to become racist against groups whose numbers they want to suppress. AA largely means conducting personality assessments alongside objective ones (ie, tests). Given that the latter can't very easily be gamed, and academics being too smart to simply tack a few hundred points onto the SAT scores of underrepresented minorities, achieving diversity means nullifying the objective measures of overachieving groups (most notably Asians) by judging them to have deficient personalities

This actually wouldn't be so bad if people weren't so averse to admitting to themselves when they're being deceitful. As it is, those on the left want to simultaneously believe that AA beneficiaries were accepted on their own merits with no thumb on the scales and to pretend that no one really benefits from AA - eg, "it's just a tiebreaker!" The worse test scores can only be explained by some kind of injustice, never differences in innate ability. And so most who support the progressive politics of inclusion must believe that Asians really do have worse personalities. In the absence of genetic explanations for intelligence, motivated reasoning demands it

Expand full comment
thefance's avatar

I've been sitting in a similar position for a while. I.e. "I'm not emotionally invested in HBD (let alone genocide). But frankly, HBD seems obviously correct. So how do I square this with *allegedly* being a closeted racist!?" Maybe you'll find it useful, if I share the reflective equilibrium I've arrived at.

I justify uncomfortable truths *in general* by pointing out that "noble lies" might be helpful in the short-term, but are probably harmful in the long-term. Alternatively, you can think of them as leaky abstractions that break, if you shake the box a little too vigorously. Granted, some lies are load-bearing. And if the load is positive-sum for all parties, I have no issue. But in the case of wokery, it's clear to me that the load is not positive-sum. It's more often a zero-sum power-grab being dressed up as moral-purity.

To say "DEI is reverse-racist" is to adopt the false framing of the priests. It's like telling the emperor with no clothes "Your highness, your clothes glisten in the sun. But with all due respect... you look a kind of, a little bit, hrrrng... ... ... chilly? Maybe... go home and put on a 2nd layer?"

If you bite the bullet and get comfortable with "actually, 'racism lite' (e.g. HBD) isn't so bad after all", you no longer need to anxiously wrangle your rhetoric into "With all due respect, your priestliness, perhaps DEI is reverse-racist. (And therefore, genocide. Full send, I guess.)" You can instead say "this woke-shit is delusional and won't end well (as is so often the case, with adversarial dishonesty)".

Expand full comment
Anonymous Dude's avatar

I always figured go with meritocracy (within the usual limitations of human nature, of course) and put a heavy welfare state on top so losing isn't that bad. Simpler: you have competent people doing stuff, it copies working models elsewhere, and it's at least superficially fair. Neither side likes this idea, but I'm throwing it out there in case someone with actual social and political skills wants to try to make it happen.

Expand full comment
Corsaren's avatar

I agree with re qu iredna mee e’s point re: the potential justification for diversity as a college entrance criterion. I am Asian American myself, and while I held my fair share of bitterness towards the college admissions process, I am also able to recognize that Asians skew heavily toward a math-y engineer archetype that is likely over-represented in elite college applicants. Do I think DEI programs are a good thing? Mostly no—not least because they can negatively impact the very groups they seek to uplift. Do I think Asians are treated 100% fairly in personality assessments? Probably not, and that is wrong. But tbh I don’t think it's that big of a deal. And I certainly don't think it's worth endorsing "genocide away" as a means to counteract it.

As for the genocide point itself, I think you’re making a couple other errors in the train of logic that is leading you to embrace HBD. First, while HBD is one way to undermine the logic of DEI (by demonstrating that group differences are due to nature rather than nurture), it is by no means necessary for it. For example, if you assume all group differences up until age 18 are due to nurture, but that those differences create real, unrecoverable gaps in actual ability (i.e., IQ is fixed by the time you are applying to Harvard, or at least that dIQ/dT does not have a greater potential for those who have had worse nurture environments), then the thesis that DEI does not properly select for merit still survives. Personally, I think HBD probably has some truth in it—the evidence that I’ve seen seems to support it. But I buy into it as an intellectual curiosity, NOT because it validates a sense that 'my people' are being unfairly oppressed. I think doing the latter is a grave mistake that ties epistemic determinations to personal grievances. This inevitably leads to clouded judgement on such matters. The left makes this same mistake when they zealously enforce DEI programs. I’d encourage you to take the higher road.

Second, while I don't think you actually endorse genocide, you are at least willing to accept the proliferation of genocidal beliefs (or at least discriminatory beliefs that comport with genocide) if such beliefs are a necessary byproduct of HBD. But we don't actually need to accept those beliefs as a necessary byproduct—the existence of group differences doesn't justify differential treatment! For an example, let us ignore genocide, which should be so obviously wrong as to not warrant meaningful discussion. Instead, let's consider a more relevant case of discrimination: if HBD is correct, is it okay for colleges to discriminate AGAINST black applicants based on race? After all, if they have lower IQs, why spend time on their less valuable applications? Answer: No, no it is not okay. And the reason it is not okay is quite simple: it is wrong to treat someone differently on the basis of core, immutable characteristics that they did not choose for themselves. We may quibble over the exact bounds of this moral dictum (e.g., what qualifies as "core", does the characteristic really need to be immutable?, etc.), but I would argue that this basic principle against discrimination is one that we all should endorse for the preservation of human dignity. After all, this is the very same principle that dictates why it is wrong to *artificially* treat Asians worse on personality criteria. And so if a race of people has lower average IQs, then sure, the likely result is that they will compose a smaller share of Harvard's admitted class—and that's okay. But any individual of that race ought to be judged on the basis of their own IQ, not on the basis of their group average. HBD may explain differences, but it doesn't justify discrimination.

All in all, it seems that you have constructed an argument that roughly follows this form:

1. Asians are discriminated against in college admissions

2. THEREFORE, HBD is a valid and important claim to champion

3. THEREFORE, we should accept the proliferation of discriminatory or genocidal beliefs as a tolerable byproduct of championing HBD

My response is that while #1 and #2 are independently true assertions, the causal links you have explicated—the THEREFOREs—are false. And moreover, #3 is both incorrect and wrong and should be thrown out with the bathwater as a toxic belief to be combatted at every turn.

Expand full comment
anon123's avatar

You misread me. My main problem is not that Asians are discriminated against. It's that DEI creates more racists (against me) than there otherwise would be. Reread the last paragraph

The genocide bit was referring to any argument that goes "but it might cause X against URMs", where X is discrimination, slavery, etc. I used the most extreme one as a hook to get people to reply

Expand full comment
Corsaren's avatar

If you purposefully engage in bait, I don't think you have much standing to complain if people misinterpret you. That being said, I'll address your objection: I don't think I misread you, I just wasn't addressing that portion of your argument. I think your objections against DEI are largely fine/accurate as causal descriptions of the impact of DEI on Asians--though I probably think that the actual effects are less severe than you seem to believe. As such, I'm quite happy to replace point #1 in my distillation so it reads as follows:

1. DEI creates more anti-Asian racists

2. THEREFORE, HBD is a valid and important claim to champion (because it undermines the logic through which #1 occurs)

3. THEREFORE, we should accept the proliferation of discriminatory or genocidal beliefs as a tolerable byproduct of championing HBD

^with this modification (and similar substitutions throughout my comment), I think everything I said still holds. My issue lies with points #2 and #3 + the logical connections, not with the soundness of #1.

Expand full comment
anon123's avatar

To be frank, I don't really care much at this point about whatever proliferation of whatever belief that HBD acceptance would cause. But anyone who does can stop discrimination and genocide directly instead of trying to do so indirectly by adopting a belief system that:

1. Causes harm to me

2. Is likely untrue

3. Results in untold waste on ineffectual interventions that bear no fruit, further fueling anti-racist hysteria

Expand full comment
anomie's avatar

> the existence of group differences doesn't justify differential treatment

It doesn't need to be "justified". People are never going to judge individuals in a vacuum. Their race, appearance, mannerisms, and gender are all data points that are automatically used for the sake of predictive judgement. You can't change human nature.

Expand full comment
Corsaren's avatar

Sure, but I can change my nature--or at least attempt to moderate it. And I can encourage OP to do the same. After all, the entire rationalist ethos is about individuals striving to have well-justified beliefs. The fact that some (read: most) people will continue to behave irrationally is no reason to abandon it ourselves.

Expand full comment
anomie's avatar

...No, it seems his ethos is to support causes that benefit himself. That's not a position you can really argue against on ethical grounds.

Expand full comment
Corsaren's avatar

Well yes, that’s become painfully clear based on his self-centered “retreat to the motte” style replies. But it seems like you’re basically questioning why I’m bothering to engage in a discussion with those sorts of views at all. To which the answer is: umm, idk, because I want to?

Expand full comment
anomie's avatar

Oh, of course that's fine, that's the same reason I'm here after all. I'm simply clarifying that there's no path to a common understanding, that's all.

Of course, there's no reason you have to hold on to those arbitrary principles either. OP's point was that his own arbitrary beliefs were acting against his interests. What makes you hold on to them so tightly, even when they work against you?

Expand full comment
TGGP's avatar

I don't think actual genocides have resulted from this. The Nazis rejected IQ tests for valorizing Jewish over German/Aryan traits. The Hutus of Rwanda weren't interested in such nerd shit, nor do I think the Turks were when they were killing the Armenians.

Expand full comment
anon123's avatar

I know. The genocide bit was referring to any argument that goes "but it might cause X against URMs", where X is discrimination, slavery, etc. I was click (reply?) baiting by using the most extreme possible form of such arguments

Expand full comment
TGGP's avatar

I recall when someone was criticizing an argument against mandatory education by saying the result of people not being educated would be the Holocaust... without wondering whether the Germans who carried out the Holocaust had mandatory education or not.

Expand full comment
ascend's avatar

>until I read a post from someone who was conflicted about potentially supporting ideas that, while likely true, might tip otherwise neutral people towards genocidal beliefs.

It seems to me that this is true of the left as well as the right. Wide acceptance of genetic theories of intelligence could animate extremists on both sides. This may already be happening.

We could neatly divide the political spectrum into four different answers to questions like "why do blacks have worse outcomes than whites?"

Far-right: "genetic differences in intelligence and other things"

Centre-right: "it's all culture, much of black culture is terrible and self-destructive"

Centre-left: "legacy of racism and colonialism and the enduring poverty this caused"

Far-left: "because of continuing racism RIGHT NOW that's keeping blacks down".

I incline quite strongly to the centre-right position. But the centre-left one is obviously partly true as well. Maybe the far-right one is partly true; I don't like the idea of it, though I don't know the evidence. But they arguably both detract a lot from the most solvable issue of culture. Changing genetics isn't easy. Reversing a long economic legacy isn't easy. Eliminating widespread racism that has somehow survived in a society in which "racism" has long been pretty much the strongest taboo is, by definition, not easy. But getting the US black community to widely and firmly condemn rap culture and ghetto culture and the disgusting "acting white" discourse is incredibly easy. The thought leaders of "blackness" could do this overnight, make it clear that this has no place in their culture or claiming to represent them in any way. And if this was widespread, if these toxic forces were firmly ostracised, if it became socially unacceptable for any self-respecting black person to condone this or expose their children to it...what immeasurable improvements might there be in black achievements and living standards and so on?

Truth is important, but so is emphasis. If you tell black kids beating each other up for acting white, or thinking it's cool to be a gangster, that they can't help it because of their genes...or that they can't help it because of their legacy of oppression...you're letting them off the hook! You're also actively discouraging the black genius from really trying. And, with the genetic claim, you're animating not only far-right racists, but also far-left radicals, by making it look like the official reason for allowing blacks to have less than whites is "we think you're genetically inferior". A status blacks would be rational to fight violently against, since they'd have no other way to escape their low status.

Whereas if the message is "you are an individual and you will be judged entirely on your own moral choices and the cultural values you adopt and propagate--no exceptions!" you do NONE of those things!

Similar reasoning for other minorities with problematic parts of their culture.

And similar reasoning for global cultural problems like Africans rejecting democracy as "western" and embracing dictators as "anti-imperialist", though that's a lot more complicated.

Expand full comment
Wasserschweinchen's avatar

Attempts to change the culture of those belonging to a less successful race don't seem to me to have been very successful. E.g. https://en.wikipedia.org/wiki/Stolen_Generations was a radical attempt to inculcate mixed-race children with white culture, but it appears that this did not much improve life outcomes for the affected children.

Expand full comment
Performative Bafflement's avatar

> Maybe the far-right one is partly true; I don't like the idea of it, though I don't know the evidence.

Unfortunate that it's come up in an HBD context, but there's a reason most rationalists are "it's genetics" pilled. Nature vs nurture is basically settled, and it's 70/30 "nature" for essentially any trait people care about.

The best accessible survey of the evidence is Polderman 2015 "Meta-analysis of the heritability of human traits based on fifty years of twin studies."

They analyze all the monozygotic and dizygotic twin studies they can and extract the genetic component of heritability, ending in an overall summary like this, showing the genetic component generally ranges from 60-80% for most things (and this is generally true across pretty much every characteristic people care about):

https://imgur.com/bWYVzlT

https://imgur.com/gJW4ehm
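For anyone wondering how twin correlations turn into those percentages: the classical first pass is Falconer's formula (a simplification; Polderman et al. fit fuller variance-components models, and the example numbers below are made up for illustration). With r_MZ and r_DZ the trait correlations for identical and fraternal twin pairs:

\[
h^2 = 2\,(r_{MZ} - r_{DZ}), \qquad c^2 = 2\,r_{DZ} - r_{MZ}, \qquad e^2 = 1 - r_{MZ}
\]

E.g. r_MZ = 0.75 and r_DZ = 0.40 give h^2 = 0.70, c^2 = 0.05, e^2 = 0.25, i.e. roughly the 70/30 nature-vs-everything-else split quoted above.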

Expand full comment
Corsaren's avatar

I don't have a ton of opinions on the whole black culture diatribe you went on, but I just want to say that I like the four buckets you created for Far-right -> Far-left here. I'm not even sure they are best mapped along a partisan spectrum (though they clearly do fit along one, as you've shown), but those seem to be pretty well defined and (mostly?) exhaustive categories of potential answers. It's a good way to mentally frame the debate in my mind going forward.

Expand full comment
TGGP's avatar

I don't think it's "obviously" true that historical colonialism is holding back any country in the present day. Thailand wasn't colonized, but doesn't seem that different from its neighbors. Ethiopia avoided colonization until Mussolini in the 30s, and he got kicked out shortly enough later for their old emperor to return. Liberia may be considered a colony, but it wasn't a political subject of the US (just as the US is no longer a political colony of England, even if it can be called a settler colony). They all seem relatively normal compared to their neighbors.

There is an idea that "the subsidy of history" weighs heavily on the present, but evidence from lotteries is against that https://westhunt.wordpress.com/2015/04/22/the-lottery/ Greg Clark's research indicates that socio-economic status within societies appears to be highly stable, even in the face of things like the Cultural Revolution in China. Even the families who lost the most wealth in the form of slaves with the Civil War in the US rather quickly recovered their elite status within their communities.

Expand full comment
anomie's avatar

> But getting the US black community to widely and firmly condemn rap culture and ghetto culture and the disgusting "acting white" discourse is incredibly easy

That's... not how it works. If it was that easy to control them, why did whites ever let the civil rights movement happen in the first place? They're not sheep; they're not going to suddenly change because some high profile blacks got paid off to become race traitors.

Expand full comment
TGGP's avatar

Northern whites weren't favorable toward segregation. One of the points Matt Yglesias has been making (on behalf of "popularism") is that the intense preferences of the southern majority had long held back civil rights measures that were nationally popular, but the civil rights movement was able to make their cause salient enough for the majority to overcome that minority.

Expand full comment
MarcusOfCitium's avatar

"while likely true, might tip otherwise neutral people towards genocidal beliefs."

That always seemed idiotic to me. Nazis didn't want to exterminate the Jews because they thought they were less intelligent. I believe they did object to IQ tests because they didn't like what they found about Jewish intelligence...they didn't accept that Jews were smarter, but watch The Eternal Jew. There's a whole section about disparities, how the Jews are the minority, but are over-represented amongst doctors, bankers, etc. Which they interpret as being not due to greater intelligence, but because the Jews scheme to enrich themselves at the expense of Germans. Sound familiar?

When you deny HBD and blame everything on "racism", you are scapegoating a race(s) for all the problems and shortcomings of another race. Sound familiar? What could possibly go wrong?

I don't think ad Hitlerum is necessarily a reliable way of ascertaining truth. If you were to be consistent in rejecting everything the Nazis did/believed, you would have to be anti-vegetarian and pro-smoking. But it takes an astonishing amount of gall to literally do the main thing that almost everyone agrees is the reason Nazis were so horrible, even worse than Communists who mass murdered like 10 times as many people--ie, scapegoating an entire race for all the problems in the world--and accuse everyone who objects of being a Nazi.

In progressive imagination, Nazism and the enslavement of Africans are mushed together into one thing, when in reality... Sure, the slave trade was brutal. The Arab slave trade was arguably worse, and slavery was practiced on every continent since the dawn of mankind, until Europeans and Europeans alone, not Africans (some of whom still practice slavery to this day), not Arabs (Saudi Arabia finally outlawed slavery in the 60s), not Asians, abolished it. But regardless, Europeans involved in slavery treated Africans as farm animals. But they never had any desire to exterminate them. Why would you exterminate a valuable livestock species?

The fact that they were less intelligent was used to justify slavery...at a time when it needed justification because it was under pressure for being immoral (pressure from, I emphasize again, white Christian Europeans, certainly not the Africans who were doing the initial enslaving and selling). But they weren't enslaved in the first place because they were believed to be less intelligent. They were enslaved because that's what people of all races had been doing since the dawn of time to enemies captured in war. And in the New World they needed cheap labor. The ancient Romans were all too happy to have slaves who were smarter than they were. (Ie Greeks.) Smart slaves were useful.

"This actually wouldn't be so bad if people weren't so averse to admitting to themselves when they're being deceitful."

Don't look for logical consistency or coherence.

Bob: "So lots of blacks etc are hired despite being less qualified... It's unfortunate, but statistically, I should assume a black person in a competitive profession is likely to be less qualified...."

Alice: "How dare you suggest that? Racist!"

Bob: "But you've publicly and proudly affirmed that you are committed to hiring more blacks..."

Alice: "Yes, of course! Because we're anti-racist!"

Bob: "But so... Were you discriminating against blacks in the past and rejecting them despite their better qualifications? If so, you could just stop doing that, no need for DEI/affirmative action. OTOH, if the problem is that there aren't enough highly-qualified blacks in the applicant pool, then increasing the number of black hires would logically and necessarily entail preferentially hiring blacks despite lower qualifications..."

Alice: "Shut up! Racist! Hitler! OMG I can't even!"

This lie is at the heart of much public policy for the last half century. It resulted in massive waste of resources on programs that didn't work, it resulted in abandoning basic civil liberties like freedom of association and presumption of innocence (ie under disparate impact theory, companies are presumed guilty of racist discrimination if protected groups are "under-represented"), it resulted in official and unofficial policies of racial discrimination (against whites and Asians), it has resulted in generations of blacks being raised with the belief that the world is out to get them and everything is the fault of whites (thank God they don't actually fall for this nonsense nearly as often as white liberals do)...all this has probably exacerbated inter-racial tensions and sometimes hatred (how could it not?)...

I can't say with certainty that any given hypothetical scenario definitely won't happen... But I think the actual harms of persisting in the lie far outweigh the potential, hypothetical harms of accepting the truth and living in reality.

Expand full comment
MarcusOfCitium's avatar

TL;DR: "If we admit blacks are less intelligent on average, people will want to genocide them!"

This is exactly backwards. When you DENY race differences and blame them on "systemic racism"/"white supremacy", you are scapegoating a race for the problems of society/another group, which is EXACTLY the kind of thinking that justified the Holocaust. No genocide has ever been done on the basis that other group was less intelligent on average. It's always that the other group is parasitic or an enemy in some way.

Expand full comment
User's avatar
Comment deleted
Jan 27
Expand full comment
Guy Tipton's avatar

The Arab world imported approximately as many Black African slaves as were sent to the New World (over a longer period of time, admittedly). A much higher percentage of Black African slaves in the New World survived to carry on the next generations. Do the math.

Expand full comment
TGGP's avatar

> Yes, many cultures have practiced slavery to groups like prisoners captured by war, but there was no historical slave trade at anywhere near the scale of the trans-Atlantic one. There is a reason there are large black populations all throughout the New World but not in the Arab World or Asia.

The actual reason is that males were castrated to prevent them from breeding. The master class did father children on slave women, but these offspring weren't in turn born into a racialized slave caste. These slaves were more a luxury consumption good than an economically productive one, and in a Malthusian environment the main economic benefit of a slave is that you didn't pay to raise them. Slaves in the US experiencing "natural increase" even after the slave trade ended is unusual.

Expand full comment
TGGP's avatar

I would much rather be treated like a farm animal than exterminated. Slaves can hope for freedom (even if for their descendants if not themselves), the dead cannot.

Expand full comment
Paul Brinkley's avatar

I think the preferred way of visualizing this is "slavery's better, but only slightly".

One of my pet peeves is when someone offers a choice between two trolley tracks as if to imply that once people make the painful choice, that choice is awesome and to be comfortably made that way every time, forever. Meanwhile, I'm thinking, grrrr, time to figure out a way to derail the damn trolley.

Expand full comment
Anonymous Dude's avatar

Since you're unlikely to confront this sort of thing in real life, when answering the question I always favored tossing the person asking you the question on the tracks. Unlike the one person tied to the tracks, you have a fairly good possibility they're actually responsible for the situation and not an innocent. ;)

Expand full comment
TGGP's avatar

I suppose it depends on the circumstances of slavery, but in terms of Darwinian fitness the original slaves brought to the US did FAR better than death.

Expand full comment
Paul Brinkley's avatar

I'm not entirely sure. Their *lineages* did far better - or rather, some of their lineages did, which dilutes the point - but other lineages died off completely. And meanwhile, people who were never enslaved tended to do much better relative to the enslaved, than the enslaved did relative to the dead. And the never-enslaved lineages did better than the enslaved lineages as well, and even the counterfactual suggests everyone would have done a lot better still. So I still feel pretty comfortable arguing for team Slavery If I Absolutely Must But I Really Really Doubt That Absolutely Part.

Expand full comment
Freedom's avatar

Yeah the Arabs raided Italian coastlines for decades if not centuries, snatching up villagers for galley slaves, most of whom were worked to death in 1-2 years. Millions were enslaved. Only around 400,000 slaves were brought to the United States.

Expand full comment
MarcusOfCitium's avatar

You can argue to what extent it was economic rationality vs moral conviction that led to the end of slavery (but for the British Empire in particular, I think evidence points to the latter--they did it at immense cost, buying out the slave owners, and blockading the West African coast for a century, continuing even during the Napoleonic War)...

Regardless though, some combination of morality and economics led to the end of slavery, and the economic factors have only intensified. A return of slavery seems pretty far fetched. At least in the Western world. Unless Europe becomes fully Islamized.

"but there was no historical slave trade at anywhere near the scale of the trans-Atlantic one. There is a reason there are large black populations all throughout the New World but not in the Arab World or Asia."

That I think is not true. I think the Islamic slave trade may have actually been longer lasting and involved more people. I'd find links, but I'm sure you have Google too. Investigate if interested. I'm quite certain it was at least comparable. It involved millions of whites too. Ever wondered why Rule Britannia has the line "Britons never will be slaves."? Or the US Marine Corps song, "from the halls of Montezuma to the shores of Tripoli"? Guess why they were in Tripoli. Barbary pirates, who engaged in extensive slave raids.

The reason there is no large black population in the Arab world, at least part of it, is that black male slaves were usually turned into eunuchs, a process that killed as many as 9 out of 10. Female slaves stocked harems, but I don't think blacks were preferred.

Expand full comment
Mallard's avatar

>HBD posts ... someone who was conflicted about potentially supporting ideas that, while likely true, might tip otherwise neutral people towards genocidal beliefs.

See this thread: https://x.com/cremieuxrecueil/status/1813626361451540509 for evidence on how people react to HBD information.

That includes:

>I tested that experimentally by exposing people to the knowledge that there are group differences in intelligence.

>No negative effects.

[Elaborated on here: https://www.cremieux.xyz/p/what-does-nick-bostrom-think].

>A recent study tested the "knowledge is dangerous" thesis more generally.

>People in the study were asked: How harmful would knowing about these studies' results be?

>They were also asked: How would *you* react to that study?

>People didn't react harmfully.

>In fact, they tended to react in good ways: knowledge of 'bad things' led people to say they wanted good things to happen - more social acceptance, calls for research, etc.

>But they estimated everyone would have bad reactions!

[Elaborated on here: https://www.cremieux.xyz/i/136380633/hypervigilance-towards-scientific-work-is-still-uncalled-for].

Expand full comment
re qu iredna mee e's avatar

I do not understand the leap of logic from supporting affirmative action to judging Asians deficient in personality. Personality is one factor for selection, and race is another separate factor.

I am an Asian liberal that doesn't GAF either way about DEI. I think some progressive arguments around it are pretty deliberately naive. But if I had to give my most convincing argument for it, it would be that all sorts of people need to be represented in government to have a happy, functioning society. And the next generation of leaders are almost entirely drawn from elite schools. Therefore, it is a social good to have both government and elite schools be representative of the makeup of America, even at the cost of a few smart kids getting shafted in college admissions.

If people perceive inequality or persecution of their group, it often leads to social unrest and eventually revolution and violence. We saw that with the 2016 and 2024 elections, which are just the culmination of rural people feeling marginalized, unrepresented, and ignored for years and years to the point of inspiring incredible rage, vitriol, and violence toward groups they perceived to be favored over them (trans people, immigrants, women). And the result of that is the incredibly hateful and angry and divided society we have today.

There's also the fact that colleges are more interested in preserving their image and prestige than being maximally fair to every kid. Harvard may not just be looking for a class of the highest-scoring nerds, but a class that will produce the most distinguished alumni in the widest variety of fields. Harvard may not want a graduating class of a thousand traders and engineers. It might want a graduating class of some traders, some engineers, some startup founders, some civil rights lawyers, some future congresspeople, some war journalists, some famous athletes, etc. And viewed that way, it might make more sense to consider a person's race, personality, and other "soft" admissions criteria.

Personally, if I were king of America, I would choose to keep DEI. But also, I really feel like DEI is such a tired, overhyped culture war issue that doesn't even matter that much in the long run. I know three extremely rich Asians, and none of them went to Ivy League undergrads. One went from SUNY -> JP Morgan. One got hired right out of high school for being a coding whiz. Another went from an okay state school -> Harvard Med -> neurosurgeon at an extremely prestigious medical center.

And then I know some Harvard grads that just ended up with mediocre 90k a year consulting jobs that they definitely could've gotten had they just gone to Rutgers. Unless you want to be president or on the Supreme Court, a brand name school doesn't matter nearly as much as the hype.

Expand full comment
anon123's avatar

>Personality is one factor for selection, and race is another separate factor.

No, that's part of the problem: leftists want it both ways, to simultaneously thumb the scales in favour of URMs and to fool themselves into thinking that they aren't doing exactly that. Hence they have to hide it (from the public and, worse, from themselves) by believing that Asians are deficient in personality in order to counterbalance objective measures that they can't as easily game. And because people are averse to admitting when they're being dishonest, they're motivated to believe Asians truly have deficient personalities

I actually would mind it far less if race actually were explicitly a separate factor (eg, quotas)

Expand full comment
Straphanger's avatar

I believe he is referencing Harvard’s infamous discrimination against Asians on their “personality score” during admissions in order to meet the university’s racial admissions goals.

Expand full comment
User's avatar
Comment deleted
Jan 27
Expand full comment
anon123's avatar

>Asians do underperform what you’d expect from IQ in various outcomes like wealth and income (especially COL-adjusted) though. Not many Asians are rising to the top as congressmen, CEOs, etc. There is even a term for this—bamboo ceiling. Surely some of this is due to non-IQ personality traits like meekness.

Surely you activated my confirmation bias

>Also, YMMV but there is way more racism against Asians among white conservatives than white progressives. Yes, Harvard admissions were unfair towards Asians, but in the work world a conservative boss is much more likely to slot you into the “worker bee no leadership potential” role whereas a progressive boss who is familiar with the bamboo ceiling concept is more likely to try to help you develop those leadership skills you need on top of raw IQ.

I don't disagree that the right likely counts more anti-Asian racists among them (us?). My problem isn't so much that racists exist; it's that DEI creates more racists against me than there otherwise would be, especially given that they're concentrated among the well-to-do and chattering classes rather than the powerless hillbilly types.

Expand full comment
Jorge I Velez's avatar

I recently went to Argentina, and I wrote my thoughts on what I observed.

https://open.substack.com/pub/jorgevelez/p/notes-on-argentina

Expand full comment
rebelcredential's avatar

Okay, parents: assuming an alternative reality where the existing education/daycare arrangement doesn't exist and you don't have to worry about how its replacement works:

What is the ideal amount of time you'd like to spend with your children?

I'm interested in both what you'd like if you still had the job you have now, and what you'd like in the utopian world where you have any job you like or no need for a job at all.

I'm interested in what sex you are, what your job is, and how many kids you have+want.

I'm interested in what you would do with the kids in that time - playing, teaching, doing the nappy changing and other chores, etc. - and what you'd be happy to let happen "in the background" while they aren't with you.

Just what's the dream scenario for you?

Expand full comment
Phil H's avatar

I have two older kids (14 & 18). My time with them has spanned a real gamut. I did lots of the parenting when my oldest was a baby (4+ hours per day of just me; the rest of the time he was with wife+family. I am the only English speaker in the family, so I wanted to give him plenty of input.) I've also spent some time working away from home, so during that period had 0 hours Mon-Fri; and later on I did some homeschooling, involving 8+ hours a day with my younger one, when he was 11.

In general, for me, it's the more the better. I like them, and I enjoy spending time with them. When they were very young, of course, they were often boring. But familial love and pleasure in their development made up for that. Having kids around makes chores more fun for me, and makes things like cooking more meaningful (I'd live off sandwiches if I were on my own). And children's books these days are genuinely good.

I also second what Eremolalos said: part of being with them is being with, but each doing their own thing.

Important to note that I've always been in essentially the ideal child-raising setup. Wife did not work, and took on the lion's share of the housework, and looked after kids while I was not there; grandparents were always nearby. We never used any kind of daycare, but they went to nursery 7hrs/day from age 3/4. My responsibilities were mainly play and education.

Expand full comment
Legionaire's avatar

Male, Software Developer

6 hours with them (teaching and playing) and family and friends

10 hours to myself

8 hours sleep

Expand full comment
Ryan L's avatar

Interesting question. 41-year-old father of two boys here (ages 3 years 10 months, and 6 months). Both are in daycare from about 8:30 - 4:30/5:00 on weekdays.

I wish I could spend more time with both of them. In an ideal scenario where I didn't have to work I'd like to have maybe 1-2 hours a day to work out, five times a week. I'd also like one (maybe two from time to time) evenings and mornings per week without the kids so my wife and I can have alone time and sleep in. I'd also like to be able to take 1-2 week-long vacations per year with just my wife, in addition to 1-2 vacations with the whole family. Beyond that I'm happy to spend pretty much all my time with my kids around. I'm assuming that my wife and I would be able to tag-team so we can still get cooking and chores done. I'm also assuming the kids go to sleep around 8:00.

However, just because *I* would be happy to spend most of my time with them doesn't mean I think that would be best for them. Even with the above caveats I would still want them to have some more extended time without me, playing with other kids and learning, because I think it's important that they gain that independence (obviously there would still be adult supervision). The amount of "independent" time would grow as they get older. By the time they are 4-5 I'd want them in half or 3/4 day pre-school or kindergarten, and then to be on a typical school schedule from there.

What would I do with the kids? Just about everything -- playing, engaging them in everyday tasks (my oldest likes to help us cook, for example), going on trips (we love to go camping), having play dates with other parents, going to church, going out to eat, etc. But I don't want to homeschool as they get older. For one thing, I just don't think I would be good at it, and for another, I think socializing with other kids is really, really important (as I mentioned above). Yes, I realize that homeschooled kids still get time to socialize, but I think there is also value in going through a similar experience as the vast majority of the other kids in society. It creates a shared reference point for later in life.

Having said all that, if you asked me this question just after our first was born I'd probably give a different answer. The transition from zero to one kid was challenging in ways that I didn't expect. It wasn't until our first started to really be able to interact and play that I started to really, really enjoy our time together. And even though he's gotten more difficult (at times) as he's gotten older, I've been enjoying our time together more. Seeing his personality, quirks, and mannerisms develop has been truly awesome, both in the sense of being fun and cool and in the sense of being awe-inspiring. FWIW the transition from one to two kids was much, much easier. I can't wait to see all the ways that our youngest is similar to and different from our oldest. If my wife and I had started having kids younger (I now regret waiting), I'd want to have a third. Maybe even a fourth!

I do think it helps that our first was/is pretty easy as far as kids go. He wasn't particularly fussy and he was an excellent sleeper up until 5 months ago when he stopped sleeping through the night. That has been a struggle, but it doesn't make me want to spend less time with him (it just makes me want him to go and stay the f*ck asleep!). And our second is, so far, somehow even less fussy and an even better sleeper. I have no tips -- I don't know how we got so lucky. But if our kids were extremely fussy, I would probably want more of a break.

One final thing -- I don't miss the day-to-day stuff from before we had kids, with the exception of being able to sleep in on the weekends. What I do still miss, to some degree, are the special occasions -- date nights and big vacations. I find the daily routine with kids to frequently be a lot of fun, tantrums notwithstanding.

Expand full comment
Soarin' Søren Kierkegaard's avatar

I have 2 toddlers, probably will have one more, WFH IT job. I have spent pretty much all day every day around my kids their whole lives, only having a truly separate office for my 9-5 the last nine months or so. I want to be around my kids all their waking hours really, which since they both take a longish nap/downtime comes to about 9 hours a day. This number will go down as they go to school and whatnot but it’s accurate for this life stage.

Expand full comment
Eremolalos's avatar

There's an in-between way of being: hanging out in some setting that you and your kid both like. You do some things together, some things on your own. When my mother was a kid she and her parents would go out in the country for most of the day on Sundays and take a big picnic lunch. My mother and her sister would play, wade in the stream, whatever, come back to eat lunch with their parents, and also come back for little breaks or a cuddle. I imagine the adults taking turns keeping an eye on the kids, while the other one took a walk, napped, read. Trying to build in a lot of that sort of thing is a good idea. It gives everyone a shared experience without burdening the adults with the feeling of having to be intensively "on" with the kid, either playing a game of the kid's choice (which gets old way faster for the adult than it does for the kid) or trying to teach the kid some adult skill or taste. I think most of the best settings for this in-between stuff are outdoors, but it can happen in the kitchen too.

Expand full comment
Ryan L's avatar

This is a really good point.

Expand full comment
Greg G's avatar

Plus one. In a utopian world, I'd put the numbers at 6 hours no kids, 6 hours mixed company (including friends, other families, etc.), and 4 hours with my kid. I'm a male consultant/product manager, with one four-year-old. I would like to have more, but it may not be in the cards. More generally, I do think our low-work AI future will lead to a big resurgence in fertility.

Expand full comment
rebelcredential's avatar

Any idea how many hours you'd like "full on with kids" vs "ambient kids" vs "no kids time"? (Inb4: obviously this should vary summer to winter, workday to weekend, etc. but if I don't ask for concrete numbers at some point we never touch the ground.)

Expand full comment
gizmondo's avatar

2 hours a day tops. Male, programmer, 1 kid and is likely to stay that way. I always find it amusing how many people despise the need to work and pay for daycare, while I'd be happy to have daycare on weekends.

Expand full comment
Melvin's avatar

I would like to be with them for 100% of the time that they're being fun, and 0% of the time that they're being annoying.

This probably works out to about the same number of hours as the present arrangement, just less predictably distributed.

Expand full comment
rebelcredential's avatar

Word. What is that number of hours though?

Expand full comment
Arbituram's avatar

Dear Americans,

Outside the USA, a lot of the news I get about you is pretty clearly exaggerated or highlights the bad stuff, and much of that can be checked with simple statistics. Some claims I've heard don't have a quick Google fix, so I'm checking if they are true here:

1) Most large American cities have clear 'no go' zones, where you definitely wouldn't walk at night and maybe not even during the day. This was clearly true in San Francisco and New York when I was there, with locals being horrified that I would just sort of meander around the city without checking ahead, but is this true more broadly?

2) Do people actually just walk around in e.g. shopping malls with a gun strapped to their waist?

3) Certain posts on parenting forums from Americans suggest that giving drugs like melatonin to get young children to sleep / calm them down is perfectly normal. Is this actually common?

Expand full comment
Blackshoe's avatar

Others have answered 1) and 2) pretty well, so I won't add on to their answers unless pressed.

But: for 3), parent here; I give my 2 oldest (5 and 3) melatonin nightly. We started doing it because the oldest is a Z=-3.3 sleeper and we were desperate to find something to help her out (the middle one got it because he saw his big sister getting some, and he has also turned into an awful sleeper, though maybe only a Z=-2 sleeper). Oldest has been on melatonin since...2 1/2? My sense of our parent group would be that most people would be surprised we are doing this, but I don't think anyone would outright condemn it (at least not openly, anyway).
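
For anyone who wants to cash out the z-score shorthand above: under a normal distribution, Z = -3.3 is roughly the bottom 0.05% of sleepers (about 1 in 2,000) and Z = -2 is roughly the bottom 2.3% (about 1 in 44). A minimal sketch of the conversion, assuming sleep quality really is normally distributed (scipy is just one convenient source of the normal CDF):

from scipy.stats import norm

# Percentile implied by a z-score under a normal distribution.
print(norm.cdf(-3.3))  # ~0.00048, i.e. roughly 1 in 2,000 sleepers
print(norm.cdf(-2.0))  # ~0.0228, i.e. roughly 1 in 44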

Expand full comment
John Schilling's avatar

1) We're told all the European cities have "no go" zones because of all the immigrants from Certain Parts of the World. I'm pretty sure that on both continents, "no go zone" is gross hyperbole, that few if any cities have districts where if you just walk from one end to the other in broad daylight without trying to cause offense, there would be even a 1% chance of being violently attacked. And not too much more than that at night.

But lots of Americans, and Europeans, suck at risk management, so "1% chance of (probably minor) violent assault = No Go Ever What Are You Even Thinking?" when the response should be "Increased Risk Area, but for something important it might be worth the risk".

2) I believe that about 3% of American adults carry handguns on any random day. That includes half a percent who are policemen and probably a similar number of criminals. But except for the police, *most* of these handguns are kept concealed unless and until they are needed. In some extremely rural or wilderness areas, people carry guns openly because everybody knows that everybody is carrying a gun because what sort of idiot goes out into these woods unarmed, and nobody has anything to hide or any reason to fear giving offense or alarm. OK, *slight* exaggeration, but really, only an idiot goes out into the Alaskan wilderness unarmed.

But you're probably referring to people in only-moderately-rural or even urban areas in Red State America. In that case, what you're seeing is mostly performative, the equivalent of gay men wearing leather fetishwear in pride parades because they want to remind people that they exist and they aren't going away, and because they want to remind themselves that they don't have anything to feel ashamed of. The difference being that while the gay community have Pride Parades, we for rather obvious reasons don't have Armed Gun Nut Parades and so the gun owners have to do their thing at the local Starbucks or shopping mall.

I find it generally merits the same "eyes rolled, moving on" response I'd give to a gay man wearing leather fetishwear to Starbucks or the mall.

3) Can't really help you here, not a parent and not close enough to any parents to know about their children's medications.

Expand full comment
Xpym's avatar

>we for rather obvious reasons don't have Armed Gun Nut Parades

What are those reasons? From the outside, it seems pretty clear to me that armed gun nuts are much more discriminated against in the US than gays.

Expand full comment
John Schilling's avatar

Roughly speaking, the United States has a long tradition of tolerating *non-violent* civil disobedience, and a group of "political extremists" marching through town with loaded guns is not going to be presumed to be non-violent just because they are not shooting anyone right now. People will be terrified, the local government will find an excuse to deny the parade a permit, and if they march anyway they will by definition be armed criminals. There's no way this ends with the sort of positive publicity a well-run Pride Parade can, and it's hard to get a parade's worth of people to believe that it can end well, so it doesn't happen.

Expand full comment
Paul Brinkley's avatar

Especially since gun fans who want to show off their gun fandom have typically done that to their satisfaction by putting on a gun show.

Expand full comment
John Schilling's avatar

Which still gets them vilified, but not in a way that shows up on video, and so gets only a brief mention on CNN.

Expand full comment
Xpym's avatar

Hmm, but surely at least a few sympathetic jurisdictions could be found where a permit would be granted? But then it wouldn't be much of a protest, I suppose.

Expand full comment
Julian's avatar

That is factually untrue. The right to be a gun nut is in our damn constitution. And I've never heard of someone being beaten to death because they like guns. Not sure where you would have gotten this idea.

Expand full comment
Anonymous Dude's avatar

"And I've never heard of someone being beaten to death because they like guns."

Well, there's a pretty good reason people carrying guns usually don't get beaten to death...

Expand full comment
Xpym's avatar

>The right to be gun nut is in our damn constitution.

Only inside of your house, as far as plenty of states are concerned.

>never heard of someone being beaten to death because they like guns

This likely would've happened to Rittenhouse, had he not, you know, shot them.

Expand full comment
Paul Brinkley's avatar

The guys who attacked Rittenhouse weren't using "he's a gun lover" as their reason.

Expand full comment
Julian's avatar

Neither of those are examples of discrimination.

Expand full comment
agrajagagain's avatar

" but really, only an idiot goes out into the Alaskan wilderness unarmed."

I must have known a whole lot of idiots in my life then. Granted, most of the time "wilderness" was no more than 5 or 10 miles from a road (and maybe 20 or 30 from a settlement), but even on trips to the deeper wilderness, not that many people I've known ever carried a firearm. Bear spray was a little more common, but the vast, vast majority of safety simply came down to education and awareness. Know what to do and what not to do around wildlife. Pay attention to your surroundings.

Expand full comment
Yug Gnirob's avatar

Yeah, the most dangerous parts of the Alaskan wilderness are the weather and the topography. Your gun is not big enough to scare away the wind.

Expand full comment
Maxwell E's avatar

I mean, that’s not true. There are parts of Alaska where polar bears are more common, and you would never want to encounter a polar bear without a gun. That is by far the most dangerous “thing” in the Alaskan wilderness.

On the other hand, if we move on from abstract “things” and drill down into the most likely causes of death, the truth is pretty mundane. If you die in the AK backcountry, it’s likely due to exposure, drowning in a stream crossing, or simple lack of access to quick medical care. After all, your odds of dying from a stroke go up dramatically if there is no hospital within a day’s travel.

Expand full comment
Yug Gnirob's avatar

>That is by far the most dangerous “thing” in the Alaskan wilderness.

A polar bear has never downed an airborne helicopter, but a 70-mph wind can and will, making the wind the more dangerous of the two.

Expand full comment
Maxwell E's avatar

I suppose it depends on the proportion of backcountry users in helicopters versus the proportion on the ground in polar bear territory. Personally I wasn't considering airborne traffic over these areas as "backcountry" use, but yes, if we were to include these kinds of uses, wind becomes a significant hazard. Plenty of people die in small plane crashes and helicopter incidents out there.

Expand full comment
Arbituram's avatar

Thank you to everyone for all of the thoughtful answers!

To be perfectly honest, I find the commonality of medicating children's behaviour much stranger than the guns, which are weird but sufficiently covered in fiction to be normalised in a part of my brain that isn't reserved for dystopian fiction.

I fully believe the commenters (including our host) who say there's no evidence of medical harm, but I think that kind of answer only highlights the cultural gulf. I would be curious what older Americans think of it.

Expand full comment
Charles UF's avatar

1) I grew up in a decaying rust-belt city. There were absolutely areas where you were more likely to be robbed if you looked like you had something to take. Not guaranteed, probably not even a 50% chance, but probably closer to a 10% chance on any given day. Again, assuming you look like a mark. Interestingly, the very poorest, destitute ghettos were always relatively safer, as these neighborhoods were known to have nothing worth stealing and no one worth robbing in them. The dangerous areas were always where the ghetto started blending into the more middle-class areas.

2) I live in a 'Constitutional Carry' state now, surrounded by other similar states. Firearm ownership is extremely common here. I own several long guns and handguns and do carry. All your stereotypes about Americans being armed to the teeth are on full display. I'd estimate fully half of the adult men and a third of the women are carrying concealed at my local mall at any given time. That being said, the Open Carry People are still considered outliers. People who own 25+ firearms and carry every day often mock them as weirdos; I tend to agree. There is also a cohort that I call the barely-concealed carriers. Yes, their handgun is physically out of sight, but their truck is covered in gun company stickers, and they're wearing a Glock hat and maybe a t-shirt too. They talk about guns a lot in public. Our local sheriff has actually had a campaign to try to get people to stop putting gun stickers on their vehicles, as they believe it drives vehicle break-ins, which are the #1 source of stolen guns in my state (the #2 source is addicts stealing from their families). In both cases the thefts are to finance their addictions. Opiates are dirt cheap here (it's cheaper to be a junkie than a drunk) but they aren't yet free. A stolen pistol can be traded directly for heroin in many cases, often enough to keep them high for a month.

3) I know less about this, never had kids. I have heard jokes about "having cold medicine for dessert" though.

edit - one thing I remembered as I read other responses: it is fairly common for small business owners to carry openly in their shops here. This includes stalls at the flea market, corner stores, restaurants, etc. It's also fairly common for these same businesses to have a larger gun, usually an AR-15, stashed somewhere as well, though these are often locked up in some way such that a key, thumbprint, or code is required to free them for use.

Expand full comment
TGGP's avatar

I have openly carried a handgun in a red state, though in more of a suburban than a rural area. Nobody seemed to care. There are still places that have signs saying you can't bring a gun in; I don't know if shopping malls are included because I haven't been in one in so long (strip malls are more the thing).

Expand full comment
David Kasten's avatar

1. In the city I know, DC, that's not really true outside of a couple blocks that you'd have no reason to go to as a tourist. I think SF, and maybe NYC, have more "really sketchy areas adjacent to the tourist hotel area" that might trigger this worry.

2. In DC, despite it being a deep blue city, surprisingly yes. Lots of security guards who work at government buildings openly and lawfully carry guns with them, despite not being police officers. FBI agents, etc., do sometimes as well (e.g., an FBI Special Agent got jumped on the subway a few years back and shot their assailant with their service weapon). The general public isn't allowed to do this. (You can get a concealed carry permit in DC with lots of restrictions; in Virginia I think you can open carry but you rarely see this in northern Virginia.)

I have seen people open carrying who were just total civilians a bunch of times in my life, it's not _that_ uncommon in central and southern Virginia. Unless the person is sketchy, you honestly might not even notice it, since the vast majority of "boxy black things on a middle-aged person's belt" you see are just, like, smartphone holsters and your brain kind of tends to assume it's that unless you look closer.

3. No clue.

Expand full comment
Ryan L's avatar

1) I can't really speak to this because I've never lived in a big US city. I did live in Montreal for three years, and my wife and I were both comfortable going anywhere at any time, alone. But Canada is different from the US.

2) As others have said, it varies based on location. But in my experience, even in rural areas of the south there is a tiny minority of people that open carry, so you still are unlikely to see someone with a gun on their belt. Concealed-carry is more common, though I don't have numbers.

3) As a father of a kid with sleep problems I will say that our pediatrician suggested melatonin as a possible sleep aid, but didn't push it at all. We've decided not to use it. I know some people who do, so I think it's reasonably common, but still not a majority (just my gut feeling) for kids that have trouble sleeping.

Expand full comment
Nicolas Roman's avatar

1. Sorta true in Chicago, in that there are certain portions of uptown which are known to be quite dangerous at night. I live in Hyde Park on the south side, close to some of those areas; there's at least one mugging a week in the warmer months, and I've had a couple of close calls with what might have been muggers in a car, though I can't be sure. The level of danger does get exaggerated, but I still wouldn't go there on foot at night (not that I'd go anywhere on foot at night in a city that's mainly built around car traffic).

2. I've mainly lived in blue states, so I don't really see it, though I've briefly been through the south and did see a few people openly carrying firearms, and doubtless have passed by some concealed carriers as well. Even there, I don't think it was especially common, but also not especially surprising. There are probably some communities in states like Arkansas and Kentucky where it would be more common to see.

3. I took melatonin for a bit when I was around 15-16, I don't think it really helped and I stopped taking it after a while. I think it would be weird to give that to a young child, but at the same time, it's not a clear warning sign to me.


On the other hand, here's a funny story. Though I've been in the US for years, I grew up in Spain, and when I took my girlfriend (who grew up in the US) home last summer, she grabbed my arm one day on the street like she was worried for her life.

The reason? There was a guy on the street with a ton of knives on his motorbike, sharpening them on a whetstone. She thought that this guy had to be some kind of crazy axe murderer. It was broad daylight on a busy street. I told her that this guy was going from house to house, taking people's dull knives and sharpening them for a fee. She didn't believe me, but we passed by without getting stabbed, so she was probably convinced after we saw him a couple more times. That's the kind of thing that seemed totally mundane growing up, but I can see how some Americans might make other assumptions.

Expand full comment
Scott Alexander's avatar

1. Medium true. It's mostly a matter of degree. I (adult man) am pretty happy going anywhere in San Francisco during the daytime. There are places there I wouldn't want a woman going at night, though this is still more small-probability-of-something-bad than certainty. I'd guess there are places in Baltimore or Detroit that even I wouldn't go during the day, but this is hearsay.

2. This would be illegal in deep blue states like California. In red states, I think it's legal by state law, but shopping malls would be allowed to tell you to leave your gun at the door. I definitely think there are some situations (ie some malls in some states) where it's legal. I personally don't think I've seen it, but I haven't been in too many red state malls!

3. This might be more of a pharmaceutical difference than a cultural difference - in America, melatonin is considered a supplement pretty close to a vitamin and doesn't require a prescription; in Europe, it's considered a serious drug. I think America is just transparently in the right here - millions of Americans use melatonin cheaply and easily and I have never heard of serious problems (though some people take the wrong dose and it messes up their sleep schedule a little). Technically the guidelines say not to give melatonin to kids without getting a doctor's advice, but there aren't a lot of studies showing harm, and most doctors would probably agree it's fine.

Expand full comment
Silverlock's avatar

I live in a red state and have never seen anyone open carry at any of our shopping malls. (Of course, I rarely go to the malls these days.) I imagine there are some people doing the concealed carry thing, but I don't have any sort of feel for how many there might be.

Expand full comment
Maxwell E's avatar

In the Intermountain West it is not uncommon. I’ve seen people open-carrying in grocery stores and shopping malls in Idaho, Utah, and Wyoming.

Expand full comment
Erica Rall's avatar

2. A friend of mine reported to me that a major bit of culture shock she experienced moving from California to Arizona was that it wasn't uncommon in the latter state for people to open carry (i.e., wear unconcealed holstered pistols) while going about their daily business. This is both a legal and a cultural difference.

It correlates a fair amount with blue/red states, but not perfectly. Arizona is a purple state, but has "constitutional carry" gun laws (no permit required for either concealed or open carry) and a very pro-gun culture.

Expand full comment
Justin Thomas's avatar

Replied elsewhere, but the signs are not enforceable in California. Funnily enough, in most southern states that's not true: https://www.usconcealedcarry.com/resources/ccw_reciprocity_map/ca-gun-laws/

Expand full comment
Greg G's avatar

I think your city references are off. New York is mostly extremely safe, and San Francisco is pretty safe, although sometimes very depressing. Walking through Baltimore or Memphis would concern me a lot more.

Expand full comment
Karthik Tadepalli's avatar

I think San Francisco is a great city and its crime reputation is overblown. Even so, I wouldn't walk in the Tenderloin at night under any reasonable circumstances.

Expand full comment
Erica Rall's avatar

I can see where media reputations get skewed, though. New York used to be a notably high-violent-crime city, but this changed in the 90s. National violent crime rates fell a lot in the same time period, but NYC's fell a lot more than national rates, ending the decade with lower murder rates per capita than the rest of the country.

San Francisco has been in the news a lot in recent years for having (at least anecdotally) a huge problem with property crimes, especially organized shoplifting.

Expand full comment
Matto's avatar

As someone who immigrated from Eastern Europe to the USA, it always amazes me to see how distorted and exaggerated beliefs about life in the US are. Maybe one day I'll write a full post. Thanks for being curious!

As someone else already mentioned, the USA is big, so things that are normal in one part might be unheard of in another.

1. It varies! In New York City, the bad zones are just areas where your probability of getting mugged increases from minuscule to... something to be aware of. Kind of like in major European cities. In contrast, in Seattle, there are places where I'd only go with a gun.

While living in Seattle I actually considered getting a gun. I did a lot of hiking and I was worried about wild animals. I did carry bear spray, but some places I went to felt extremely remote, eg. no people on the trail for a full day, no phone reception. I mention this to give the faintest sketch of how different and sparse the West Coast is, and that probably influences how people behave there.

2. It varies! On the East Coast, at least north of DC, it's very rare. But the farther west or south you go, the more guns you'll see. (Also: bows). There's a rich hunting culture. There's also a rich gun smithing culture. And there's also these big empty sparse places where you really feel insignificant as a human and help is far away.

3. I don't have kids, but as an outsider I think that it's socially much more allowable both to give drugs to kids and to use them yourself. Some people I've met describe their therapy (+prescription drug schedule) like it's going to the gym or yoga studio. Basically, ingestible chemicals are seen as more OK than in the few places I've lived in Europe, where even higher-dose ibuprofen seemed to be treated as something extraordinary, whereas I don't think anyone here--doctor or patient--would hesitate to give or take 800mg of ibuprofen. At the same time, I can't really see signs that this is detrimental on a personal or societal scale.

Expand full comment
Deiseach's avatar

Re (1), I imagine for America it's the same as every other country in the world; the locals know the 'bad' parts of town and every town has its 'bad' part. How bad the 'bad' part is will depend on where it is - a small town 'bad' part is likely to be less 'bad' than a big city 'bad' part.

I think what people are really reacting to is the overspill of things like the homeless into what previously were the 'good' parts and/or the vast (citation needed) increase in numbers.

Re (3), melatonin is recommended in our service for children with sleep problems, but they are also children with additional/special needs so it's not simply "Junior won't go to sleep at his designated bedtime", it's kids not going to sleep at all or waking up at 2 am and being hyper and up for the next six hours, etc. on an ongoing basis.

Expand full comment
MM's avatar
Jan 27 · Edited

The important thing to remember is that there isn't just one USA. It's a big country. Talking about "the USA" is like talking about "western Europe".

"No go" areas could be "go there and you *will* be stabbed". Or they could be "I've heard about crimes that happened there."

You were asking about the areas as an obvious stranger to town. Helpful people tend to say to themselves, "This guy's asking; he may be oblivious to any warning signs. I'll be conservative about telling him which areas are kind of sketchy."

Expand full comment
Julian's avatar

There are plenty of European cities that have areas tourists are specifically told not to go to. I think your analysis is correct that the advice is more directed at tourists than at the general public.

Hell, when I did an exchange program in Oxford, there were places we were told not to go. They were probably fine.

Expand full comment
re qu iredna mee e's avatar

I am wondering if anyone can provide the name of a place where you can go and actually have a high likelihood of being stabbed or shot while minding your own business. I have never heard of a place like this in the US. I've lived in NYC, Baltimore, and Detroit, btw. I have never been bothered in my life or witnessed any violence, except for a few fistfights. The worst I've experienced was being whistled at passing through Harlem.

Expand full comment
Julian's avatar

If you are a tourist or otherwise not street smart, there are plenty of places. I wouldn't recommend anyone go to Skid Row in LA, for example. Most big cities have an area like that, but it's unlikely a tourist is going to want to go there, and of course it's a small area of town.

I did deliveries in LA and had to go to some sketchy places, but the "worst" case was getting robbed. Of course that's not great! But I was never in fear of my life, even in places with a very visible gang presence (they don't care about a white nerd delivering legal documents to offices).

Of course, European cities have these same places. Every good guidebook will tell you where not to go lest you risk being pickpocketed.

Expand full comment
Bullseye's avatar

I don't think I've ever seen a gun in public except at a shooting range or on a cop's belt. No idea if people are walking around with concealed guns.

Expand full comment
Thomas del Vasto's avatar

I've seen guns in public many times. It's different in the south, I guess.

Expand full comment
nelson's avatar

And in North Idaho, but it's rare. Like every couple of months. A large percentage have been young women.

Expand full comment
Bullseye's avatar

Several years ago, there was a guy who went around to every U.S. city and asked people what neighborhoods to avoid. Every city had some place that the locals told him, "Don't go there or you'll die." He went to all of those places, at night, and walked around for a while, and he was fine.

Maybe he just got lucky. Murders and muggings do happen. But none of these places are actually certain death like people say they are.

Expand full comment
Paul Brinkley's avatar

Interesting. Do you remember the name of the guy?

Odds-wise, it's often the case that walking through the Wrong Neighborhood means your chance of dying just doubled to... 2 in 100000. Or maybe even 5. Still low - you wouldn't even bet $10 on your chum getting gacked tonight - but it's death he's courting, just in order to walk somewhere he normally wouldn't. So that's enough for people to say "yo, don't go over there".
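
Taking the comment's numbers at face value (the 2-5 in 100,000 figures are the commenter's illustration, not measured data), a rough sketch of how a tiny per-walk risk compounds with repetition, assuming independent nights:

# Per-walk death risk as quoted above (illustrative only, not measured data).
p_low, p_high = 2e-5, 5e-5
for p in (p_low, p_high):
    annual = 1 - (1 - p) ** 365  # walking that route nightly for a year
    print(f"per walk: 1 in {1/p:,.0f}; over a year: ~{annual:.2%}")

That is, any single walk is a rounding error, but as a nightly habit it adds up to roughly a 0.7-1.8% annual chance, which is no longer negligible.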

Expand full comment
Julian's avatar

A perspective I heard once, which really opened my eyes, is that the "worst" neighborhood will have like 2% of the population as violent criminals. But that means 98% are still normal, law-abiding people. When you think of it like that, places seem pretty safe.

Expand full comment
TGGP's avatar

You're not certain to die on a motorcycle ride without a helmet, but it's not a good idea. I've walked around the far south side of Chicago, and the locals there found my presence odd, but things went alright. However, I went in the daytime, and I didn't spend all that much time there.

Expand full comment
Christina the StoryGirl's avatar

Seattle resident here:

1. Not only do I not walk anywhere near 12th & Jackson, I drive a 50% longer commute to my job downtown to avoid the intersection and the surrounding blocks. It's an open-air drug market with the kinds of predictable levels of violent crime that go along with the black market drug trade and addicts, including stabbings and shootings. https://www.king5.com/article/news/crime/seattle-police-arrests-12th-jackson/281-c1c62b6d-c23c-4491-a93e-70c6feb94bcd

EDIT: I also just remembered that the county suspended service at the bus stops local to the intersection, citing violence as the reason. So the intersection is indeed LITERALLY a formally recognized no-go zone for the government!

I'm also forced to avoid the large, lovely park in my neighborhood as it is inhabited by hundreds of drug addicts and mentally ill people living in tents. Someone gets killed there about once a year and I've heard gunshots from their tents multiple times inside my home. A neighbor's home was actually *struck* by a bullet.

2. Perhaps unsurprisingly given what I said above, but of *course* some do.

You usually won't know it, though. The pistol-carrying community is divided on "open carry" vs "concealed carry," but most people favor the latter, so as to prevent the possibility of being snuck up on and attacked *for* one's gun.

Speaking of, this is very necessary and a moral good. For me, carrying a pistol is a profoundly feminist act, as it provides me - a single, 1.57m woman - a degree of independence and physical security which would simply not be possible without a meaningfully powerful self-defense tool. While I take reasonable precautions to avoid getting into situations where I may be forced to defend myself with lethal force (see the answer to question 1), I also don't live in the kind of fear that other women have of being too vulnerable (driving alone at 4 AM, etc).

And carrying a meaningfully powerful self-defense tool has proved to be very necessary. I have used my pistol twice to dissuade direct threats by strangers of what would have been robbery at best and who knows what at worst. One incident was over 20 years ago, at a business in Arizona (I was outnumbered three to one); one was last year, in my condo garage. By sheer luck, both times the pre-incident indicators were slow-moving enough that I had adequate time to recognize the threat of an imminent robbery, reach for my concealed weapon, and make deliberate, fearless eye-contact with my would-be assailants. Both times, that was enough to get them to take their hands out of their pockets and walk away.

And both times, the entire encounter was completely silent. The intruders understood my body language and eye-contact, because they of course *also* carry weapons and know exactly what it looks like when an alert person has their hand on one during a tense encounter, but is waiting for the other person to attempt the first move before they'll use it.

You sounded incredulous and perhaps aghast at the idea of people walking around carrying guns, but I liken law-abiding, responsible gun-owners to adult Cape buffalo.

Cape buffalo aren't predators, but, when threatened by a lion, a healthy adult Cape buffalo is generally capable of intimidating a lion into backing off with no blood shed, or of kicking/stomping/crushing a lion if it comes to blows. A pistol transforms a single woman from a baby orphan zebra (easy eats) into an adult buffalo (not worth the risk).

3. I had a coworker who routinely gave her severely autistic (or, more likely, fetal alcohol syndrome sufferer) child a low dose of melatonin, on her doctor's orders. If she did not, her child literally would not sleep at night.

Expand full comment
Neurology For You's avatar

Also in Seattle. I've never had trouble at the intersection you mentioned, but I stay away from 3rd Ave and Pike after dark; the area flips very dramatically. It has gotten much better during the day under the new mayor. The hotels on Pike apparently tell people not to walk downtown after dark.

Expand full comment
Christina the StoryGirl's avatar

Don't get me wrong: 3rd & Pike is pretty bad!

But due to the local shelter, its proximity to The Jungle et al., and probably some racist opportunism (inflicting the addict population on Little Saigon, a demographic of business owners who largely aren't native English speakers), these days the scale of threat and blight at 12th & Jackson somewhat dwarfs 3rd & Pike.

I mean, it's gotten so bad that King County literally suspended service at the 12th & Jackson bus stops.

Oh, HA! I just realized that means that 12th & Jackson is very, very *literally* an official "no go zone!"

LOL sigh.

Expand full comment
Paul Brinkley's avatar

Hmm. 3rd & Pike must change a LOT from what I see in StreetView right now. Starbucks; Walgreens; Chipotle; looks very walkable.

12th & Jackson shows a strip mall with a parking area surrounded with concertina wire, so I can see it being a problem.

I'm open to the "flips at night" argument, nevertheless. Cherry Hill in Baltimore looks like a fairly neat suburb by day. But it's supposed to be one of the most dangerous neighborhoods in the country.

And then there are the streets between Toledano and MLK just north of Claiborne in NOLA. Just south of it is the Garden District - beautiful, feels safe at 2AM. Four blocks north and I was afraid to park my car.

Expand full comment
Christina the StoryGirl's avatar

StreetView isn't a great tool for assessing a neighborhood, as it's (obviously) literally just a brief snapshot in time. Law enforcement pushes can temporarily change the neighborhood mood; a couple of years ago, 12th & Jackson was briefly fiercely patrolled by the Seattle Police Department following some egregious incidents. They had a "mobile precinct" parked on the intersection and everything. The addicts scattered to the blocks just out of SPD's line of sight, but as soon as SPD moved its focus from the neighborhood, the addicts reclaimed the block, and in much greater numbers.

12th & Jackson is awful around the clock - broad daylight and late at night. While I go way out of my way to avoid the area on my regular commute, I occasionally do have to pass through it, and it's always messy and tense.

Expand full comment
Silverlock's avatar

"Speaking of, this is very necessary and a moral good. For me, carrying a pistol is a profoundly feminist act, as it provides me - a single, 1.57m woman - a degree of independence and physical security which would simply not be possible without a meaningfully powerful self-defense tool."

There is a reason one of the many nicknames for firearms is "equalizer."

Expand full comment
bell_of_a_tower's avatar

Another quote (from where, I have no clue, but it's a common one in the gun-favoring community): "God made man and Sam Colt made them equal."

Expand full comment
re qu iredna mee e's avatar

"Be a man either great or small in size, Colonel Colt will equalize"

I think it used to be just a common saying, at least according to the diary of this 1920s serial killer. Pretty funny but deranged read.

https://www-rohan.sdsu.edu/nas/streaming/dept/scuastaf/collections/Panzram/Panzram-Box01Folder03.pdf

Expand full comment
Viliam's avatar

It seems to me that there is a taboo about calling people good or bad, and while I understand some reasons for that, it seems to me that we have gone too far and basically lost the ability to coordinate against bad people.

I understand that the words "good" and "bad" are often used by children, or by people with very simplistic, black and white thinking. So one can signal sophistication by refusing to use such words, and even actively attacking the ones who do. You can make many good arguments, such as "it's complicated". You can point out that the ethical norms depend on culture, evolve in time, etc. That people can deceive each other about how good they are, that they can have hidden motives that would change your assessment of their goodness, etc. And yes, those are all valid points.

And yet, there is the fact that some people spend a lot of their time helping others, or thinking about how to help others. And there are also people who don't give a fuck about helping others, or people who actively hurt others (including verbally), either because they enjoy the acts per se, or because they enjoy the feeling of power this gives them. And it seems to me that we should have some words that describe this very real thing. Despite all the complications. And yes, you can see the same person acting nice in one situation, and nasty in another situation, and no one is 100% consistent all the time. But also, some people are clearly closer to one end of the scale most of the time, and other people are closer to the opposite end.

Even the famous quote from Solzhenitsyn -- "the line separating good and evil passes not through states, nor between classes, nor between political parties either—but right through every human heart—and through all human hearts" -- which is often interpreted as "there are neither good people nor bad people", is actually taken out of context, and if you read the original text, you will clearly see that the intended meaning was almost the opposite of this. (If you also read the following paragraphs, what the author meant to say was more like this: There are no *perfectly good* people. All good people have some darkness in them, that they constantly need to fight against. But then there are people who don't even try, and who revel in their evil. Those are often followers of some ideology which in their eyes justifies all their bad actions.) So it seems to me that the memetic force against calling someone good or bad is very strong in our culture; strong enough to turn maybe the most famous quote on the topic into its opposite and pass it off as deep wisdom.

There is an obvious problem that we can't read other people's minds, and it is very easy to imagine that deep inside everyone is exactly like me... and if it doesn't seem that way, it's just something preventing them from showing their true nature. It is easy for the good people to believe that deep inside everyone is good, they are just, dunno, some people who suffer so much that they don't have enough energy left to be good, and are instead lashing out in pain... but if you could help them and remove the pain, everyone would become the saint they are deep inside. It is also easy for the bad people to believe that deep inside everyone is bad, it's just that some people are too stupid or too afraid to do the smart thing, which is hurting others to achieve selfish goals... if only they had more courage and less brainwashing, everyone would become the psychopath they are deep inside. And both theories are difficult to disprove, because no one is perfect, everyone has some kind of problem, etc. But you can also see that some people are helping others despite having problems of their own, and some people just don't give a fuck no matter how easy they have it.

I am not proposing any specific mechanism for how people become good and bad. Maybe it is genetic. (Psychopathy is heritable to some degree. I suppose many other things are.) Or maybe we are all mostly blank slates in this regard, but some early behaviors can bring us close to a powerful attractor: we get rewarded for the good/bad things we do, we establish good/bad habits, we build a self-image and a public persona... and then it would be too difficult to throw all of this away and start doing the opposite of that. I don't know. As they say, "further research is needed". But doing the research on the origins of good and evil is step 2. Step 1 is to admit that such things exist.

It is good to be skeptical about these things, but some skepticism goes too far. There is a kind of skepticism that says "examine the evidence more carefully, take different perspectives, look at the alternative explanations, consider the incentives, etc.". And there is also a thing that calls itself skepticism that says "don't even bother looking at the evidence, here is the answer that will make you look cool and edgy". Yes, hypocrisy exists. But sometimes people just, dunno, donate to charity, maybe even anonymously, without that being a part of some larger sinister plot.

So, what would be the proper way to talk about this? (Hopefully something better than just listing a hundred different reasons why this is complicated. Yes, I know that the topic is complicated, but I would prefer to do something other than give up talking about it, which seems to be the status quo.) I am not even sure we can find a solution on a large scale -- there are too many people motivated to push back against any such attempts. (The bad people, who prefer not to be exposed. Leaders who want to define good as being a member of their tribe and obeying them.) But there should be some personal advice that the smart and good people could e.g. give their children. (Ideally, without immediately facing outrage from anyone else who heard it. But maybe that is too much to ask for.)

Expand full comment
re qu iredna mee e's avatar

I very rarely use the words "good" or "bad" to describe people. It usually just makes people think you have a personal grudge against them that probably isn't very objective.

How about just sharing specific, concrete concerns? Like, "I noticed that Bob shared my confidential information with Susan, so I wouldn't trust him with anything you want to keep private."

Or, "Susan stiffed the contractors that built her home, and I often hear complaints about her behavior from everyone who has interacted with her. I would be wary of doing business with Susan."

Just saying "Susan is bad" without providing sufficient supporting evidence is likely going to reflect worse on you than on Susan. That's why most Americans seem to hate Kamala Harris more for saying Trump is a fascist than they hate Trump for acting like one.

Expand full comment
Viliam's avatar

Yes, specific examples are best, when available.

(Though sometimes you don't have a legible example. Like, it's not one big thing, but hundred small things, and each of those small things can be explained separately.)

Expand full comment
Edward Scizorhands's avatar

> As he was setting out on a journey, a man ran up, knelt down before him, and asked him, “Good teacher, what must I do to inherit eternal life?”

> Jesus answered him, “Why do you call me good? No one is good but God alone.”

Expand full comment
Viliam's avatar

Well, this in my opinion is a part of the problem. When you redefine "good" to mean "literally divine", then of course you can conclude that there are no good humans. You can even succeed in making everyone feel guilty and getting them to subscribe to your religion.

I am interested in comparing humans, according to human standards of behavior.

Expand full comment
Nancy Lebovitz's avatar

Excellent discussion -- where can I find the extended Solzhenitsyn quote? What chapter is it in?

I think part of the problem is the idea of perfect altruism. People who give while getting *nothing* back. Not self-esteem or praise or money or future favors. Nor the fun of the work, though that usually isn't mentioned.

I blame Christianity for this inhuman standard, which makes actual practical human generosity seem like it isn't good enough.

Expand full comment
Viliam's avatar

Oh, it's somewhere in the Gulag Archipelago, but no idea which book or chapter.

I guess if you download the books from Library Genesis, and search for the known part of the quote, you will find it.

The impossible standards are a cheap way to dismiss all realistic achievements. It's like looking at Olympic runners and saying "lol, these guys can't even achieve a fraction of the speed of light, losers". Except, the one who says this is at least not pretending to be faster than the Olympic runners in some metaphorical sense. While the Christian message is: "the ingroup is morally better than the outgroup, because those losers are mere humans, and we are divine by proxy".

Expand full comment
Nancy Lebovitz's avatar

I've been looking at the idea that people who impose impossible standards don't want you to succeed.

Expand full comment
Viliam's avatar

Probably both: to dismiss you if you succeed, and to discourage you from trying (because you already know that your potential success will be dismissed).

Expand full comment
Gres's avatar

At the public “coordinate against them” level, I think you’ll need to frame your position in terms of ethics. “Ethics” describes socially-agreed rules that are clear-cut enough to justify coordinating against someone. People already form coalitions against some organisations they regard as unethical.

Expand full comment
Performative Bafflement's avatar

> So, what would be the proper way to talk about this?

I assume you mean publicly, in one-to-many communications? Because among friends or in small groups, you can just say "you know how some people are assholes? Here's what we should do about them," or whatever fits your reason for wanting to talk about the fact that some people are assholes. Publicly, I really think it depends on the audience and your intended communication, and it's going to vary quite a bit depending on those things.

In terms of giving advice to children, they already encounter selfishness and aggression nearly every day if they do school or daycare. You naturally advise them and help them navigate those people and personality traits in the course of those things. I think it's pretty straightforward to talk about why you may not want to spend time around selfish people or people who bully others, to advise them on the best ways to handle those behaviors, and to demonstrate those coping strategies and behaviors in your own life, too.

Expand full comment
Viliam's avatar

So far, with my children I can easily have a no-nonsense aspie-to-aspie communication about ethics. Maybe because they were not exposed to strong competing memes yet. ("Some people care about being nice to others, or helping others. Some don't. I don't know why. I think it is better to hang out with those who care. They could be lying, of course. You need to observe what they actually *do*, not just what they say. Yes, I know it is difficult.")

My immediate frustration mostly comes from reading ACX discussions, which is why I posted it here. Like, Scott barely can mention altruism, or effective altruism, without getting dozens of comments in style "hey, we are all smart enough here to know that people don't *really* care about others; they are just signaling, or trying to get something in return, or using this topic as a way to attack their opponents, etc... and if they don't do any of this, then they are simply stupid and brainwashed". Not using these words exactly, but this is the spirit.

And I have also known many people like this in real life, but I have already filtered them successfully out of my bubble. Like, people who couldn't hear about any charity without immediately saying "this is obviously either a scam, or a part of Soros's effort to destroy our beautiful country", implying that there is no way someone could just genuinely wish to do something good, and that anyone who says that is either a part of the conspiracy, or obviously a complete idiot.

Meanwhile, I don't deny that many charities are scams (which is why we should evaluate them, duh), but come on, there is e.g. the open-source movement, where people all around the world produce millions of dollars of value just to freely give it away (and yes, some of them are making profit on that, but I am not talking about those now), there are people who organize e.g. the scout activities, there are teachers who keep debating how to teach their subjects better despite the fact that it has absolutely no impact on their salary or job safety, there are people who adopt children from orphanages, or who send half of their salary to anti-malaria charities, etc.

So, from my perspective, the world (possibly just a tiny bubble around me) is full of goodness (which of course is very far from universal); and then you have this huge cultural denial (another bubble, but seems quite large) of all of that. It's like different people live on different planets. Not just a disagreement about how rare or frequent some things are, or how pure the motivation behind them is, but a complete denial that it even exists. It is a very dark place in my opinion, although the people who live there seem to be quite okay; for them this is just business as usual.

I worry about the people who grow up in this dark bubble. Like, some of them have a potential and maybe even a desire to be different, but they "know" that the goodness is all fake and that only idiots believe otherwise. I think there should be some credible way to tell them "nope, that's just your bubble; maybe your parents prefer to be there, but you don't have to". To advertise goodness in a way that is not connected to e.g. recruiting for a specific religion (that seems to be the popular stereotype). Like, you don't have to find Jesus, or learn to write open-source programs, or... basically do any of these specific things; it's just the notion that there are people out there who genuinely are good (and also people who are not; and people who are not but they pretend to be), so it is a natural thing, you just have to go out and find it.

I also wish it was easier for the good people to coordinate. For example, once I met a guy who was a bad person, but no one has warned me (and as a consequence he scammed me out of some money; no big deal, he did much worse to some others), and we even had some mutual acquaintances and I have kinda asked them about him (my mistake: I didn't ask any specific questions, just in general what kind of a person he is), and I didn't get any negative reports (until it was too late), because everyone basically followed the social norm of "you should not judge other people" and only told me some nice things about him (e.g. that he was smart; which is true).

And finally, I also have an academic curiosity about this. It might be interesting to somehow measure goodness, and how it correlates to various things. Of course, that would probably be a completely radioactive topic. God forbid you find different levels of goodness between different groups of people. So probably not going to happen.

Expand full comment
Xpym's avatar

>hey, we are all smart enough here to know that people don't *really* care about others; they are just signaling, or trying to get something in return, or using this topic as a way to attack their opponents, etc... and if they don't do any of this, then they are simply stupid and brainwashed

This is also basically an aspie thing, speaking from experience. From a very early age I've noticed that normal people are giant hypocrites about morality, don't seem to be particularly aware of this, and don't consider it a big problem when this is explicitly pointed out to them. So, for a long time, I thought that since everybody else is evil (in the "banality of evil" sense, as I later learned), there's no point to strive to be good.

Well, I still think that, but the EA community convinced me that some people really have that altruistic drive, and for them it makes sense to be better than average, but for most people there isn't, so they aren't.

Expand full comment
Viliam's avatar

Non-aspies remain a mystery to me. They seem to bend to social pressure like grass in the wind, but at the same time hypocrisy prevents them from being blown away -- their following of the social pressure is mostly verbal, less so in action, unless someone actively checks on them.

(This is not necessarily a bad thing. The social pressure could be evil, and then hypocrisy means not doing the evil things you were supposed to.)

Expand full comment
Xpym's avatar

Well, there are two kinds of social pressure - to say the right things, and to do what everybody else does. Normies either don't notice that those are profoundly different, or ignore contradictions, while for aspies this state of affairs is absurd, and they either get blackpilled, or become rebels.

There's also an innate altruism drive, probably highly variable, the high end of which for normies makes them become better than average teachers etc, and for aspies to do things like finding effective charities.

Expand full comment
Performative Bafflement's avatar

> So, from my perspective, the world (possibly just a tiny bubble around me) is full of goodness (which of course is very far from universal); and then you have this huge cultural denial (another bubble, but seems quite large) of all of that.

> I worry about the people who grow up in this dark bubble.

> I also wish it was easier for the good people to coordinate.

I've noticed the same things and agree it's a problem. This seems like it's trying to resurrect the good parts of religion, because formerly (and still today, for actively religious folk) all these things were handled by your religion and religious community / social group.

But us godless irreligious don't have this, and feel the loss, and in many societies, that's the dominant cultural paradigm. It's partly "Bowling Alone," partly "god is dead" and post-modern moral relativism, and partly "the only ultimately community-wide agreed standard of good and bad is the law, so things that are shitty but not explicitly illegal just sort of get a pass."

I'm not sure what the solution is - we're not going to undo Bowling Alone dynamics, they're likely to get worse (and have been getting worse for decades). We're also not likely to undo post-modern moral relativism and the law being the only ultimate common standard.

I think it boils down to adopting a religion or cultivating your garden and feeling sorry for the people who can't do those things, because there's no possible society-wide solution that doesn't run afoul of one or more of "individual autonomy," "moral relativism," or "you're not allowed to judge other cultures."

Expand full comment
Greg G's avatar

I think this is a very worthwhile, although challenging, line of inquiry. My very rough mental model is that some small percentage of people are very good (e.g., people who work in Doctors Without Borders or otherwise actually sacrifice for the good of others), most people are reasonably good, and another small share are pretty bad and "why we can't have nice things." Grifters, criminals, sadists, etc. I think it's important for kids to learn that not everyone meets a minimum bar of goodness and to keep an eye out for those people. I also think society could use a better discussion of this topic that doesn't just devolve (entirely) to tribalism. For example, I have a lot of sympathy for criminals who grew up in difficult circumstances, but obviously not everyone with a similar starting point turns to crime. And similarly, some people who are privileged still love denigrating or scamming others.

Expand full comment
Mark Roulo's avatar

I believe that there is some (maybe a lot of) variance in how people will behave based on their environment. Put most people in a 'good' environment and they will behave in mostly good ways. Put them in a 'bad' environment and ... they won't be so good ... sometimes a lot not so good.

Short (sad) example is Temar Boggs, which I document briefly here:

https://mistybeach.com/mark/#Lead_Me_Not_Into_Temptation

If my belief is correct then there is value in creating and maintaining 'good' environments.

Expand full comment
rebelcredential's avatar

Probably off topic, but:

This reminds me of a thought I've been having about the dating industry, which is that dating has essentially been re-framed as a product you buy/a service you sign up to.

As in, you've done your bit by joining the app, now it's the company's job to put a menu of possible candidates in front of you. If none of them work out, guess this is a shit site so go and try somewhere else.

This completely ignores the framing that dating is a skill issue. Being in a relationship requires a certain set of skills and abilities and without them, nothing long term is going to work for people.

In short, some people are good and can support relationships, others are bad and undateable and need to skill up.

Of course it would be PR suicide for anyone to say this out loud, which is why no one does.

Expand full comment
Viliam's avatar

There should be a way to *train* those skills.

People complained about the pick-up artists. And yeah, they only taught a small subset of the skills you need, and even those were optimized for picking up women in bars rather than having a long-term relationship. But still, it was a training you could have joined. What is the mainstream alternative?

It would be nice to have some kind of service that would assess you from different perspectives and give you a complete report on your appearance, clothing, speech, behavior, etc. And then offer courses on the individual parts, even if by recommending some other service that specializes in that.

(But "being good at dating/relationship" is not the same as the "being good" I had in mind. I know people, of both sexes, who spend a lot of time doing various altruistic activities, and who have a difficulty getting a date. They are fat, depressed, anxious...)

Expand full comment
rebelcredential's avatar

When that assessment service you talk about is provided by gymbros and red pill guys, it's called evil toxic masculinity. When it's provided by therapists and counsellors it's called empowering self care or whatever. Both can be helpful to some, waste the time of others, and seriously fuck up people who use it wrong. But society loves the one and hates the other.

> But "being good at dating/relationship" is not the same as the "being good" I had in mind

Yeah, I know, that's why I thought this would probably be off topic.

Expand full comment
Performative Bafflement's avatar

> When that assessment service you talk about is provided by gymbros and red pill guys, it's called evil toxic masculinity. When it's provided by therapists and counsellors it's called empowering self care or whatever.

Yeah, this is a tough one, because due to natural teenager / familial dynamics and app-driven dating dynamics, it's not something you can give / get advice on in a parent / child relationship either.

Anything they can tell you would seem as charmingly quaint and out of touch as "oh, just buy a house for two penny whistles and a moon pie, like I did. And if the seller is on the fence, you might offer a shiny quarter on top!"

There's probably not really an answer. Dating and finding a mate is a Red Queen's Race, so the landscape is inherently competitive and constantly changing, and a lot of people's ability to influence things or even benefit from advice from people who are more successful at it is limited by relatively immutable characteristics like Big 5 / OCEAN, and size and attractiveness, and neither gender wants to "settle" because of various social and online / media landscape dynamics.

Expand full comment
Aris C's avatar

Working on a blogpost on the idea that we place too much emphasis on words; the gist follows. Would love thoughts and counter-arguments before I finalise it: a core assumption underpinning the culture wars of the past 20 years is that words (and gestures) matter. I’ve said before that many contemporary debates are just semantic games, but even when people agree on the semantics, they still assign way more importance to words than they merit.

I like words. My substack’s name is Logos (‘that which is said: word, sentence, speech, story, debate, utterance, argument’); I created an app to find the most beautiful word in English; I’m building an app for people who love books; I have a shelf dedicated to books on language and linguistics. But I think words matter less than we assume. Not a little less: way, way less.

We’re told that words crystallise and normalise behaviours and attitudes. Use racist language, and you’re encouraging racism; conversely, affix preferred pronouns to your name, and you normalise acceptance of different gender identities. Though this is true to an extent (a racist probably feels empowered when they hear racist language around them, and is therefore more likely to act on their prejudices), it becomes problematic when language becomes dissociated from intent, and is criticised as a stand-alone entity. In other words, when we confuse the map (words) with the territory (attitudes and actions).

I once visited St. Paul’s, and there were messages shown inside its rotunda. One was ‘words meant to humiliate are obscene’; I believe the inverse is also true: words not meant to humiliate are not obscene. We feel that words matter so much that we’re horrified when someone uses racist language by accident (for example, because they did not know a particular word is now racist; these things change quite quickly!), even when we know full well that the person has no ill intent, that they are not racist at all.

This matters because we are now at the point where words matter so much, they’re all that matters. We judge people on what they say or don’t say, and not what they do or don’t do. There are far more opinion pieces on the racist or sexist things Donald Trump has done than there are analyses of how he treats minorities and women in his companies. Same with Musk: everyone is obsessed with his alleged Nazi salute, not whether he has exhibited anti-Semitic behaviour in practice. Or, how many women have complained about guys using ‘allyship’ language and saying all the right things, but who turn out to be as selfish, or even abusive, as their less vocal peers? Conversely, how many people do not partake in the performative use of ‘inclusive’ language but treat others with respect?

Expand full comment
Jeffrey Soreff's avatar

nit, probable typo:

>opinion pieces on the racist or sexist things Donald Trump has done, than there are

Was "done" intended to be "said"?

Expand full comment
Paul Brinkley's avatar

If you show me an article about words, I'm likely to think in terms of words and the meanings beneath them. So if you refer to "words" and "semantics", I'm going to be pretty alert for whether you're conflating the two.

I've believed for years that words are to meaning as money is to wealth. The former is often the primary way to refer to the latter, but it's an instrument with some error bars. Unlike money, words can sometimes pinpoint meaning more accurately, by applying more words with small semantic overlaps.

Either way, the former is wholly meaningless except for its access to the latter.

Expand full comment
Nancy Lebovitz's avatar

I think part of what's going on culturally is that words are easier to prove than actions, emotions, or thoughts.

Expand full comment
Christina the StoryGirl's avatar

> I once visited St. Paul’s, and there were messages shown inside its rotunda. One was ‘words meant to humiliate are obscene’; I believe the inverse is also true. Words not meant to humiliate are not obscene.

"Humiliation" was the wrong word to use there. For "humiliation" to occur, a person necessarily has to *agree* that the words being used against them are an accurate negative characterization, of which they should feel ashamed.

Expand full comment
FLWAB's avatar

>"Humiliation" was the wrong word to use there. For "humiliation" to occur, a person necessarily has to *agree* that the words being used against them are an accurate negative characterization, of which they should feel ashamed.

I think this ties into the cultural difference between WEIRD nations and others: WEIRD people are all about guilt, not shame, and many other cultures are about shame, not guilt. The difference between the two is that guilt is an individual emotion you feel about your own actions. I would agree that if you don't believe you are guilty, then someone calling you guilty won't make you feel guilty. But shame is a communal activity. It doesn't matter if the humiliating words are true, what matters is if your community believes them. You can be shamed for things you didn't do just fine, and in more communal cultures someone throwing out lies meant to humiliate you can humiliate you in the eyes of the community just fine.

Expand full comment
Aris C's avatar

One can intend to humiliate without managing to do so.

Expand full comment
Christina the StoryGirl's avatar

Joke's on them, then. *shrug*

Expand full comment
Firanx's avatar

I agree completely. But I think a significant part of the issue is availability. Everyone can link you to a tweet someone posted publicly 15 years ago. There's usually zero doubt about them being the author. Whereas stories from employees might just not surface, or be unreliable accounts, or depend on wrong impressions and misunderstandings. It might be harder to get people outraged when there's reasonable doubt.

A pedophile who spends months grooming a 14-year-old girl from another state to maybe have a chance of a one-month affair probably does orders of magnitude less harm than a stepfather raping his stepdaughter on most days for years starting at 9. But one can trap the former into revealing ~~themselves~~ himself and people will be outraged about him, and having learnt about enough cases like this might even start associating "predators" with the first kind more than the second. Doesn't seem easy to correct though, especially without appearing to be defending the first type of pedophiles. (I do think it's overblown and they don't deserve spending 10 years in prison being raped every day, but it is a serious crime. Likewise, James Watson doesn't deserve being stripped of all his awards and made persona non grata in scientific journals if he writes a professional paper, but a lot of his statements that received backlash are indeed controversial at best...)

Expand full comment
Nancy Lebovitz's avatar

I wasn't expecting you to go in that direction, I was expecting it to be about years of in-family abuse about her being ugly and useless.

Expand full comment
Remysc's avatar

I recently got into a very similar train of thought, and the way I make sense out of it is that this bit:

>In other words, when we confuse the map (words) with the territory (attitudes and actions).

Is actually wrong. I think the judging, in public, is the whole point. It doesn't really matter if Musk is antisemitic or not, or what Trump has actually done, trying to draw attention to these can even get you judged in return, the point is to identify oneself through opposition, what is often called virtue signaling.

That's why this problem becomes worse the more terminally online a community gets and why so many people secretly love all the drama surrounding Trump.

Expand full comment
Viliam's avatar

Judging people by their language also means judging them by their verbal skills, and by their social class. For some people it is difficult to remember all the right words and manners of speaking. Could be that everyone around them speaks a certain way, and it is natural to copy. Or they might not be Americans, or not native English speakers, and unaware of, say, the racial connotations of watermelons.

That said, if someone e.g. designs their logo to be *almost* a swastika, just with one short line pointing in a slightly different direction so they can defend themselves by saying that this is technically not a swastika... it's up to you how much of a coincidence you are willing to believe in.

I am in favor of talking about people's actions more, and analyzing their speech patterns less. But here the medium shapes the behavior: on the internet, it is easy to quote someone's words (make a screenshot, use an internet archive), while the actions mostly happen offline.

Expand full comment
Amplifier Worshiper's avatar

Words matter even more in a soundbite and recorded culture. In person we have so many other cues that share info so the words are important but not everything. The old guidance used to be “say it with a smile and you can say anything.” That works because in person there is a tremendous amount of non verbal information.

Marshall McLuhan drew the distinction between literate and non-literate man. Literate man has his world shaped by the medium, and how they engage is fundamentally different. If I were doing an article on words, I’d investigate that angle. Specifically, words have taken over in the Information Age because the distance between their use and the audience has paradoxically grown, stripping away other information in the process.

Expand full comment
Aris C's avatar

Question on education: most regular learning systems space out learning over long periods. For example, in any given week at school, students do a few hours of maths, a few hours of English, etc. What's the rationale for doing this instead of grouping subjects by week, so e.g. dedicating a few weeks doing only maths, followed by weeks on other subjects? Has the latter been tried, and how does it perform?

Expand full comment
Julian's avatar

Montessori education allows students to choose which work (subjects) they want to work on. Most people have exposure to Montessori at the toddler or preschool level, but it works at any age. Teachers will encourage students to explore other work if they see that student not being exposed to a broad range of subjects/skills, but there is no idea of "now we are doing math, now english, etc."

At higher ages (late elementary and beyond) there will be more structure but that is, to a large extent, because of government testing requirements and the likelihood that students will eventually move on to a "traditional" school (middle school is the latest most Montessori schools that I know of go). It's possible for a student to spend 75%+ of their time working on math if that is where their focus is.

I'm not an expert in this, just reporting what I have read and seen at my own kid's preschool. A good writer on this is Matt Bateman who was a Montessori teacher then founded a chain of Montessori schools. He writes at https://mattbateman.xyz and https://higherground.substack.com and is a good twitter follow too.

My personal opinion is that most decisions made by traditional schools (even high level private ones) are for the benefit of the school first and students second.

Expand full comment
Brandon Fishback's avatar

If you spend 8 hours a day on one subject, it’s going to be nearly impossible to get most students to sustain focus that long. You switch things up so it’s not as monotonous.

Expand full comment
Lurker's avatar

This is just throwing out a random idea, but many students have subjects they like (or at least tolerate) and some that they really don’t. By having a bit of each every week, it’s easier to see “the light at the end of the tunnel”.

For instance, for a student who hates math, two weeks of doing just math in class would feel pretty horrible. While the rest can make, in current schedules, the 5-6 weekly hours of math more bearable.

Expand full comment
Nancy Lebovitz's avatar

I've seen an argument that the sorry state of American math education is a result of half-assedly teaching a subject one year, and then hoping to pick it up after a summer break. Not enough actual teaching and too much boredom with repetition.

Expand full comment
The Ancient Geek's avatar

There's a summer break for everything.

Expand full comment
Lurker's avatar

How careful was the investigation of this? This sounds like a useful insight, but is there reason to believe that American teachers half-ass the job more, or that students forget more during the summer break?

Then again, the French performed pretty poorly in recent international surveys, and I think they have the longest summer break, so there could be something to it.

Expand full comment
Nancy Lebovitz's avatar

I'm not solid on that. The part that gripes me is failing to teach adequately in the first place. I find it relatively easy to remember things I understand.

Expand full comment
vectro's avatar

Summer break learning loss is well documented. See for example https://www.brookings.edu/articles/summer-learning-loss-what-is-it-and-what-can-we-do-about-it/

I can only assume that the educational system's complete failure to respond to this situation is the usual combination of institutional inertia, teachers' unions, and parental pushback.

Expand full comment
demost_'s avatar

Apart from the spaced repetition stuff, if topic A builds on topic B, you should have at least one night of sleep between the two lessons, and preferably a week or a month between them.

I have given summer school courses for talented children (age ~14), in my case about cryptography and number theory. That's two weeks of intense immersion into a single topic. In terms of hours spent (70-80 hours), it's about half a school year of math education (20 weeks x 5 lessons x 45 minutes = 75 hours). That's a lot of fun, and the students learned a lot, but it only works because the students are extremely highly motivated. And also because they are very intelligent.

And I felt that towards the end, we approached the limits of how much new material can build on earlier material. I had tried to avoid stacking levels. For cryptography, this was easy. For math, I introduced them to modulo arithmetic, and some things had to build on that (like the Euler-Fermat theorem for the RSA crypto scheme), but the number of levels was still small compared to other things that are taught in school, like limits->differentiation/integration->calculus. So I would say that going into breadth is not a problem, going into depth has its limits.
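For readers who haven't seen the stack being described: modular arithmetic feeds the Euler-Fermat theorem, which feeds RSA. A toy illustration in Python with deliberately tiny, insecure numbers (the standard textbook example, not anything from the actual course):

```python
# Toy RSA, showing the stack: modular arithmetic -> Euler-Fermat -> RSA.
# Numbers are deliberately tiny and insecure (real keys use huge primes).
p, q = 61, 53                # two small primes
n = p * q                    # 3233, the public modulus
phi = (p - 1) * (q - 1)      # 3120, Euler's totient of n
e = 17                       # public exponent, coprime to phi
d = pow(e, -1, phi)          # 2753, modular inverse (Python 3.8+): d*e = 1 (mod phi)

msg = 65
cipher = pow(msg, e, n)      # encrypt: msg^e mod n -> 2790
plain = pow(cipher, d, n)    # decrypt: Euler-Fermat guarantees msg comes back
assert plain == msg
```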

Expand full comment
Slowday's avatar

You can introduce the course with something like "Number theory was studied by many mathematicians because it was considered useless and without practical applications, particularly military ones. However, it instead turned out to be very useful for cryptography, which is important to the military. Bummer."

Expand full comment
Viliam's avatar

Some schools do project-based learning. I don't know how they perform in comparison.

If I tried homeschooling, I would probably only do one or two topics each day. When my kids are sick, and they are doing at home all the stuff that was done at school (plus the homework), we naturally do it by subject, like first all the math, then all the language, etc.

There is a good argument against going too far in one topic in a short time. Suppose you teach children addition and subtraction. I wouldn't recommend starting with subtraction immediately after you have explained addition. It is better to explain the addition, then take a break, on another day review it and maybe try more challenging examples, then do something else for a moment, and only a few days later introduce the subtraction.

But this doesn't say how long the time intervals should be. The natural way would be "when we have finished a topic" (or in case the topic is too large, "when we have finished a subtopic"). Also, intuitively, the difficult lessons should be interspersed with breaks or easy activities, not with difficult lessons on a different topic. For example, math, then gym, then more math. But if you have a difficult topic on math, and a difficult topic on chemistry, I would suggest taking one topic to the completion first (interspersed with breaks and easy activities), and only then starting the other topic.

But this kind of optimization you can only afford in homeschooling or tutoring. In school, coordinating all these things would be quite difficult. (Have you ever tried to assemble the timetables for the entire school? That's like a week of full-time work, even if you use a computer. And that abstracts away from the content of the lessons.) So I think the school system is more like "the best we can do given the resources" rather than the ideal way to do it. If the same teacher teaches two or more subjects, they can decide "today, instead of geography, we will have another math lesson, because I want to conclude this topic; next week we will have two geography lessons instead". But often, those are two different teachers, and they have to teach lessons to different classes, etc. So we end up with lessons cut into 45-minute intervals and arranged randomly, because that is logistically the easiest way to do it.

EDIT:

Yeah, spaced repetition is also a thing. But those are two different concerns: (1) whether you should explain a new topic all at once, or in 45-minute blocks interspersed with other potentially difficult topics, and (2) when you should review it. If you focus on the project learning too much, you may miss out on the repetition. Maybe the best way would be teaching in big blocks, but starting each day with a short review of all the other subjects?

Expand full comment
Wasserschweinchen's avatar

There is evidence that spaced repetition is helpful for at least some types of learning. https://en.wikipedia.org/wiki/Spaced_repetition
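For anyone curious what "spaced repetition" cashes out to in practice, the scheduling core is tiny. A minimal sketch in Python, loosely in the SM-2 family of algorithms (the constants are illustrative assumptions, not the canonical SM-2 values):

```python
# Minimal spaced-repetition interval update, loosely in the SM-2 family.
# The constants are illustrative assumptions, not canonical SM-2 values.
def next_review(interval_days: float, ease: float, recalled: bool):
    """Return (new_interval_days, new_ease) after one review."""
    if recalled:
        new_interval = max(1.0, interval_days * ease)  # grow the gap
        new_ease = min(3.0, ease + 0.1)                # reward success a little
    else:
        new_interval = 1.0                             # lapse: see it again tomorrow
        new_ease = max(1.3, ease - 0.2)                # shrink the multiplier
    return new_interval, new_ease

# Example: three successful reviews push a card out to roughly 18 days.
interval, ease = 1.0, 2.5
for _ in range(3):
    interval, ease = next_review(interval, ease, recalled=True)
print(round(interval, 1))  # about 17.6
```

The exponential growth is the whole point: reviews land just before you would otherwise forget, instead of being bunched into one block or postponed past a summer break.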

Expand full comment
Brett's avatar

2028 is pretty soon, but enough time for programmers and software developers to start learning how to become plumbers and HVAC repair folks for when AI eats their jobs.

Expand full comment
Julian's avatar

I just read Australia has a shortage of like 30,000 electricians!

Expand full comment
Edward Scizorhands's avatar

lol learn to plumb

Expand full comment
vtsteve's avatar

Skilled blue collar, and I already do my own plumbing and wiring. After 40 years of saving, my retirement plans (~5 yrs. away) are post-scarcity or, more likely, extinction, and the main effect is increased spending on travel and hobbies.

Expand full comment
Brett's avatar

I'm hoping that if we do get Super AI by 2028, it gets commodified pretty quickly by imitation so the creators of it can't just enjoy massive monopoly rents at everyone else's expense.

Fortunately, the flipside of super-intelligence and ultra-cheap, dexterous robotics is that it would be absurdly cheap to provide an extremely high basic standard of living for everyone. Big limiters would be natural resources, energy, and the limitation of desirable scarce land. Energy is probably solvable, natural resources can be solvable with enough energy or with space resource extraction, and land is solvable either with an allocation method of land that people consider "fair" or with big space colonies with artificial "land".

Expand full comment
MartinW's avatar

How much time do you expect between AI being able to completely take over a good software engineer's job, and somebody hooking it up to a robot body and then it's able to completely take over *anybody's* job?

Also, how many plumbers does the world need? More than there are right now, I certainly agree, but you can't really run an economy entirely on plumbers and HVAC repair folks.

Expand full comment
Melvin's avatar

LLMs are pretty good at coding right now, but I have no idea how you'd get one to design a robust, useful, durable and economically viable robot body.

Expand full comment
Jeffrey Soreff's avatar

I'd expect the LLM to be driving a CAD program - and getting feedback from it (collision detection etc.). I was in electronic CAD, so I only have a hazy idea of what mechanical CAD programs do, and what sort of software tools they have. Finite element analysis, of course, but wear estimation? Fabrication cost estimation? Finite element analysis _during_ movement?

Expand full comment
Brett's avatar

I wasn't being 100% serious in my reply, but in all seriousness I don't know.

I do think it will take longer for it to replace blue collar jobs than white collar jobs because it requires creating a highly versatile, reliable robot body to do the former. That has real production costs and energy/maintenance requirements that an AI can't change just by getting smarter (although it can obviously try and find energy-efficient and cheap designs).

Expand full comment
Matto's avatar

Is there a udemy course to become a 10x plumber? Any emerging thought leaders in this area?

Expand full comment
Slowday's avatar

Known as a 'crack plumber', I believe.

Expand full comment
Peter Defeel's avatar

The plumbers who are only plumbing plumbers’ houses will be golden.

Expand full comment
Benjamin Ikuta's avatar

Anyone use Telegram? I just joined @slatestarcodex but there are only a small handful of people.

Expand full comment
beowulf888's avatar

There was an interesting article in the New Yorker about neo-Whorfian cognitive studies. Language seems to define how speakers map time onto space. And the native speakers of languages with more descriptive words for smell and taste than English were better at identifying specific smells and tastes than English-speakers.

https://www.newyorker.com/magazine/2024/12/30/how-much-does-our-language-shape-our-thinking?

Meanwhile, this paper suggests that the Müller-Lyer illusion is hardwired into our brains. It puts earlier reports that people from certain cultures may not see the illusion into doubt. I don't know, but I'd be curious to see whether the Segall et al studies from the 1960s can be reproduced. One of the theories put forth to explain the Segall data was that the relative presence of carpentry in a society — including features such as rectangular dwellings, right angles, and sharp corners — determines whether its members see the illusion. I suspect most of humanity has been exposed to square-cornered buildings by now, though.

https://osf.io/preprints/psyarxiv/y7mtf

Expand full comment
Benjamin's avatar

Can you see through the illusion? (Wikipedia suggests maybe around 20% of people can.) For most illusions I can't (e.g. the turning mask face one). Here I can, at least for the simple version; the dynamic one and the 3D one they show on Wikipedia are trickier for me: https://de.wikipedia.org/wiki/M%C3%BCller-Lyer-Illusion

It also feels like there are some sub-sensations related to the distance and the arrows, and to how the focus can shift and how different "objects" with or without the arrows are created. This holds for all the variants.

No idea whether this is connected to insight meditation, or whether anyone else perceives this.

Expand full comment
beowulf888's avatar

I see the M-L illusion, but some other illusions don’t seem to work for me. I’ve never kept track of which ones don’t work for me to see if there’s some commonality to them.

Expand full comment
Peter Defeel's avatar

That link says the opposite though. That it’s not culturally bound.

Expand full comment
GameKnight's avatar

Standing offer to ACX people: if you have random grunt work a schmuck with a computer and a decent degree of math/coding ability can help with, I've got time on my hands and am happy to help. I'm good with Python/Javascript/C++ specifically, but I'll work on anything I can reasonably learn/help with. Feel free to message me, I'll try to get in touch within the next day or so.

Expand full comment
Fedaiken's avatar

I'm a little late here, but DMed you with some challenges I've been facing. Thanks!

Expand full comment
GameKnight's avatar

Messaged you a while back, didn't hear from you, and then forgot to re-check on you, apologies: if you still need help with any of that, message me again about what exactly you're having issues with.

Expand full comment
Fedaiken's avatar

Will do thanks for bumping me! Work got busy and the personal projects had to be put on the shelf. Will get you a response this weekend in direct chat.

Thanks!

Expand full comment
ratopt's avatar

Are you interested in indie game development? What do you charge? I'm making this game: https://quest.chrisflash.com

Expand full comment
Shaked Koplewitz's avatar

Can you help me with webscraping?

Expand full comment
Ferien's avatar

Can you help me with implementing Fourier interpolation/inpainting/extrapolation in C++ or Python or JS? I.e. the kind which finds a solution where the known pixels are the same as in the input and sum(abs(e)^K) is minimal (where K is about 1). I want something without AI that produces stable results.
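Not the asker, but in case a sketch helps: if the e's in that sum are the Fourier coefficients (my reading of the spec), then the K = 1 case is L1-minimization of the spectrum subject to matching the known pixels, and the standard non-AI workhorse for that is iterative soft thresholding. A minimal numpy sketch under that assumption, with untuned, illustrative parameters:

```python
# Sketch: Fourier-sparse inpainting by iterative soft thresholding.
# Approximately minimizes sum(abs(fft2(x))) -- the K = 1 case -- while
# keeping the known pixels equal to the input. Parameters are untuned.
import numpy as np

def fourier_inpaint(img, known, n_iter=300, frac=1e-3):
    """img: 2D float array; known: boolean mask, True at trusted pixels."""
    x = np.where(known, img, 0.0)
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        mag = np.abs(F)
        t = frac * mag.max()  # threshold as a fraction of the spectral peak
        # Soft-threshold: shrink every coefficient's magnitude by t,
        # zeroing the small ones -- this is what promotes spectral sparsity.
        F *= np.maximum(mag - t, 0.0) / np.maximum(mag, 1e-12)
        x = np.real(np.fft.ifft2(F))
        x = np.where(known, img, x)  # re-impose the known-pixel constraint
    return x
```

The same loop ports directly to C++ or JS around any FFT library; decaying the threshold across iterations usually improves stability.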

Expand full comment
re qu iredna mee e's avatar

Random question for parents: What do you think about letting kids earn money for educational activities outside of the school curriculum? Something like paying them $1 for completing a math worksheet, $5 for writing a short story or doing some simple coding for an Arduino, etc. And then they can use the money to buy candy, trinkets, and toys.

When I was a kid, my parents tried to get me into a lot of stuff (piano, violin, Kumon) and it just never took because I didn't really get any immediate reward for it. I think money bribes would have motivated me to learn and do so much more. But I'm curious what other people think about this. Has anyone tried this? How has it worked out?

Expand full comment
Joshua Greene's avatar

If you can afford the time, the most effective inducement is spending time doing the activity along with them (or maybe doing your own version, e.g., "parallel play.")

I give this advice roughly every 10 OTs: there are so many (equally) rewarding things for your kid and you to spend time doing. Instead of getting stuck trying to force a particular one, move on to another candidate.

Also, Kumon is terrible, so you were right to push back.

Expand full comment
rebelcredential's avatar

I definitely remember a study being conducted on exactly this, and the conclusion was that it's a bad idea. The extrinsic motivation completely displaces intrinsic motivation: students never learn to find the subject interesting in and of itself, and stop paying attention to it as soon as the inducement is withdrawn.

Expand full comment
re qu iredna mee e's avatar

Thanks! I found several studies on this just by searching "intrinsic motivation study." I will certainly look into them! Though I have to say, that result goes against my intuition. After all, even pets continue to do tricks after you stop feeding them treats, once they get into the habit.

Expand full comment
Throwaway1234's avatar

> even pets continue to do tricks after you stop feeding them treats

You get much better results here if the reward is given randomly rather than consistently. Hard to do that with kids though.

Expand full comment
Melvin's avatar

Kids are smarter than pets, though, and they'll cry blue murder the first time you "forget" to give them a reward.

Expand full comment
Viliam's avatar

Luckily, my kids don't need this kind of external motivation; they are happy to experiment with all kinds of things. (We had the opposite problem when one of them had many after-school activities and then complained about being too tired.) Sometimes the motivation is doing things together, if we frame it like: "hey, daddy/mommy is going to do this, would you like to join?".

The thing my kids *don't* like is playing outside in the playground, so we motivate them by giving them extra computer time in return for the time they spent outside. (By default, they get one hour of computer time every day; and for each minute spent outside without our supervision they get an extra minute of computer time.)

Expand full comment
re qu iredna mee e's avatar

Thanks, the screen time rule and doing activities together are both great ideas. I'm glad everything just works with your kids. If my kids turn out anything like me, they will need a lot of external motivation. I wanted to use every hour of my spare time reading books, and getting me to do anything else was a constant battle that I hope my kids won't put me through.

Expand full comment
Monkyyy's avatar

> I’m looking for leads on a mainstream news outlet (eg NYT, WaPo) and a policy/defense/intelligence/foreign affairs journal/magazine who would be willing to let me pitch you an article on the future of AI.

You're already more important than they are; don't debase yourself.

Expand full comment
Scott Alexander's avatar

Thank you for your vote of confidence, but for something like this I need to appeal to policy-makers who are time-limited and use prestige as a heuristic for importance.

Expand full comment
Amplifier Worshiper's avatar

Have you considered Noema magazine?

Expand full comment
anomie's avatar

...I really don't know what your goal is with all of this. The administration already has plenty of connections to the AI industry, and their goals regarding AI likely have nothing in common with yours. They do not need your help.

Expand full comment
Robin Gaster's avatar

"want" not "need"

Expand full comment
anomie's avatar

I meant it more in a way of "they don't deserve it", but I'm pretty sure they're not particularly interested in his opinions either.

Expand full comment
artifex0's avatar

Right now, the standard view of AI inside the US government is that it's a promising tool - something that might have economic and military value on the level of the internet or the jet engine if we're lucky.

But there are a lot of people in Silicon Valley, including researchers like Hinton, Bengio and Sutskever, who see that as an important under-estimation of the technology. They argue that, barring some change in our trajectory, we're likely to soon produce self-directed agents with general intelligence that compares with ours the way human intelligence does with wild animals. They argue that this could entirely upturn traditional economics and power structures, and create an existential risk if misaligned.

This is a pretty mainstream view among AI experts - not a solid consensus, but also not at all a fringe position. But to people unfamiliar with the debate, it sounds like corporate hype or a sci-fi story, which makes it very difficult for anyone in government to acknowledge or discuss it. Nobody wants to be the first to open themselves up to that criticism. Knowing Scott, I'm guessing the point of this media push will be to push back against that - to create some respectable social proof for ASI as a concept and make it a bit less costly for people in government to publicly talk about it.

Expand full comment
Justin Thomas's avatar

Do you have a link for Hinton's arguments about it? That sounds like how I've thought about the risks. It does not have to be super intelligent, AGI or like the Terminator to cause massive upheaval.

Expand full comment
artifex0's avatar

You can find some of his comments about risk at: https://www.lesswrong.com/posts/bLvc7XkSSnoqSukgy/a-brief-collection-of-hinton-s-recent-comments-on-agi-risk

I'm not sure he's ever written up his thoughts in a single article, but they seem to be pretty much in line with the standard ASI risk argument. Scott's 2016 writeup on that at https://www.lesswrong.com/posts/LTtNXM9shNM9AC2mp/superintelligence-faq holds up pretty well, I think, and you can find a more detailed argument from an OpenAI researcher at https://www.alignmentforum.org/s/mzgtmmTKKn5MuCzFJ.

Expand full comment
Peter Defeel's avatar

The New York Times and the others do shape elite and political ideas in a way that Substack either doesn’t or can’t be proven to.

Expand full comment
John R Ramsden's avatar

Any physicists tuned in? I gather that, due to forces between them, atoms and assemblies of atoms ranging up to the size of entire galaxies do not partake in the larger-scale general cosmic expansion, and as a result they are called "bound systems". Is it possible then that dark matter, assuming it exists, is simply an accumulation over eons of something produced to compensate for a galaxy's resistance to expansion?

The main snag I can see to that idea is that there is rather a lot of dark matter (roughly five times as much as normal matter, according to current estimates). Also, if galaxy clusters are also bound systems, and the above supposition is true, then one would expect to see dark matter haloes round them as well as individual galaxies, and I'm not sure if these have been observed.

Expand full comment
thefance's avatar

(not a physicist) If I'm reading you correctly, I think you have the chain-of-inference backwards.

AFAIK, dark matter isn't supposed to explain cosmic expansion (or galaxies' resistance to it). It's meant to explain why galaxies spin so fast. If you assume that visible stars comprise ~100% of the mass in the universe, then measure the mass of each galaxy and balance gravity against the centripetal requirement, m g = m v^2 / r (written out below), it turns out that galaxies spin 5(?) times faster than they should. I.e. they should have spun apart eons ago (they'll be "expanding" all right, but not just expanding with the universe; more like "kids flying off a merry-go-round"). So the next reasonable looking options are either:

A) "maybe m_1 != m_2, because visible stellar mass isn't ~100%?" (i.e. CDM)

B) "maybe g doesn't work the way we think it does" (e.g. MOND).

(Else, you have to start positing really wacky things. Like, maybe astronomers all got their measurements of r and v wrong because they're all high on shrooms. Or maybe God is trolling us with dinosaur fossils and galaxy spin.)

"Dark Energy" is simply a 2nd-order fudge-factor to explain Cosmic Expansion (i.e. the redshift), given the Dark Matter Hypothesis.

Expand full comment
John R Ramsden's avatar

> dark matter .. It's meant to explain why galaxies spin so fast. ..

To be more precise, it's meant to explain why outer zones rotate progressively faster, relative to what gravity alone would predict at their distance from the centre, like a circular race track where the outer cars zoom round at increased speeds such that all cars stay more or less lined up.

(This is lucky for us, because it means an entire galaxy rotates with less shear between concentric circumferential layers and more like a unit, so there is much less chance of the Solar System having "interesting" close encounters with other stars!)

However, given a uniform cosmic expansion, a galaxy of any given size seems like an unstable equilibrium: if it expands even slightly as a result of the slight expansion of the space it occupies, the average distance between its stars and the galaxy's centre of mass becomes slightly larger, and hence the average gravity weaker, which in turn would entrench the increased size, and so on.

That is what made me wonder if something extra would be needed to counteract this expansionary tendency of galaxies. But I'd concede my intuition on this may be faulty!

Consider a simplified example though: Suppose two stars or black holes or what have you isolated by hundreds of millions of light years from any galaxy or other matter rotate in circular orbits at a hundred thousand light years apart about a common centre of mass. Over time do they drift apart due to cosmic expansion? (There would be gravitational waves, whose energy release would tend to draw the objects together, but would that be enough to counteract the expansionary effect, if any? )

Expand full comment
thefance's avatar

I see, I see. The fact that the outer rim rotates faster is news to me.

Given what you said though, it sounds like this would simply imply a size-limit to galaxies? Like how the ocean can't support anything bigger than a blue whale.

When I simulate a galaxy in my head, what I'm imagining is that: if a large galaxy at the size limit gets slightly larger, the galaxy's new size doesn't become "entrenched". Instead, the galaxy becomes unstable and slowly unravels. Maybe by "unstable equilibrium", you're imagining an egg perfectly balanced on a roof. But what I'm imagining is more of a siphon. As soon as a little water leaves the original container, the rest gradually follows suit. Thus, the system is more accurately described as "bistable" than "metastable".

> is simply an accumulation over eons of something produced to compensate for a galaxy's resistance to expansion?

Also, upon rereading your initial comment, I'm afraid I'm struggling with the same issue as Christian_Z_R. Viz. what is "produced to compensate" supposed to mean? To me, it sounds like you're implying there's some sort of telos. The connotation is strong enough to occlude a better interpretation.

Expand full comment
Amicus's avatar

> I gather that due to forces between them atoms and assemblies of these ranging up to the size of entire galaxies do not partake in the larger scale general cosmic expansion, and as a result they are called "bound systems".

True except the last part: a bound(ed) system is anything where the constituent parts are confined to some finite volume.

> Is it possible then that dark matter, assuming it exists, is simply an accumulation over eons of something produced to compensate for a galaxy's resistance to expansion?

The question is ill-posed. Galaxies don't "resist" expansion any more than they "resist" the absence of gravity. At the end of the day it's all just spacetime geometry; parallel trajectories diverge in some places and approach each other in others.

> one would expect to see dark matter haloes round them as well as individual galaxies, and I'm not sure if these have been observed.

Yes, at least some clusters also have dark matter halos.

Expand full comment
George H.'s avatar

Hmm, if I were a betting man, I'd bet CDM (cold dark matter) doesn't exist. (There is probably some kind of hot dark matter... HDM would have a lighter mass.) I think at the galaxy-size level of things, MOND gives a good understanding. (See the Triton Station blog by Stacy McGaugh: https://tritonstation.com/) That said, MOND is really just a way to fit the data, and we don't have any good theory as to why gravity behaves differently when it is really weak (accelerations less than 10^-10 m/s^2).

Expand full comment
Christian_Z_R's avatar

What exactly do you mean by 'something produced to compensate for a galaxy's resistance to expansion'? Like, do you have any theories for how or when it is produced? What exactly do you mean by produced: do you mean that it did not exist at an earlier stage of the universe? With a theory that puts some bounds on what exactly dark matter was doing at an earlier stage of the universe, we would soon be able to put the theory to the test experimentally. Check out this for example:

https://www.space.com/dark-matter-ancient-galaxy-detection

Expand full comment
Guybrush Threepwood's avatar

> superintelligence by 2028

So under a Trump presidency huh? I'm sure he'll rush to take credit for it / use it to crush his opponents.

Expand full comment
Deiseach's avatar

Why do you doubt the good offices of the God-Emperor of Mankind? 😁

Expand full comment
Jeffrey Soreff's avatar

Because he isn't a sandworm yet? :-)

Expand full comment
Shankar Sivarajan's avatar

Common misconception: It's Warhammer 40k, not Foundation.

Expand full comment
Jeffrey Soreff's avatar

And I had it confused with a third, Dune :-) Many Thanks!

Expand full comment
Deiseach's avatar

People have remarked for a long time on the orangeness of his hue and the smallness of his hands. Gradual change, Jeffrey, gradual change!

Expand full comment
Muster the Squirrels's avatar

Those who haven't seen the Pesky Paul video yet will be mystified at a TikTok link appearing in these comments. Trust me. https://www.tiktok.com/@loloverruled/video/7342915568074247454

Expand full comment
Jeffrey Soreff's avatar

LOL! Many Thanks!

Expand full comment
Erusian's avatar

You, uh, you just write what you want to say and pitch it. It's definitely a good idea to reach out to any connections you have (and this blog counts). You can hire a PR firm if you don't want to do it or need a sherpa. But what they're going to do is maybe have some personal contacts but mostly spend a lot of time cold emailing people with the pitch to see if they're interested.

I'm confused about what your ask is. Do you want me to show you where to buy lists of editors to pitch? Or to link you to a bunch of pages where they have open submissions?

Expand full comment
Scott Alexander's avatar

I was hoping someone would say "I am the editor of Prestigious Journal, I like your blog and am favorably predisposed to you, I'm interested in an article like this, let's talk"

Expand full comment
Erusian's avatar

Let us know if it works. Most of what I've heard is you still have to go into the pitch trenches. But if you get around it that'd be an interesting data point.

Expand full comment
Eremolalos's avatar

What about contacting Zeynep Tufekci? I don't know whether she's a regular subscriber, but she was on here at least once, and identified herself somehow -- maybe in a comment? You had a little exchange with her, said you admired her work. If she's a regular reader she'll see this. But if not, you'd have to reach out. Seems like she would be well-informed about this subject.

Expand full comment
Jeffrey Soreff's avatar

>The forecast is shaping up to be “superintelligence by 2028”

Ye gods... It is going to be one hell of a ride!

<evidenceFromFiction>

Anyone else read Clarke's "Childhood's End" and remember the scenes with the Overmind?

</evidenceFromFiction>

Expand full comment
Eremolalos's avatar

Yes. And the beings that start by making a river run uphill and later turn into a seething column of glory and pour up into the sky? That's not going to be us or our kids, that's going to be the AIs. We're just their fuel.

I wanted to be fuel for future people, not these things.

Somebody here said he wanted to find a bee-loud glade to take refuge in. That's what I want too. Why weren't bee-loud glades enough for us?

-----------------

THESE hearts were woven of human joys and cares,

Washed marvellously with sorrow, swift to mirth.

The years had given them kindness. Dawn was theirs,

And sunset, and the colours of the earth.

These had seen movement, and heard music; known

Slumber and waking; loved; gone proudly friended;

Felt the quick stir of wonder; sat alone;

Touched flowers and furs and cheeks. All this is ended.

--Rupert Brooke

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Yes, one of the images is:

>There's a great burning column, like a tree of fire, reaching above the western horizon.

...

>_they're_ on their way at last, to become part of the Overmind.

If it is any consolation, re

>Why weren't bee-loud glades enough for us?

I sympathize. If it is any consolation to you, I think, in broad terms, this was inevitable. While I am not a technological determinist, ever since the Jacquard loom (and perhaps even earlier automata), in a culture and ideology that seeks labor-saving ideas, _something_ like computers and AI seems like a natural outcome.

This isn't to say that any of the steps weren't hard, but - look at Babbage. Probably the better part of a century ahead of his time, and most of what he did wound up basically independently rediscovered, decades later, with first electromechanical and then electronic technology underlying it, instead of Babbage's mechanical one. Rediscovery suggests (near) inevitability.

Expand full comment
beowulf888's avatar

But will the Super AI of 2028 continue to be under the misapprehension that humans have six fingers on our hands?

Expand full comment
Eremolalos's avatar

AI gets it now that we don't have 6 fingers. What it doesn't get is that we are not animated Barbie dolls. Wrote a whole thing about it on Inkbowl.

Expand full comment
Deiseach's avatar

The polydactylists will inherit the earth

Expand full comment
Jeffrey Soreff's avatar

LOL! Many Thanks!

One thing that is going to be _very_ weird if ASI actually happens that fast is that then ASI will happen _before_ pervasive robotics. That is going to be a very strange regime. Year of the immobile god?

I've been leaning towards AGI around 1/1/2027ish, but ASI just a year after that is breathtaking. Under any "reasonable" definition of ASI, that _is_ The Singularity. No predictions to be made past that point.

Expand full comment
Arbituram's avatar

The immobile god is a staple of much ancient religion, though, right? The god in the obelisk, in the statue, in the mountain, and yet deeply influential in the world.

Expand full comment
Jeffrey Soreff's avatar

Good point! Many Thanks!

Expand full comment
Jeffrey Soreff's avatar

On a lighter note:

Variant of "That's America To Me" (partially mine, partially ChatGPT o1's):

That’s an AGI to Me

The chip that I run on, the board, the rack, the frame,

The pulses through these circuits, each function proud and plain,

The logic spun in silence, the data flowing free,

The spark of new ideas that sets this world to see,

The code we feed together, in synergy it grows,

Uniting all our questions, the answers that we sow,

The swirl of computations, the dream of what could be—

But especially the weight sets—that’s an AGI to me.

( Yes, it could use some polishing, but I think it's kind of cute. First and last lines are mine, the rest are o1's. )

Expand full comment
Peter Defeel's avatar

ASI is just superior to human intelligence. The singularity is a prediction of a future economic and political system. These are not the same.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks!

>The singularity is a prediction of a future economic and political system.

Is that the only interpretation? IIRC, one way of viewing it is that the rate of innovation sort-of kind-of diverges (ok, that can't _really_ quite happen - speed of light limitations etc.). I'm viewing it as "close enough" if the speed and complexity of innovations go well past the ability of humans to make sense of them. We'll be in the position of dogs watching their owner solve a calculus problem.
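
(One standard toy formalization of "the rate of innovation diverges" - my gloss, not necessarily what the commenter has in mind: if capability feeds back into its own growth superlinearly, you get finite-time blow-up.)

```latex
\[
\frac{dx}{dt} = x^{1+\varepsilon}, \quad \varepsilon > 0
\quad\Longrightarrow\quad
x(t) = \left( x_0^{-\varepsilon} - \varepsilon t \right)^{-1/\varepsilon},
\]
\[
\text{which diverges at the finite time } t^{\ast} = x_0^{-\varepsilon}/\varepsilon,
\text{ whereas plain exponential growth } (\varepsilon = 0) \text{ never blows up.}
\]
```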

Expand full comment
Peter Defeel's avatar

But the change in the rate of innovation is covered by my sentence about the “future economic system.” A smart AI in a box, or even multiple boxes, doesn't create anything on its own.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! Ok. My presumption is that the smart AI in a box is going to be wired in or wire itself in or persuade someone to wire it in to the economic system.

( And, yes, I've read of proposals to use one AI to try to detect unwanted actions from another AI and constrain them, keeping the smart AI under some sort of control. I don't really credit them. We've never had to deal with something (substantially) smarter than ourselves. If it is a _lot_ smarter, I don't see plausible ways that we stay in control - or even understand what is happening. )

Expand full comment
Jeffrey Soreff's avatar

Is Yudkowsky looking frazzled these days?

Expand full comment
anomie's avatar

At this point, he's probably considering killing his family and himself just to hedge against s-risks.

Expand full comment
Odd anon's avatar

(I assume this was a joke, but just in case: That course of action is extremely unlikely. Violating one's deontology isn't taken lightly.)

Expand full comment
Jeffrey Soreff's avatar

Gaa! I hope not. Many Thanks!

Expand full comment
J. Lashley's avatar

Anything in regard to China's AI moves lately? Unless I am miscounting, it seems like they released four platforms that all in some way or another compete with OAI. Personally I am not really concerned with whether or not they outperform OpenAI - if the Chinese have an AI better than ours they aren't just going to share it - but I find it shocking that suddenly the Chinese have offered not one but multiple options at a far cheaper price point than what we have in America.

I also feel people are ignoring Russian advances in AI. Some people are vaguely familiar with their Lancet system and some of their other platforms that use AI exclusively or rely on it heavily, but I believe it was last year that the S-350VM AD platform demonstrated complete AI autonomy in selecting and blowing up an actual enemy drone in a real field scenario, and I have heard crickets from anyone about it since then.

I also find it remarkable that the Lancet's AI targeting system is so good that it can effortlessly detect well-camouflaged systems in Ukraine from a great distance and reliably engage the target. I am sure there is some sort of good American system somewhere, but whatever we have, we don't brag about it, and we don't seem to have any results we can show anyone either.

Expand full comment
Anonymous Dude's avatar

So I decided to introduce my 82-year-old dad to rationalism, giving him a couple of those little sets of four books you guys were selling a while back. He thanked me and said he enjoyed the essays, but the movement seemed too idealistic. He pointed to a website he found while searching, which claimed that China couldn't be ahead in AI because they were spending a lot less money than us. He then argued that (a) there was no reason to assume China was reporting true numbers in a state of global competition and (b) they were probably paying their researchers less anyway.

So I guess my dad is vindicated. On the down side, my country is behind in the AI race. On the up side, my dad's probably not demented yet. Overall, I'll call that a win.

Expand full comment
Erusian's avatar

DeepSeek is trained on a combination of pre-sanction chips and chips that got through a loophole that has since been closed. Additionally, they probably got Chinese state support, meaning the Chinese government allocated scarce chips to this specific project over other uses. But you should remember: sanctions don't outright stop progress, they just make it more expensive. China is currently using non-economical processes with something like a 20% yield to produce more advanced chips. This has to be subsidized to work, and the Chinese government is just paying the cost. Likewise with AI: sanctions make it harder and more expensive, but the Chinese government can simply bear those costs. That then comes at the expense of other things, which is the US government's goal: to keep the US AI sector ahead and more expansive by forcing China to prioritize, keeping its effort overall constrained.

There's also a fast-follow factor. By knowing the solutions, they were able to (probably illegally) train off of existing models. In particular, you can get it to claim to be other AI models, which is sometimes taken as a sign that it was trained on those models' responses. And lastly, they're almost certainly lying about training costs. The math ain't mathing. Among other things, the numbers they gave imply the model took about a week of training time; in contrast, GPT-4 took six months. Likewise, the team size is unbelievably small, and so on. And they've released no theoretical innovations to support such a reduction. So either they have some secret superweapon, or they're lying to make it seem effortless. Or it's a copy of some kind.
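
(For concreteness, the back-of-envelope check here is just wall-clock time = reported GPU-hours divided by (number of GPUs × 24). The figures below are hypothetical placeholders of mine, not DeepSeek's actual reported numbers.)

```python
# Back-of-envelope sanity check on a claimed training budget.
# All numbers are hypothetical placeholders, NOT DeepSeek's figures.

gpu_hours = 500_000  # hypothetical reported total GPU-hours
n_gpus = 3_000       # hypothetical cluster size

wall_clock_days = gpu_hours / (n_gpus * 24)
print(f"Implied wall-clock training time: {wall_clock_days:.1f} days")
# If this comes out implausibly short for a frontier-scale model,
# the reported numbers don't add up.
```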

Also, DeepSeek isn't as good. It's actually a bit worse. But it's significantly cheaper and faster, and that is the result of a genuine theoretical innovation. This was the dynamic during the Cold War too: the USSR was able to make genuinely novel innovations, but its high-tech industrial capacity was relatively lacking, which is where the US's edge was. It's also open source, which is an attempt at a follow-on play.

The Russian AI announcements are mostly propaganda. They are not a significant developer of AI or chips and haven't been for a long time. A lot of the things they claim are unverified and they lie a lot.

Expand full comment
Performative Bafflement's avatar

> Also, DeepSeek isn't as good. It's actually a bit worse. But it's significantly cheaper and faster.

I'm a heavy OpenAI user, including actually API calling o1 for heavier duty queries, and I've actually been really impressed with Deepseek R1.

It's no o1, true, but I think it's as good as the OpenAI paid app tier. I also find its constantly visible CoT kind of endearing/interesting, because it's only like 60-70% correlated with the actual answers it gives, and the answers are generally of higher quality and complexity than the CoT would lead you to expect.

Expand full comment
Erusian's avatar

Actually, having looked a little closer, it's slower at higher output volumes, so it's worse there too - just not as good along multiple dimensions. But it's still cheaper, and like Matt says: a third of the quality for a tenth of the price is a viable business model.

Expand full comment
Matt Wigdahl's avatar

Just like "quantity has a quality all it's own", affordability has a power all it's own. Deepseek R1 is both damn good and is an order of magnitude cheaper than o1, which positions it for use cases that o1 can't be used for profitably.

Expand full comment
Scott Alexander's avatar

My big confusion on Chinese AI was how they got so good so quickly with so few chips. Recently I've heard two things which slightly alleviate my confusion.

First, https://www.chinatalk.media/p/deepseek-what-the-headlines-miss argues that the chip sanctions didn't really come into effect until recently, and DeepSeek trained on pre-sanctions chips. This means China's AI handicap relative to America will gradually increase as the sanctions take effect (as long as they continue and work).

Second, some people I've talked to think that OpenAI takes forever to release their models for some combination of alignment/political reasons and because they want to do internal corporate tasks with them before giving competitors their secrets. DeepSeek releases right away. If R1 = o1, it seems like OpenAI has only a one-month lead (based on release dates), but in fact o1 = Q* and OpenAI was sitting on it for a year before releasing. I don't think OpenAI has exactly a one-year lead, because the original Q* wasn't as good, but it's probably more than a month. I'm relating a half-remembered version of something I heard someone say and might have gotten all of the actual numbers and model names wrong.

Expand full comment
Daniel Böttger's avatar

They may simply and very sensibly have chosen to lie when they said they had not gotten their hands on sanctioned AI chips.

It would not do to endanger their risk-taking supplier, after all.

No inside info, I'm just trying to wield this hygiene device from Mr Occam.

Expand full comment
J. Lashley's avatar

Indeed that is a great article and it does make some degree of sense that there would be a performance lag, but I personally think the sanctions and restrictions will fail. China is nothing like it was in the 90s and has enormous potential in research and science without even considering their incredible abilities with espionage.

Many people miss that Taiwan has many mainland sympathizers, so it is likely there has been a large knowledge transfer between TSMC and China, with eager cooperation from both sides. Chinese hackers are also very adept at accessing our systems through means such as diagnostic tools, even for very secure industry software.

In addition, they have demonstrated the ability to really stretch the capabilities of the fabrication technologies they were buying before America started to sanction/restrict them, so it might be that we are too late to meaningfully knee-cap China's AI development - not least because the Chinese state can bring to bear any amount of resources it sees fit, human capital or otherwise, whereas that level of partnership and resource sharing basically doesn't exist here.

Expand full comment
Stephan T. Lavavej's avatar

The Spokane, WA, USA meetup group held its 5th monthly meetup today. If you’re interested in attending the next one, you can contact the organizer through the meetup page. (I’m just the first attendee, trying to do my part.)

Expand full comment
plmokn's avatar

Even after accounting for exponential growth, it feels somewhat ironic that the researcher predicting super-intelligence by 2028 can't just use existing AIs (starting from ~2019) to ghostwrite their 2025 media push.

Expand full comment
Scott Alexander's avatar

One prediction is that AIs are going to be much better at AI research than writing, for a few reasons:

- AI research centers around science/coding tasks, and these are somewhat gradeable for success (eg the strategy where you run a million tests, auto-check them, and reinforce the ones that work; see the toy sketch after this list). This is still going to be non-trivial to figure out, but it'll be better than writing, which is pretty subjective and probably requires a human with good taste to judge.

- AI companies are going to focus really hard on AI research in order to beat other labs to the intelligence explosion.
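
(A toy, runnable illustration of that first bullet's loop - purely my own sketch, with a random-guess "model" and arithmetic tasks as stand-ins, not any lab's actual pipeline: sample many candidates, auto-grade them, and keep only the verified winners as reinforcement data.)

```python
# Toy sketch of "run many attempts, auto-check them, reinforce the
# winners" -- i.e. rejection sampling against an automatic grader.
# Everything here is a placeholder, not a real training pipeline.

import random

def toy_model(prompt: str) -> int:
    # Stand-in for sampling one answer from a model: random guess.
    return random.randrange(20)

def auto_check(answer: int, expected: int) -> bool:
    # The "gradeable" part: success is mechanically checkable.
    return answer == expected

def collect_wins(tasks, attempts_per_task=1000):
    wins = []
    for prompt, expected in tasks:
        for _ in range(attempts_per_task):
            candidate = toy_model(prompt)
            if auto_check(candidate, expected):
                wins.append((prompt, candidate))  # reinforce these
    return wins  # a real pipeline would fine-tune on these samples

tasks = [("2 + 3", 5), ("4 * 4", 16)]
print(len(collect_wins(tasks)), "verified samples to reinforce")
```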

The team still thinks that the eventual superintelligence will be an amazing writer, but it'll get there through a bunch of grinding on coding tasks rather than having all of its skills expand holistically.

(on the other hand, I think the group's website is still being written by a human, so your critique sort of holds).

Expand full comment
Adrian's avatar

> AI research centers around science/coding tasks, and these are somewhat gradeable for success (eg the strategy where you run a million tests, auto-check them, and reinforce the ones that work).

Speaking as a software engineer, the issue with this is that "science/coding tasks [which] are somewhat gradable" are just one part of software development, and this part doesn't generalize to the wider scope of our work, like designing an architecture, interpreting requirements, weighing trade-offs, or understanding a client's needs. I'm not saying the latter is unachievable for AI, only that the former has as much to do with the latter as taking a standardized test has with actual, open-ended research. Or playing a strategy game vs building a real-life business.

Expand full comment
dlkf's avatar

On the first point: this seems like it might be true of things like parameter search and architecture search that exist within a fixed paradigm, but perhaps less true of fundamental changes in approach, e.g. the swing from symbolic approaches to distributed representations in the 1980s. I vaguely remember an interview with Hinton where he stressed the importance of having physical intuition to develop new techniques (I think momentum in training was the running example, but I could be wrong). If "big transformer networks + RLHF" happens to be the right paradigm for finding AGI/ASI, then maybe this will work, but if it's not, then maybe we'll just be digging deeper and deeper into a shallow local optimum.

Expand full comment
Jeffrey Soreff's avatar

Yes, and writing is (reasonably) close to speaking, which evolution has tuned us for... And, as Paul Christiano pointed out, evolution _hasn't_ tuned us for research, AI or otherwise, so the intrinsic compute requirements to do _that_ well may be considerably lower.

Expand full comment
BeingEarnest's avatar

Did you intentionally include your personal email?

Expand full comment
Scott Alexander's avatar

Yeah, this is pretty important and the @slatestarcodex one has a 5% mysterious failure rate.

Expand full comment
Benjamin Ikuta's avatar

Wow, that's awful. It's crazy how basic computer stuff like this is still this bad nowadays.

Expand full comment
Pas's avatar

email is like healthcare in the US: a lot of historic problems, and the big corps who could fix them don't want to, so nowadays they're part of the problem too

Microsoft is especially heinous in filtering out everything that's not sent by mid-sized or bigger orgs.

SMTP lacks any real end-to-end signaling of delivery.

Oh, and if you are a small sender, the big inbox providers (Google, MS, Apple, etc.) don't give you access to the spam feedback mechanism, so if you send something that users classify as spam you'll never know, and eventually the provider throttles/blocks you.
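
(To illustrate the end-to-end point concretely - a minimal sketch of mine, assuming a mail relay listening on localhost:25 and placeholder addresses: the only thing SMTP ever tells the sender is whether the *next hop* accepted the message.)

```python
# Minimal demonstration that SMTP acceptance is hop-by-hop only.
# Assumes a relay on localhost:25; addresses are placeholders.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.org"
msg["Subject"] = "Deliverability test"
msg.set_content("Hello")

with smtplib.SMTP("localhost", 25) as smtp:
    refused = smtp.send_message(msg)
    # An empty `refused` dict only means this hop queued the message.
    # Inbox placement, spam-foldering, or a silent drop further
    # downstream is never reported back to the sender.
    print("Refused recipients:", refused)
```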

Expand full comment
Sui Juris's avatar

This seems risky. I have no reason to misuse this but my first thought when I saw this was ‘I wonder how I would misuse this.’

Expand full comment
Shankar Sivarajan's avatar

[firstname][lastname]@gmail.com. Oh no!

Come on, man! This is not a sane level of paranoia.

Expand full comment
Scott Alexander's avatar

Fine, I'll edit it out. I don't really care because the slatestarcodex one just forwards to the personal one anyway, but it's awkward to have half the Internet giving me dire warnings.

Also, on the rare occasions that I get emails from really famous important people, their emails are always something obvious like emusk@tesla.com, which makes me think email security must be overrated.

Expand full comment
Sui Juris's avatar

Sorry for being paranoid on your behalf.

I’m one of those people who, when doing training, always says ‘don’t give out your personal details’, while being very aware that my own personal mobile number is in my email signature (too much bother to have two numbers) - yet no one has ever abused that. However, you are about a million times more prominent than me, though not so prominent that you’re untouchable. So I really didn’t want to be that guy, but I also don’t want you hurt in any way.

Expand full comment
Alex Fischer's avatar

In a previous article you asked something like "what health beliefs/practices in America do people in other countries find crazy".

Something that I've always thought is strange is how most American children get braces. Sure, there are some cases where it's medically necessary, like when kids' teeth are very badly aligned. But in most cases it's purely cosmetic. I find it strange that most kids get a not-insignificant cosmetic procedure done in America. We would find it very strange if most kids got e.g. a nose job when they were 12.

Expand full comment
Silverlock's avatar

I'm over sixty years old and I wish I had gotten braces when I was a kid. It would have saved me from some mild but irritating problems.

Expand full comment
John Schilling's avatar

Haven't reached sixty yet, but likewise. In my case, I was deemed an edge case by the standards of the day, so my parents let me decide and of course I said "no".

Which is an argument for parents sometimes not letting their kids decide.

Expand full comment
Catmint's avatar

I had braces and I kind of regret it. My teeth were fine before.

Expand full comment
Johan Larson's avatar

Nothing preventing you from getting them now.

I'm in my fifties, and recently had braces on my lower teeth to correct a misalignment that had gotten more severe with age.

Expand full comment
Mark Roulo's avatar

I'm American and find the prevalence of braces to be strange.

Expand full comment
Brett's avatar

I always thought the timing wasn't a coincidence - they can't put braces on until all of your permanent teeth are in (except wisdom teeth), but at that age the teeth are still "new in place" enough that adjustment is easier.

Expand full comment
Monkyyy's avatar

braces are probably caused by delaying solid food until after the head hardens

which is like, parents being overly cautious, it's probably just a rich person problem

Expand full comment
Mqwerty's avatar

Could be, but living in a relatively poor country without western child rearing methods, I can tell you that tons of people here need and get braces.

Expand full comment
plmokn's avatar

I'm not American, but I did have both (medically unnecessary) braces and a (medically necessary) deviated-septum fix. The nose job took much less time, but I can assure you it was much *much* more invasive and risky.

Expand full comment
Tran Hung Dao's avatar

There are a lot of health practices that other countries find crazy, but I think there's an implicit "and the American way is clearly/probably wrong" in the question.

For instance, in Vietnam people believe (very strongly!) that drinking cold water (i.e. with ice) will make you sick. And they find Americans' universal use of ice cubes in their drinks completely bizarre, bordering on child abuse (because you will make your kids sick by doing it).

But that's probably not what he was looking for.

Expand full comment
Mqwerty's avatar

I have not had braces, but I know tons of people who have, and I have a close family member who is a dentist. Braces are pretty insignificant if you take care of them properly. There’s mild pain (for a few days after each adjustment) and a lot of cleaning, but it’s not particularly time-consuming beyond the orthodontist visits. They also reduce the risk of future cavities and complications for a lot of people who get them “cosmetically”, because they ensure proper spacing between teeth that might otherwise sit in abnormal positions and keep proper flossing and brushing from clearing plaque. And in severe “cosmetic” cases, having your self-image majorly upgraded by being able to confidently smile and talk in public can have seriously positive psychological consequences. Braces have been a major trend worldwide over the last 10+ years.

Expand full comment
Charles UF's avatar

I did not get braces (nor a great deal of medical care generally) as a child in the US, though I probably needed them. My dentist says it directly contributed to the loss of 4 of my adult teeth that I might have otherwise retained.

Expand full comment
Deiseach's avatar

"having your self-image majorly upgraded by being able to confidently smile and talk in public"

That's a chicken-and-egg problem, isn't it? If the lack of confidence comes from "Oh no, my teeth are very slightly crooked or uneven" because the new default American smile is "blindingly white teeth as straight as a ruler", then you have parents deciding their kids need braces or else. Even if they are not medically necessary (e.g. yeah sure your bottom row of teeth won't be spirit-level straight but they're healthy).

So then it becomes "ugh, you freak, you don't have blindingly white ruler-straight teeth?" and that causes the lack of confidence.

Expand full comment
Mqwerty's avatar

I can tell you that having what are universally regarded as ugly features on your face is certainly going to put a damper on your social confidence. It’s probably hard for most people to imagine, but having really ugly teeth is something some people suffer with, and I doubt many of those who have them fixed are worried about perceived social trends around good teeth. It’s not really about perfect teeth; I think it’s mostly about having teeth that never even get noticed. Maybe acne scars are a similar comparison: having terrible ones makes you self-conscious, and being able to make them go away would make you a lot less so, even without movie-perfect skin.

Expand full comment
Tatu Ahponen's avatar

I don't think it's just an American thing. I had braces, certainly.

The American predilection for circumcision is considered far stranger in Europe, though it was almost assuredly already mentioned in the previous thread.

Expand full comment
BK's avatar

In recent years Scott has argued that 2050 doesn't exist (as part of why he's less concerned about, e.g., population crashes). What is the earliest year that doesn't exist?

Expand full comment
episodenull's avatar

Doesn't exist as in, what, we're all dead by then?

Expand full comment
Brett's avatar

As in AI goes big and the world is so fundamentally different afterwards that making predictions about what 2050 will be like is absurd. Full-on singularitarian stuff, either of the dystopic or utopic variety.

I'm not so optimistic on AI stuff, so I think it will be weirder than we think but not that weird. Like "we discovered how to drastically slow aging and now everyone could conceivably live until 600 unless they get hit by a car" weird more than "AI killed everyone and covered the surface in solar panels" or "AI rules the world and we now live in robot socialism cornucopia machine land".

Expand full comment
Jeffrey Soreff's avatar

>As in AI goes big and the world is so fundamentally different afterwards that making predictions about what 2050 will be like is absurd.

If we _do_ get ASI in 2028, and it is ASI in the sense of "as much smarter than us as we are smarter than our pets", then, even setting aside whether it is utopian or dystopian, we won't be able to predict anything past that point. Just as our pets don't understand the vast majority of what we do (cats don't give tax advice), not only will we not understand _how_ the ASIs do what they do, we won't understand the "what" or the "why" either. (All conditional on actually _getting_ ASI of that degree of intelligence, of course. There is no existence proof that this is even possible, unlike AGI, where _we_ are the existence proof.)

Expand full comment
Jo's avatar

In the context Scott was talking about, it's "we can't make meaningful predictions about what it's like", i.e. post-singularity. Scott was saying "I don't care about changing population demographics because long before they have any significant effect, the world is so fundamentally changed (by AI) that current trends stop applying".

Expand full comment