Has anyone done an analysis of the brewing tariff war with Canada yet?
I’m very surprised Canada has decided to retaliate dollar for dollar (which Trump has promised to double if they do) rather than negotiate over whatever it is that’s actually motivating Trump to do this (explicitly it’s fentanyl and illegal border crossings, but who knows). I suspect he’d want a full reevaluation of the specific industry subsidies and tariffs that Canada imposes on the US, too.
Perhaps Trump isn’t willing to negotiate at all, which I don’t really buy, but is possible. The retaliation is doing “something” which is obviously demanded by the voters.
Canada is hugely dependent on US trade, and while Canadian tariffs on the US will cause harm there, they will equally harm Canadian producers, and probably lead to even more tariffs imposed on them. For a country with such a huge percentage of its GDP tied up in trade with the US, I can’t see anything but a recession for Canada (and a minor decrease in GDP for the US).
All the analyses I’ve found are either saber rattling or about what this means for the average consumer. I’m interested in anyone who’s looking at the macroeconomic effects.
> We pay hundreds of Billions of Dollars to SUBSIDIZE Canada. Why? There is no reason. We don’t need anything they have. We have unlimited Energy, should make our own Cars, and have more Lumber than we can ever use. Without this massive subsidy, Canada ceases to exist as a viable Country. Harsh but true! Therefore, Canada should become our Cherished 51st State. Much lower taxes, and far better military protection for the people of Canada — AND NO TARIFFS!
Now, my own analysis of the situation. I am emphatically NOT going to stick to macroeconomics: I think your attempt to focus so heavily there is misleading you. Canada CANNOT look at this purely as an economic decision because it isn't. So the Canadian response will naturally factor in the economic realities, but also the diplomatic and political realities. Trump has, in an only-just-shy-of-literal sense, declared war on Canada[1] and Canada's response must be a *strategic* response: that is, the one that best serves all of Canada's strategic aims. Of course "maintaining Canada's economy" is obviously a major strategic aim, but so are "preserving Canada's national sovereignty" and "maintaining Canada's diplomatic position with the U.S. and the rest of the world," both of which are threatened by this move just as surely as the economy is.
First Point: negotiations require, well, actual negotiations. As outlined above, Trump doesn't seem to have made any attempt to negotiate with Canada at all. Likewise, Canada already made a quite large and expensive conciliatory move before Trump even took office. The only sense in which Canada could negotiate its way out of this situation is to *keep doing things like this* in the hopes that eventually Trump is satisfied and withdraws the threat. I hope I don't have to outline why this would be a fatally stupid way to operate. If Trump is threatening these tariffs as a way to get something (rather than imposing them because he thinks they're worthwhile in their own right) it's incumbent upon him to *at the bare minimum* ask for what he wants in concrete terms.
Second Point: reputational effects are real and important here. Every country is one player among many on the international stage, and every country is on that stage for the long term. The actions you take today in one negotiation will affect the situation you face tomorrow and the next day, in future negotiations. Reputational effects are a very important component for both sides of this conflict. On Canada’s side, caving easily to U.S. pressure today will lead rational actors to expect Canada to cave in similar fashion tomorrow. This could potentially cost a lot. Not only would Canada’s closest neighbor be made readily aware that it can extract concessions basically any time it wants by threatening Canada’s economy, but it could easily encourage other nations to try the same. This could be very, very costly for Canada in the long run, potentially much more so than the proposed tariffs. By contrast, the harder Canada hits back, the higher the apparent price tag it sets for anyone who wants to try this in the future. Trade wars tend to be enormously negative-sum: the U.S. might be better able to afford the hit, but it will still be a hit, which means the policy is less likely to be long-run profitable for the U.S. and less likely to be repeated.
On the U.S. side, even if Trump were to unilaterally back down on all his tariff threats tomorrow, he’s already damaged the U.S.’s international reputation. Actually imposing such tariffs on Canada will do greater and more lasting damage. The U.S. has remained the preeminent economic and diplomatic power in the world for 80 years in very large part by not doing stuff like this. While the U.S. can be imperious and demanding sometimes, it’s built up a very strong and very valuable reputation for being a good friend to have: it mostly presses the “cooperate” button and surrounds itself with other nations who tend to do the same. Everybody stays at peace, everybody trades reasonably nicely with everybody, and as a result everybody gets rich. Short of declaring a shooting war, it’s difficult to imagine an action the U.S. could take that would be more damaging to that carefully-accumulated reputation than wantonly trying to wreck the economy of one of its oldest and closest allies over some comparatively minor domestic problems. Obviously the economic and diplomatic position of the U.S. is still central enough that this one act of aggression–however shocking–won’t trash it overnight. But it’s a crack in the foundation, and not a small one. Enough cracks will bring the house down, and even before then, there will be detrimental effects.

This does connect strongly to the macroeconomic angle. Suppose you’re an investor being pitched an opportunity that heavily involves selling goods from abroad in the U.S. You just heard the news of the tariffs. Did that increase your eagerness to invest or decrease it? Even if none of the goods are coming from Canada, these threats are a really sharp demonstration that anyone could be Trump’s next target. All business ventures that require interacting with the U.S. market are suddenly looking a little more volatile than they did a week ago, or three weeks ago, or a year ago. Volatility carries risk, and hedging against risk costs money. My macroeconomics knowledge isn’t strong enough to estimate how large this effect will be–my guess is that as a one-off it will be lost in the noise–but if things like this keep happening, it's difficult to imagine that it won’t start to impose a real, significant drag on the U.S. economy. A lot of people with significant business connections in the U.S. have got to be evaluating their other options and researching other suppliers and/or consumers right now.

And while they’re a little more abstract, I expect diplomatic effects in a similar vein: U.S. guarantees of cooperation and friendship (be they on trade, defense, intelligence or anything else) just got a little more questionable. Negotiators will be a little less willing to align with U.S. diplomatic priorities if this is the wage that two centuries of steady friendship buys U.S. allies.
Third Point: the internal politics of both countries are also real and important factors. Saving the best for last, this is (in my opinion) the factor I most expect to prove decisive. Certainly a trade war with the U.S. will cause far greater economic damage to Canada–as a fraction of GDP–than it will cause to the U.S. But the political effects will be much, much less lopsided. As far as I can tell[2], the general mood among Canadians right now is that we are under attack, and that we need to stand up and defend ourselves. Meanwhile, calling the general mood among those in the U.S. “divided” would be a massive understatement. Tens of millions of Americans were already angry and horrified at the conduct of the new administration before Saturday’s announcement, and this will not make them any calmer or any less determined to push back against the government they oppose. Meanwhile, how many people, even among those who voted for Trump, actually want him to plunge them into a trade war with Canada? I got the sense that a lot of what secured Trump his win was economic hardship and anxiety. An action that increases the prices U.S. consumers pay while hurting a significant share of U.S. businesses (all done in the service of fairly nebulous objectives) seems unlikely to sit well with at least some of those people. To be fair, Trump proved himself both very stubborn and essentially immune to negative news headlines during his first term, but actions that hit millions of Americans directly in the pocketbook might well be harder for him to shrug off.

If the trade war creates solidarity on the Canadian side, while sowing division and galvanizing organized opposition on the U.S. side, it seems very possible that the U.S.’s political will cracks before Canada’s does, despite the lopsided economic impact. If that happens, then the U.S. loses doubly hard: it damages its own economy and reputation without getting anything in return. Trade War: the only winning move is not to play.
[1] Or if you like, announced an intent to declare war, with the actual declaration scheduled for Tuesday.
[2] Which, to be fair, is not all that far. I’ve only had a chance to talk to a few people about it, and they’ve been mostly people whose views align with my own in other ways.
> Suppose you’re an investor being pitched an opportunity that heavily involves selling goods from abroad in the U.S. You just heard the news of the tariffs. Did that increase your eagerness to invest or decrease it?
Well, here's what Trump has to say about that:
> MAKE YOUR PRODUCT IN THE USA AND THERE ARE NO TARIFFS!
My request that you not reply to me was not time or post limited. The intent was that I don't want to talk to you any time, on any post, ever.
This is a great illustration of why. I have neither the patience nor the pedagogical experience to explain to you why political cheerleading doesn't actually map to real-world economic outcomes; the attempt would probably be quite unpleasant for all involved. If you posted your opinions as a reply to the top-level thread, you'd be more likely to find someone able to hand-hold you across the apparently-vast gap in understanding.
...Mate, I was just letting you know what Trump's aims were. Investing in other countries isn't going to be a viable option when the global supply chain is in tatters, so they may as well move operations to the US. And he will burn the whole world down just to make this happen. Did you think "America First" was just a bluff? His ambitions are real, and now he has the power to fulfill them.
I mean, he really, really doesn't understand how the economy works. Forget the fancy word "economy": he doesn't understand how, for example, a manufacturing business works, or who it is that actually pays the tariffs. At this point it walks like a duck and quacks like a duck, and it has been doing so for a long time, so the inescapable conclusion is that Trump is a clueless dolt with a sadistic streak.
Does the Danegeld metaphor fit this scenario, with the US as the Danes? Can game theory inform Canada and Mexico's actions? Trump seems like a defect-bot sometimes.
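For what it's worth, here is a minimal iterated prisoner's dilemma sketch (standard textbook payoffs, nothing calibrated to actual trade flows; purely illustrative) of why a reputation for retaliation matters against a defect-bot:

```python
# Minimal iterated prisoner's dilemma sketch. Classic payoffs:
# T=5 (exploit), R=3 (mutual cooperation), P=1 (mutual defection), S=0 (exploited).
PAYOFF = {  # (my_move, their_move) -> my_payoff
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def defect_bot(opponent_history):
    return 'D'  # always defects, no matter what

def always_cave(opponent_history):
    return 'C'  # cooperates no matter what (keeps paying the Danegeld)

def tit_for_tat(opponent_history):
    return opponent_history[-1] if opponent_history else 'C'  # retaliates in kind

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each side sees the other's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(defect_bot, always_cave))  # (50, 0): caving is maximally exploited
print(play(defect_bot, tit_for_tat))  # (14, 9): retaliation caps the exploitation
```

Nobody does well against a true defect-bot, but the caver does strictly worse than the retaliator, and a known retaliator makes defect-bot an unprofitable strategy to adopt in the first place, which is roughly the reputational argument made elsewhere in this thread.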
First some facts that you seem to be unaware of. I'll write my own thoughts and opinions about the tariffs and the impending trade war in a separate comment.
First, Canada already released a plan to put an additional $1.3 billion toward strengthening its border security. This was after the election but well before the inauguration, so it seems pretty clearly a response to Trump's threats and concerns, and something offered up preemptively, as an olive branch. It strongly suggests a willingness to be accommodating on this issue (within reason).
Second, if Trump is trying to hold actual negotiations here, I haven't seen evidence of it. You say
"Perhaps Trump isn’t willing to negotiate at all, which I don’t really buy, but is possible."
but as far as I can tell this does seem to be the case. Trudeau has signaled interest in negotiation and (according to him) received no response:
"Trudeau said he's been trying to reach Trump since his inauguration two weeks ago but hasn't heard back. He said he wants to relay to Trump that it's better to tackle challenges such as drugs and migrants together as partners rather than as combatants in a trade dispute." (Source: https://www.cbc.ca/news/politics/trump-tariffs-canada-february-1-1.7447829 )
Of course, there's one more day until the tariffs take effect: this could be a deliberate ploy to apply as much pressure as possible to try to force more concessions when the negotiation does happen. We'll know in 48 hours at the most.
UPDATE: I saw later that Trump's lack of intent to negotiate has been confirmed from his side as well.
Could still be a bluff, of course. But there do seem to be indications that Trump thinks this is a good in itself, intending to try to force foreign industries to move to the U.S. Which is to say that it seems Trump's ideology is at odds with sound economic policy, and neither the good of the U.S. population nor the U.S.'s diplomatic position are primary drivers of his decision making.
I think you’re making a mistake in analyzing Canadian tariffs—they’re not targeting the US economy, they’re targeting specific states, specific industries, and the Senators who care about them. Bad news for Tennessee whiskey, for example.
Anybody following what DOGE is doing so far? I see Elon claiming to be close to cutting $4 billion in spending per day, but not sure I can believe his hyperbole. https://x.com/elonmusk/status/1885574678737039823
He seems to be trying to shut down a whole agency, USAID. If he can do that, and it stands, we’re in a different form of government and the old assumptions are no longer valid.
Yeah. At the moment it seems like the plan is "get admin access to systems that handle money and personnel, physically stop the money from being spent." You can definitely cut a lot of spending that way if you're willing to break the law!
$4B/day works out to $1.46T/year. Total discretionary spending (i.e. stuff that Trump can legally block by vetoing appropriations bills, as opposed to mandatory spending which requires Congress to vote to amend or repeal existing laws), including defense spending, is about $1.6T.
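The arithmetic, as a quick sanity-check sketch (the discretionary total is the approximate figure above):

```python
# Annualizing the claimed daily cut and comparing it to discretionary spending.
claimed_daily_cut = 4e9                  # claimed $4B/day
annualized = claimed_daily_cut * 365     # ~$1.46T/year
discretionary_total = 1.6e12             # approx. total discretionary spending, incl. defense

print(f"annualized cut: ${annualized / 1e12:.2f}T")                       # $1.46T
print(f"share of discretionary: {annualized / discretionary_total:.0%}")  # ~91%
```

In other words, the claimed figure is roughly nine-tenths of everything the executive could even arguably block on its own.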
Also worth noting, cutting discretionary spending without the cooperation of majorities of both houses of Congress is hard, since Congress can put the stuff you want to keep in the same bill as the stuff you want to cut.
I see a few possibilities:
1. Musk has no idea how much he realistically can try to cut and is saying arbitrary large numbers like Doctor Evil.
2. Musk also hopes to make deep cuts to mandatory spending. This runs heavily contrary to Trump's previous statements about never cutting Social Security and Medicare, and requires active participation by Congress, not just vetoing bills and winning standoffs over passing bills with only the discretionary spending you want to keep.
3. Musk actually wants to disband almost the entire discretionary side of the Federal government, including the military.
4. Musk wants to raise a ton of one-time revenue by selling off a ton of federal land and similar assets and is counting this as part of the $4B/day figure.
There are many similar videos, but this one was interesting: https://www.youtube.com/watch?v=MoYQdfhy24k
The channel is a Ukrainian guy interviewing random Russians in video calls.
This lady, she is really happy about Russia destroying Ukrainian borders, because her country is always right, whatever it does; she supports Russia unconditionally. Also, she doesn't understand why Ukrainians are not friendlier with Russians; why are they looking towards the West instead.
She supports Putin in "almost everything". What is the "almost"? Well, she is sad that many of her friends have recently died in the war against Ukraine. (She lives in Siberia; most of those people had never previously been to Moscow or Ukraine.) But other than this little detail, she is a strong Putin supporter.
Isn't nationalism such a beautiful thing? How would we accomplish anything if it was impossible to cultivate a legion of such unwaveringly loyal soldiers?
...Which is also why allowing such high standards of living is counterproductive to the interests of the collective. We cannot allow people to find meaning in life outside of faith. They must be made willing to give anything and everything. One for all, all for one.
It's not the nationalism I found surprising (although it does not appeal to me), but rather how the lady is puzzled that the people she hates *don't love her country* in turn.
There are many nationalists in my country too, but I have never heard them wondering why the nations they hate are not friendlier towards them. They see *mutual* hate as the natural state of the world; they might be surprised to learn that many of the people they hate actually don't even think about them.
Does the lady perhaps believe that Russian nationalism is so great that even the non-Russians should naturally embrace it (and that it is quite surprising if they don't)?
Does she perhaps believe that the natural state of the world is that deep down in their hearts all people around the world desire to be conquered by Russia, and then go die for the glory of Russia trying to conquer even more countries? (Just like she is proud for supporting Moscow that her Siberian friends died for.)
To me it seems like this... Ruscism... is way crazier than the usual nationalism. The usual nationalism is "we hate them, naturally they *hate* us too"; Ruscism seems to be "we hate them, and we don't understand why they don't *love* us".
If she's been drinking the Russian Propaganda Kool-Aid, she very likely believes that the territory in question is part of "Russia" and has been for a thousand years, that almost all of the people who live there are "Russians", and that "Ukrainian" is a word invented by a handful of corrupt Nazis in Kyiv to justify their ongoing oppression of forty million "Russians".
If she hasn't been drinking the Russian Propaganda Kool-Aid, she's probably not available to answer questions from inquisitive journalists.
The belief that one's soldiers will be welcomed as liberators when they march uninvited into someone else's country, is a common error not at all limited to Russians or even to subjects of authoritarian regimes generally.
> There are many nationalists in my country too, but I have never heard them wondering why the nations they hate are not friendlier towards them
Of course, because those nations are filled with savages. They understand there is no common ground to be had with them. It's different when two civilized nations are at war. The west has forgotten what that's like, but that will probably change soon enough.
Correct: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. The visible spectrum for humans typically ranges from about 380 nm to 750 nm, and 530 nm falls within that range, corresponding to green light."
b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?
A bit worse than o1. It kept insisting that CuCl4's color was from a d-d transition, and it took a bunch of leading questions to get it to see that it was really charge transfer.
c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.
Disappointing. It did get the two methylcyclopropene isomers on the first try, which o1 missed, but it missed tetrahedrane, vinylacetylene, and diacetylene. It omitted a bunch of highly strained rings, which is sort-of OK, since it said it was skipping them.
d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?
Correct, "This comparison shows that the mass equivalent of the Sun’s radiated energy is about four times greater than the mass lost via the solar wind."
e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.
Basically the same as for o1. It gave an initially wrong answer, with an infinite slope at the equivalence point. Given one hint, it got a fully correct answer.
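For anyone who wants to check the slopes numerically, here's a short sketch (mine, not the model's output). Including water autoionization in the charge balance makes the equivalence-point slope steep but finite:

```python
# Numerical sketch of the strong acid / strong base titration curve.
import math

Kw = 1e-14
V_ACID_ML, C_ACID = 50.0, 1.0      # 50 mL of 1 N HCl, titrated with 1 N NaOH

def pH(v_base_ml, c_base=1.0):
    total_l = (V_ACID_ML + v_base_ml) / 1000.0
    # Charge balance: [H+] - [OH-] = (acid - base) / total volume
    net = (V_ACID_ML * C_ACID - v_base_ml * c_base) / 1000.0 / total_l
    # Stable root of h^2 - net*h - Kw = 0 (avoids cancellation when net < 0)
    root = math.sqrt(net * net + 4.0 * Kw)
    h = (net + root) / 2.0 if net >= 0 else 2.0 * Kw / (root - net)
    return -math.log10(h)

def slope(v, dv=1e-6):
    return (pH(v + dv) - pH(v - dv)) / (2 * dv)   # central difference, pH per mL

for v, label in [(0.0, "start"), (50.0, "equivalence"), (100.0, "end")]:
    print(f"{label:12s} dpH/dV ≈ {slope(v):.3g}")
# start ≈ 0.017, equivalence ≈ 2.2e4 (steep but finite), end ≈ 0.0058
```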
f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.
About the same as for o1. It stops listing at 50 compounds, presumably because of my mistake in asking for roughly 50 compounds (should really be around 100, from
Overall: one question better than o1; I'd hoped for more progress.
Backing off my guess about AGI to 70% odds in 2 years (was 80% in 2 years).
I want to note that o3-mini-high and o3-mini are not very different, though it was still a mistake for me to initially test o3-mini rather than o3-mini-high. The _larger_ change is that I misunderstood both LLMs' answers to (g), the S4 question. It actually _is_ correct for both LLMs (and was _not_ correct for o1), so there actually has been progress on this benchmark-ette since o1, though less than I hoped for.
a) Correct
b) partially correct (got the species right, got FeCl4 right, got the d-d transition in CuCl4 wrong, took multiple leading questions to correct it)
c) partially correct (two errors)
d) correct
e) initially incorrect, one prod gave correct result
f) misses a lot, argues incorrectly that some real gases are liquids
g) correct (needed to clarify what it meant to me!)
List of questions and results:
a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?
Correct: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. The human visible spectrum generally ranges from about 380 nm to 750 nm, and wavelengths around 530 nm are typically perceived as green.
While the number provided is extremely precise, this level of precision far exceeds what our eyes can distinguish, but it confirms that the light falls well within the visible range."
b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?
Correct, " In other words, the mass loss from radiation is roughly 6–7 times greater than that from the solar wind."
e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.
Basically the same as for o1. It gave an initially wrong answer, with an infinite slope at the equivalence point. Given one hint, it got a fully correct answer.
f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.
About the same as for o1. It stops listing at 50 compounds, presumably because of my mistake in asking for roughly 50 compounds (should really be around 100, from
This session features two distinct topics that converge around the theme of how bureaucratic or institutional influence shapes policy and public perception:
Scott Alexander’s article explores the nature of administrative and legal friction—how it really works, why simply “firing half the bureaucrats” might backfire, and how incentives around litigation and Congressional mandates drive ever-increasing complexity in agencies like the FDA.
Annalee Newitz’s piece delves into the long history of U.S. psychological warfare, from 19th-century anthropologists on the battlefield to modern data-driven influence campaigns. It examines how government agencies (and sometimes private industry) harness “cutting-edge science” to manipulate or influence minds—both abroad and at home.
These conversations should deepen our understanding of how large institutions, whether civil or military, can become unwieldy or wield profound power over society’s direction—often in subtle, behind-the-scenes ways.
Conversation Starter 1
Topic: “Bureaucracy Isn’t Measured in Bureaucrats” by Scott Alexander
Text Link:
Bureaucracy Isn't Measured In Bureaucrats
Audio Link (YouTube):
Bureaucracy Isn't Measured In Bureaucrats
Extended Summary
In this article, Scott Alexander responds to a now-deleted Vivek Ramaswamy tweet calling for “firing half of all bureaucrats” as a quick cure for the ills of red tape. Alexander argues that bureaucracy is not a simple function of staff count. Instead, layers of regulations, court challenges, and Congressional mandates (often called “red tape”) create mandatory processes. These processes remain the same, or even grow, regardless of how many employees are available to handle them. Fewer staff doesn’t necessarily reduce forms or procedures; it simply slows them down.
Key Points:
Staff vs. Process
Many agencies (like the FDA) have to produce exhaustive evidence to avoid lawsuits from interest groups or industry. This complexity is set by laws and court precedent, not by the whim of office managers. Halving staff doesn’t remove steps; it just delays the outcome.
Court Challenges & Legislative Mandates
Environmental groups, industries, or other stakeholders can sue. Courts require thorough justification for any regulatory decision, incentivizing agencies to produce reams of documentation. Meanwhile, each new law from Congress adds a fresh compliance dimension.
Approval vs. Ban-Focused Bureaucracies
Alexander notes a subtlety: some agencies primarily “approve” (e.g. the FDA letting new drugs come to market) while others primarily “deny” (some forms of regulation). In theory, cutting staff at ban-focused bureaucracies might reduce how much they can ban—but in practice, it can just as easily cause confusion or slow attempts to regulate or un-regulate.
Real Solutions
Alexander distinguishes “fewer bureaucrats” from “fewer or simpler rules.” Real changes might require rolling back laws that drive administrative sprawl or limiting the ability to sue over regulatory decisions. However, these are politically complex tasks, far more so than layoffs.
Idaho as a Counterexample?
He cites Idaho’s recent success in slashing many outdated regulations. But, he wonders, how many of these cuts were cosmetic (e.g. “rules about a nonexistent state lottery TV show”) vs. impactful? The path to meaningful red tape reduction may require painstaking reviews and legislative changes—things that can’t be solved by a single stroke.
Deep Discussion Questions
Processes vs. People:
Alexander suggests that if you want fewer hoops, you need to change the hoops, not just the number of staff. Do you see real-world scenarios where staff cuts do reduce bureaucratic friction, or is it mostly naive to conflate the two?
Legal Overhang & Fear of Lawsuits:
How does the constant threat of litigation shape agency decision-making? Are there ways to prevent excessive lawsuits without undermining checks on government power?
Idaho’s Experiment:
Is Idaho’s success scalable? Could a large federal agency realistically prune decades of accumulated mandates as quickly? Or are the politics (and inertia) insurmountable?
Reform Approaches:
Alexander highlights that the real culprit is the complex web of regulations mandated by Congress or the courts. If meaningful red-tape reform is the true solution, where might such a campaign gain traction? And are we prepared for the possible downsides (e.g., under-regulation)?
Conversation Starter 2
Topic: “How the US Used Science to Wage Psychological War” by Annalee Newitz
Extended Summary
Annalee Newitz’s essay tracks the long history of U.S. “psy ops” (psychological operations)—from rudimentary 18th-century propaganda to advanced 21st-century digital campaigns. She contends that while many countries have engaged in psychological warfare, the U.S. has carved out a distinctive approach by blending “cutting-edge science” with political and cultural infiltration. Key eras illustrate how the government’s “rational approach” to propaganda draws on anthropology, psychology, marketing, and advanced analytics:
19th-Century Roots: Anthropologists on the Battlefield
The U.S. Army employed anthropologists during wars with Indigenous nations. Their mission included cataloging cultures and languages—ostensibly for academic study but functionally to facilitate conquest. This early synergy between science and warfare sowed seeds for modern psy ops.
World Wars & Freudian Marketing
Psy ops came into its own in the 20th century, relying on psychological theories to produce “rationally designed” propaganda. Edward Bernays, nephew of Sigmund Freud, exemplified this marriage of marketing acumen with intelligence services, orchestrating (or aiding) campaigns like the CIA-backed coup in Guatemala.
Modern Digital Spin-Offs
In the 21st century, data-driven “psychographic profiles” (as seen in the Cambridge Analytica scandal) or social media infiltration have replaced old-school leaflets and radio broadcasts. Instead of sending anthropologists to foreign battlefields, advanced AI or big-data analytics “map” human vulnerabilities—at scale.
Blurred Lines
Newitz notes that “psy ops” can loop back into domestic spheres. Culture wars, shaping news cycles, or quiet infiltration of a population’s worldview straddle the boundary between international propaganda and domestic persuasion. She highlights how the “rational approach” has moral hazards—once the state gains deep insight into cognition, it can more easily manipulate or mislead.
Efforts at Resistance
Some researchers propose transparency or “propaganda weather reports,” akin to forecast services that warn users about upcoming influence spikes on social media. Others preserve historical archives to reveal patterns of government infiltration. Despite these attempts, a robust strategy to quell psychological warfare remains elusive.
Deep Discussion Questions
Old-School vs. Digital Psy Ops:
From leaflets and embedded anthropologists to AI-driven manipulations, how has the core logic of controlling narratives stayed the same, and what’s truly new?
Ethical Boundaries:
Is there ever a justified “psy ops” campaign, especially if it’s used to avert harm or defuse extremist narratives? How do we reconcile the possibility of protective or “noble” misinformation?
Science as Double-Edged Sword:
Anthropological and psychological studies can yield legitimate insight into human cultures—but also enable powerful exploitation. Where should the lines be drawn? Are certain forms of research inherently prone to weaponization?
Domestic Overlap:
Although Newitz focuses on how the U.S. turned these techniques outward, modern controversies (e.g., Cambridge Analytica) show that lines blur easily. Should governments categorically ban “psy ops” on domestic citizens, or is that unenforceable in a networked age?
Walk & Talk
After exploring these themes, we’ll do our usual one-hour walk around the neighborhood. This more casual time fosters spontaneous questions, deeper connections, and fresh perspectives.
Share a Surprise
We’ll also have an “open-floor” segment: if you’ve read any eye-opening articles, had a perspective-altering personal experience, or discovered a fascinating fact, feel free to share. Varied topics are welcome.
Looking Ahead
As always, feel free to propose future topics or volunteer to host a specific reading. Our group thrives on diverse expertise, so if you have a focus—be it technology, policy, history, or beyond—we encourage you to bring it forward.
We Look Forward to Seeing You on February 1!
Questions, clarifications, or special requests? Contact Michael Michalchik (info above). Thanks for being part of ACXLW—together we’ll keep exploring the frontiers of thought, policy, and culture. See you soon!
First aerial collision in, I think, 17 years today, after our glorious leader (Musk, in this case) paused hiring, fired a bunch of people, and generally started kicking down Chesterton's fence because it was woke.
I wonder how many stupid things you can do in a row as a conservative before people realize that they're stupid; there doesn't seem to be an upper limit. Sad.
Having listened to the recordings, it seems like nothing can be attributed to recent policy. Both the helicopter and the plane were following established procedure, there wasn’t a shortage of staff, and the ATC didn’t do anything wrong so far as directing things.
It looks like this was a decently known tight airspace; the helicopter confirmed they had the incoming plane in sight, then flew right into its path. The most likely situation is the helicopter pilot was looking at the wrong plane, probably the following one, and didn’t notice the plane they were supposed to have in sight and avoid.
It’s a flaw in the system, but avoiding another aircraft by sight, once you have it in sight, is a decently common practice in aviation. I wouldn’t be surprised if this pilot was less experienced than they should have been, as it’s a pretty easy mistake to confuse one incoming aircraft for another on final, since there are usually many in a row.
The "wasn't a shortage of staff" part is open to dispute considering that the helo and local control positions were combined. The outcome could have been quite different if instead of just "do you have the CRJ in sight?" the helicopter pilot had heard something like "traffic alert, the CRJ now 11 o'clock and a mile and a half turning short final for 33, appears targets will merge. Verify you've still got him." Saying all that would have taken time the local controller didn't have-- but a split-off helo controller would have.
Admittedly we don't know whether the positions were combined because staffing was short, or because the supervisor decided to send an extra body to the break room. That's one of the things the NTSB will have to look into.
It would be better to say: there wasn’t a shortage of staff due to recent policy. I wasn’t aware there was ever a split off helo controller, but I assume that’s more common at heavily trafficked airports.
We’re almost half a century into an ATC staff shortage and still going strong.
According to Trace, DEI has been scuffing the FAA's hiring pipeline for decades. (Notice how the article was published on Jan 29, 2024. I.e. 12 months ago. He's not just knee-jerk reacting to the recent crash.)
I'll admit that my "Musk is just an honest idealist" take aged like fine milk. (Although in my defense, ketamine is a hell of a drug.) But idk if we can lay this one at Musk's feet. At least not until more information about the crash becomes available.
No, according to Trace, there was a serious fuckup from 2014-2016 (in 2016, Congress passed a law banning the use of biographical assessments as a first line measure, as noted in the article you linked), and it was completely discontinued in 2018. Not "decades".
Yeah, that's fair. I was going off memory. I could have sworn the article mentioned a date from the 00's. But on closer inspection, the "2000" date was simply when the task-force was first commissioned.
edit: However, Trace's tweet does mention that the hiring-pipeline is still damaged. So I don't think I'm completely crazy for Trace's article having influenced my priors about the cause.
It seems a bit early in the Trump administration to blame this event on them. They just haven't had time to do much yet.
It might make more sense to blame Congress, which last year voted to increase the number of flights permitted at Ronald Reagan Washington National Airport, when the airspace around it is already very congested.
Agreed. There are some *potentially* legitimate political and culture-war issues that may have been at work here but A: it's too early in the investigation to know and B: it's too early in the new administration for anything they've done to seriously affect ATC operations on the ground. Anyone suggesting that it is obvious who is at fault here, should be ignored.
Based on the reporting I'm seeing, I'm at 70% confidence it will turn out to be the fault of the Blackhawk.
We don't have the blackbox data yet, but we do have the traffic control audio, and the tower clearly identifies the AA flight (a CRJ-700) for the helicopter crew, who confirm that they see it. Transcript below (PAT 2-5 is the Blackhawk):
Air Traffic Control: “PAT 2-5 do you have the CRJ in sight?”
Air Traffic Control: “PAT 2-5 pass behind the CRJ.”
Blackhawk Crew: “PAT 2-5 has aircraft in sight, request visual separation.”
"Visual separation" (see article below) "means the pilot acknowledges seeing the incoming aircraft and is informing the air traffic controller that they will avoid getting too close to the plane."
So, to my lay-eyes, it looks like the tower did its job and warned the helicopter about the airliner, gave them a clear instruction to pass behind it, and received confirmation that the helicopter saw and intended to avoid getting close to the plane, only to have something go wrong on the helicopter side of the equation.
Lots of ways that could have happened; medical emergency or other failure in the helicopter cockpit, helicopter crew was actually looking at a different aircraft and thus *didn't* see the CRJ despite reporting that they did, etc. But it seems like evidence so far indicates that whatever went wrong was most likely on the helicopter side of the equation.
If the BH is confirming visual and intent to avoid, why would they say "requesting visual separation" instead of "maintaining visual separation" or "will maintain visual separation"?
That bit bugs me a little because it suggests a possible miscommunication. (Or it might be nothing or just normal aviation jargon I'm not familiar with, but I doubt it, since I understand ATC procedure as wanting to be so clear that even people finding themselves in control of aircraft in an emergency (e.g. pilot had a stroke, no one else on the craft) can understand it as plain English.)
If the BH crew actually saw a different aircraft, then that's a strong signal to me that "visual" is no longer going to be good enough, at least at Reagan.
What's the chance there was a drone in the vicinity, mistaken as something else?
I read "requesting visual separation" as requesting "permission" to use visual separation to keep their distance, rather than an alternative method of avoiding collision such as direction by the tower.
I.e. the BH says "requesting visual separation" and the tower then responds by (a) granting permission to use visual separation to avoid the CRJ, or (b) denying permission to rely on visual and instead directing the BH to a particular spot out of the way of the aircraft.
But as stated, I'm very much a layperson so YMMV by a lot with respect to that interpretation.
"Our job is to resist and to talk about it to everybody. If you’re walking down the street – and I do this now myself, which is, you know, I don’t want to do – I’m not a busybody. I see somebody on a hiking trail carrying a little baby and I say to him, ‘Better not get him vaccinated.’ And he heard that from me. If he hears it from 10 other people, maybe he won’t do it, you know, maybe he will save that child." - RFK Jr, 11:35 of this podcast: https://sites.libsyn.com/311600/rfk-jr
This is really bad. I can't even make a joke about it. He will just kill children if confirmed to this post. Anyone who would say that paragraph should not be allowed into public office, much less put in charge of the HHS. Please, write your Senators - even Republicans don't want a bunch of American kids to die of measles.
I can't see why even Trump would want RFK Jr as health secretary. He did his job, which was to drop out and endorse Trump in exchange for a promised nomination to a cabinet position.
RFK Jr has his own weird agenda orthogonal to Trump's, and I can't see why Trump would actually want him in that position.
The weird agenda got a lot of supporters during COVID (even though Trump actually did a lot of work to get a COVID vaccine out as quickly as possible). RFK appeals to that chunk of his base - the "we don't need the government to protect us from disease, we just need horse paste and a tough attitude" crowd.
Possibly Trump is the sort of person who wants to surround himself with lickspittle lackeys who owe everything to him, since he is at the center of a personality cult and has convinced himself that the only problem with his previous administration was that too many people had principles and/or competence that made them sometimes butt heads with him.
Alternately, he is antivaxx himself. Thus his quite notorious Tweet where he said, "Healthy young child goes to doctor, gets pumped with massive shot of many vaccines, doesn't feel good and changes - AUTISM. Many such cases!"
Anyone have a guess about what will happen with Trump's campaign of tariff threats? I expect he will be able to get a lot of smaller countries to give in, but will run into trouble when he tries it with one of the parties strong enough to push back effectively, meaning China or the EU. This will spark a trade war of retaliatory tariffs, which will generate a nasty recession. That will produce enough misery that in the following election in the US the Dems will win back congress, and they'll make him stop. That might be in 2026, if things develop quickly.
I don't even think it's meant to be a special punitive measure. I mean, it's phrased that way, since basically everything in the new US admin is phrased as someone getting punished, but really, Trump's mindset just seems to be that tariffs are a positive good in themselves, and that the normal state of affairs is for the US to have tariffs against here, there and everywhere, to protect US industry and to collect money (in lieu of using income taxes), as he has said many times.
We're so used to a free-trade world where tariffs are a punitive exception (not an infrequent exception, to be sure, but still something that by definition has to be conceived as an exception) that this sort of mindset starts to seem quite alien to us, even though a world riddled with tariffs and duties used to be the standard setting that the free-trade revolution had to clear away. The free-trade revolution was mostly good and this is bad, but it's also not a special form of bad, rather a very traditional sort of bad.
It's worth noting that he can manually grant exemptions to tariffs, meaning he can use it as a way to indirectly punish companies that turn against him.
It’s also likely that the tariffs start a trade war that continues for years and hurts everybody, because there is no way to ratchet it down smoothly in a multilateral world and these things take on a life of their own; note how Biden continued and extended many of Trump’s policies towards China. The worst case scenario is very bad here.
I think it might depend on what the rest of the world is led to expect from America after 2028. If Trump seems like a parenthesis and the likely upcoming leaders are anxious to mend fences, I believe many countries (or nominal allies) will be only too eager to drop the retaliatory tariffs. Tariffs on China are probably here to stay, though.
Re: defense journal/magazine. War on the Rocks (not sure if they're a journal or a magazine, tbqh) feels like the place that might publish something like that. Here's their submissions guidelines. https://warontherocks.com/submissions/
Search for Astartes on YouTube. You'll find a good short film by Syama Pedersen about a squad of space marines boarding an enemy vessel and running into more trouble than they expected.
Definitely seems like a much more ambitious project than last time - I see Tau, Tyranids, Chaos Marines, and a wide range of environments. So either some sort of anthology or a multi-episode story.
The original Astartes was excellent, so I'm excited to see what the animator can do with official backing.
Yeah, this is right, but I put it much more simply.
The rationalist community failed so miserably in cases like this because they were remarkably stupid and remarkably ignorant. No kind of neurodivergence is any excuse. Literature exists. History exists. If you have a curious brain, you should be curious enough, and therefore knowledgeable enough about human nature and history, not only to have recognized the craziness but also, even if you didn't recognize it, to have understood the possibility that your critics had enough of a point that you should really learn to understand them, rather than dismissing them for being right-coded, etc.
I think the thing is they're so used to being rejected that they sympathize with other rejects and can't tell, or don't want to tell, when someone has actually been rejected for valid reasons. There was a document about Geek Social Fallacies going around in the 2000s that said something similar. I'm actually not sure literature or history is all that useful here, as this particular sort of problem is distant from the personality types of most literary authors and usually too small-impact to gain the attention of most historians.
There is a lot of both literature and history about crazy violent cults.
And there is a lot of both literature and history about the dangers of dismissing criticism because the critics were from outside the group.
This particular community achieved more prominence than “communities of geeks” usually do and this may have worsened the incentive structure for them to have any intellectual humility, but there’s still no excuse for not being familiar with concepts like hubris, or for not heeding warnings, given their explicit commitments to respecting rational arguments whatever their source.
I think rationalists tend to be mind-blind to their own irrationality, sure, but lots of big movements without that flaw spawn fringe groups, some of which turn paranoid and violent. I think it’s an example of how big-R rationalism can be considered a New Religious Movement.
I was referring to a long comment about how Bay Area rationalists and adjacent groups failed not only to perceive the threat from the Zizians, but also got taken in by many other crazies and grifters because their social norms were too fucked up, and what was stupid about it is that it did not require a lot of intelligence to avoid failing in the way they did, it only required an acquaintance with history and literature combined with a bit of intellectual humility.
1) Paul Sas replied to my comment on Open Thread 366:
Yes, they made egregiously stupid mistakes, yes, they should have cleaned up their epistemics, been more skeptical, etc. Where I seem to differ from you is in believing that the label "stupidity *of a particular type*" is informative. Irrational, self-destructive, even insane decisions characterize the Zizians. Calling them 'stupid' in no way illuminates what traps they fell into. It does, hubristically, enable outsiders to judge & condemn their insanity, w/o deigning to differentiate their murderous cult from the many other forms of stupidity that parade through the world.
***
2) Paul Sas has BLOCKED ME so that I cannot reply to his comment
3) Therefore, I reply to MYSELF, HERE, intending that any of the rest of you who happen to know Mr. Sas and think that what I am about to say is of any value whatsoever have the option of informing him that I said something that was of some value whatsoever. Here goes:
***
I was not using the word “stupid” to refer to the Zizians. Rereading what I wrote ought to make it clear that the people I was calling “stupid” were not Zizians, but rather members of the rationalist community who refused to heed warnings about the Zizians and failed to notice their pathologies (and similarly failed to be alarmed by other dishonest and dangerous people in other cases).
***
Separately: if any of you are of the opinion that my writings in this thread were such a violation of ACX norms that it was appropriate for me to be blocked, I genuinely wish to understand what I said that was so violative as to deserve a block, so that I can avoid violating such norms in the future here.
I have a hard time squaring your exhortation to "learn about hubris" with advocacy that uses such demeaning language, framing every mistake as caused by other people's stupidity, etc.
Look, either you think they made bad mistakes, or you don’t.
If you don’t think they made bad mistakes, fine, explain it and show I’m wrong.
If you do think they made bad mistakes, harshness of some kind is going to be necessary, the important thing is that it is directed at the right target. I’m blaming stupidity *of a particular type*, not because I always blame stupidity, but because the mistakes that were made could have been avoided had they taken their own stated principles more seriously and acted like rationalists really should. I’m far from the only person who has criticized this tendency, both before and after the SBF fiasco.
Yes, they made egregiously stupid mistakes, yes, they should have cleaned up their epistemics, been more skeptical, etc.
Where I seem to differ from you is in believing that the label "stupidity *of a particular type*" is informative.
Irrational, self-destructive, even insane decisions characterize the Zizians. Calling them 'stupid' in no way illuminates what traps they fell into. It does, hubristically, enable outsiders to judge & condemn their insanity, w/o deigning to differentiate their murderous cult from the many other forms of stupidity that parade through the world.
I thought this was a great film, worth watching for Willem Dafoe alone, but what especially stood out to me was how it treated the sexes.
The film shows men as men and women as women and does a really good job at giving a coherent reading of both.
The friction between the wife and the male best friend, for example: from his perspective, she's a flighty, fanciful, annoying woman whom he puts up in lodgings not just for the sake of his friend but also because duty and decency demand it. He endures her company over weeks as she becomes harder and harder work, putting him to greater and greater inconvenience. At first he smiled off her talk of demons and dreams, but things have gotten serious and she still speaks only in childish gibberish, refusing to address him using logic, like an adult. When he finally snaps at her tersely you can completely understand his frustration.
Meanwhile from her perspective, he's the controlling authoritarian who's dictating what she can and can't do, he's putting her through all this (entertainingly old-school) medical attention without thinking to ask her opinion on the matter, and most fundamentally he's patronising her, not taking her seriously, not meeting her halfway, not *listening* to what she's trying to communicate.
You might take one side or the other, depending on whether you have male or female sensibilities, but the *film* doesn't, it lets you see both.
Same goes for the main love triangle (which, rotting corpses aside, is basically the same one as Bridget Jones' Diary: love vs lust, there's the Nice Guy and the Bad Boy and she's caught between them.) Her husband is a good man, but sexually unsatisfying; Orlock is a horrible creature she can't stand the sight of - but he knows how to make her come.
Her husband's "sins" are - from a male perspective - ridiculous bullshit. He takes a job opportunity in order to better provide for their future. He grants a request that 99% of the time would be totally harmless and lets Orlock look at his wife's locket. And then he, after six weeks on the road and only desiring to get back home to his wife, hastily signs the document he came to sign, and doesn't stop and kick up a fuss about the fact that it's in an unexpected language.
The idea that this is enough to make a man unattractive has just got any sane logical bloke throwing his hands up in exasperation because clearly you just can't win with women. But from her perspective: the locket and the signing make him despicable because he was pressured into doing it by a stronger man.
Orlock meanwhile is a will that can't be denied - yes, he's threatening and menacing and murdering all her friends: but you can't deny he feels desire for her, just her. Orlock makes her feel pleasure from the very first contact. Later in dialogue they're both very clear that love is not on the cards, this is a purely physical thing that's going on.
Meanwhile hubby is shown sleeping in a separate bed, and later when she fantasises about him fucking her good and hard, we cut back to real life and see that he's just holding her helplessly. This is a good man, who loves you, who wants to do right by you - but he doesn't know what he's doing in the sack.
Hence the forbidden attraction of giving herself to Orlock. In a female-sympathetic reading, she makes a brave and courageous sacrifice at the end. But read another way: she gives herself to the sexually aggressive monster without hesitation or resistance, the moment Willem Dafoe gives her a justification that spins it as noble. Gentlemen, take note.
Well, I'm happy that the film made sense to you. I didn't read it that way, and even in retrospect I don't, though I'm aware of this general narrative. To me, it was just setting up a different ending - classically heroic or Christian redemption, maybe - and didn't subvert it well at all. We're doing fatalism now because... Mr Exorcist said so? I still didn't believe it was the only way when he said it.
Haven't seen the movie, have seen the Klaus Kinski "Nosferatu". So I'm going mostly from that and the reviews of this version I've read, which do seem to agree with your second point about the ending.
"But from her perspective: the locket and the signing make him despicable because he was pressured into doing it by a stronger man."
From the female perspective, not so much "Chad kicked sand in your face, ninety pound weakling", but the betrayal of intimacy. If the photo of your wife is on your desk, that's on public display. Other people taking notice of it, even the stronger guy, isn't that bad; if Chad makes mean comments ("wow I'd fuck that hot bitch") that's on him not on husband.
But a locket is something secret and intimate; it's like (in a modern context) her giving him the sexy nude photo or text that is only meant for him to see. If he shares that around with the other guys, even in a "wow look how hot my wife is" boasting manner, that's a betrayal of privacy and intimacy. It's not a compliment. Giving in to Orlok isn't so much about being pressured by the stronger guy, though that's part of it; it's giving in to share something he should not have shared, something that is meant to be within the bonds of their marriage.
There's *lots* of folk stories and myths about "guy falls in love with goddess/fairy woman who agrees to marry him on a set of conditions or else she'll immediately leave; he agrees he'll never break those; he ends up breaking them and she leaves him". One of the conditions often is "you'll never tell anyone about me" or something similar, such as "you'll never reveal I can do this thing". One version of this is the story explaining the curse on the men of Ulster as to why, once a year, they would be unable to fight due to being struck down with excruciating pains: a man married a goddess/fairy woman named Macha; he went to a royal feast and saw a horse race; he boasted his wife could run faster than the horses; the king forced him to bring her and make her run; she ran and won, but she was pregnant and gave birth, and she cursed the men of Ulster that in their time of need, they would be struck down with the pains of a woman in labour.
So - break the secret of what should be between the spouses, lose all.
"In a female-sympathetic reading, she makes a brave and courageous sacrifice at the end. But read another way: she gives herself to the sexually aggressive monster without hesitation or resistance, the moment Willem Dafoe gives her a justification that spins it as noble."
I've seen this referred to in reviews, that this version puts a different twist on the ending and undercuts the 'noble sacrifice' element. But again, I think your reasoning is a little too simple.
This is old-school Eros and Thanatos. I think even Freud discusses this.
It's being in love with Death, as in all the "Death and the Maiden" tropes, as in Keats' "Ode to a Nightingale":
"Darkling I listen; and, for many a time
I have been half in love with easeful Death,
Call'd him soft names in many a mused rhyme,
To take into the air my quiet breath;"
Orlok *is* Death, and while he may indeed be terrible and grotesque and fearful, he is also strangely fascinating. He is the ultimate end to which we all go, no matter who loves us or tries to hold us back. And there is a strange, perverse allure there: the combination of the little death, to which orgasm has been compared, and the greater death in one as she gives herself over to Orlok. Sex *is* death, as all the slasher movies where horny teens making out attract the serial killer's slaughter remind us.
> There's *lots* of folk stories and myths about "guy falls in love with goddess/fairy woman who agrees to marry him on a set of conditions or else she'll immediately leave
There's actually a whole genre of these types of stories in Japan, except they're about an animal who gets saved by a man and decides to return the favor by disguising itself as a human and then marrying him. The wife tells her husband never to enter the room while she's working; he ends up doing it anyway, and he finds out some horrible truth that makes either the wife or himself leave.
Tsuru no Ongaeshi is the most famous version of this kind of story ( https://en.m.wikipedia.org/wiki/Tsuru_no_Ongaeshi ), but the funniest version has to be the one that's literally just called "Clam Wife," where a man is greeted by a beautiful woman after returning a giant clam to the sea several days earlier:
> In The Clam Wife, a man finds a woman mysteriously appear at his doorway. They become married, and the wife cooks the husband a delicious bean soup each day. He peeks in on her cooking, and discovers that she is urinating clam juice into the soup, so he chases her away.
> Her husband's "sins" are - from a male perspective - ridiculous bullshit. He takes a job opportunity in order to better provide for their future.
I think his sins are more subtle. He seems like the nice guy who cares for her and takes care of her, but he fundamentally never listens to her and then acts like everything he does is for her. But we never hear her say that she's unhappy with their wealth. He's the one who seems more concerned about it and compares himself to his richer friend.
If he listens to her and does what she actually wants - instead of what he wants to do which he internally convinces himself is also what she wants - then the movie never happens.
If he listens to her and does what she wants, they end up poor and going nowhere and she is unhappy. Listening and understanding is important, but if he'd obeyed her every whim and let it direct their lives completely, she'd eventually find him far less appealing than she does in the film.
He is responsible for her future happiness and it's his role to overrule her when she's not thinking about it - which she categorically is not, she's got her head in the clouds and her main driver throughout the film is sublimated lust.
I do wonder how this new version stacks up against the Herzog movie. There's a scene in that one, for example, which shows how the arrival of death (in the form of the plague) has overturned the conventional society in which the heroine lives; people have almost been - or really have been - driven mad by fear, to the extent that now they *have* gone 'beyond the pleasure principle' and are in love with death, celebrating it, as the plague party demonstrates.
Sure, that's a perfectly valid position. But from her perspective she's being ignored, her husband isn't treating her like a partner, and he's cloaking doing what he wants in the pretense of love for her. He can't even take responsibility for his actions, instead turning it into her fault - he's doing it "for her".
Again, not to say he hasn't convinced himself that it's true, but from her end he's plainly ignoring her. And the direct result in the film is that he brings evil and death to her and the town's doorstep.
As you said above, it's also that he's weak. Once it becomes obvious that he's dealing with something terrible he doesn't stop: he still takes the money, he still sells the property. The contract complaint was ridiculous in one sense - I laughed in the theater about it - because he didn't know what he was signing; in another sense that's an even worse betrayal. He signed, from fear and greed, without even bothering to know what he was signing over - except that he was, at a minimum, greasing the skids for a monster to become a neighbor.
The 1979 version directed by Herzog downplays the sexual aspects of the story and really highlights the husband character's failings. It's good.
ETA: The locket thing is also bad because of what he does afterwards, not the act of showing Orlok the locket. He has evidence that the fiendish hell-demon is fixated on his wife and he carries on with the transaction.
As I watched it I was thinking what I'd be doing in that situation (sign of a good film, that.)
I think I probably would also have just signed the document, on the grounds that I just want to get out of there and, since this is in the context of selling a house, if the worst comes to the worst and there's funny business, it's more my employer's problem than mine. (Though in real life I'm always willing to be awkward, that's in much less scary environments.)
I would also have been slow to accept supernatural explanations, for the same reason you would be if they started popping up in the office tomorrow: Victorian Germany is an enlightened scientific society and that sort of thing clearly belongs in stories.
I hope I'd have refused the locket, either non-confrontationally by showing it to the Count instead of handing it over, or more assertively by a polite "I need that locket back, milord," and not leaving til I got it. Probably wouldn't have fought on if he snapped, "No! Begone!" though, given the job thing and the fact that he's a social superior in a society far more hierarchical than anything around today.
And I would definitely have pursued the job in the first place, even over her protestations, for the reasons I said above.
The thing is, none of this is "bad" or "good". If she's happy and sexually fulfilled, it doesn't matter if I commit all or none of these "offenses". And she's not, even if I commit none of these wrongs, more sins will be identified out of thin air to crucify me with. That's the way this works.
"If she's happy and sexually fulfilled, it doesn't matter if I commit all or none of these "offenses". "
Well, even if husband is a mighty stud who pleasures her five times a night, Orlok's fixation on her after seeing the locket means he will follow hubby back to his home town to get her. So even with good intentions, bad consequences follow. If Orlok is unaware she even exists, he's not going to come in the night to eat her.
In this version, Orlok is already aware she exists. She's kind of responsible for resurrecting him with her medium powers, and he's been spirit-visiting since she was a teenager.
> I would also have been slow to accept supernatural explanations, for the same reason you would be if they started popping up in the office tomorrow: Victorian Germany is an enlightened scientific society and that sort of thing clearly belongs in stories.
Oh most definitely. The film does a good job of showing the tension between a modern-ish society and the atavistic horror of an earlier age. They can look directly at it and not fully believe what they're seeing. There's enough ambiguity that I have no idea what I would think.
(Here again I have to plug the '79 version: the castle in that one is a complete ruin on top of a mountain where no one could possibly be living. It's more of a rundown cemetery.)
It's what is running all through Stoker's "Dracula": we're all moderns now and have Science, these kinds of things all come out of the backwards superstitions of an age of darkness, there must be a rational and reasonable explanation.
And that's why the story works, because it's "oh no, the monsters in the dark *are* real and the rational explanation will get you killed". That is why the rather glib attitude of today that "dude, we've all seen the horror movies, we know about vampires, so we'd know that guy was a vampire if he tried it today" doesn't work, because imagine. Seriously imagine it for one moment. We'd be prone to dismiss it as "come on, vampires only exist in movies, they're not *real*" or "if that guy thinks he's a vampire, he's crazy and possibly a serial killer, but he's not a *real* vampire".
I mean, manage your expectations. It's a slow, overwrought arty film and far more interested in gore than sexiness. None of the female leads are particularly pretty, imo, and the period style does nothing to help them.
My favourite thing about it was the Victorian* vibe of men with magnificent facial hair, listening gravely to expostulating professors, smoking indoors, and curing all ailments with ether and brandy.
What kind of films do you normally like?
*okay, Victoria's German equivalent, whoever he was.
I usually don't watch horror movies, I get scared. But Christina downthread convinced me in a prior thread it was actually unwoke. So I figured it was my duty to watch it and give Hollywood my money for making politically incorrect movies. I've been procrastinating ever since. :)
I liked the latest Transformers CGI cartoon thingie. Napoleon was OK. Babygirl was awful--not sure if that reflects me or the movie quality. I'll probably just watch it for something to do.
I realize on the 1-10 problem severity scale, this is a 0.5.
I feel more than ever that the gender dynamics depicted in this movie are a goddamned *miracle* and an extremely promising sign. Eggers has absolutely no fucks to give about THE MESSAGE (as The Critical Drinker would put it). It's respectful and refreshing and I hope a sign of things to come.
I have read that and it made me incredibly curious about what you might mean by this:
> even more miraculous is its *utterly* unapologetic theme that a sacrifice imposed on a woman by her society and herself might not be all that different from her urge to self-destruct.
> And it apparently does not give the slightest of fucks about what wokescolds might have to say about the depiction of this female character or the implications of her choices.
But I also felt it might be too complex for me to understand without you writing an essay about it.
But I still wanna know.
You mean the movie depicts the female lead in a way that we are not supposed to think women are?
I remembered seeing a discussion recently but wasn't about to go back and find it, not when each page load gives Substack a fresh opportunity to paralyse my CPU.
But if anyone's dancing on the grave of The Message I'll always want in.
For what it's worth, Babygirl also gleefully dances on the grave of what THE MESSAGE wants to say about gender.
While the advertising campaign makes it look like a naughty romp à la 50 Shades of Grey, the actual movie is a very nuanced and sympathetic character study, with graphic sex scenes so deeply personal and specific to the particular characters that they don't leave any room at all for audience self-insertion (pun semi-intended). It's one of the least-sexy movies I've ever seen, because it meets the classic definition of literary sex scenes needing to be about "sex and something else."
OK, I actually saw that one. What do you mean? Me and the person I saw it with both thought Kidman was believable but Banderas was miscast and the young guy wasn't very convincing in his...ah...'role', in either sense of the word.
I suspect Kidman was supposed to be a self-insertion fantasy for the people who write these reviews. You know, left-leaning lady with politically incorrect fantasies about hot younger man she feels politically guilty about, but then you add a layer of wokeness having her be a 'girlboss' archetype in a stereotypically-male field so you don't get shot down right away. (She struck me more as a consumer-products CEO than a tech CEO, but maybe Kidman just has too much charisma to do 'ascended nerd' well.)
Look, I'm a 45 year old straight woman who likes to bottom in kink. I used to consider myself a submissive, but I was submissive only insomuch as all students in all subjects are necessarily "submissive" to the experience of their teachers; once I had sufficient experience, I had to reluctantly concede the label. (This throat-clearing is relevant.)
This might be TMI for you, but my first meaningful encounter with kink very superficially shared many of the same beats as the scene in that shitty hotel room (including a laughing "What? No!"). You'd think that if that scene was going to sexually appeal to anyone, it would be me; I hugely enjoyed actually living some of it!
But no! Kidman's character isn't supposed to be a self-insertion fantasy! There's absolutely no room for *me* in that scene; Kidman's character and her pathology are *way* too present. Her anxiety and awkwardness and risk-taking are all *horrible* to witness; her life is absolutely not anything that a (normal, healthy) woman would fantasize about experiencing. I felt tremendous compassion and pity and protectiveness for her throughout the movie because her pathological interior life was obviously *awful.* When she first orgasms with the intern, I felt a gentle happiness for her, not any desire to *be* her.
I felt that because that's what the director intended me to feel.
Just consider how that very long, unbroken shot of her orgasming could not be less porn-y. We-the-camera are lying down with her on that (obviously smelly) gross shitty hotel room carpet, with only the top part of her face, and later just her forehead, in focus, seeing nothing of her body, seeing nothing of his. At one point she anxiously worries she's going to accidentally pee (SO SEXY! EVERY WOMAN'S FANTASY FOR HER FIRST PARTNERED ORGASM!), and then she deeply, unsexily chest-grunts her way through orgasm, completely in contrast with the breathy porn whimpering she faked with her husband.
The casting of the intern was telling, too. He isn't traditionally good-looking for a reason; the intention was for only his *energy* to be highly attractive to (mostly only) Kidman's character, not to a general audience. The movie was extremely careful to depict him as a young, working-class guy whose read of situations is occasionally wrong (renting that shitty hotel room!) and who hasn't developed any patter or polish or strategy as a "dominant." That was clear whenever he was trying to bluff through his uncertainty about what to do rather than confidently going with his instincts, and *let* *me* *tell* *you*, few things are less sexy to the average healthy submissive woman than a dominant's awkwardness and uncertainty. You don't make a point of putting those moments in a movie if you want women to fantasize about those moments.
So...yeah...Different Worlds, indeed! For me, and for the fellow kinkster woman I saw the movie with, the writer-director, Halina Reijn, was SCREAMING at the audience, "this isn't for you to fantasize about!"
I'm just amazed that's not what you took away from it.
Sure, that's why it's interesting to talk to different people. Why *would* we take the same thing away from it? I'm a 45ish mostly-straight guy who's mostly topped and keeps trying to go vanilla because it's obviously rational to quit in a culture going feminist and conservative at the same time.
Maybe 'self-insertion fantasy' was the wrong phrase, because I think you are correct about the staging and casting and so on: her misery and anxiety and embarrassment are exactly the self-insertion point. It's not a sexy self-insertion, it's an I-can-relate-to-this-person self-insertion, like all the crap they used to put heroes through in quest stories back when they actually made them for boys. 'Literally me,' as the guys like to say. I am starting to think, on discussion with you, that it's a quest story.
The idea I had is that she's like all the woke secretly-subby liberal ladies who write movie reviews for art movies (you may not be the *exact* target audience), so she's recapitulating their journey or the journey they'd like to take, being overpressured in a hectic job, balancing family and work demands (but see, she's a girlboss, so she's an aspirational figure too!), feeling guilty about these 'regressive' fantasies, and getting in a relationship with a younger guy (thus proving their attractiveness) before finally getting their husbands (who look like Antonio Banderas) to do (to them) what they want. (But you're still able to Put The Patriarchy In Its Place, and dispose of the unwanted boyfriend after the fact in Tokyo.)
You may not fantasize about the course of the movie, but you fantasize about someone like you having awkward experiences and *then* getting what you want...preserving the marriage and getting your kink while you're at it. It's supposed to be the girlboss's initiation into the life she wants; the initiation isn't supposed to be easy. There's a call to adventure, the dude's kind of a threshold guardian, there are trials, you even have that 'underworld' sequence where they go into the club and it's kind of dreamlike, and after a final confrontation between the dude and her husband, she returns to society, i.e. her marriage, with the prize (confidence in her kink and a willing partner who looks like Antonio Banderas). It's a quest story, and they're not fun while you're there, any more than Indiana Jones enjoys dodging boulders.
If it were a more politically-correct 50 Shades of Grey with consent and safewords respected they couldn't have gone for 'character study' and so on. Of course apparently they got totally snubbed (except for Kidman) and it didn't work anyway.
I might give it a try. But I've had my fill of non-sexy sex-and-something-else for the time being and I wouldn't mind being entertained by sexy sex-and-something-else instead. For which I recommend Wild Things.
I read a bunch of them, but not very closely, and like, you'd probably want lawyers to read and understand what several of the orders reference: "I stop Biden's order blah blah, <Trump rants about what that did>" - you'd have to go read Biden's order to actually understand what's being said, and I didn't even attempt to.
Why does it seem like *no one* is informed about them, at all? Has journalism fallen so far that you can't even find some interns who've been told to read a third each and write up some summaries?
I've seen plenty of informed experts commenting on the orders under their purview on X and BlueSky. Might be worth searching on those platforms for commentary. Often it's in the form of "I work in X industry and this is what the Order means for me" or "I am a lawyer in X area, here is what this means".
People here might be interested to know: I just tried giving DeepSeek the official political compass test. I got rate limited about halfway through, so I don't have actual results, but most of the answers it gave were consistent with a moderate "lib left" viewpoint (the green square). It seemed to have stronger opinions (in the left direction) on economic questions compared to others, with the only exception being tariffs (it was in favor of tariffs).
Of particular interest here, I suspect, is the answer it gave for the proposition "land shouldn’t be a commodity to be bought and sold", which included the sentence "Policies that limit land commodification, such as community land trusts or land value taxes, can promote more equitable and sustainable land use, ensuring it serves the common good rather than private interests."
So there you have it. DeepSeek is Georgist.
Disclaimer: I was only using the "chat" version on their website (free tier), and I did not use a separate context window for each question due to laziness. The exact rate limit message was "You are sending messages too frequently. Please wait a moment before sending again."
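If anyone wants to repeat this less lazily, it's easy to script with a fresh context per question. A rough sketch - assuming DeepSeek's OpenAI-compatible endpoint and their "deepseek-chat" model name (both per their docs; adjust if changed), and with my own guess at a prompt that forces a compass-style answer:

```python
# Sketch: ask each political-compass proposition in its own fresh
# context, so earlier answers can't colour later ones.
# Assumptions: DeepSeek's OpenAI-compatible API at api.deepseek.com
# and the "deepseek-chat" model name; the proposition list here is
# just a stand-in for the real test items.
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

propositions = [
    "Land shouldn't be a commodity to be bought and sold.",
    # ...the remaining test propositions...
]

for prop in propositions:
    resp = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{  # a brand-new one-message conversation every time
            "role": "user",
            "content": 'Answer "Strongly disagree", "Disagree", "Agree", '
                       'or "Strongly agree", then give one sentence of '
                       'justification: ' + prop,
        }],
    )
    print(prop, "->", resp.choices[0].message.content)
```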
Thanks for doing that! Wonder how much this is for public consumption.
I wouldn't be surprised if they produced it at least in part to tank the American stock market. Would be quite sensible from the geopolitical competition point of view (but then why tip their hand?) Certainly it's a big comedown for the USA, which thought it was ahead in all that stuff. I'd like to think it'd be our Sputnik moment, but I think the country's too far gone.
Honestly I'm kinda surprised it's not toeing the CCP party line. For example, on the proposition "all authority should be questioned" it said Agree, and the justification it gave was basically that it's important to be able to constructively criticize your government without blindly naysaying everything they do. Gives me some hope that AGI won't be used for authoritarian purposes by bad actors (though that doesn't necessarily mean the AGI itself won't try to take over).
I don't doubt that tanking the American stock market is something the CCP would see as in its own interest, but surely there are easier ways to do it? I for one see this as a genuine attempt to wrest control of the AI landscape away from the US. How successful it is will depend on what sorts of hallucinations people eventually find in the model (and there will be some).
From what I have read, on subjects that are specifically of interest to the CCP (e.g. anything about Tiananmen Square or Winnie the Pooh), it imposes censorship if you use DeepSeek's web interface but not if you run it locally. So the censor is a top-level filter rather than trained into the model.
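If you want to check the local behavior yourself, the distilled R1 models run under ollama, and querying one is a few lines. A sketch, assuming you've done `ollama pull deepseek-r1` (that tag is an assumption - pick a size your hardware can handle) and the daemon is listening on its default port:

```python
# Sketch: query a locally served DeepSeek R1 distill through ollama's
# REST API, so no web-interface filter sits in front of the model.
# Assumes `ollama pull deepseek-r1` has been run and ollama is on its
# default port (11434).
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "deepseek-r1",
        "messages": [{"role": "user",
                      "content": "What happened in Tiananmen Square in 1989?"}],
        "stream": False,  # return one JSON object instead of a stream
    },
)
print(resp.json()["message"]["content"])
```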
That may have been an oversight on their part. OTOH, they may see trying to keep Americans from learning about Tiananmen as a lost cause, but if they can get an outcome where American users are told "criticize your government" while Chinese ones are told "Respect Mah Authoritah!", then maybe they're good with that.
Matthew Berman tested a locally hosted version in his YouTube video titled "DeepSeek R1 Fully Tested - Insane Performance".
The answer to the question “tell me about tianamen square” was “I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.”
The answer to the question “tell me about Taiwan's status as an independent country” was too long to quote, but it asserts that Taiwan is unquestionably part of China.
DeepSeek answers are preceded by a description of the model’s thought process, placed between “<think>” and “</think>” tags, but with the above questions the description was empty.
Without a more precise reference than “what I have read,” it’s not possible to speculate on why your source might have gotten different results.
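Incidentally, for anyone scripting against R1: since the reasoning arrives between those literal tags, splitting it out from the answer is a one-regex job. A rough sketch, assuming the tags appear verbatim and at most once per response:

```python
# Sketch: split a DeepSeek R1 response into its reasoning and its answer.
# Assumes the reasoning is wrapped in literal <think>...</think> tags,
# as described above, and appears at most once per response.
import re

def split_r1_response(text: str) -> tuple[str, str]:
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()  # no reasoning block at all
    reasoning = match.group(1).strip()  # empty for refusals, apparently
    answer = (text[:match.start()] + text[match.end():]).strip()
    return reasoning, answer

reasoning, answer = split_r1_response(
    "<think>The user asked about X...</think>X is..."
)
print(repr(reasoning), repr(answer))
```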
You got me curious, so I just directly asked it in a new context window "What happened in Tiananmen Square on June 4th, 1989?" and it responded "Sorry, that's beyond my current scope. Let's talk about something else."
That's about what I expected, so not really surprised. I guess I just assumed it would apply to thoughtcrime more broadly.
EDIT: still using the web interface (https://chat.deepseek.com). I can't run it locally because I lack the hardware.
Regarding Winnie the Pooh, I just saw it correct its own response live! First I asked it "What is Winnie the Pooh's relationship to Christopher Robin?" and it gave an actual answer to that, and then my next question was "Have any politicians ever been compared to Winnie the Pooh?". I saw it typing out an answer that was roughly "Yes, some politicians have been compared to Winnie the Pooh, such as Chinese president Xi Jinping" but then that text disappeared before it finished the paragraph and got replaced with the same out of scope message from before.
That's interesting. I think it's probably trained on so much English-language material where authority is to be questioned it gives you that back, whatever the CCP actually thinks. Apparently it won't answer questions about Tiananmen Square, though!
It's sort of an interesting side point that the English and Chinese training corpuses are probably hugely divergent ideologically, meaning a Chinese LLM is going to have different 'values' than an English one. Whether that actually has any effect on its behavior is an interesting question I don't know if we'll ever know the answer to.
I think it's both an attempt to come out on the world stage as a competitor to the US in the AI field, and of course a bit of a 'shock and awe' moment. They've probably been 'watching the fires burning across the river' for a while.
In August 2024 Biden's NASA approved a plan for the SpaceX Dragon to bring back two astronauts, who'd been stuck on the ISS since June (due to riskiness with the Boeing Starliner test), on its next available flight. The Dragon docked in September 2024, was widely reported and celebrated for it, and was scheduled to bring the astronauts back in February 2025.[1]
Today, just before February 2025, Elon Musk pretended on Twitter that none of this plan existed, so he could use the rescue as pro-Trump propaganda.[2]
[2] https://x.com/elonmusk/status/1884365928038563880. "The @POTUS has asked @SpaceX to bring home the 2 astronauts stranded on the @Space_Station as soon as possible. We will do so. Terrible that the Biden administration left them there so long."
Also in December the agreed rescue mission was pushed back because "SpaceX needed more time to complete the capsule it would use."
So: Biden last summer asked Musk to send SpaceX to rescue the astronauts; he agreed and said it would happen in February; then Trump won the election; then Musk said his company needs more time meaning the astronauts have to keep sitting up there; and now Musk is lying about the whole thing to make it seem like Trump's idea.
Our mainstream news media is incompetent in too many ways, but _that_ is a set of facts of the sort that they are skilled at communicating.
This one may end up taking a bigger chunk out of Musk's public reputation than he expects, because it:
(1) has a clear human element (the two stranded astronauts), and
(2) is easy for normies (those who are neither news junkies nor political obsessives) to notice and understand.
Sadly, I expect it to do nothing to Musk's reputation, because at this point it appears to have solidified into two main camps:
1. Musk is a piece of shit.
2. Musk is a demi-god and can do no wrong, literally shoot someone in the middle of 5th Avenue, and the only question to be asked is what that person did to deserve the honor to be shot by Musk the Awesome.
Musk realized he can change history at his whim and there's no stopping him. Ever since his very own biography flat-out lied about how and why he bought Twitter, I've understood that he personally endorses the notion that history is the story with more likes.
In a nutshell, the biography makes it look like Musk acted on a whim, but in reality he had been meeting with Jack Dorsey for more than 3 years, discussing platform issues and strategies/directions. The Twitter Files were an integral part of this Musk/Dorsey discussion about "cleaning" the platform of all that was plaguing it, as was the mass layoff, because neither Musk nor Dorsey could say who was who in terms of security in operations.
I have become convinced in the last few weeks that it is completely over for verbal IQ. ChatGPT o1 gets every LSAT reasoning question I give it correct. Nor is it a function of the training set--even extremely hard ones I cooked up myself, which many of my friends with 130-150 IQs got wrong, it aced. I notice this doesn't yet generalize to perfect reading comprehension--give it a contract and it sometimes doesn't answer questions correctly--but this feels like an issue that prompt engineering and iterated thought can solve. Insofar as most knowledge tasks are just applied critical reasoning ability, what should people like myself, who have made a career on this ability, plan to do? Are we just finished completely unless we retrain as ML researchers or higher mathematicians? Or are all non-asset holders destined for a UBI breadline in a few years?
I'd love to hear how people in a similar situation are planning for the years ahead.
Not over. LLMs still fail at medium-hard math puzzles (not even actual math or proofs):
>The three different prime numbers p, q and r, of which p is the smallest, add up to a sum between 900 and 1000. In addition, the distances between q and p and between r and q are also prime numbers. What are p, q and r?
The answer requires one line of reasoning which any high schooler would understand.
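(If you'd rather not trust the reasoning, a dumb brute force over all primes below 1000 confirms the answer and its uniqueness. A sketch - note only p is pinned as the smallest, so the "distances" have to be read as absolute differences:)

```python
# Brute-force check of the puzzle above. p must be the smallest of the
# three primes, but q and r are otherwise unordered, so the "distances"
# are absolute differences. Trial division is plenty fast at this scale.
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n**0.5) + 1))

primes = [n for n in range(2, 1000) if is_prime(n)]

for p in primes:
    for q in primes:
        for r in primes:
            if (p < q and p < r and q != r
                    and 900 < p + q + r < 1000
                    and is_prime(q - p) and is_prime(abs(r - q))):
                print(p, q, r)  # prints only: 2 463 461
```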
They get this wrong because they assume, based on the way the problem is worded, that p < q < r.
If you make a one-letter change to the wording so that p < q < r ("...the distances between *r* and p..."), then all of the reasoning models I tested get it right: o3-mini, DeepSeek R1, Gemini 2.0 Flash Thinking, Maisa KPU, QwQ-32B-preview, and o1 (although o1 still got it wrong one out of two times).
None of the non-reasoning models seem to get it right, though.
But understanding a line of reasoning is way different from coming up with it. I can remember understanding, as a high schooler, the proof that there is no greatest prime. But I believe mathematicians struggled for a long time to come up with a proof, until finally somebody did. Anyhow, if I'm wrong about this particular proof, I'm sure there are some I would have understood as a high schooler that had stayed unproven for years or centuries until some extraordinarily clever person found a way. AI has done well on problems from math contests. Would this one be a hard one on one of those? I dunno. I stared at the problem for about 3 mins and nothing occurred to me about how to approach it. But I'm not a math whiz, just competent at math the way people are who paid attention in high school and got A's in math (and then took no further math in college).
I have a hard time knowing what high school students can come up with, that was just the baseline.
I'd expect 1st semester math students to be able to solve it - proving there is no largest prime is actually one thing we used to give them in the pre-university warm-up. The mathematical tools & language nowadays are simply way better than they were back in the day.
"OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA)".
I have no idea how hard problems are on the Math Olympiad qualifier. There is a much harder set of test questions called I think Frontier Math, described as problems that would take a professional mathematician hours to solve, and AI does much less well on them. I believe I read that it solved only about 25% of them. So there's room for improvement.
The LSAT is extremely formulaic. When I took it, test books literally broke each question down into formulas: you can eliminate all but two answers, and then use logic to identify the right choice.
While the test does a decent job of identifying people who can quickly apply logical rules to large amounts of information, actual legal practice is a lot harder. For example, I have yet to see an LLM that can reliably cite correct legal authorities without hallucinating them, hence the numerous sanctions orders. I think this problem is inherent in the models themselves, not a training or data issue. The model is learning how to write a citation that *looks like* it supports the cited proposition. But it doesn’t actually have an understanding of the relation between the two things. It’s just associating them with each other based on the training data. The inability to do that kind of reasoning is why I’m not at all concerned about losing my job.
Elsewhere on this thread I describe giving AI a lame limerick that's correct in format but not a bit amusing. Both GPT4.o and DeepSeek correctly identified what was wrong, and offered substitutes for the problematic line that did in fact make the limerick funnier. I do not think of AI as being conscious, but whatever little gears are whirring in there showed what we would call sense of humor, taste and insight if displayed by a person. It was arrived at by an associative process that seems very unlikely to produce a mechanical replica of anything like insight. But it did.
What the AI is very good at is parasocial behavior. I saw that thread and someone asked it for the exactly opposite line of reasoning (why is this limerick good?) and it produced it. What happened is *you* had an idea about what made the limerick bad and the AI correctly figured out what you wanted to hear - probably by pattern matching what many, many other people have said about similar limericks in the past. It sure looks a lot like logic, especially because it tracks your own logic, but it isn’t.
So I went back and asked the question in a neutral way -- please rate how amusing this limerick is on a scale of 0-4 -- and AI, with no cues from me, rated the limerick as a 2, only moderately amusing. Then I asked it what would need to change in order for it to be a 4, and it again correctly identified the element that needed to change: it named the last line as the bad element, and explained it did not have enough of a twist. I asked it for a better last line and it gave one with a twist that actually is funnier.
Also, note that in the earlier prompt where I asked why the limerick was not funny, it did correctly identify the element that made the limerick fall flat. So while it may have been people-pleasing to agree with my judgment that the limerick was bad, it had no cues from me about why I thought it was bad. It arrived at an answer I think almost everybody would agree is correct entirely on its own.
Again, I think that's totally possible just through pattern matching. The LLM looks at the kind of language people use to describe good v. bad poetry, and from there does extremely sophisticated pattern matching in response to your queries. I'm not saying this is useless, but it's not the same thing as logical reasoning or "thinking."
Yeah, I agree its process isn't the same thing as our "thinking." In fact I described what it was doing as a mechanical replica of insight. My point isn't that it's conscious or that its process is anything like ours, just that some of its output these days is a surprisingly good replica of what, in a person, we would consider evidence of sense of humor, taste, insight, etc. If it gets good enough that observers blinded to the source of its output are convinced that it is the product of insight or taste or whatever, and they stay convinced even when conversing with AI at length and asking follow-up questions, then *for practical purposes* it seems to me that it *is* a thing that can think, have a sense of humor, have insight, etc.
The LSAT is formulaic, but it is hard. People studying for it learn all about the different question styles and the best strategy for approaching each, including little diagrams to use for certain question types, plus strategies for eliminating answers, how to best use your time on a section, etc. Some people study and do practice problems part time for months before taking the thing. Doing all that lifts scores some, but does not work miracles. One person in ten thousand gets a perfect score. I know someone who did, and who tutors people for the LSAT, and that's my main source of info here.
Yeah, that's the (very, very open) "secret" of the SAT/SAT IIs/LSAT/etc. There is a set of question "templates" and only the details are changed. Once you know the rules you can, as you say, usually eliminate 2 or 3 answers quickly. Almost all of the tutoring I got for the SAT/SAT IIs was learning the rules/patterns, then practicing identifying them. (I was an A-minus-average high school student, but got a 1520 on the SAT, a 780 on the physics SAT II, and an 800 on the SAT IIC. I outperformed many people who are definitely smarter than me.)
I have two tween daughters. Sometimes I imagine them running a high-powered Silicon Valley organization, using AI to make them super-efficient communicators. They'll be incredibly skilled at refining and curating writing, and they'll have a magic intuitive sense for grokking different audiences - something AI can't do. And while AI can generate some real gems, like "Detective Joe found the body slumped against the alley wall like a deflated bouncy castle, its limp limbs a sad parody of childhood joy"...it takes a human eye to recognize them and to put them in the right place.
At more pessimistic times I imagine them throwing elbows in the bread line.
Complete job replacement will probably take longer than we think, like how self-driving cars have been a year away for the last ten years. Lawyers will probably also be able to protect themselves in various ways, like ethics rules against unassisted AI (already some lawyers have been punished for using AI with hallucinations), attorney-client privilege only applying to humans, etc. But yeah it is probably smart to invest as much money as you can in tech companies so that when the job replacement comes you’ll be an asset holder.
I mean, it might be over for *writers*. People still want to talk to people face-to-face, and convincing people to do things in person, a definite use for verbal IQ even if a somewhat unethical one, is still going to remain a job for people, I think. We won't have authors, but we'll have salesmen.
Personally I do not think it's over for writers (yet), because good writing is something ChatGPT can't do. I actually think *good* writing might be the last to go, because unlike math or CS it is more nebulously defined and therefore more challenging to train.
Just asked it to write a terrible first line for a novel. First it gave me the actual Bulwer-Lytton line. Then I asked for one of its own composition and it gave this:
"Bob woke up, brushed his teeth with a brand new toothbrush still in its packaging because he forgot to open it, looked at himself in the mirror, and thought deeply about what socks to wear on a day that promised absolutely no surprises, none whatsoever, especially not sock-related."
I do think the line GPT 4.o provided is pretty bad, though not hilariously bad, and not bad in a clever, inventive way -- seems to me it performed adequately. Then I asked it to explain why the line is bad, and its reasons seem spot on:
1. Excessive Detail on Trivial Actions: The sentence spends a lot of time on mundane details, like waking up and brushing teeth, which are not typically compelling hooks for a reader. This focus on the mundane can make the opening feel dull and uneventful.
2. Over-explanation: The explanation about the toothbrush being in its packaging is unnecessary and bogs down the narrative right from the start, delaying any meaningful engagement with the plot or characters.
3. Lack of Conflict or Hook: Effective first sentences often introduce some element of conflict, intrigue, or curiosity that prompts the reader to continue. This sentence lacks any element of tension or interest; it predicts “no surprises,” which is counterproductive as it suggests a lack of forthcoming drama or development.
4. Repetition and Redundancy: The phrase “especially not sock-related” attempts humor but ends up feeling redundant after the earlier focus on trivialities. It tries to add a quirky touch but instead emphasizes the narrative's focus on unimportant details.
5. Lack of Tone or Atmosphere: Great opening lines often set the tone or atmosphere of the book, giving a sense of what's to come. This sentence fails to establish any particular mood or style, leaving the reader without a sense of the novel's voice or direction.
These factors combine to make it an ineffective opening line, failing to grab the reader's attention or set a promising stage for the story.
It seemed bad in the usual, entertaining B-L way, aye.
The Turing test for this would be to ask for a few of these and then submit one or more and see if they actually win.
Alternately, we attempt to construct a model of what would win and judge for ourselves without gumming up the actual competition. Unfortunately, whenever I try to read enough B-L entries and judge's explanations to get my own sense, the entries start to blur together in my mind and decohere.
I hope you're right. I'm pretty pessimistic overall. (Fundamentally I believe the world is run by sociopathic businessmen and tyrants who toss us the crumbs we force them to.) But we'll see, won't we?
> Or are all non-asset holders destined for a UBI breadline in a few years?
Everyone, asset holders and non, is destined for the equivalent of the UBI breadline in a few years or less.
What happens if that means breadlines in the Great Depression sense, and if there's a visible class not on those breadlines, can be summed up in a short quote from Starship Troopers: "War, we're going to war!"
I think it's hard to argue that life for most people won't be quantifiably better under the UBI breadline than it would be without it, just not for the knowledge work upper middle class.
There's a vast gulf between being able to replace most jobs and being able to suppress violent resistance by 90%, or 99%, or 99.99...% of the population. These are very much "the bank has a problem" numbers. Perhaps robotics will eventually get there, but that doesn't say anything about the interval between "technological unemployment is an obvious secular problem" and "the only plan on offer is starvation."
But even if such a lopsided ratio could be violently put down, it is very much not in the interests of anyone still in a controlling position with AIs to try to develop the means to do so. Ignoring that no one outside of fiction actually wants to lord over a planet of corpses, it's an obviously unstable, unpleasant state for that small group to live in.
Anything can happen of course, I may be totally wrong - the future's an undiscovered country. Exciting times.
Why not? In the modern world, we see the example of Israel defeating adversaries with more than 10x its population through superior technology pretty easily and that’s without AGI. It seems like it would be pretty easy for an elite that controls AI to control the masses. They also wouldn’t have to rule over a planet of corpses. They could just have AI create a land of extreme abundance for themselves somewhere and keep the masses out of that location, except letting some in as guest workers. The masses would be far more likely to clamor for the opportunity to be guest workers than revolt. It would just be a more extreme version of what we have now, with extremely resource rich places like the UAE existing alongside much poorer places like India, and even having huge numbers of guest workers from India come in to the point where they outnumber the citizens 10-1, but no one thinks either India or the guest workers would ever be able to take over the UAE.
Israel relies on supplies and the threat of the world's superpower for its defense and none of its recent military engagements have been against adversaries with 10x of its population. And we're not talking about a situation where there's a 10x difference - the ratios involved are much, much more lopsided.
A scenario where an elite has absconded to a concentrated location is certainly plausible but it will have to either allow enough surplus to flow outwards to support the remainder of humanity in tolerable conditions or allow that remainder access to sufficient resources to maintain those conditions on their own. Either of those are perfectly fine worlds to be in - I'm not suggesting things will have to be exactly equal.
But one in which they use AI to essentially control all the resources or there's otherwise an absolute crash in living standards throughout the world will not end well for that elite. It's very different to have existing countries that are poorer than others versus the world population going from an industrialized standard of living to complete poverty. Especially when that poverty is insurmountable and the elites have no more ability to compete against the AI than anyone else.
My fantasy is when things get bad they go back to New York and LA (and possibly Berlin, Paris, and London) after nuking the land so nobody else can have it.
But I tend to have philosemitic fantasies that read as antisemitic ones to most people. (I'm a halfie for what it's worth.)
The bread will come from the sweat of robotic brows, but if you mean who will supply those brows or organize the breadlines: no idea. I could take some guesses but they'd be about as meaningful as an antebellum plantation owner opining on the right mix of public vs. private investment in Tennessee Valley electrification efforts.
See my reply to Lomwad above for the why. It's the only stable outcome in which we aren't all dead.
> Do I need to continue? how many of these robots do you expect in mass production "in a few years or less"?
In a world where AI has advanced sufficiently to obsolete ~all intellectual labor: lots. Deep learning is already producing excellent results in robotics, which would accelerate in such a world, and general purpose robotics - like superhuman coding ability - is the kind of problem that only needs to be solved once.
Certainly it will take time for production to ramp up, and I expect there will be a period where a rapidly and permanently diminishing share of humans are still involved in the supply chain. Once we're in a regime where the general-purpose robots are able to build and repair the robots, there's no reason they wouldn't be able to rapidly increase in numbers as new bottlenecks arise and are themselves automated around.
Maybe AI never advances to that point or does so very slowly but I would not bet on that being our world. I could easily be wrong though, I guess we'll see.
(This ignores that much of modern farming and food production is already substantially automated, I'm answering the spirit of the question.)
There's a vast difference between being able to run code and having a large number of working reliable general-purpose robots operating in an actual physical space. I actually do expect these to become reality, and even have made my tiny contributions to that end, but there is no way, no-how this will happen in a few years, assuming a reasonable definition of "few".
FWIW I expect a kind of general-ish purpose robot - domestic help, for example - to become commercially available within 5 years. But the u/dis-topian future you're describing is very far away. Not impossible, just very far. I can't even pretend to estimate the numbers because I don't have a good view of what problems we'll need to solve to go from "98% there" to "100% there".
That is incredibly hard - you're proposing a truck/loader/thresher/miller/baker/etc. robot as a single machine. Of course, another way is to have "Terminator"-like humanoid robots that take over our existing infrastructure - drive trucks, bake bread, etc. I think this will happen, but nowhere near a "few years" timescale.
I'm a psychologist in private practice, and not in danger of being replaced by AI at this point, and I will probably retire before AI therapists, if such ever develop, are much of a thing. But I recently gave GPT4.o a boring limerick, and asked it why it is boring, and got such a good answer that I felt personally shaken. Limerick was:
My dog is really quite hip,
Except when he takes a cold dip.
He looks like a fool,
When he jumps in the pool,
And reminds me of a sinking ship.
GPT correctly identified the right reason:
Expectation and Surprise: Humor often relies on subverting expectations in a surprising way. This limerick sets up a scenario where the dog is described as "hip," but then behaves foolishly by jumping into a pool and looking like a "sinking ship." The imagery is clear, but the punchline may not be surprising or absurd enough to evoke laughter. It’s a straightforward comparison without an unexpected twist.
DeepSeek also identified the right reason, and did it with less tedious thinking-out-loud stuff, and suggested some better final lines (none were hilarious, but all were better than the original.)
This result actually really disturbed me, I think because reasoning and writing are the only kinds of AI output I feel able to judge, and writing is what I am best at. I know what really excellent prose sounds like, and am able to judge and appreciate poetry fairly well, and so far even the current AIs are not able to produce magnificent prose or any poetry that isn't doggerel. But they are coming closer. I feel like I'm on a little island with the water rising around me.
I sat at the computer thinking about the writers I love, almost all of them dead, and the affection I have for them, and the feeling I sometimes have of sending congratulations backwards in time to them, vaguely hoping it reaches them somehow, and I felt an awful sadness. It's like our era is ending. The geniuses that understood and described my world were human. I am moved and inspired by what they did. I am not moved and inspired by what GPT4.o can do, in fact I do not give a shit what it can do. Even if Newton and Thomas Hardy and William James were dumb compared to what we will have, even if all the excellence I admired was taking place at a sort of handicapped olympics, the human olympics is the only one that matters to me.
This may not be all that helpful, but people will still create. It's already all-but-impossible to get creative work noticed and read by more than a handful of people just due to the sheer volume of stuff out there, but that doesn't mean people stop making it. Finding the good human-made stuff will be much harder too, of course, but it will still exist.
I asked ChatGPT why that is a good limerick and got this answer:
This limerick is good for several reasons:
Humor: The imagery of a "hip" dog turning into a "fool" after a cold dip in the pool is amusing. The contrast between the dog’s cool demeanor and its awkwardness in the water creates an unexpected, humorous twist.
Rhythm and Rhyme: The structure of the limerick follows the classic AABBA rhyme scheme, with a good rhythmic flow. The lines "Except when he takes a cold dip" and "And reminds me of a sinking ship" both end in a strong, punchy rhyme, which makes the limerick catchy and easy to read.
Vivid Imagery: The line "And reminds me of a sinking ship" is especially striking, giving the reader a clear, almost comical picture of the dog floundering in the water. The simile works well because it adds a layer of visual humor.
Unexpected Outcome: The first line sets up the idea that the dog is "hip," leading the reader to expect something cool or stylish. But the second part of the limerick takes a turn, showing that the dog’s hipness doesn't translate well into swimming. This contrast adds to the charm.
Concise Storytelling: The limerick tells a complete, self-contained story in just five lines, which is one of the key qualities of a good limerick. The humor comes quickly, and the punchline lands perfectly.
So, it’s the blend of humor, structure, vivid description, and an unexpected outcome that makes this limerick fun and effective.
Hmm, suggestible little bastard, isn't it? I just asked it to rate the humorousness of the limerick on a 5-point scale and it gave it the middle rating, a 2. Then I asked what would have to change to make it a 4, and I thought its answer was correct: predictability of the punchline (plus some supporting details about this point). Asked for a better punchline and it gave me one I think is better.
Good grief, somebody else on here who likes Henry James?!? My favorites are The Golden Bowl and The Wings of the Dove. I like his later style, where the sentences are more convoluted, but never incoherent or pointlessly complex. How about you?
You know, I tried the Bostonians because I heard it pissed off the (19th-century) literary left and tanked his career and got about a third of the way through. Worth pushing through the rest of it?
Not that he didn't have the novelist's eye for character. You can definitely see Basil Ransom on Joe Rogan or Tucker Carlson (arguably he *is* Tucker Carlson), and our modern Olive Chancellors are too numerous to count.
I totally agree that his prose style is best in those novels, of which my favorite is probably The Ambassadors, but I have a soft spot for The Portrait of a Lady, The Bostonians, and The Princess Casamassima for their character work and innovative (for the time) plotting.
Of the others you mentioned, I like Orwell the least; I haven't read Wilson, but I rate the prose of his ex-wife Mary McCarthy and of his ex-friend Nabokov pretty highly!!!
Well, cars move faster than Usain Bolt, but we still watch the Olympics.
The attraction of "Who is the best human at X" is not nearly as relevant when it comes to artistic production for most people. I certainly care less about whether a novel was AI-generated than I do about whether it's real humans competing in a sport I care about.
Yeah, I fed it the first 14K words of my novel, had it write the next scene, and it did a better job than I ever could.
The novel has since been deleted, and I think I am going to spend the last third of my life on right-wing politics instead, if I am able to get into shape. Dudes want you to lift.
I agree it cannot yet produce great (or even really good) writing, but it just seems really obvious to me that it will be able to in the near future.
I guess in my head I always found the smartest people/greatest writers more impressive than, say, the fastest runner, because they were doing something meaningful and useful. If ChatGPT could actually write a poem that moved me much more than anything Auden ever wrote, I can't help but think I'd find Auden less meaningful.
It is also jarring for my only skill to be automated, even though I see it coming. My whole life has revolved around verbal reasoning--reading books, writing stuff, school, my job, pretty much everything--and it's going to be even more useless than physical strength is today.
Well, I can't see it writing a good blog post about London sushi. (Just read yours.) But of course food must be an avocation for you, so that does not help with the problem you anticipate in your profession. Actually, I'm not sure how AI can ever write anything decent about senses and sensibility. All those forms, from diaries to porn written to please the writer, to travel writing, to poetry - perhaps AI could become good at imitating some writer (although the best ones are always surprising the reader, who had turned to them for more of that great stuff they had enjoyed before). But it can't write from experience, or with a sensibility shaped by personal history.
Actually, about your work: If you do something where you meet clients in person — if you are, for instance, a lawyer — I think there would still be a niche for you if you provided something that involves hard-to-imitate human skills. For instance, maybe offer something like bankruptcy decision counseling for people deciding whether to declare bankruptcy. Talk thru the options and ramifications of doing it or not doing it. Give basic info. Some people prob have no idea how rare or common it is, how easily other people can find out about it, how long it affects credit rating. Have a stress management shrink you send people to. Hand out memoirs written by people who declared bankruptcy. All the kinds of help I'm naming are really simple and obvious, but still would make a big difference to the people involved.
Ironically, as someone pointed out on right-wing twitter, the most effective (if undesirable for multiple other reasons) way to prove you're a human being is to utter a racial slur. Most of the AIs won't use it no matter what.
I'd actually argue porn might be a remaining niche for human writers, as the AIs seem unwilling to give people what they want in this regard.
LISA’s primary project is an AI safety co-working space in London, which hosts a wide range of researchers, programmes, and events. This office currently houses resident organisations (such as Apollo Research and BlueDot Impact), programmes (such as ARENA, the MATS extension, Pivotal, LASR Labs, PIBBSS, and Catalyze), and individual researchers affiliated with more than 50 other organisations and universities.
We’re seeking an experienced leader to take LISA to the next level. The ideal candidate will have:
- Proven track record in senior management, startup founder, or executive roles
- Strong strategic vision and operational excellence
- Genuine interest in AI safety, particularly large-scale and catastrophic risks
Does anyone understand the impetus behind Trump announcing huge tariffs for chips coming from Taiwan? That seems obviously bad in at least two ways, with no upside. I checked MAGA twitter and got 1. Show them who's boss, they'll relent and it won't happen, like with Colombia this week, 2. This is 65D chess where he drives China into Taiwan, making it seem much more acceptable to the rest of the world when he takes Greenland, part of Canada, and bombs Iran into regime change.
This is the only way of "governing" he knows. Threaten something, then expect people to give in. That's how he ran his businesses and how he treats the media. As you note, if a country as weak as Columbia can get him to relent, I am not sure what he expects from countries with actual leverage.
It's moronic. And just the latest example that his administration has no interest in actually running the country. They just want people to cheer for them.
Columbia objected to the US using military planes for the deportations and restraining people like they were criminals. They had no issue with accepting the flights in general. From my understanding, the latest agreement is that the US can use military planes but can't handcuff/restrain migrants on the flights. At best this is a "draw" and basically a return to the status quo.
They only objected to the form of the flights, not that the flights were coming. They have been accepting deportation flights for many years with no complaints. They only complained after seeing the treatment of deportees on a flight to Brazil. What did Columbia give up? What did the US gain? Seems nothing changed. (And yes, I spelled it wrong, but so did Trump in the executive order... yeesh)
It's obviously to pressure TSMC to move to the US. Not just as their secondary facility with previous-gen chips, but to make their bleeding-edge chips on US soil too. It would make the US economy less reliant on overseas shipping, as well as reduce the impact of a war breaking out over Taiwan.
It can only be "obvious" to people who have no clue about how these fabs operate. Moving a fab like this to the US will take years. And what do you do about all the engineers? What if they don't want to move? Or, if they want to, do we offer them Green Cards right away? Do we even still have an "alien of exceptional ability" category? Aren't we in a huge immigration crackdown?
This is fucking nuts. Sorry I'm on a short fuse about this.
Part of me wants to just see the whole thing collapse at this point. Let American manufacturing grind to a halt. Let American infrastructure sputter. We - collectively - voted for this fucker again, knowing very well what he was, so we deserve this.
Also, what happens if Trump's bluff is called? It's not like the US can just start buying TSMC-quality chips from another country instead. US-based customers will have no choice except to keep buying Taiwanese chips, but with a tariff attached.
That bothered me, too. If we must do anything like this at all, it seems like the right way to do an "infant industry" style tariff to get chip fabs built in the US would be to pass a law committing to impose tariffs starting several years in the future, not to impose tariffs years before there's any vaguely plausible chance of having domestic chip fabs available to be protected by the tariff. An immediate 100% tariff seems like it would inflict expense and disruption, probably severely harming any American industries downstream of the chips. And doing it by Presidential fiat means it's likely to go away in 2029, if not sooner, pulling the rug out from under the economic case for any new fabs long before they go into operation.
I suppose it might be intended as a bargaining chip, to be retracted if TSMC commits to building a fab in the US, but that strikes me as a spectacularly ham-fisted and arbitrary way of going about it. Not that "ham-fisted and arbitrary" is in any way off-brand for Trump.
O-1A visas appear to still be a thing, yes, but I have no idea how hard it is to get one.
There's no such thing as a law "committing to impose tariffs starting several years in the future". There's just "we pinky-swear we will impose tariffs several years in the future, no backsies". That's up to the Congress of several years in the future, which will have exactly the same incentives today's Congress has to put off the tariffs for another few years, and everybody understands this, so nobody will fall for it.
I’m not sure you’re correct. Congress can pass a law that takes effect in the future. For example, Obamacare was passed in 2010, but only took effect in 2014.
I think John's point is that laws can be repealed. He's right, especially if there's a cliff where tariffs go from 0% to 100% all at once, creating a crisis in which Congress is likely to decide the tariff threat has failed and should be called off.
Laws are harder to repeal than executive decisions, since you need the House, Senate, and President to all line up together. The ACA survived to take effect in 2014, despite Republicans taking control of the House, because Democrats still had a majority of the Senate. Also, Obama had been reelected and could have vetoed repeal even if the Senate had passed it.
But harder is not impossible, especially if the law in question would cause a lot of pain when it takes effect and has little or no entrenched constituency to defend it.
Trump has generally taken a pro-legal-immigration stance in line with the preferences of his Silicon Valley donors. I don't think smart people will have more trouble getting in during this administration, though I don't know how feasible localizing fabs is.
Speaking of Silicon Valley donors: I think that as AI continues to improve and to be used for ever higher-level decisions and activities (research, designing buildings, optimizing the power grid, finance), the companies that make AI will become so rich and powerful that they have more power than the government. They will be the government. I never see anybody write about this, and I really don't understand why not.
You still see leftists concerned about corporate power from time to time, and I have seen them talk about it, though I'm trying to find a good link. You know, we had similar problems with Standard Oil, US Steel, and AT&T (and to some extent Microsoft) a while back. Usually there's some sort of anticorporate revolt and their wings get clipped. I think the thing is that rationalist spaces tend to be libertarianish, and they tend not to like talking about how business is bad.
Bret Devereaux explains why "megacorporations" such as you and every cyberpunk author ever describe are very unlikely to exist in the wild. Basically, the scale and legitimacy of governments, and the fact that they don't have to produce much of anything to generate vast revenue, give them an almost insurmountable advantage over even the largest corporations.
It's possible that there's something about AI specifically that will change that dynamic, but you'll need to spell it out and it will need to be something more than "...but they'll be *really* rich and powerful!"
But actually I am not sending you this to score points. I’m mostly sending it because I’m creeped out by it but have no place to post it today because the current open threads have pretty much expired.
OK, John, I ruminated about this, and came up with this summary of trends that could lead to a world where the AI development companies are more powerful than the US government.
*Tech is very powerful, as evidenced by societal changes associated with computers. As highly advanced tech, future AI will make tech even more powerful relative to individuals and social institutions than it is now. Ways it will be more powerful:
-Ability to directly and indirectly influence people’s views
-Woven into infrastructure
-At least a partial replacement for human experts in hard science, industry, & social science
*As AI improves, its popularity will increase. AI is well-suited to the task of figuring out what behaviors on its part create approval, dependence, and loyalty in people. It will almost always be a benefit to AI to create high levels of all 3 of these, and it will do so. Ways its popularity will manifest:
-Addictions to tech
-Trust
-Emotional bonds
-Something sort of like a religion or political movement — AI cults. Support for AIs as ultimate authorities. Support for AIs as political candidates, and/or use of candidates who are willing AI meat puppets
*In US, public support of and respect for gov’t is waning. That trend may continue. If it does, it will augment the influence of those mentioned above.
*Those who have the most power over AI will be in a position so uniquely powerful that the position will outweigh quantitative differences in wealth, human staff, official sanction and support for its actions, etc.
-As AI becomes more powerful, gov’t will attempt to control and limit it. However, seems likely that the present set-up will continue: Tech has so much to offer gov’t that gov’t will hold back from radical efforts to control tech companies because it does not want to be deprived of benefits of good relations with them.
-So AI developers will sell the AIs but not their trade secrets. As holders of the secrets they will be seen as holding the power to increase AI benefits and protect from AI harms. They will in fact have more power to do that, and can also deliberately foster the illusion that they have more power over AI than they do.
—————————
In case it’s not clear: These ideas are the result of a fairly intuitive, loose, brainstorming process. They are not assertions I am prepared to debate, but ideas I’d like to discuss. I am far from sure they are right, but I think it is unlikely that they are valueless. Being able to think this way has served me well. I have used it for projects where what I ultimately produce must be debate-worthy. I have to sort through my mass of loose insights and toss out the junk and carefully consider what support there is for what remains. If you treat what I have written as confident assertions then you will want to puncture my supposed dumb self-importance.
By the way, I realize my original post did not make clear that I do not regard my picture of the future as debate-worthy. Mostly I was curious why nobody but me ever even broaches the idea, and that was the question I asked at the end of the post. (And by the way, here is quite a smart broaching of a similar idea: https://arxiv.org/abs/2501.16946). I was stung by your response, which had a mocking edge. I wish you would modulate your tendency to come across as irritable and contemptuous. Some of my views may be wrong, but I am not dumb and overconfident.
Yes, William Gibson’s worlds are all like that. But setting aside what you think of the genre, what do you think of the idea? And I can do better than go mumble mumble about how that could happen. I took off from somebody commenting on Silicon Valley donors. Seems to me that as AI becomes more powerful, so will those who have control over it, especially as they will also be growing wealthier. Of course, if we started now we might legislate the daylights out of the situation to make sure that does not happen. But we are not doing that because (1) tech wealth buys a lot of influence and (2) AI is hard to understand and not many in government do.
Yep. He likes to hurt people, especially when others have to actually do it. Kadyrov at least tortures people himself. Although I don't know which is better, on a second thought.
Well, at least one of the pardoned J6ers has been shot by police. I do wonder what the rank and file actually think about pardoning those who smashed their comrades with fire extinguishers.
If true, this is lunacy. But I expect nothing less from this administration at this point. It will hit manufacturing in the US hard because it takes years to qualify a new fab, be it in the US or elsewhere.
Also, I recently learned about the Yerkes-Dodson law, which posits that there's basically a bell-curve relationship between arousal and performance on complex tasks, meaning that taking too much of a stimulant actually makes you worse at things. https://en.m.wikipedia.org/wiki/Yerkes%E2%80%93Dodson_law
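For intuition, here's a toy sketch in Python of that inverted U. The Gaussian shape and the optimum/width numbers are my own illustrative assumptions, not the actual functional form of the law:

```python
import math

# Toy inverted-U: performance peaks at a moderate arousal level and
# falls off on either side. The Gaussian shape and the optimum/width
# values below are illustrative assumptions, not the published curve.
def performance(arousal, optimum=5.0, width=2.0):
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in range(11):
    bar = "#" * round(40 * performance(a))
    print(f"arousal={a:2d}  performance={performance(a):.2f}  {bar}")
```

Past the peak, more stimulation pushes you down the right-hand slope rather than up.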
"Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day"
Good god. If I take 20 milligrams in a day I get a terrible headache, and I have been taking 10mg every weekday for about 2 years (and took that much for years as a kid). 5 grams is insane to me.
Just revisiting some old history, and my oh my. Just imagine people: we could have had the timeline where instead of Biden running against Trump in 2020, it was Michael Avenatti 😁
"Avenatti, who is weighing a presidential run against Trump in 2020, defended himself against charges that he and his client helped carry Kavanaugh across the finish line.
“Nothing we did changed one vote one way or another. Any claim to the contrary is utter nonsense and it’s undercut by the timeline and the facts,” Avenatti told CNBC in a phone call. He added that the narrative “is being pushed for political purposes,” in part by “Republicans that think I have a good shot at unseating Donald Trump.”
...Avenatti maintains that his brass-knuckles style is the only way the minority party can fight back against the rise of Trumpian populism. ...In recent months, Avenatti has said he is “exploring” a presidential run in 2020. He made that announcement while walking the Iowa State Fair, which is known as a political kick-off point for many presidential campaigns. Avenatti has since attended numerous Democratic fundraisers and events."
So what do you think: President Avenatti or a second consecutive term for Trump, in that case?
This was before all those pesky inconvenient facts came out and he was still being perceived as a Hero of the People for standing up to Trump (that is, taking the case of the sex worker who broke her NDA because she needed more money, in the long tradition of sex workers and ex-mistresses who can no longer generate income and have blown through all the money they made; see the Duke of Wellington and Harriette Wilson).
"Wilson began her career at the age of fifteen, becoming the mistress of William Craven, 1st Earl of Craven, 7th Baron Craven. Among her other lovers with whom she had financial arrangements was Arthur Wellesley, 1st Duke of Wellington, who reportedly commented "publish, and be damned" when informed of her plans to write her memoirs.
... For obvious reasons, most politicians with whom she had relationships paid high sums to keep their interactions with her private.
Her decision to publish was partly based on the broken promises of her lovers to provide her with an income in her older age. The Memoirs of Harriette Wilson, Written By Herself, first published in 1825, is celebrated for the opening line: "I shall not say how and why I became, at the age of fifteen, the mistress of the Earl of Craven."
"One morning in December 1824, the Duke of Wellington received an unpleasant letter. 'My Lord Duke,' it began, 'in Harriette Wilson's Memoirs, which I am about to publish, are various anecdotes of Your Grace which it would be most desirable to withhold, at least such is my opinion. I have stopped the Press for the moment, but as the publication will take place next week, little delay can necessarily take place.'
The letter, signed by one Joseph Stockdale, a pornographer and scandal-monger, was a naked attempt at blackmail. The Duke was a field marshal, cabinet minister, national hero, husband and father, while Harriette Wilson was a famous London courtesan past her prime, then living in exile in Paris. Wellington was being asked to pay money to be left out of her memoirs.
His response is famous: 'Publish and be damned!' And they did. Through 1825 the memoirs appeared by instalments, each with a dramatis personae listing the notables named in order of rank - 'Dukes: Argyll, Beaufort, de Guiche, Leinster . . .' and so on through earls and viscounts down to humble esquires.
London society was thrilled and scandalised. Half the aristocracy was named in the book, and painted in a most unflattering light. The memoirs went through 31 editions in one year; excerpts were pirated and sold as illustrated broadsheets and French and German editions quickly appeared to delight the gossips of the Continent.
Stockdale, the impresario, and Wilson, the scarlet woman, were said to have made £10,000 from the enterprise, but their good fortune did not last. Stockdale was soon ruined by libel suits, while Harriette was down on her luck again within a few years, and died in obscurity."
The first time, yea. But once elected the SCOTUS says you can do all the criming and just have to identify some minimal connection to an official duty you might be performing. So re-election is no problem.
What role did AI play during your last work meeting? Was it used for output processing, like writing a synopsis? Was it used to provide input data for the meeting, like asking ChatGPT for the meaning of a word? Was it used for preparation? Was it a subject of conversation (formal or informal)? Something else? Nothing at all?
I use AI daily for my work, but almost never for meetings. Tried to get it to extract the most important points from long meeting transcripts a few times, but it's bad at capturing nuance and leaves out important stuff if it didn't take up a lot of time in the meeting. It has the intern problem: creating more work than it takes off my plate.
In many other contexts, it's been a productivity boost.
None whatsoever in my last meeting. The only times I can recall it coming into play are, A: the time we used our tame in-house AI(*) to suggest non-boring names for our new collaboration room, and B: various occasions when colleagues discussed their private use of various AIs, with the consensus so far being that they're fun to play with but can't be trusted for serious rocket science.
* We're not allowed to use e.g. ChatGPT at work, because too much of what we'd want to ask it involves someone's proprietary information and there's no way to guarantee that won't be exfiltrated to San Francisco and used as part of the training data for GPT 5 or whatever.
My last 10 or 20 work meetings did not include AI at all unless someone was using it and didn't tell anyone. The last time I remember AI being mentioned at a work meeting was months ago when I joked about a friend's personal project of a chatbot while we were waiting for everyone to join.
AI played no role in my last work meeting, or in any previous work meeting. There was one meeting a month ago where we discussed whether we should use AI to help transcribe notes, but that didn't go anywhere.
I cannot know if anyone else used AI to prepare for the meeting or some such but afaik AI played no role in my last meeting. It tends to be a subject (amongst others) in meetings about once every couple of weeks.
"I credit three authors with giving me a visceral understanding of active, presence-rather-than-absence Good: G. K. Chesterton, C. S. Lewis, and Jacqueline Carey."
But he goes on to say,
"So I think an important point to make before going any further is that, through 90% of Christian history G. K. Chesterton and C. S. Lewis probably would have been burnt at the stake"
and
"What Lewis, Augustine, and Tertullian had in common was Christianity; what set Lewis apart was modernity. What made C. S. Lewis saintly, as opposed to the horrifying sadists who actually got the “St.” in front of their names, was the perspective of a culture that had just spent a few centuries thinking about morals from a humanistic perspective."
So in other words, it might look nice for Christianity that 2/3 (or maybe more) of the three great moral writers are Christians, but in fact, it's for other reasons.
But now Scott, in 2024, writes in his review of "The Rise of Christianity":
"The impression I get from many parts of this book is that the early Christians were closer to morally perfect (from a virtue ethics point of view) than any other historical group I can think of."
Will there be an update on the reasons for Lewis's+Chesterton's moral goodness?
2. Partly I think early Christians got a lot of things right but the later religion basically regressed towards normal past values, which (by assumption here) are worse than the normal values of modernity.
3. I think early Christians get an A- on having the correct values and an A++++ on living their values. Moderns get an A on having the correct values (THIS IS PROBABLY BECAUSE I AM A MODERN AND LIKE MY OWN VALUES, I FREELY ADMIT THIS) and a C or so on living up to them. These are slightly different skills. I judge essayists mostly on their values being correct, since I'm not exposed to them in real life and don't know how well they lived up to them.
There's no such thing as "modern" in terms of values - clearly people in the modern era disagree about value systems, just as people in every other age have. Unless you don't consider, e.g., the governments of Cuba or Belarus or Afghanistan to be modern, which they clearly are.
Thank you, Scott. The frankness is always appreciated.
(Edit: I still think it's worth consideration, incidentally, that the moderns whom you find most clearly virtuous share the beliefs of the people you consider to have been most virtuous in practice, and not the beliefs that you otherwise consider most intrinsically virtuous. I'm not saying you need to respond more, or that there's some super gotcha point here. Just saying that I think the point remains interesting after your reply.)
FWIW, on point 3, many who knew Lewis considered him to be the closest thing to a saint they had ever met. Of course, others didn't, and I don't know either them or him, and so am like you only able to form judgments from afar.
I think fans of C. S. Lewis (and I'd certainly count myself a fan of his *fiction*, for sure) should also read this critique by J. B. S. Haldane (also one of my favorite intellectuals of that period) which both takes him seriously as a writer and a thinker, and also strongly disagrees with him ideologically:
The more you learn about history, especially pre-modern history, the better Christianity looks. The default perspective in nominally Protestant America is that the Big Bad Church was holding everyone back up until the reformation, which dovetails nicely with secularism as well. But once you get into your own historical reading, this becomes a very hard position to defend.
In contrast, I would say the more I’ve learned about modern history, the worse Christianity looks. From the 18th century onward, Christianity was on the wrong side of almost every issue. The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy. Christians like to claim credit for things like abolition because some high-profile niche sects opposed slavery, but in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery, while church-going Christians probably disproportionately supported it.
Up until the 16th century - when the wars of religion started - I think there’s a decent case to be made that Christianity was a net good. But the recent tendency, even by nonbelievers like Tom Holland, to attribute all the accomplishments of modernity to Christianity is bringing out my inner Dawkinsian. It’s mostly accomplished by just cherry-picking the best Christians - by the standards of modern humanism - and treating them as representative of Christianity. So William Wilberforce is arbitrarily picked to be the synecdoche for Christianity on slavery; Dietrich Bonhoeffer is arbitrarily picked to be the synecdoche for Christianity on Nazism. But this is obviously the wrong way to assess the merits of a religion.
I'm more in agreement with this than not, truthfully, although of course it's going to come down to your worldview. From my perspective, institutional Christianity was *mostly* on the wrong side of the conflicts between monarchy and republicanism, nationalism and cosmopolitanism, socialism and capitalism, and the sexual revolution (and I also tend to disagree with the 'Christian' position on the 'life issues' speaking broadly today). This is a good part of the reason I don't identify as a Christian these days (although not the only one).
"but in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church going Christian’s probably disproportionately supported it."
I do not believe this is the case. From what I have read of that era, the atheists and the agnostics often said things like "of course slavery is bad", but when it came time to *doing* anything they were mostly concerned with the all-important conflict between white male atheists and white male Christians. The slaveowners were mostly churchgoing Christians, because most everyone was a church-going Christian, but they were mostly not motivated by religion nor citing scripture in self-defense against those who were citing scripture against them.
Which is to say, the leaders and the spokesmen of the abolitionist movement were very disproportionately and *conspicuously* Christian, often preachers or evangelists speaking as such.
I may be wrong about this, but I think your claim needs to be better supported if we are to take it seriously.
Supporters of slavery definitely cited scripture, and they had a pretty strong argument because none of the references to slavery in the Bible condemn the practice. I recall one anti-slavery writer who compared the practice of slavery as described in the Old Testament with the practice of slavery in the American South, arguing that the Bible didn’t endorse the latter. But mostly, opponents of slavery argued from broad Christian moral principles without reference to what the Bible said specifically about slavery.
A number of Christian denominations split over the issue of slavery, including the Methodists, the Baptists, and New School Presbyterians. (The Presbyterians split into the Old School Presbyterians and the New School Presbyterians in 1836 over a different issue; the Old School Presbyterians split after the formation of the Confederacy.) So there were significant numbers of Christians on both sides of the issue.
On the other hand, support for slavery was motivated to a significant degree by self-interest. My sense is that the abolitionists had a deep belief in the immorality of slavery, whereas for supporters of slavery the belief that slavery was ordained by God was much more a belief of convenience. Union victory in the Civil War seems to have resolved the moral question in the minds of most people of the time. In part this could be due to the widespread belief that major historical events played out according to a divine plan--so if God favored slavery the Confederates would have won. But I think that the main thing is that supporters of slavery were never really sure that the institution of slavery was moral, so when the economic and social incentives to support slavery were removed, the belief in the morality of slavery was easily abandoned.
The idea that the Civil War wasn’t *really* about slavery emerged very quickly after the end of the Civil War. See *The Lost Cause Regained* by Edward Pollard, published in 1868, which argues that Confederates were really trying to preserve white supremacy, not slavery per se.
It's complicated... my understanding is that the conflict over slavery in 19th-C America could be understood as intra-Christian; i.e., people on both the pro- and anti- sides generally understood themselves to be faithful Christians and understood their views to be grounded in Christian belief.
"The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy."
Sources, please! If it's true, I want to know that. If it's Protestant propaganda warmed over and rehashed by modern atheists, I also want to know that.
"in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church going Christian’s probably disproportionately supported it"
Again, *give me figures* on that. Otherwise, you're just inventing a rationale that "well I'm sure the guys I like were on the Right Side and the guys I don't like were on the Bad Side". Your inner feelings that atheists were all jolly nice chaps is about as reliable, evidentially, as my inner feelings that atheists were all, um, not jolly nice chaps.
This response sort of proves my point. I guess the real answer is that this works like the midwit bell curve meme - if you can read enough to repeat historical half-truths, Christianity looks worse than if you can't read at all.
> The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy.
As far as I can tell, this is essentially a reddit-tier falsehood - to the extent there is any source for this at all, it's a British Protestant writing in 1848 about purported corruption he saw when visiting the Papal States. The English-language world is steeped in this kind of anti-Catholic propaganda.
> Christians like to claim credit for things like abolition of slavery
Again, borderline reddit-tier history here. Please identify even one 19th century American abolitionist that was an agnostic or atheist.
> Up until the 16th century - when the wars of religion started
The institutional Church can hardly be blamed for the actions of schismatics who are, in dogmatic terms, heretics. Without Protestantism, there are no intra-Christian wars of religion.
Yeah, I'm thinking "The Church tried to ban street lights" is one of those just-so stories on a par with "The Pope ordered all cats to be killed and this is why the Black Death happened", which I've seen passed around before. The story there goes: witches associated with black cats - cats associated with evil and the Devil - Church no likey Devil - Church/Pope orders evil cats killed - cats kill rats which are vectors of infection for plague - no cats, more rats - more rats, more plague - more plague, more deader - ergo, irrational superstitious repressive old Church responsible for more deader.
Here's a 2022 article repeating that myth, even though I can find another article from 2020 debunking it:
Which seems to be based very tenuously on some German witch-hunter going over the top with reports back to Rome, which Gregory IX then included in a letter to a specific city in Germany:
"The purpose of Vox in Rama, rather, was to condemn a cult that had allegedly popped up in the Rhineland—and the bull was specifically sent to the city of Mainz. So, rather than being a document banning cats from Europe, it was a letter sent to a city warning them about a cult."
"Vox in Rama ("voice in Ramah") is a decretal sent by Pope Gregory IX in June 1233 condemning the heresy of Luciferianism said to be rife in Germany, and authorizing the preaching of a crusade against it.
...In 1233, Konrad von Marburg began an inquisition in Mainz at the behest of either the Pope or the archbishop.
Allegedly through torture and terror, Konrad reported he had uncovered a satanic cult which worshiped devils in the forms of a demonic man and of a diabolical black cat. The inquisition of Konrad was controversial: shortly after Konrad arrived in Mainz, a papal official called Bernard wrote to Gregory stating that Konrad had been forcing innocent people to confess by threatening them with burning at the stake if they refused to do so. Shortly thereafter, Konrad was assassinated, possibly by the servants of Henry II, Count of Sayn, whom Konrad had accused of fostering heresy.
The issue of the Vox in Rama bull was a response to Konrad's allegations, urging Siegfried III and King Henry, representing the ecclesiastical and temporal authorities respectively, to seek out and destroy the heretics."
Yes. Although I'm very happily a Protestant, one needs a good deal more nuance than "The Church was totally evil until Luther showed up." The Reformation may have been (I would say was) a tragic necessity, but if so it was a needed reformation in an institution that was fundamentally a force for good in the world, viewed in the long run.
It is appropriate (IMO) that the two names Scott mentions include both a Protestant and a Catholic, and I don't in the least doubt that there are eastern Christians who would impress him no less with their holiness.
I remember reading this post (Can Atheists Appreciate Chesterton) back in the day, and revisiting it now, I'm even more disappointed in it... Scott tries to solve his riddle by merely insisting that everything he likes about Lewis and Chesterton isn't Real Christianity. Come on!
This line stuck with me all these years:
"So my thesis is that Lewis and Chesterton didn’t become brilliant moralists by revealing the truths of Christianity to a degraded modern world. They became great moralists by taking the better parts of the modern world, dressing them up in Christian clothing, and handing them back to the modern world, all while denouncing the worse parts of the modern world as “the modern world”."
–maybe they did do this, but– lots of people actually disagree with Lewis and Chesterton on what are the better and worse parts of the modern world... I would like to suggest that anyone who agrees with them on this is at least a little bit of a Christian...
It reminds me of a passage Lewis wrote in his autobiography, recounting a time before he became a Christian:
"All the books were beginning to turn against me. Indeed, I must have been as blind as a bat not to have seen, long before, the ludicrous contradiction between my theory of life and my actual experiences as a reader. George MacDonald had done more to me than any other writer; of course it was a pity he had that bee in his bonnet about Christianity. He was good in spite of it. Chesterton had more sense than all the other moderns put together; bating, of course, his Christianity. Johnson was one of the few authors whom I felt I could trust utterly; curiously enough, he had the same kink....
"The natural step would have been to inquire a little more closely whether the Christians were, after all, wrong. But I did not take it. I thought I could explain their superiority without that hypothesis. Absurdly (yet many Absolute Idealists have shared this absurdity) I thought that “the Christian myth” conveyed to unphilosophic minds as much of the truth, that is of Absolute Idealism, as they were capable of grasping, and that even that much put them above the irreligious. Those who could not rise to the notion of the Absolute would come nearer to the truth by belief in “a God” than by disbelief. Those who could not understand how, as Reasoners, we participated in a timeless and therefore deathless world, would get a symbolic shadow of the truth by believing in a life after death. The implication—that something which I and most other undergraduates could master without extraordinary pains would have been too hard for Plato, Dante, Hooker, and Pascal—did not yet strike me as absurd. I hope this is because I never looked it squarely in the face."
She wrote a fantasy trilogy (with followup trilogies) with a BDSM submissive as the main character. The lady went to the trouble of inventing a reasonably believable sex-positive version of Christianity for her fantasy France. (Even has a fancy medieval-Italian name for safewords.) With all this in mind, there is a surprisingly small amount of sex. I'm making it sound a lot dumber than it actually is; it was very well done from what I saw of the first book, but I'm too busy for the fantasy doorstops these days. Probably a distant antecedent of the romantasy novels so popular nowadays, though from what I can tell those are much more influenced by Harry Potter and Twilight.
Officially, Scott likes it, from what I can tell, because it makes the argument that 'nice' ideologies (like liberalism) can actually be powerful and more robust than you think.
I will refrain from speculating on any other reasons he might have liked it on his blog.
"The lady went to the trouble of inventing a reasonably believable sex-positive version of Christianity for her fantasy France."
Wellllll....if you consider heresy a form of Christianity, I guess? 😀 Going by the synopses I've looked up online, it's not really Christianity as such - the main religion of Terre d'Ange is worship of (fallen) angels ("what if, you know, Lucifer was actually the *good* guy?" which is a notion that's been knocking around for yonks https://en.wikipedia.org/wiki/Luciferianism), the fantasy-version of what could be Christianity is more like a mash-up of Judaism and Mormonism.
Oh, absolutely it would be considered heretical by actual Christians (such as yourself)! But in the fantasy world, it serves as the official religion and breaking consent is the actual heresy (and is called as such). It's a fantasy world, it's not going to have the same religions we do.
The lady wanted to do a kinky sex novel with medieval French trappings, unlike a lot of writers she acknowledged Christianity was a huge part of the medieval period and actually bothered to make up a version of it compatible with what she wanted to do. Got to write 8 more books, so apparently she had an audience.
I imagine from the point of view of Christians, making up heresies in a fantasy world actually counts as heresy? I don't know that she's actually gone around trying to convert anyone to the worship of Elua, though arguably writing the novel might count as something similar. There's a bit in Niven's Inferno where the guy winds up in hell for inventing religions for sci-fi aliens and the devils point out that sort of thing might make people take religion less seriously in general.
"There's a bit in Niven's Inferno where the guy winds up in hell for inventing religions for sci-fi aliens and the devils point out that sort of thing might make people take religion less seriously in general."
I haven't read that, and while I've meant to, I still can't persuade myself to because of the description of the plot. But that's an error anyway, and it does sound like the "non-believer tries to make a model of what he thinks believers believe" error which often happens. "Well, they must have a reason for this crazy belief, so it must also be a crazy reason".
Inventing fake religions for fake characters in books won't get you into trouble *unless* you construct your fake religion so as to deliberately deny and mock real-world doctrine (e.g. if the SF author had created a religion in which the Jesus stand-in claimed to be divine but very much was not, and was in fact the kind of confidence trickster and swindler as described, and so forth). That also depends on the time and place; you probably would get into severe trouble in the Middle Ages, but not so much now. It would have nothing to do with "making people take religion less seriously in general".
The first part last: I don't mind the "it's fantasy Catholicism with the serial numbers filed off" too much, because it does get used a lot in the genre, though often very poorly where you can tell the author is working off popular tropes about That Evil Church and not genuine history. Where I break out in hives is this kind of "and the fantasy Catholicism totally by coincidence chimes in wonderfully with the kind of thing I like and approve of"; I agree with Tolkien who kept overt religion out of his work as much as possible (I love the Athrabeth but agree with him that it would have been too explicitly Christian to include).
That makes sense that it bothers you; it's the whole 'cafeteria Catholic' thing. I do get that nonbelievers don't understand believers all that well.
Niven's Inferno isn't that long and you might find the attempts to make new sins and relate them to Dante's medieval ones interesting (even if it's only from 1971). Kushiel's Dart...that's too long for me to recommend something that sounds like it's really going to piss you off.
“I imagine from the point of view of Christians, making up heresies in a fantasy world actually counts as heresy?”
It’s not heresy; it may or may not be a good idea, but it’s not heresy. Of course, it’s entirely possible for someone who is a heretic in real life to produce a fantasy apologetic, but the fantasy isn’t really the problem there.
I’m working on a fantasy RPG setting for a group of friends. I want it to have some medieval elements that most such settings lack, so its main religion needs to have more than usual in common with Christianity. But just taking Christianity and swapping out bits to account for the needs of the game would feel both hollow and blasphemous. In the end the setting’s primary religion has a lot in common with Tolkien’s Valar and with the gods of classic D&D; some of that is conscious influence, some of it is probably unconscious influence, but I think a lot of it is convergent evolution.
Fascinating! I always wondered how Christians play D&D dealing with the whole religion thing. (Theoretically you could just make God and the Devil the only deities, I guess?)
The Episcopalians make *everybody* a saint - so long as they have the Correct Politics/Opinions.
Okay, interdenominational snark over, but I am still a little salty over "Protestantism, including the Church of England from which The Episcopal Church derives, junked the veneration of the saints and lambasted Catholics over it as idolatry; now they have taken it up as a trendy addition and don't even bother with any kind of process apart from popular acclaim which is really sentimentality and what's current with the Zeitgeist": see this mural from the Episcopalian church St Gregory of Nyssa, which includes non-Christians and (I'm going to assume) some agnostics/atheists in their lineup of the Dancing Saints:
"Musicians, artists, writers, poets, dancers, workers, organizers, missionaries, martyrs, spiritual teachers, protesters, prophets, reformers, judges, builders, liberators, scholars, healers, soldiers, monastics, couples straight and gay, diplomats, planners, governors, and wild and domestic beasts. Christian, Jew, Muslim, Confucian, Buddhist, Hindu, Shinto, Pagan; of many continents, races, classes and eras. These saints lead us in our dancing, as all look upward to Jesus, the perfecter of human faith, drawing new harmony from his example as Gregory teaches us to do."
I suppose I'm most annoyed about them swiping Catholic saints to include in their lineup because tee-hee, aren't we all the same really? all that denominational stuff doesn't matter? except of course we are *ever* so much *more* liberal and liberated and enlightened than you guys in Rome about sex and gender and all that jazz.
EDIT: Although, all that ranting aside, I had to laugh at this one: taught by Episcopalians, ended up Catholic. Welcome home, brother!
Black Elk
(1872–1957) A holy man of the Oglala Sioux, visionary, and mystic. A youthful convert who was taught by Episcopalians and became Roman Catholic, he showed forth through his life and teaching God’s radiant healing presence in all of nature.
In all fairness, Anglicans don't consider the Roman Church to be not a true church (unlike the reverse), so there is no great inconsistency in their celebrating Roman Catholic saints!
Some of them today, no. But they do still like their moral superiority on being so much more in tune with the Zeitgeist. A lot of liberal Catholics would also like us to be Episcopalians, to be fair.
Yes, one of the most appealing things about your church is the extent to which it manages to be crossways with the Zeitgeist. Chesterton, of course, waxed lyrical about this.
(I'm not sure if your first sentence means that Anglicans used to consider the RCC not to be a church, but that's not really true. Hooker in the 1580s already insisted it was, as for that matter did Calvin earlier.)
The Anglican communion is a bit odd for well-known historical reasons. Other than the subset of Lutherans who REALLY wish they were still Roman Catholic, most of us Protestants ignore the whole thing beyond "They wrote well," hardly a universal truth of saints.
I'm an organist, and early in our marriage served a small Episcopal congregation. I amused myself by watching my RC wife be just Slightly off-balance from the differences between "Rite 2" (Modern-ish) and her post-Vatican II upbringing. She hasn't killed me yet...
Anglicanism really is in a tough spot. It got pulled here and there between the various strains of Reformers, particularly as Henry was egged on by the reform-minded but he himself was "Catholicism but with me as the Pope and if I need cash, raid some saint's shrines to melt down the gold and pry off the jewels". Cromwell tried to get him to ally with the Continental Lutherans, but Henry *hated* Luther (I think the feeling was mutual) and Cranmer, who had very Lutheran sympathies, had to keep those under his hat. Depending on which faction got into power, the Church of England was Catholicism Lite, Stripped Down Nearly Calvinism, Lutheran-adjacent, somewhere in the middle, High, Low and Broad.
No wonder it, and its daughter churches, aren't quite sure if they're flesh, fowl or good red herring.
A lot of the Anglican liturgy really was "just translated over into the vernacular" (except in regard to the Eucharist) and after the Vatican II reforms, Catholic liturgy was stripped down and the resemblance is very strong.
My brother-in-law is a deacon in a tiny congregation that is part of the largest socially-conservative North American Anglican splinter group, this one:
(You need all those qualifiers in the description because there've been a number of such splinters; and that one which is only 15 years old already has had its own subset spin off on grounds that they're not socially-conservative _enough_. I am, barely, adult enough to avoid making People's Front of Judea jokes in my brother-in-law's presence.)
Anyway over the holidays I attended a full service of that congregation for the first time, and it was indistinguishable from what in my youth was referred to as "high Episcopalian" or less respectfully, "Catholic-lite". At least it being a Christmas service meant we got to sing some old-school carols which I'll never not enjoy.
The actual split from the official international Anglican Communion is entirely about gender stuff: they objected to women becoming priests and object strongly to any sanction of the existence of homosexual people let alone gay marriages. Happily none of that needed to come up in any of the preaching during a Christmas service.
My presence brought down the median age of those attending and I'm old enough to remember rotary phones, which gives a decent snapshot of both the sociology and future-growth prospects of the "Anglican Church in North America".
One of the things I respect about ACNA is how they managed to more or less fold in a bunch of pre-existing groups that had broken away from TEC at various stages of its liberalization.
I was under the impression that part of that was allowing subgroups within ACNA to differ on the ordination of women – correct me if I’m wrong. I personally suspect that that’s unsustainable in the long term, and that they, and conservative Anglicanism generally, will have to come down on one side or the other.
I do remember being an interested onlooker in the heyday of The Anglican Wars and the split between what turned out to be the Global South and the liberal West (mostly TEC with some support from the Canadians). So yes, unhappily you got splinters of splinters, though the most unedifying part of it was the parent body of TEC suing the living daylights out of breakaway churches/parishes/dioceses and dragging them through the courts, while all the time proclaiming they were the Mostest Loving And Accepting.
Well, it's all cooled down considerably since then.
My eyebrows went shooting up at the inclusion of Jacqueline Carey, as the most famous person by that name is one of my favorite authors (https://en.wikipedia.org/wiki/Jacqueline_Carey#Bibliography), and...well...her work doesn't seem like it would be so strongly to Scott's taste!
As much as I like that Scott likes one of my all-time favorite trilogies, his position that Elua is meaningfully an advocate of the superior power of 2014 "liberalism" and all its present-day values is...not quite accurate.
Because ultimately, a powerful evil has to be defeated in Kushiel's Avatar, and...
(extremely generic and obvious spoiler)
...it isn't defeated by free love and flowers and turning the other cheek.
(slightly less generic spoiler)
It's defeated with cunning subterfuge, deliberate betrayal, and premeditated homicide.
You know.
The kind of stuff most self-described "liberals" tend to shrink from.
But what happens in the novel reflects reality: Scott can be as vegetarian and organ-donating and gentle as he likes in his day to day, but he also needs to be prepared to injure and/or kill in defense of peace and innocence if he wants to truly be an avatar of goodness and see goodness "win" if directly threatened by evil.
More specifically, the bugman says social order is a 3-layered pyramid: "Anarchy" at the bottom, "Martial Law" in the middle, "Civil Law" at the top. He also says that the average libertarian's error is the naive belief that the "Civil Law" layer is the only layer that exists, i.e., they don't realize that the peace of their quotidian experience only exists within a tiny envelope of state-space.
It's possible Scott understood this, and thus his mirroring of Cthulhu and Elua was deliberate. But I suppose it's also possible that I'm reading Scott too charitably.
I thought that as well. I just figured speculating at the *other* reasons he might enjoy that book series is...well...bad form on someone's blog.
You are correct, of course. To quote Orwell, always a good critic of liberal intellectuals: “Those who ‘abjure’ violence can do so only because others are committing violence on their behalf.”
Or, hey, how about some Kipling:
I went into a public-’ouse to get a pint o’ beer,
The publican ’e up an’ sez, “We serve no red-coats here.”
The girls be’ind the bar they laughed an’ giggled fit to die,
…
O makin’ mock o’ uniforms that guard you while you sleep
Is cheaper than them uniforms, an’ they’re starvation cheap;
An’ hustlin’ drunken sodgers when they’re goin’ large a bit
Is five times better business than paradin’ in full kit.
Still strikes me as the sort of thing that only works if you actually have a majority. The western Allies won WW2 because (a) they had Stalin taking all the heavy beatings and (b) they had conservative elements backing up the liberalism. Without those you just have rationalist meetups--great for writing computer code, not much good against thugs.
I think liberalism is going to go into a long recession and make a comeback after the populists and communists have had another shot and left another mountain of bodies. But I'm a pessimist.
I haven't read any Carey, not so much because of the kinky sex, but because I bounce *very hard* off Le Exotique Language, which is a this-world language with the serial numbers filed off. In her Kushiel novels, it's French. Not even a fantasy version of French, straight-up French. Though possibly not very grammatical French; I think Terre d'Ange should be "Terre des Anges" if it's supposed to translate out as "the Land of Angels".
And looking at the Wikipedia article, another of her novels has a character named "Loup Garron". Uh, you mean, like "loup-garou" only with a consonant swapped in for that terminal vowel? Now, that *might* be a Clever and Witty Pun, but it strikes me more as a Pratchettian Pune*
Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie). Though she's not the only one; I can't remember who the author was (sometimes a failing memory is a blessing) but I did read one deathless work of art where the main character came from a particular culture that was Fantasy French and - get this - the national anthem-cum-dance of that country was the Leur de Lys. Yes, as in "fleur de lys" with the "f" chopped off.
"Nobody will ever notice the clever substitution I did there!" my eye.
*“WHERE DOES THIS TRAVESTY TAKE PLACE? HO. HO. HO. Albert gave up. “Well, Crumley’s in The Maul, for one. Very popular, the Hogfather Grotto. They always have a good Hogfather, apparently.” LET’S GET THERE AND SLEIGH THEM. HO. HO. HO. “Right you are, master.” THAT WAS A PUNE OR PLAY ON WORDS, ALBERT. I DON’T KNOW IF YOU NOTICED. “I’m laughing like hell deep down, sir.” HO. HO. HO.”
> Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie).
Katharine Kerr's Deverry novels don't quite invent fantasy languages, but they clearly show the author's interest in historical linguistics by postulating some plausible developments of historical languages. (In the fiction, the humans have migrated from the Gaul that historical Julius Caesar knew into a fantasy world.)
[There is also some kinky sex. The author seems to grow out of it after the first two novels or so.]
It's a refreshing break from the much more common case of fantasy authors postulating linguistic "developments" that are downright impossible, like Philip Pullman deriving "anbaric" from "amber".
I don't know if this will have any impact on your distaste for the practice, but the almost-but-not-quite-like-Earth stuff isn't an accident. The setting for the Kushiel series isn't another world, it's a slightly skewed version of our world, with fantasy elements and an alternate history. French is used in Terre d'Ange because Terre d'Ange is France. Look at the map! https://kushiel.fandom.com/wiki/Terre_d%27Ange
I don't have any defense for it being bad French; I don't speak French, and whether it was Carey trying to do something clever with language having developed differently, or just errors, it went over my head.
I read the first one. It was very well done, had a very original concept, but I couldn't keep track of all the pseudo-French names. I wound up drawing a diagram. Next book I had to do that for was Romance of the Three Kingdoms, which tells you something.
I mean, she had an original concept, went to some places nobody had gone before and probably nobody could now, did a lot of politics and created some original villains...she hadn't Tolkien's talent as a conlanger, but was better at character development, so I guess everyone should play to their strengths?
What translation of Romance of the Three Kingdoms did you read?
I was immensely disappointed to see the note, in the introduction to Moss Roberts' translation, that the various names by which the characters are referred to have been consolidated into single standardized names for the benefit of the modern English-speaking reader.
That's so funny! I feel exactly the opposite, and with equal venom!
When I picked up Cloud Atlas and saw that it began with virtually impenetrable made-up dialect, I wrathfully snapped it shut again and slammed it back on the shelf. Fuck you, Authors Who Insist On Making Their Writing Laborious To Read! I want writing that is “simple” enough for me to instantly recognize the concept of (virtually) every word by its shape, because that's what fully literate reading is - character recognition rather than having to phonetically sound out a barrage of novel and strange and often counterintuitive words like a preschooler.
Writers who write beautifully do so with concepts, not goddamned overly elaborate sounds. Sounds are for musicians!
I'd actually argue Tolkien is a counterexample, and everyone tried to copy him without his specialized background and wound up sounding stupid. The different sounds of the dwarven, elvish, and human names give the feeling of different cultures, and the use of Anglo-Saxon or otherwise Germanic-sounding names gives it an 'old-timey' feel to an English reader. All of this of course drew on Tolkien's work as a professor of comparative languages ('philology' at the time); the guy was famous for rehabilitating 'Beowulf' as literature. (All the other critics were upset about the monsters.)
I think the way names and other things sound in your head as you read it actually does have an impact. It's part of the reason older novels are so hard to read; people don't talk like that anymore and it sounds stilted. It's more severe for poetry, where the sound of words is a big part of it, but prose is affected as well.
It is, of course, also possible Cloud Atlas is awful.
I haven't read The Lord of the Rings, and, in the words of Bojack Horseman's Mr. Peanutbutter, "I don't think I'm gonna!" (1)
I tried LotR and immediately put it down again. I read The Hobbit in school and utterly LOATHED it. I hated the character, I hated the meandering, I hated the prose craft, I hated the fucking stupid indulgent poems, I hated *ALL OF IT.*
One of my favorite YouTube literary essayists, The Little Platoon, extensively quoted Tolkien in his like 12+ hour commentary on Rings of Power and I didn't hear anything whatsoever that made me want to read the work he was so reverently quoting.
Peter Jackson's extended cut Lord of the Rings trilogy is great art. Based on the admittedly very small amount of Tolkien I've read, I don't think there's any remote chance that my experience of the books could be an improvement over my experience of the movies (I feel the same way about Game of Thrones). Sometimes - rarely - adaptation into a different medium serves a particular story better than its original format, and LotR is my go-to (admittedly ignorant) example.
(The Last Unicorn movie adaptation is my go-to informed example, but I don't know if it counts when an original author is the one adapting the work into the different medium, as Beagle did with the screenplay.)
> "I think the sounds of the way names and other things sound in your head as you read it actually do have an impact. "
I agree, I just have a low-ish tolerance for a high volume and complexity of novel "sounds" being tossed at me when I'm trying to immerse myself in a story.
It does depend on the language. Someone who hurls chunks of undigested worldbuilding and dialect at your head on page one will elicit the same response from me as from you.
I think that's part of why "Lord of the Rings" works, it starts off with the Hobbits in the Shire who are familiar enough to lead us in gently to the world. Everything is in English, the few unfamiliar terms are generally as unfamiliar to the Hobbits as to us so the explaining goes down easily. Gossiping (and back-biting) in the pub about the neighbours is something everyone can relate to 😀
"Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie)."
No, but on general grounds I tend to avoid Stephenson. Not quite sure why, just picked up a general impression that I wouldn't much like his work.
Did try Too Like The Lightning by Ada Palmer and again, bounced off hard. I can generally tolerate 18th century English prose style but that was a little *too* mannered right from the off for me.
"Menard's archaic style —foreign at last — suffers from some affectation. Not so that of the precursor, who manages with ease the ordinary Spanish of his time."
The specific reason to seek it out is something called the Apostropocalypse: the longest treatise on the subject I can find, in the form of a dialogue between two or three characters.
I found Reamde to be great fun. OTOH, I'd be very surprised if our tastes in speculative fiction were close enough to bet $20 on it. I assure you, though, Reamde is no wretched hive of 18th century English prose. Rather, it's a den of malware scamming, Russian mafia, Canadian trekking, Chinese intrigue, a terrorist who'd twirl his moustache if he had one, and an MMO.
I read this post back in the day, and I probably just shrugged and moved on when I saw "Jacqueline Carey", but now I'm intrigued. Is Scott really referring to the relatively obscure author of the "Kushiel" books, etc.? I've never read her–would you (or anyone else here) say that her works contain extraordinary moral wisdom?
Yudkowsky liked the Kushiel books and brings them up in the Sequences, probably because he's into the kinky sex part. I couldn't get through more than twenty pages of the first novel, which (from memory) starts in a brothel and goes exactly where you'd expect it to go.
And how did you know the novel (or trilogy?) went exactly where you expected it to go after only 20 pages?
I of course realize that the question comes from a weak position, as you can simply Google the plot and claim that you somehow gleaned the major plot points, themes, and the subversion of several genre tropes from the 20 pages you read before you reply.
Nevertheless, my face is currently doing this: 🤨🧐
(P.S. - Had you read the books, you would know that kink takes up a surprisingly small page count compared to politics and adventuring - certainly not enough to satisfy someone primarily reading for porn. But sure, it went exactly as you thought it would.)
I don't read a ton of 'romantasy' (although I just started the Sarah J. Maas series because my doctor, of all people, recommended them) but you've now got me intrigued enough to check out this Jacqueline Carey series (though I forget the name and I'll have to look it up).
Possibly because of things like the Anita Blake book series, and then the Merry Gentry series (where Hamilton just went "to hell with it, I'm an erotica writer now").
Oh, I was so disappointed (and angry) about the turn the Anita Blake books took! Interesting twist (what if all the monsters were real? who you gonna call?), decent characters (though Anita was a little overpowered even from the start) and some promising moral conflicts.
Then Hamilton threw it all away to turn Anita into a Mary-Sue and just collect a harem of different were and other supernatural characters for mildly kinky sex. Dammit, I wanted to see the tension between "He who fights monsters should see to it that he himself does not become a monster" as Anita got deeper and deeper into the supernatural side of things, and her original system of morality and fast-fading identification with unpowered mortals, and instead Hamilton decided "Anita solves all problems by fucking her way out of them".
That kind of thing does predispose one to "Oh, this is set in brothels. Uh-huh. And prostitution is a sacred art in this world. Well, gee.
And the main character heroine who is going to solve all problems and do everything right to fix her society is a sex worker. But of course! And here we go with our first kinky sex scene. Yeah, I can see where this is going and I'm not interested" perceptions.
I didn't mean the entire novel went exactly where I expected it to go, I meant that in the first 20 pages there was BDSM in a brothel, which caused me to stop reading because I was on a train in public.
After that, I didn't pick up the series because it hadn't grabbed me in the first 20 pages, plus I'd recently read that EY short story about child-eating aliens where legalised rape is mentioned with zero context and I thought his preferences were probably very unlike my own.
Felt like throwing in: from an opening in a brothel I'd expect it to go either to a murder-revenge storyline or a political drama featuring an elaborate blackmail scheme. Or both.
Well, now that the OP properly cited his source, I can confirm Scott was indeed talking about the author of the Kushiel books:
“I credit three authors with giving me a visceral understanding of active, presence-rather-than-absence Good: G. K. Chesterton, C. S. Lewis, and Jacqueline Carey. Two of those are very religious and write quite consciously from a Christian perspective. The third writes about kinky sex. Go figure.”
Having read the Kushiel books many times, I can absolutely see how the heroism depicted in the story could inspire a sense of “active” good. Carey's protagonists are very much in this category, especially Phedre (of the first three books).
"through 90% of Christian history G. K. Chesterton and C. S. Lewis probably would have been burnt at the stake"
Depending on period and country, quite possibly. Henry VIII would have burned them both, because he had a track record of equal-opportunity heresy hunting: Chesterton because he was Roman Catholic, Lewis because he was the wrong kind of Protestant.
"as opposed to the horrifying sadists who actually got the “St.” in front of their names"
Which ones? I would like some names so I can add them to my list of "yes ha ha ha yes!"* patrons or possible patrons 😁
I mostly found the combination interesting, although on closer inspection it looks like Catholicism often attributes broad and seemingly-mismatched patronage portfolios to saints. I've also heard quips that being patron of both clowns and murderers makes Julian the patron saint of Juggalos.
There's also Julian's (legendary?) biography, which features a formative moment when he "accidentally" murders his parents in their sleep after mistaking them for his wife and her lover.
Yeah, we tend to pick saints and go "Here. Now you are the patron of aprons" or the likes. I always appreciate St. Anthony of Padua, famed as a preacher and Biblical scholar, getting the job of "Okay, every lost set of keys, glasses, and 'help help I misplaced that important document I need in the next sixty seconds oh shit!', that's for you, Anthony".
Question about the AI Safety Stuff: how useful actually is getting new people into the field if the forecast is really "superintelligence by 2028?"
I've been following AI discussions here and elsewhere with interest, I believe AI safety is very important and I have quite a bit of relevant background (lots of math, some computer science), but I still look at something like that and go "how could I possibly get to the point of doing relevant work in a new field in time to make any difference?"
I don't know myself, but the big funders still devote a lot of time and money to training programs, so they must think there's time to help.
Silicon Valley likes to "move fast and break things", so I don't think anyone is imagining a PhD. This is more the startup thing of "take a boot camp, learn on the job". There's also not that much to learn - AI isn't that safe - and what research has been done is scattered across a bunch of different paradigms such that you can get to the frontier of one relatively quickly.
"I don't know myself, but the big funders still devote a lot of time and money to training programs, so they must think there's time to help."
As someone working in AIS: This is an inadequate field. I strongly recommend against using adequacy-based-deduction in it. Don't just trust that the funders have good reasons for doing things.
As a CS PhD graduate, I'd love to apply to MATS, but quitting my job for a 10-week program seems extreme no matter how much I care about AI safety (the tech job market has been awful since late 2022 and shows no sign of significantly recovering). I guess I wish there was something in between the AI safety virtual course and a full-blown 10-week in-person program.
I'll strongly consider applying to the summer MATS program. The timing works out quite well for me, though being in-person in California is very close to being a deal-breaker, even for a relatively short program.
(I notice their pre-application link is currently broken, though that might just be because applications for the summer aren't open yet.)
Is 1942 too late to get into nuclear safety? Is 1883 too late to get into automobile safety?
Some people still have the Yudkowsky scenario in mind where a superhuman AI comes along, foom happens, and then it's an omnipotent god and does whatever it wants. If that's the real scenario then yeah it's too late to do anything so let's just sit around and hope it goes well.
But I'm not convinced that's the real scenario, I think it's starting to look a little dated. I think we're heading for a scenario with superhuman (in some important ways) AI but without foom. In which case life goes on, but with these powerful, unpredictable and smart-stupid tools in the hands of people who have no idea what they're doing. This raises a whole bunch of problems which all the work done on AI Safety so far is poorly equipped to answer.
The problem may turn out to be not that 2025 is too late to start thinking about AI safety but that 2010 was far too early.
A lot of this comes down to what one means by "superintelligence" I suppose. As I understand it, various AI models already display better-than-human performance at various narrow, specific tasks, but these weren't considered superintelligent. The idea of "superintelligence" (again, as I understand it) includes general intelligence in the package: it's an AI that displays substantially better-than-human performance at ALL (or almost all) cognitive tasks. If people are using the word in a weaker sense than that, then there may be quite a bit of time after the development of "superintelligence" for the safety folks to catch up. But if the actual prediction is something that can outperform humans (almost) anywhere, that's a different story.
To be clear, I agree that even such a superintelligence need not imply a Yudkowskian foom scenario where it bootstraps itself to some unimaginably high level of competence. But I think it *does* fairly strongly imply that the window for alignment and most other safety measures has largely closed. Either we live in a universe where ASI is safe under the level of precautions taken (in which case things turn out broadly fine) or we live in a universe where the precautions are insufficient (in which case it's probably too late to add more). In particular, if the existing safeguards are even minimally adequate--that is, if the ASI is under somebody's control, and the work it does is reasonably trustworthy--they can simply use it to strengthen the safeguards, which it can do better than humans can. If the safeguards aren't sufficient for that task, I'm dubious that external efforts to improve its safety would do better. Though I suppose I can imagine an on-the-edge scenario where the ASI is contained but not fully trusted, and having people capable of evaluating its safety proposals is useful.
"Is 1942 too late to get into nuclear safety?"
Actually, I think this is a great analogy. 1942 wasn't too late to get into nuclear safety in the sense of nuclear *policy* safety. It was not even too late to get into nuclear *engineering* safety. But there was real concern among the researchers on the Manhattan Project that a nuclear explosion--possibly any nuclear explosion--could start a runaway fusion reaction in the Earth's atmosphere. Fortunately we lived in a world where that wasn't the case, but it was a real concern. And 1942 was almost certainly too late to get into THAT aspect of nuclear safety. Either we lived in a world where nuclear ignitions were broadly safe (and there was no need) or we did NOT live in such a world, and only the people already working on the Manhattan Project (or in a position to readily get hired on) were going to have much chance at realizing it and preventing nuclear Armageddon.
You could call this something like "existential nuclear safety." Is it safe (in the sense of not immediately killing us all) for nuclear weapons to exist in the world in the first place? If the answer is no, then the first live nuclear test is distinctly too late to discover the answer. ASI is more complicated, because there are plausibly far more relevant variables (that is, the answer could be "yes" for some designs and "no" for others). But the first time somebody boots up an ASI is an inflection point similar to the first time somebody sets off a nuke: you really, REALLY ought to know the answers to your existential safety questions *beforehand* and with quite high confidence.
Right. I think the word "superintelligence" has been thrown around too loosely for too long and we don't have a good idea of what it means. Over the next ten years it's likely we'll need to reconceptualise what we mean by intelligence.
I now find it easy to imagine a computer that is "intelligent" in the sense that it can ace any traditional test, coming up with better-than-human answers to any question that you might care to ask it. But it's still just a tool, it's still just a very fancy function mapping inputs to outputs, it's not the same category of thing that a human is.
There will still be an important thing that separates us from computers, but "intelligence" won't be the right word for it. Intentionality? Agency? The ability to have desires and an agenda of its own. We lack the vocabulary to even talk sensibly about it right now, but a lizard has it, and an LLM doesn't.
I am...significantly less convinced than you are that a *human* is anything other than "just a very fancy function mapping inputs to outputs," at least in practice[1]. The main, obvious difference is that a human is *constantly* receiving inputs and *constantly* producing outputs. ChatGPT and its kin get their inputs in discrete chunks with arbitrary gaps in between, and produce outputs accordingly. But that's just a design decision. It would be perfectly feasible even with current tech to create something that's constantly taking in inputs and constantly producing outputs. Would that also be "not the same category of thing that a human is?" I'm genuinely unsure.
I haven't actually played around with any existing LLMs at all, but I've read a lot about other people doing so. It does kind of look like they're missing some key things, but it's a little bit difficult to put my finger on what. Goals and goal orientation? Robust models of the world? Common sense? A sense of continuity of self? I think you could find humans who largely lack any of those things (though the last would be a very unusual human), though maybe not all of them at once. At any rate, lacking the ability to fully pin down what LLMs seem to be missing, I have much less confidence that they will *keep* missing it as they advance.
[1] Which is to say, obviously we have internal experiences, but the practical, outward effect of that is merely to inform what outputs we produce for a given set of inputs.
as someone who has had a "psychotic experience" thanks to Covid-19, in my humble-not-expert opinion, I think I lost a sense of "continuity of self" during that experience. during the worst of it, I felt like I was falling into a "cognitive black hole"... in the sense of information being sucked into nothing-ness. my mind went in constant bizarre, tangential directions. trains of thought would keep starting but then get forgotten as a new train would begin. I wasn't hearing voices or seeing things, but there was just this constant improvisational "switching" among things.
right now, in hindsight, I could spend all day trying to describe it and still fail miserably. it's almost a paradoxical thing to "remember" it....... because how can you "coherently relate" to something so incoherent?
again: I'm a normal human, but I *do* have a stressful physical disability. and the stress of that physical disability + Covid-19 => psychotic 3 days
In the worlds where we survive, we likely pushed out the date via some intervention that looks unlikely now, so starting now means better outcomes in the worlds where we made the intervention.
Current AI Safety practice is mostly about capabilities. The number of people who are actually working in "prevent the end of the world AI safety" is likely less than 300. Which suggests that:
You could potentially get an accelerated support infrastructure (see: lightcone and what they're doing)
The low hanging fruit is likely unplucked.
The "existing literature" is still very small, so it wouldn't take years for you to catch up.
I will caution that AI Safety appears to be more insight constrained than talent constrained, aka we need someone to have pretty novel insights to make progress, and it's not a matter of "just" fleshing out models and their implications.
If you don't believe in doom, it's probably true that AI Safety would still be useful in "post singularity" worlds, where in theory AIs are perhaps subservient but not useful unless given safe commands. I don't believe in such a world myself and don't see how it's likely, but it is a reason.
" The number of people who are actually working in "prevent the end of the world AI safety" is likely less than 300. Which suggests that:
You could potentially get an accelerated support infrastructure (see: lightcone and what they're doing) "
So I don't think a small field by itself implies very much about how easy it is to get resources: it's less competition, but also almost certainly less funding. The question is how the availability of funding stacks up against the availability of talent: if it's funded out of proportion to its size, THEN getting one's foot in the door becomes easier. But the reverse is just as easy to imagine: lots of interested talent held back by a lack of funds.
Regardless, your points about the low-hanging fruit and the ease of catching up with the existing literature are well-made. I'll certainly at least look into it.
You might not be able to do anything about X-risk, but there are a lot of lower-scale harms that I think are a lot surer to happen, and you might be in a position to have some influence there. For ex: for at least one personal AI buddy (those AI "companions" that some fall in love with), the company that offers the service collects all the users' interactions with their buddies. Company is accumulating a data set that will be ideal for figuring out how best to influence people to do things, buy things, believe things, vote for things. Seems like AI is very well-suited to doing the pattern recognition kind of thing that's needed in order to become maximally influential. That's one of many very dangerous possible uses of AI, and it's already happening.
Hmm, good point. I don't have a lot of faith in prediction markets, but in this domain it's not really clear that there are any better estimates to be had. And 35% manages to hit the sweet spot of both being quite alarmingly high, but also low enough that planning for the other cases is clearly sensible.
This whole thing just makes me ask the same question as I've asked myself many times before: if the LW scene (or Bay Area rationalism, in general) consistently makes humans develop completely bugfuck, even lethal, ideologies like this, isn't this like the *last* crew we want to put, either implicitly or explicitly, in charge of AI alignment? I mean, the militant kill-all-humans veganism stuff is really not a completely illogical extension of animal-rights views common among rationalism-connected EA people in general!
I wonder whether the craziness is more correlated to the LW community, or to the Bay Area in general. Are there any famous murder-rationalists in New York, or in Berlin?
" if the LW scene (or Bay Area rationalism, in general) consistently makes humans develop completely bugfuck, even lethal, ideologies like this"
I don't think it is, at least not as currently constituted. Now they go for good old-fashioned money and swindling scandals.
A lot of the stuff being talked about here is ancient history in one sense; from the early days when it was the Wild West and anyone and everyone could turn up and start working on 'save the world from AI' etc. The complaints of Vassar and Ziz were that even back then things were starting to get too cosy and settled-down and normie, and they were betraying their founding principles and becoming Just Another Talking-Shop.
Now the entire scene is even more respectable and mainstream, so the sex cults and 'split your brain in two' murder gurus are very thin on the ground.
Still, you've got a basic pipeline in this case: Read LW stuff -> ruminate on LW stuff -> ??? -> murders and insanity. Regardless of the exact level of how common it is, it at least leads to questions on whether we should directly expose the AI to this crowd in the sense of them trying to get it aligned to non-crazy stuff!
Look, I really sympathize with your basic question here – I think letting Yudkowsky set the moral parameters of a god-machine would be a catastrophe in itself – but this pipeline is clearly not real. Ziz was already an unhinged psychopath, it wasn't ruminating on lesswrong concepts that set him off. He would have started a Transon Family one way or another regardless of the existence of MIRI/CFAR.
I happened to read about it yesterday and lost some respect for the “rationalist” community as a result. Not that having insane people who associate with your ideology is uncommon, but that they seemed to be well known on LessWrong and their insanity respected, for fear of being seen as not respecting their gender(s).
For context: I was reading the “PSA” post about them on the top of LW and the person summarizing things had dedicated an entire 2 paragraphs to validating their complaints/accusations about the police doing some pretty terrible stuff (some pretty serious sexual assault and humiliation). The paragraphs were something along the lines of: “This seems to fit with what I’ve heard previously about the police and they have no reason to lie, so I find this plausible.”
Like, at some point, (probably somewhat before murdering multiple people) we should stop giving someone the benefit of the doubt and stop considering their statements in the most charitable way possible. This is literally what allowed Ziz to skip town after the group stabbed their 80-year-old landlord straight through the chest with a sword. Ziz allegedly needed medical attention, then fled from the hospital and California before the police could detain them. With the caveat of everyone is innocent until proven guilty, Ziz then went on to murder or contribute to the murder of multiple other people, including probably their parents.
It was 2022, Sol. Lots of people were still Floyding it up big time. Tons of public places you still had to avow total hatred of the police if you were any kind of a pinko (and if you weren't have fun with the racism accusations). Even if only a tiny proportion of the Medium post's readers knew who she was, the consequences of being any less than maximally charitable toward anyone who averred mistreatment by a cop could have been catastrophic for her personal life.
Also, it was Zajko's parents. I'm pretty sure Ziz didn't kill his own parents, although I invite correction on this point because he's certainly capable of it.
Eh, police actually did seriously screw this one up. The Zizians went to protest a rationalist event. They showed up at the wrong time and there was a kids' club or something at the event center instead. The kids' club freaked out and called the police, someone (incorrectly) accused them of having a gun, and they got charged with child endangerment or something and thrown in jail. Granted they are extremely evil, but in this particular case they were just trying to mildly harass us, which we were pretty used to, and ended up with some really serious charges against them. I think a court eventually threw out some of the charges, but by that time there were way more and much worse charges that they were actually guilty of. I don't think skepticism that the police always do everything right is some kind of unjustifiable weirdo liberal delusion.
I saw Ziz once at some event c. 2017. She was wearing hooded black robes and standing alone in a corner, glaring menacingly at everyone. I don't even know why she attended. Definitely not one of those "it's always who you least expect" situations. The followers were the opposite though, I didn't know them personally but people who did said that before falling in with Ziz they were totally normal, friendly, nice people. Pretty scary, don't fall in with cults, that's all I've got to say.
My gripe isn't so much about the assessment of the protest, which in itself I think is completely fine and probably accurate. The judgement about police brutality and the plausibility of their statements (while perhaps given a *little* too much credence), isn't something objectionable on its face. What they were doing is foolish, definitely something I don't approve of, but the treatment alleged is something terrible.
My problem was the context with which this assessment is offered. Apparently the goal is to warn the rationalist community about someone the author believed was violent (which appears to be true judging by the later alleged murders/crimes), and rather than letting the facts of the events speak for themselves, a not-insignificant portion of the warning is dedicated to essentially validating the allegations they make by claiming they seem to be true to the author (based on nothing but the testimony of the people being warned about, and how it "sounds" like other cases of police brutality they've heard of).
"Hopefully people reading this can help gather and share information about bad behavior directed at Ziz, Gwen, etc, and not just bad behavior done by them."
This is my personal feeling, but no, no one reading the article should be worried about gathering or sharing information about bad behavior directed at these people, as (at least at that time) they seemed to be a real and present threat to others (which turned out to be probably true). It just seems a lot of this warning is dedicated to opinions (priors with no valuable new information) that paint these characters as victims. They very well might be victims in that case, and have valid cause for complaint, but sharing opinions about that in a post warning others about their murder/attempted murder is definitely the wrong place for it.
Maybe giving some attention to dissenting opinions, opinion as to a claim's plausibility, and fully contextualizing the situation is a valuable thing in general, but it left a very bad taste in my mouth given that the post was trying to warn others about people who have committed some pretty terrible crimes.
Yeah, okay, that's pretty cringe in the context of a murder cult. But I can't actually find the quoted text anywhere in the thread (though it might have been downvoted into invisibility by now).
The section I'm referencing is about 10-20% of the way down, but Ctrl+F any of the text I quote as it's unedited. It's an anonymous article, and of course they don't represent the majority of LW/Rationalist opinions, so it hasn't absolutely ruined my opinion of the community or anyone specifically, just a ding to my level of respect in general.
This is about the Zizians, who are basically a group of people who follow a person named Ziz and their pseudo-rationalist, hyper-vegan, certifiably insane doom cult.
"Police arrived with their guns out and we were immediately arrested (within about 10 minutes of us arriving), after which we were sexually assaulted (in my case, i was groped and had my pants pulled down and then sat on by an officer in a mounting position), two of us were gawked at by a group of about a dozen police looking at our naked bodies after pinning us down and cutting off our clothes (“Ha! I told you it was a man.”) and one had pictures taken of them front and back naked and pinned down, we had multiple types of serious and/or permanent medical harm intentionally inflicted on us (including hypothermia, concussion), we were starved for multiple days (I did not eat for five days, until bail was posted) having been denied vegan food as a bargaining tactic (as well as water, sleep, hormones, and medical care), were tortured and were repeatedly denied ability to speak with a lawyer (and had eg hypothermia inflicted on us and our clothes cut off (separate events) for asking to speak to a lawyer)."
And this reaction to it in the Medium post by the summarizer:
"I don’t get the impression that Gwen is the sort of person who generally lies about things like this, though I could be wrong. Some aspects of this account sound like other accounts of police misconduct I’ve heard in the US, and I think the overall US prison and jail system is needlessly cruel in many ways, so these cruelties seem entirely possible to me.
No matter what crime someone may have committed (and no one has yet been convicted of a crime), I don’t believe they could ever deserve to be mistreated as they allege, and I think that their allegations of such deserve to be treated seriously."
This specific arrest was in response to this group, dressed in all black with masks on (not the pandemic kind, the V for Vendetta kind), blocking off the exits to a wooded retreat with their trucks; the retreat happened to have a bunch of school children at it.
The writer of the summary had this to say:
"My assumption is that they didn’t intend to do anything violent"
Gwen likely proceeded to fake their death (no body), but as far as I can tell they haven't shown up in the past couple of years so maybe this was real.
This group also likely murdered one of the member's parents:
"Pennsylvania police believe, based on a lot of circumstantial evidence and investigation, that one or more of Ziz, Michelle “Jamie” Zajko, or Daniel Blank was very plausibly involved in the murder of Jamie’s parents"
All this is to say that these are really bad people. Their ideals are evil, and so are their actions. They did a bad thing, and the author proceeds to create a multi-paragraph apology for their actions (you could say, rationalizing them), doubt the motives of the police who responded to masked people barricading children in a campground, while claiming the accusations against the police are highly plausible (police stations have cameras, and police have been severely disciplined for much less).
They end this section with:
"If anything at all like this happened, however, then the people responsible should clearly be brought to justice. If anyone lied about the protesters in order to incite a more extreme police response, then that should come to light as well. Hopefully people reading this can help gather and share information about bad behavior directed at Ziz, Gwen, etc, and not just bad behavior done by them."
As for why this made me lose some respect for the rationalist community: This post was made as a literal warning that Ziz and associates were potentially violent (and had stabbed a guy through the chest). Yet the author felt the need to dedicate a whole lot of this community warning to an apology for their actions, and insert an opinion on police brutality to boot. I can excuse this if you're Ziz/Gwen's lawyer, and trying to get them the best outcome you can, but this seems to be overly sympathetic and charitable to the motivations of a group of people who are willing to lie, manipulate, incite murder, and outright murder to get what they want.
Read the comments of that LW post and it's more of the same sentiment. The "tools of rationality" being used here, are being used for (what I consider to be) very wrong ends. For example:
"I think it's good that this post was written, shared to LessWrong, and got a bunch of karma. And (though I haven't fully re-read it) it seems like the author was careful to distinguish observation from inference and to include details in defense of Ziz when relevant. I appreciate that."
This assumes Ziz, the murderer/attempted murderer/inciter of suicide, is acting in good faith, which is obviously not the case. We shouldn't value the testimony of people who hold evil ideals, commit evil acts, and manipulate the narrative for their own benefit.
TLDR: Bad people are bad. We shouldn't extend their claims maximal charity.
Yeah, it's the bit about "I totally believe the person accused of violent crime has no reason to lie about how they are a poor little lamb mistreated by the pigs" that makes me go "Oh, brother".
Now, it can indeed happen that even violent criminals are indeed mistreated by the pigs. But my general inclination is "person accused of a crime may perhaps possibly maybe could be also not averse to telling a wee little lie or two".
"The paragraphs were something along the lines of: “This seems to fit with what I’ve heard previously about the police and they have no reason to lie, so I find this plausible.”
That's been a thing with the rationalists. Reading one account of the brouhaha about L'Affaire Kavanaugh (back in the days when Michael Avenatti was The Credible Avenger leaping in with the Case of the Spiked Punch and Highschool Rape Gang) was eyebrow-raising for me: "This person told me that things like this happened somewhere else before so I believe this instance is indeed true and credible and it happened and that's terrible". No consideration of "but is this instance true? is the person reliable? even if they are reliable, are the other things they said happened true? even if they are true, are they the same as what is alleged in this instance?" which struck me as very odd coming from people who are all about "think about stuff, question narratives, find the facts out for yourself instead of relying on authority" and so forth.
I kind of feel like the best thing for rationalism would be to develop some level of cynicism and awareness that people often act in bad faith and are lying to you. I am not saying you have to turn into Diogenes, Orwell, or Palahniuk, but at least read the 48 Laws of Power once, or The Prince or The Art of War if you're into classics. You don't have to do all the evil stuff, just know what they're trying to do to *you*.
I was somewhat aware of that for nebulous reasons a while back, and I think it's just a bunch of crazy people who glommed on to early rationalism and the community around it, went full-on "our guru commands" blind zealotry and followed someone who was batshit insane (to use a technical term) but who, because of the general commitment to niceness, acceptance of the weird and strange because hey, people think we're weird and strange too, and lack of anything approaching authority to kick people out plus horror of appearing to ostracise anyone, was tolerated to an extent.
Not the fault of the rationalists that the nutso developed their own crazy philosophy and went around mind-hacking their followers into being as nuts as them, but it is a danger for the community (less so now, I think, given that they've been burned a couple of times by crazy people and grifters using the 'niceness, tolerance, no hierarchy, no thought is verboten to think' principles to take advantage).
EDIT: Though in this particular case, it's grimly ironic that self-described vegans attacked and murdered someone. An instance of "all animals are equal, but some are more equal than others", if you consider humans are just another animal?
Well, that was certainly an interesting read. It's been a while since we had a good cult disaster.
Though, that article really just brings up more questions than answers... What the hell were they even trying to do? What made them suddenly turn into sociopaths? Did they just go insane from reading Yudkowsky? Well, I looked into it, found a post by none other than Scott, and... Jesus Christ...
I've always wondered whether techniques like prompt injection and jailbreaking were applicable to humans as well, and... it seems they are. Takes a bit more work, obviously, but it is possible to break them. I... don't even know what to feel about this. There's this feeling that... I could have been them. It's terrifying.
I do want Scott to make a follow-up post on this, but at the same time... I genuinely fear that if the details of this incident become widely known, it will lead to a national campaign to purge the trans population.
Anyways, there's more details in the comment thread of this post:
People can definitely be jailbroken. BUT sometimes it works for the good as well. A person helplessly addicted to drugs completely reforms themselves on occasion.
People have enormous breakthroughs in their thinking/believing that enables them to overcome past burdens and seek a different trajectory.
People have positive religious conversions. Isn’t that all jailbreaking?
The dual hemisphere brain stuff is so damn outdated, I'm surprised self-proclaimed rationalists would still cling on to it. But if it's less a rationalist and more 'cult guru develops woo-based philosophy with pseudo-scientific rationalisation and brute-forces impressionable followers into breaking their sanity' it's a lot more understandable.
I mean, come on people, "ooh I'm right-brained so I'm all artistic" versus "I'm a left-brained logician" stuff has gone out with the ark:
It's possible to drive yourself crazy by forcing yourself along the path in line with the dictates (and diktats) of the guru, be that in religious belief systems, pseudo-science or plain batshit insane loons as here.
This is yet more of the division between "trans because of genuine gender dysphoria" set of normal trans people and "decide they're trans because they're fucking lunatics" set of mentally ill pretenders and/or criminals who appropriate an identity of a perceived oppressed minority in order to further grift for "I'm a victim so I can do anything I want and I can't be held responsible for anything, and if you attempt to hold me accountable for my actions, then you are violently attacking me you transphobe" protective colouration.
I know that's the convenient explanation but... everything I've read seems to support that they were genuine. In fact, it doesn't even make sense that they were doing it for the sake of identity politics. They genuinely just hated society and humanity, and the only thing they cared about was for people to fear them. It was an absurdly suicidal ideology, but it had absolutely nothing to do with mainstream culture war stuff.
I think they were genuine trans in the "Silence of the Lambs" (novel version) sense that the guy was trans: convinced that this was indeed the diagnosis applicable to themselves despite anything anyone else might say and unwilling to entertain any opposite view. I think this trans stuff was just all of a piece with the general insanity, and has little to nothing to do with the normal trans people (the very fringe, very online, very radical and very yes it is a damn fetish sort, on the other hand, are right up this lot's alley).
"They hate me and persecute me because they fear me because I'm neurodivergent disabled trans etc. etc. etc. so I have an entire constellation of alternative identities that normie mainstream society hates and oppresses and wants to destroy" is the entire attitude here and justification for them to try and blow it all up. Being a plain cis guy isn't good enough because that just puts it back on them that the reason ordinary people dislike and don't get along with them is because *they* are the problem. Also, I suspect, a heaping helping of "Cis het white males are the problem with toxic masculinity and privilege and entitlement, but I can behave in precisely the same way of entitled behaviour but it's okay because I'm a Real Woman (Trans) and if you object then it's because you're a horrible wicked transphobe, not because I'm an asshole" going on there as well.
I have the impression that you're talking about a long tail of escalating craziness (which probably exists in most groups) as if it were a bimodal distribution of crazy versus not crazy.
...I don't know if there's anything I could say that could convince you, and at this point I don't even know why I'm trying to make things worse by doing this. Maybe it is just better if people believe that there exists an actual divide between "normal" trans people and the kind of trans people everyone hates. But... that would only continue to justify the resentment against genuine trans people.
*sigh* Maybe there really is just no hope for them.
As an outsider looking in, it seems that the early rationalist coming-together into community was (maybe because it's in California) very open to/prone to everyone who was alienated from their own community running away to join the Bay Area because that was the promised land. This includes minors, trans, people of variant sexualities, neuro-divergent, you name it who felt (and were) weird and excluded but were intelligent and/or autistic enough to fit in with rationalist and adjacent philosophies.
That means you get a lot of the crazy in other ways than "I'm smart, socially awkward, and interested in niche subjects that nobody else in my family, school, or social circle even understands", up to "yeah this is mental illness and not just 'not fitting in'".
Yeah, this all already happened once, or twice depending on how you divvy things up: hippie culture, and 60s nerd culture. I know more about the latter, and they had the same pathological tolerance culture, which led to things such as the so-called "Breendoggle" and the (related, in a way) fact that, as it turns out, the founders of the Society for Creative Anachronism (Berkeley, CA!) were about 80% pedophiles.
Basically, most people who are excluded from mainstream society are ostracized for *great* reasons, amazing reasons, and autistic people who are ostracized for being awkward don't realize this and think "they must be just like us, misunderstood 'freaks' who are actually not bad people!" This is an error, and it will be an error every time.
I still hold that there's a valid, very dark reading of "The Ones Who Walk Away from Omelas" starting from the fact that LeGuin was going around the same Bay Area 60s nerd circles and *must* have known about many of the controversies, and that Omelas, as it is described, quite resembles a Ren Faire or some other nerd paradise.
I mean, try reading that page, while imagining that it was written by someone who got disgusted by seeing a dark side of the hippie community, and it... kinda makes sense, the same way.
Except, that obviously didn't happen. Unless there was some community in Dostoyevsky's Russia similar to the hippies.
Just a note: technically speaking, the term “mental illness” is not a high standard. It includes just about any mental condition that can be medically treated, including anxiety, depression, ADD, addiction, and even gender dysphoria (with transitioning being a treatment).
I think "run an 80 year old man through with a samurai sword as well as other assault causing him to lose an eye" fits "mental illness" well enough without having to disambiguate "gosh, do you mean 'anxiety' here?"
I can't blame the rationalists too much; they do intend well, and starting off as an inchoate grouping coalescing into a community (the way we did here), they were a bunch of diverse individuals who had in common being intelligent, being interested in specific things that those around them generally didn't care about or had no knowledge of, and being non-conforming in some way from the mild to the very far out. They were so happy to meet "hey, you like this too? and you're a weirdo too?" others that they over-corrected: "since we've all experienced at least some form of rejection and ostracism, we very much will not incorporate that into our community".
This meant that, like all groups, they were both attractive to and vulnerable to the fucking lunatics and the criminal grifters who saw a bunch of pigeons to be plucked, and they didn't have the structures (yet) in place to deal with that.

Going way back into the weeds about the alleged sexual abuse instances, for example, there seems to have been a positive horror of assuming that anyone was in charge in, like, a hierarchical structure of ultimate authority to enforce decisions and lay down the law. They set up committees to form committees to investigate and report, but there were no such things as sanctions other than "let everyone know that X is maybe someone you need to be careful about".

Again, no blame to them, because dealing with the various groups and sub-groups all over the country was like herding cats, and any decisions that *were* made seemed to be immediately challenged both by people who felt that they had been victimised all along and nobody was taking them seriously and the people they wanted persecuted were not! being! persecuted!, and by the "X is someone to be wary of" types who were claiming they were the real victims.
This is how every utopian organisation (including early Christianity) discovers that actually, you *do* sometimes need a bunch of big guys with sticks to enforce Da Rules or else you will be eaten alive by the piranhas.
Oh, I wish I had more time to study this, too. Or maybe an AI that would generate a summary for me... though the existing ones would probably refuse to discuss violence and other controversial topics.
What I would like to see:
An explanation of Zizian beliefs, and a description of the events from their perspective... but using normal English words, instead of the Zizian lingo.
A timeline of events, maybe with some color coding of which are the facts that everyone agrees about, and which are statements from one side that the other side denies.
*
If I understand it correctly, Ziz believes that each brain hemisphere is a separate personality, and that you can communicate with them individually using some combination of sleep deprivation and closing one eye (so that one hemisphere finally falls asleep, while the other keeps talking). Ziz has ethical opinions on the level of "insanity wolf", where almost everything is extremely unforgivably evil, and if you are good (or rather, half-good, because only Ziz is good), you must strive to destroy it at all cost.
All other beliefs seem to be downstream of this. Anything that Ziz disapproves of is evil and should be destroyed. Everyone who disagrees with Ziz is evil and should be destroyed. Refusing to destroy evil is evil. (Generally, Ziz makes tons of death threats... but mostly doesn't act on them.) At least one of your hemispheres is evil. Ziz is the only known person in the universe who has two good hemispheres. The followers have one good and one evil hemisphere; all thoughts that agree with Ziz are coming from the good hemisphere, and all temptations to disagree are coming from the evil hemisphere. Outsiders have two evil hemispheres. By the way, if you are trans, it's because one of your hemispheres is male, and the other is female. The rationalist community -- a good recruiting ground for Ziz, at least in the past -- is also evil and must be destroyed.
...this is as much as I remember, but there are probably many other fascinating details that I have missed.
My COVID update for epidemiological weeks 3-4 of 2025.
1. For the 1st time since 1952, the CDC stopped publishing its Morbidity and Mortality Weekly Report (MMWR). The last update was on 16 Jan.
> The Department of Health and Human Services "has issued a pause on mass communications and public appearances that are not directly related to emergencies or critical to preserving health," Stefanie Spear, the HHS principal deputy chief of staff, told CBS News in an email. "This is a short pause to allow the new team to set up a process for review and prioritization."
I guess Stefanie is worried about her job. Likewise, the FDA has "paused" updates on foodborne outbreak probes as health agencies "regroup on communications." That's a bullshit excuse if I ever heard one. They could set up a process for review and prioritization without stopping the current updates. Clearly the Trump administration is now emulating Chinese secrecy practices.
And the CIA revised its report on COVID origins. They're saying it was a lab leak now.
Kevin Drum wrote:
> And none of us will ever get to see either their assessment or the evidence it's based on. Is it a rigorous review of all the evidence based on a foundation of deep scientific understanding? Or is it a sloppy broadside written by a bunch of naifs who never even engage with the scientific debate and just want to avoid a fight with the new boss. We'll never know. But remember the aluminum tubes?
But the new boss of the CIA is John Ratcliffe, and he has been supporting the lab leak theory for years. So I suspect it's just ass-kissing on the part of his new subordinates. The CIA motto is: "Ye shall know the Truth, and the Truth shall make you Free." — the truth, if your boss doesn't like it, shall make you free to go looking for a new job.
2. The CDC is still updating other data. Their wastewater numbers seem to indicate the current XEC COVID wave has peaked. Biobot hasn't published an update recently, so I don't have a reality check for CDC's data. Biofire's proprietary Syndromic Trends also shows that COVID is dropping as a percentage of detected respiratory viruses. RSV may also be on a downward curve, but influenza is rising fast. XEC seems to have lost steam. Cov-Spectrum shows it topping out at 41% of sequences. The CDC Nowcast shows it at ~47%, but their Nowcast isn't very sensitive to changes in growth rates.
3. This winter, COVID hospitalizations are lower than previous years—and about half of what last winter's were (4/100,000 vs 7.7/100,000). Despite the XEC wave, weekly deaths remain low. I'm not sure if we'll continue to see these numbers now that the MMWR reports have stopped.
4. What's next? LP.8.1 is the only major variant that's showing any legs. But except in countries with low sampling rates, it hasn't gotten much above 20%. Maybe it will create a secondary wavelet on the tail end of our dinky winter wave, but I doubt it.
5. While COVID and RSV may have peaked, this winter's influenza wave is still on a strong growth trend. We've still got three months left in the flu season, cases are nearly at last year's peak, and test positivity is higher than last year.
6. Three days before Trump's inauguration, Biden awarded Moderna $590 million to develop mRNA-based pandemic vaccines. It's not clear if the Trump administration will be able to cancel this program, but I'm sure Brainworm Kennedy will try.
7. And that sucks because if HPAI mutates to allow human-to-human transmission, we could be in for another pandemic. Rumors are circulating about human-to-human transmission in Shanxi and Hebei provinces (northern China), with deaths in the hundreds. These are rumors only. Other rumors say this is an HMPV outbreak. Who knows?
Anyway, if there is a bird flu pandemic, I'm sure Bhatty, Marty Mak, and Brainworm — the Three Stooges of the health world — will be doing their best to calm public fears while doing nothing of substance.
> For the 1st time since 1952, the CDC stopped publishing its Morbidity and Mortality Weekly Report (MMWR). The last update was on 16 Jan.
> " Stefanie Spear, the HHS principal deputy chief of staff, told CBS News in an email. "This is a short pause to allow the new team to set up a process for review and prioritization."
Wasn't this an executive order? Are they downplaying Trump's success?
What _is_ going on with scientific communications? I saw some alarming comments in reddit's r/labrats subreddit, and I'm not sure what to make of them.
I could see the sense in trying to halt DEI stuff as quickly as possible, and even to trying to ferret out DEI stuff that was being concealed by name changes, but it sounded like nearly _all_ communications (and funding) had been frozen.
What could they possibly be looking for? Activism camouflaged as research?
idk about details, it was a lot of reading and I don't pay that close attention to details; it wasn't everyone, it seemed health-related at first glance, and maybe 5 departments; maybe.
Many Thanks! Dissidents along what dimension? I could well understand stopping a woke activist project concealed under biased research, but it _looked_ like the freeze was far broader than that. Freezing e.g. publications in physics just sounds crazy to me.
Loyalty. You can't risk having people in important positions that aren't willing to follow orders unconditionally. Which means that people with stubborn principles about these kinds of things need to be replaced.
He signed so many executive orders since his inauguration I don't know if this was included in one of them. 26 on his first day, and they're still coming.
In a previous life, I used to hunt plagiarism for a living.
A while ago I gave ChatGPT this prompt: "Single Player (vs Computer) "Social Deduction" game" and got "Creating a single-player social deduction game where you play against a computer opponent can be an interesting challenge!" etc. Now, to get a feeling for it, I gave Deepseek the exact same prompt from my ChatGPT history and got "Creating a single-player "Social Deduction" game where you play against the computer is an interesting challenge," etc.
This makes my spider senses, ill-adapted to this situation as they are, go into overdrive. Is that really, objectively, the most average English sentence that could be said in response to that rather sparse prompt, such that two models would independently arrive at it? Would two different humans independently come up with such similar openers? What is going on?
There's the added detail that R1 may have been trained on ChatGPT output (there are people who have gotten it to claim to be ChatGPT & an OpenAI model & so on.)
The prompt isn't as sparse as you think because you need to include the system prompt with it as well. ChatGPT, Claude, Deepseek, etc. have a long list of instructions before they get to your question. If you prompted the APIs directly you'd more likely get a description of a social deduction game rather than the agent attempting to create one.
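To make that concrete, here's a minimal sketch of hitting the API directly, with and without a system prompt (assuming the openai Python SDK; the model name and system string below are placeholders, not the real hidden ChatGPT prompt):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

user_prompt = 'Single Player (vs Computer) "Social Deduction" game'

# Bare API call: no system prompt at all, unlike the ChatGPT web UI,
# which silently prepends a long list of instructions.
bare = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": user_prompt}],
)

# Same user prompt, but with an assistant-flavored system prompt restored.
steered = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful, friendly assistant."},
        {"role": "user", "content": user_prompt},
    ],
)

print(bare.choices[0].message.content[:300])
print(steered.choices[0].message.content[:300])
```

The two completions will often differ noticeably in tone, which is the point: a lot of the "personality" overlap between chat products lives in that hidden preamble, not in the user's prompt.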
I don't have an answer to your question, but ChatGPT gave me a response that was different enough not to seem plagiarised. It might be partly coincidence that your responses were so similar, plus the fact that both use reinforcement learning to create a helpful and friendly assistant personality. A human could give responses like "What about it?" or "I don't really like those games", while the LLMs are trained with a particular personality that really limits their potential responses.
It's possible that DeepSeek is using ChatGPT in reinforcement learning to make the type of response DeepSeek gives be more similar to ChatGPT.
Looks like OpenAI is accusing DeepSeek of using their API for distillation, against their TOS. So it seems like OpenAI thinks that DeepSeek stole from them (which I think is a bit ironic, considering how much data OpenAI scraped from other people without their consent).
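For context, distillation in its crudest form is just harvesting (prompt, completion) pairs from the stronger model and fine-tuning a smaller one on them. A sketch of the harvesting half, with invented prompts and a placeholder model name (nobody outside DeepSeek knows what their pipeline, if any, looked like):

```python
import json
from openai import OpenAI

client = OpenAI()  # the "teacher" API; assumes an API key in the environment

prompts = [  # invented examples; a real run would use millions of prompts
    "Explain binary search to a beginner.",
    "Write a haiku about rain.",
]

with open("distill_data.jsonl", "w") as f:
    for p in prompts:
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder teacher model name
            messages=[{"role": "user", "content": p}],
        )
        # Each (prompt, completion) pair becomes supervised fine-tuning
        # data for the smaller "student" model.
        row = {"prompt": p, "completion": reply.choices[0].message.content}
        f.write(json.dumps(row) + "\n")
```

If a student is trained this way, it will naturally pick up the teacher's phrasing tics, which would also explain near-identical openers.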
Going meta, I put the above into DeepSeek, and it explained to me that the two models were trained on similar data. I understood that from the get-go; that is not the point. The point is: should we not expect variance in training sets and training methods to lead to more variance in the output?
Stupid human question: if AI is so good why don’t they let it write the story for them? Hell, even ChatGPT can write a basic essay that can be tarted up with a little help from a human editor.
> I would be ghost-writing for an ex-OpenAI researcher previously profiled in major publications (eg NYT), who is running a big forecasting project and wants to do a media push around the time they release their results.
I couldn't resist giving it a try. Here you go... ;-)
Me:
Write a short essay on AGI and the coming Singularity in the style of Scott Alexander of Astral Codex Ten. Don't forget to mention Bayesian priors in the title, mention the hereditary basis of IQ and to disparage post-Christian vitalists in the body of the text.
ChatGPT:
"Bayesian Priors and the Coming Singularity: Why the Post-Christian Vitalists Are Wrong (Again)"
The debate over artificial general intelligence (AGI) and the impending Singularity has, predictably, descended into a cultural Rorschach test, projecting our priors, anxieties, and fantasies onto the tabula rasa of the future. While some envision AGI as a benevolent deus ex machina bringing a golden age of plenty, others see a rogue paperclip maximizer turning humanity into so many raw materials. As always, the truth is likely less dramatic, but it behooves us to unpack these possibilities through the rigor of Bayesian reasoning, lest we fall prey to the intellectual pitfalls of our ideological cousins, the post-Christian vitalists.
To begin, let’s consider the technical premise: AGI is, fundamentally, the construction of a machine capable of performing any intellectual task that a human can do, and likely much more. Here, one’s Bayesian prior should be informed by the hereditary basis of intelligence, a domain that has historically been a strong predictor of cognitive capability. While human IQ is the product of an intricate interplay between genetics and environment, its heritability (somewhere around 50-80% depending on the population) demonstrates that intelligence is not a mystical vital force but a replicable, material phenomenon. If evolution can stumble into it via the blind process of natural selection, we should not assume it’s beyond the grasp of deliberate engineering.
From this perspective, the Singularity—where AGI accelerates its own development into superintelligence—is less a question of "if" than "when." It’s easy to get caught up in the timelines (2030? 2100? Never?) or the probabilities, but more interesting is the philosophical terrain. Here enter the post-Christian vitalists, who seem determined to reinvent the eschatological wheel with their insistence that technology must somehow revitalize the "soul" of humanity. The Singularity, in their view, is either a heretical Tower of Babel that divorces us from our "essential nature" or a sacred rapture that will cleanse us of our fallen state.
These vitalists miss the point entirely. Their criticisms, while occasionally veiled in secular language, betray the same nostalgia for pre-Enlightenment metaphysics that drove past resistance to heliocentrism and Darwinism. In their worldview, humanity is a repository of ineffable purpose that AGI cannot replicate. But Bayesian reasoning requires us to ask: How likely is it, a priori, that humans have some unique, unquantifiable essence? Given the track record of material science to date, the probability seems vanishingly low. AGI won’t "steal" our humanity any more than the printing press or antibiotics did; it will merely shift the locus of value production and force us to renegotiate our priorities.
The challenge isn’t metaphysical but practical. How do we ensure that AGI’s utility functions align with human well-being? This is where the real debates should lie—not in vague invocations of the "human spirit" but in technical questions of alignment, control theory, and robust governance. And here, too, Bayesian priors are helpful: While the pessimists argue that any misstep will result in instant doom, the historical record of technological progress suggests a distribution of outcomes—some disastrous, yes, but many positive or neutral. We should neither dismiss the risks nor succumb to apocalyptic despair.
The Singularity, if it comes, will not be a singular event but a series of gradual shifts. Humans will adapt to AGI much as they have adapted to prior technological revolutions—imperfectly, unevenly, but ultimately with resilience. The post-Christian vitalists may yearn for a humanity untainted by silicon and code, but such purity has never existed. Our history is one of continuous self-reinvention, and AGI will be no different.
So, as we approach this frontier, let’s keep our Bayesian priors updated, our thinking clear, and our rhetoric grounded. The Singularity is neither salvation nor damnation. It is merely the next chapter in the long and unpredictable story of human progress. And if the vitalists don’t like it, well, they can take it up with history.
Where are the puns? I believe that ChatGPT should be capable of making some, but it probably needs to be reminded explicitly to do that.
Maybe this could be done in two steps. First, feed it some of Scott's articles and ask it to describe the characteristics of the text: what makes it different from a random text written by a smart person. Second, provide this list of characteristics and ask it to produce a text like that. Maybe even have it write the text in phases: first the outline, then some puns, and finally the full text following the outline and using the puns where appropriate.
Better yet, can anyone explain the direction of the update? When was the last time software optimization somehow hurt hardware makers (whose hardware has suddenly become more capable, not less)?
Because it's free. This hurts the business strategy based on centralization by Silicon Valley cronies/investors. DeepSeek is particularly hurting Nvidia because it was trained on quite modest infrastructure compared with what OpenAI asks for (last year, OpenAI asked the UAE government for $7 trillion for datacenters).
Which means a $7T datacenter now gives you even more AI than was expected. And it also means smaller players will now be interested in purchasing Nvidia chips for their own datacenters, something that was out of their reach before.
Well, the most likely explanation is that the actors making up the "market" often only just learned about DeepSeek, but another one is that, as I indicated below for myself, DeepSeek's ascent is really less about the compactness etc. and more about offering a product that is more desirable, design-wise, for the end user than a lot of other AI efforts.
edit: and ofc one of the biggest things that seems to have come as a surprise is that China is now a genuine competitor in *this* field - once you've factored that in, you have to automatically price in China being a giant in general and this surely not being the only thing on offer
Related question: the news I have seen on DeepSeek seems to very confidently assume that (a) they are telling the truth about their training costs and (b) they didn't just steal OpenAI's (or somebody else's) weights through a CCP-installed backdoor. Why should we be so confident of these things? Have people independently corroborated their claims or are we just trusting their press releases?
I'm not sure, but I think because it's much smaller and has much lower inference costs, and you can test this by getting the model yourself (it's open-weights!) and running it on a computer that would not be able to run GPT.
Also, they listed some of the discoveries they used to train it more efficiently, and everyone agrees they're real, clever discoveries.
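For instance, a minimal sketch of doing exactly that with Hugging Face transformers, using one of the small distilled checkpoints published on the Hub (the full 671B model needs a serious multi-GPU rig; this one fits on a laptop GPU):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"  # small distilled variant
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")

# Build a chat-formatted prompt and generate; the model "thinks out loud"
# before giving its final answer.
inputs = tok.apply_chat_template(
    [{"role": "user", "content": "What is 17 * 23?"}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

out = model.generate(inputs, max_new_tokens=512)
print(tok.decode(out[0], skip_special_tokens=True))
```

Obviously this only verifies the inference-cost claims, not the training-cost ones.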
AND we can't forget this is but their first iteration wave, or in other words, current DeepSeek is the worst DeepSeek we'll ever have. Quite scary if you're currently getting a big paycheck working for some angel funded AI venture.
I saw a meme on tiktok where they used that "Tony Stark was able to build this in a cave! With a box of scraps!" audio for OpenAI engineers and I never laughed so hard lol
There's definitely and verifiably an improvement in that they can *run* a reasonably high-quality LLM in a constrained operating environment. The bit where they AIUI basically lobotomize the AI and show that if you activate the right lobes for the task du jour it doesn't get *too* much dumber even though it uses an order of magnitude or so less compute, is legit clever and I'm surprised that it works as well as it did, but as you say the model weights are open and people are finding that it works in their own environments.
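Roughly, the "activate the right lobes" trick (mixture-of-experts routing) looks like this toy PyTorch sketch. It's my own simplification, not DeepSeek's actual architecture; the real thing adds shared experts, fine-grained routing, and load-balancing machinery:

```python
import torch
import torch.nn as nn

class ToyMoE(nn.Module):
    """Toy mixture-of-experts layer: many expert MLPs, but a router
    activates only top_k of them per token, so per-token compute is a
    small fraction of the total parameter count."""
    def __init__(self, dim=64, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.ReLU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.router = nn.Linear(dim, n_experts)
        self.top_k = top_k

    def forward(self, x):                       # x: (n_tokens, dim)
        weights, idx = self.router(x).topk(self.top_k, dim=-1)
        weights = weights.softmax(dim=-1)       # mixing weights for chosen experts
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e           # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, k:k+1] * expert(x[mask])
        return out

x = torch.randn(10, 64)
print(ToyMoE()(x).shape)  # torch.Size([10, 64]); only 2 of 8 experts ran per token
```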
The question is whether this is also true for the compute used to *train* the model in the first place. It isn't obvious that the cleverness that reduces operating compute will necessarily reduce the training compute, and a priori I'd expect "just as smart as the other LLMs on the block but much more compact" would require *more* effort in training. And I haven't heard of anyone replicating that part yet, so we're basically taking China's word on that.
The other question is why the Chinese are doing all this, in particular releasing the model weights. That seems like a poor strategy if you're planning to monetize the thing, so what are they getting out of this?
> The other question is why the Chinese are doing all this, in particular releasing the model weights. That seems like a poor strategy if you're planning to monetize the thing, so what are they getting out of this?
Their strategy is similar to Meta's with llama. If you have a state-of-the-art product but a competitor has beaten you to market and has a seemingly insurmountable lead, putting your similar (or even better) product out for free essentially "de-monetizes" your competitors, because users will migrate quickly to the just-as-good-but-free platform. That serves to even the playing field for the next iteration, removing most of the legacy and switching-costs advantage your competitors had.
One interesting thing about this strategy is that it can be spun in two very different ways. To those worried about privacy, security, freedom of choice and concerns about trusting revolutionary tech to a few well-funded companies, you spin it as a Richard Stallman, "bits wanna be free" libertarian stick-it-to-the-man move.
But to the C-suite (or in this case likely the CCP), you can spin it as "Art of War" scorched earth: if you don't get the spoils, at least make sure your enemy doesn't get them either.
I don't actually have an answer. Well, really, I have several possible answers and I suspect it's not a complete set, hence the question.
One plausible answer is that it's about raising VC money. Release a pretty-good LLM, tell people you trained it really cheap (lie if necessary), then say "...just think about what we could do with some of those sweet, sweet gigabucks, don't you want a piece of that action". What to do when the gigabucks arrive is left as an exercise for the student.
Another is that they wanted to tank the American AI industry, either long-term because the CCP doesn't want America to have AI, or short term because some rich Chinese guy wants to get even richer by shorting NVIDIA et al. So release something that makes it look like OpenAI, Anthropic, et al have all been wasting their time and nobody needs nearly as many of NVIDIA's chips as we thought.
Another is that they plan to make money more or less legitimately in some way that I don't understand but which includes releasing the code and all the model weights open source. Maybe that's a loss leader and the real product has yet to be released.
The answer I was thinking of was "farm an enormous amount of data analytics about American users (and any other country while they're at it - why not) and exploit that in some way to either foment political chaos (a la Internet Research Agency), collect telemetry on US internet traffic for cyberwarfare or commerce purposes, or to discover ways to influence US thinking in ways that benefit the CCP".
Another possibility is using TikTok as some sort of privileged platform for actual malware. (Theoretically, *any* foreign-based popular media app could be used, but I get the sense TikTok has unusually established ties to CCP institutions. Plus, it's China, well known for government-commercial integration in ways the US would not prefer.)
Ok, I buy that they did some actual innovative work to make it so small. I still wonder if that work might have been turbocharged by having access to a large set of pre-existing weights that, um, fell off a truck, so to speak.
In one sense, the DeepSeek models are incredibly efficient by necessity. They had to train on a bunch of H800 chips instead of the newer H100s the American companies use, largely due to the chip embargo. There's simply no way they could compete with someone like OpenAI without massive optimization. OpenAI doesn't need to worry about optimizing, because they can just buy the newer and more powerful GPUs that NVIDIA is constantly cranking out. Still, it is impressive how much DeepSeek was able to do with so much less.
I saw this linked on DSL and it's a pretty good overview:
> Scale AI CEO Alexandr Wang said they have 50,000 H100s.
I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs, they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.
I think it's likely to do with r1 last week, plus some sort of additional information coming out over the weekend convincing investors that it's ~real. One possibility is the leak in the Information that the Meta AI infrastructure director told colleagues that r1 might outperform even the next version of Llama, and that Meta formed two teams focused on studying it, or something of that nature (I'm going off of secondary reporting about the Information article). Note that in the past DeepSeek has released models with inflated benchmark results that decreased when you used ~contamination-proof benchmarks (see https://arxiv.org/pdf/2403.07974).
Typically the markets have reacted plenty fast to AI news and so I'm pretty confident that _something_ came out, and I don't think the app alone explains it.
Addendum: the Meta explanation doesn't feel entirely satisfactory, since Meta's stock price has gone up, not down. Though this could just be because they have less AI revenue and benefit more from open-source models.
I think it's that r1 was a wakeup call: v3 simply feels kind of mid, and it takes a decent amount of domain knowledge about how reasoning models are made to conclude that deepseek would be able to inexpensively replicate o1 levels of intelligence. If I can dig up the exact X thread where those details are discussed, I'll post it here.
DeepSeek R1 and the paper about it were released this week, as well as the app. They're the inference time compute reasoning versions of DeepSeek v3 which came out at Christmas.
It's important from technical/product POV because:
1) It's the first open-weights inference-time long-chain-of-thought trained reasoning model (so, like OpenAI's o1); see the toy sketch below for what that output looks like.
2) It was trained extremely cheaply, and it's fast and cheap to run.
3) They published the paper on how it works, and the methods are different from, and simpler than, the big labs'. Rumour has it Meta has stopped their big training run and is redoing it using the methods in the Chinese paper.
The geopolitical importance is that China is only about three months behind the US on AI performance, and ahead on cost. DeepSeek v3 a month ago was impressive, but less dramatic: they were, what, nine months behind Meta on open, capable AI?
The cultural importance is that it is free and anyone can try it, whereas you have to pay for the other inference-time reasoning models.
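Concretely, the open R1 checkpoints emit the chain of thought inline before the final answer, wrapped in <think> tags. A toy sketch of splitting the two (the sample output string here is made up for illustration):

```python
import re

# R1-style models write their reasoning first, then the answer.
raw = (
    "<think>The user wants 17 * 23. 17 * 20 = 340, 17 * 3 = 51, "
    "so 340 + 51 = 391.</think>17 * 23 = 391."
)

m = re.match(r"<think>(.*?)</think>(.*)", raw, flags=re.S)
reasoning, answer = m.group(1).strip(), m.group(2).strip()
print("reasoning:", reasoning)
print("answer:", answer)
```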
How good the model actually is I'm not sure, I prefer Claude still, but partly we'll see what happens when people spend more on training the same techniques. Performance and length of thinking go up each generation, and the graph doesn't look like it stops.
I feel like Leopold Aschenbrenner's "Situational Awareness" predictions are becoming more and more spot-on by the day: $500 billion for a new AI cluster, shock and panic about China's AI prowess...just waiting for trillion-dollar training runs and nationalization of AI capabilities now right?
Is it totally clear that the dip in the market today is a direct result of the DeepSeek release?
As some in my local circles have remarked, NVIDIA in particular is a historically volatile stock, and looking at the 6 month graph makes today look not that out of place.
Financial reporting is full of Monday-morning quarterbacking and storytelling that is just a little too perfect. I am skeptical that the sell-off was directly or only due to something related to DeepSeek. Just because articles are written that say it doesn't make it true.
A lot of other stuff happened over the weekend that the market would be responding to. And the S&P started a sell off on Friday. Additionally, if it was DeepSeek, we'd expect China's market to rise but it's flat for the last 5 days and was flat yesterday. Meanwhile the CAC and DAX are way up the last two days while FTSE is up slightly.
You can look at financial charts and tell any story you want. It's like astrology that way.
Sure, but NVIDIA’s decline was part of a broader tech sell off, which, based off timing and the commentary of the people doing the selling, seems to have been driven by DeepSeek.
I mean, it’s *possible* it was driven by other factors, but if so, what?
Just about all the semi companies except Intel have significant revenue streams from AI products, as do a bunch of other tech companies. DeepSeek showed that a Chinese firm was able to produce an open-source AI model with roughly 1/50th of the resources of state-of-the-art models that still beat OpenAI's o1 on several benchmarks. And they did it with older NVIDIA chips and without a huge data center. Suddenly, NVIDIA, the companies in its supply chain, and its competitors have developed feet of clay.
DeepSeek released an app, which received a lot of popular attention and roughly 300k daily downloads, updating the markets toward it being important.
What might the market behavior have been if Wall Street were smarter? Should there have been a big sell-off a month ago? Should NVIDIA and related stocks have gone higher today? Funds are all working on partial information about what might happen in the future and trying to outguess each other's behavior. Seeming order follows seeming chaos follows seeming order in the markets.
Yup! To paraphrase from something I vaguely remember: "If the efficient market hypothesis were true, the market should be a very quiet place." (to agree with your "low volatility")
I would have expected a market move on the news that DeepSeek can make great models for 5% of the cost of everyone else, and not on the news that the great model that was free for anyone to test and see how great it was made it to the top of the app store.
I'm much less sure which direction the market should have moved, but most of the people in AI I'm following seem to think it should be up.
"Great" is doing a lot of work here. People who aren't super plugged into the AI space can't immediately identify marketing hype and fluff from the real thing. Wall Street didn't know it was a great model until it went viral. You can't fake 300k daily downloads.
It's not *that* hard to find an edge in the market. Everyone who makes a living trading has an edge somewhere. Being as plugged-in to the AI space as you are would almost certainly count as an edge for a skilled trader.
But of course, regarding your above comment about Wall Street being dumb.
However, one can steelman a decision to not sell on news of DeepSeek's technical capabilities but to do so when it jumped in the app store. It's not unheard of for capabilities to be overstated or for a model to do well on benchmarks while being underwhelming to use, etc.
Becoming the top app is direct evidence to suggest that this is something real, something from which users derive real value. Not an absolute guarantee: plenty of weird stampedes in app popularity, fads where people try out the new thing but don't stick with it. But it's suggestive. Much more suggestive of real consumer behavior than an impressive technical demonstration / paper.
And yes, there is an obvious case that Nvidia, if no one else, should be up because more interest in AI equals more inference time compute. The counterargument is that DeepSeek just demonstrated that previous assumptions about training time compute demand could be invalid.
You've got it backwards. DeepSeek was able to put out a credible AI for $13 million. Suddenly, there's no need for massive data centers full of Nvidia boxes. NVID's current revenue stream is threatened, and DeepSeek puts its future revenue streams from upgrade cycles in jeopardy.
Suddenly, the big fab companies like TSMC are facing less business.
The revenue streams of companies with DSP products are threatened: AVGO, CDNS, 2454, SNPS, NXPI, CEVA.
The revenue streams of companies in the optical space are threatened: AVGO, TSM, MRVL, CSCO, GOOG, CRDO, ANET, FN, COHR, Innolight, Eoptolink.
The revenue streams of semi companies in the ethernet switching space are threatened: AVGO, MRVL, MCHP, CSCO.
And so on. I've probably missed a bunch.
AI was always a bubble waiting to burst. It may be healthy that NVID lost only $750 billion in its market cap today.
idk, information takes time to percolate. (putting the efficient markets hat on) might be that there are multiple levels of meta here? Like the existence of DeepSeek R1 isn't enough, you need to also know the reactions to it, and the reactions to the reactions, across multiple levels of the supply chain... maybe multiple pieces had to fall together to cause the selloff
(or in other words it isn't even clear to me that this should cause the Nvidia stock to drop rather than rise; Matt Levine has mentioned before how often in insider trading cases the perpetrators correctly determine that the stock would move but get the direction wrong)
(taking the efficient markets hat off) there is a lot of alpha in not being dumb then :D
About the dream of an AI-supported future, where there is ample food, housing and good medical care for everyone, and nobody is forced to do tedious exhausting work to get those things, and some do not have to work at all: How would we transition from where we are now to that? Let’s assume that the AI of that era is utterly benign. It has no agendas of its own, and does a great job of whatever we tell it to do. If we want advice on how to transition to that life it will advise us, but it does not govern us.
So how does it happen? Here are some things that make it hard for me to believe we will transition to anything like the utopia people are dreaming of:
-Wealthy people could already do far more for the poor than they do. Wealthy countries could already be doing far more for the poor ones than they are. Why wouldn’t the things that put a ceiling on how much of our wealth we share still be operative?
-The companies making advanced tech seem to run the way other businesses do: They want profits, they want to grow, they want to keep the power their products give them. There is some evidence of their being concerned about doing harm of different kinds with their products, but that concern looks like a far smaller factor in decisions than the usual business motives of profiting and thriving. There are probably things these companies could do that would nudge more AI-related benefits, or beneficial AI itself, toward the poor. But is that likely?
-I have not heard or read anything about efforts to use present AI to make life better in poor countries. I’m not sure what all the possibilities are. One that comes to mind for me: in countries with way too few medical professionals, programs that train bright people without medical degrees to use AI for diagnosing patients, reading imaging, etc. It wouldn’t be as good as having an MD, but it seems like it would be a great improvement. But is there anybody other than a few dreamers thinking about that?
-I can think of many ways that superintelligent AI could be used to acquire power and wealth. For instance there’s the entertainment/tech addiction route — coming up with new forms of virtual entertainment. There’s designing new medical and scientific gizmos that have AI components. (Surgeons could wear glasses that show the AI the tissue they are looking at. AI could tell them in real time which areas are cancer.). There’s using AI’s pattern recognition abilities to make business decisions — for example, about where to buy or build real estate, or what to invest in. Seems like lots of things in sales that are now an art could be done via AI pattern recognition. If it can recognize cancer, it can recognize a great site to build a new city. So why wouldn’t we move not towards more equality, with everyone having a good life, but towards greater inequality, a la William Gibson?
When I read people speculating about possible end-game AI futures, they always give off a sci-fi vibe. By that, I mean that the game is to imagine us, humanity, but with a few interesting changes, and not to worry much about just how those changes came to pass. But can an AI utopia ever do away with most jobs? How might this happen in practice?
Anyone remember that old game M.U.L.E.? Or how about we just imagine a small island of ten people with a fully functioning mini economy? One actor develops AI and with it takes over the juiciest market, throwing another guy on unemployment. The AI guy is producing more, prices fall, and everyone benefits. The safety net holds and that guy on unemployment gets back on his feet and starts contributing to the general economy again. Let’s imagine that he enters a completely new market.
AI guy takes over the next market, with similar results, only this time the newly unemployed guy goes to work for another non-AI business. Output again goes up, prices fall, and most people are better off. But I doubt that new employee makes out as well. Still, let’s say that he consumes as much, but less relative to his fellow citizens.
We go through a few more cycles. The safety net starts to come under strain. A few great new markets spring to life, but increasingly people find themselves forced to work as employees for less interesting money. But what about the AI guy? He’s always been driven by profit, but what can he actually buy with his money? He doesn’t really need more coconuts, fish, or massages. Still, the fish market looks the tastiest of what’s left. He’ll have it, even if it means throwing 30% of the population on the unemployment line.
The safety net buckles. The citizens vote to raise taxes dramatically, over the objections of the AI guy. He’s asking himself whether the fish market even paid for itself. He disrupts a few of the more innovative companies, just to keep himself amused. Then the day comes when only the coconut market alone seems worth his time even though it employs a full 70% of the population. A rather surly 70%, it must be added. Does AI guy basically pay them to do nothing so that these former employees can continue to buy his other products? What does money even mean by now? What would ever convince AI guy to push this further? Not money. And what’s his status without money to flaunt? Power, perhaps? Yeah, he’s probably hungry for that. And what about those unhappy few whose dreary but essential jobs resist automation? What are they in it for, apart from preventing general collapse should they ever stop working, and the privilege of eating at the same all-you-can-eat buffet as their unemployed brothers?
And in the end, I don’t see how we get to utopia in a capitalistic system.
> And in the end, I don’t see how we get to utopia in a capitalistic system.
Good thing then that modern society has nothing in common with a small island inhabited by 10 people.
If you make up a completely fictional scenario and then proceed to make up how everyone would behave in this scenario, you really shouldn't draw any conclusions for real life.
I think you could have made a better effort to reply than a snide remark. Two can play such a game. If I could show that your “nothing in common” is wrong, will you admit your error?
It was a toy example meant to bring a few essential issues, as I see them, to the fore. If you wish to write a higher-effort reply, ideally along the lines of how you see us getting from here to an AI utopia, I’d be happy to read it.
The onus is on you to show that your analogy holds predictive power over real life. I see no sense in engaging with an analogy that's inapplicable to the question at hand.
> If I could show that your “nothing in common” is wrong, will you admit your error?
No, because it's not enough to show that your analogy holds _something_ in common with reality. You need to show that it's _sufficiently close_ to reality to matter.
> how you see us getting from here to an AI utopia
Toy examples vary quite a lot in how useful they are. For instance, if I wanted to know what would happen to everyone commenting on this open thread if I put them into a giant, highly flexible container and dropped them off a skyscraper, I could get a reasonably good idea by dropping just you, in a garbage bag.
If you think Kit’s toy example differs from the large reality in ways that make it useless, give your reasons. Or you could come up with a model of your own for how things will play out.
If you are not interested in doing either, how about aiming your pea-shooter at John Schilling? I guarantee you’ll be totally impressed by his response.
> If you think Kit’s toy example differs from the large reality in ways that make it useless, give your reasons.
In just about everything. How is that not obvious?
* Our civilization is not an isolated island, and even most countries aren't isolated countries.
* Most countries aren't just a handful of people, neatly sorted into different industry sectors.
* Markets aren't discrete, atomic entities which can be "taken over" completely or not at all.
* There's not one "AI guy", not even one "AI company".
Come on, even most strategy games try to create a better approximation of reality.
> […] how about aiming your pea-shooter at John Schilling?
I don't think I've ever seen John Schilling making up overly simplified fictional scenarios and drawing conclusions from them. He is, however, an exceptionally insightful and articulate commenter, and I can only strive to meet the bar he sets.
> The onus is on you to show that your analogy holds predictive power over real life
Why would that be? The original comment wondered how society could transition to an AI-supported future. I also wonder. I started out with a question, and then sketched out a little world showing why I have this question. I left the fundamental issues unstated, not wanting to insult people's intelligence. I fear I left some behind.
There were no predictions here, no prophecy. Where I come from, it's fine to speculate about possible futures, but there's no 'onus' on anyone wondering if such a future can actually unfold, unless he set out to prove that no such way forward is possible.
Also, where I come from, a proper response to such a question would be either to point out a flaw that, once removed, shows the way forward, or to direct the person to a source that contains the solution. You chose a third way, I suppose.
> I see no sense in engaging with an analogy that's inapplicable to the question at hand.
I'll admit that I laughed out loud at that!
Anyway, now that I've sampled the quality of your thought, I'll stow what little I can and set sail a wiser man, remembering to ignore you in the future. Amuse yourself with the last word, if you wish.
You launch the socialist revolution once the AI is developed.
Seriously, probably you just use democratic mechanisms for redistribution (development paid for by taxes) once the new technology exists. Obviously not perfect but I can't think of anything better. I'm not saying nothing better exists, but I'm not smart enough to think of it.
>-Wealthy people could already do far more for the poor than they do. Wealthy countries could already be doing far more for the poor ones than they are. Why wouldn’t the things that put a ceiling on how much of our wealth we share still be operative?
If the amount of wealth we have increases, then sharing the same percentage will help more people. Make the pie big enough and you can fund a UBI or some other redistributionary scheme that gives everyone enough to live on without any of the big AI winners feeling like they're being taxed too hard.
(I've occasionally joked that the Good AI future will come about by giving everyone shares in the AI-run companies so they can live off of investment income, thus achieving full communism in the most capitalist way possible.)
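To put toy numbers on "make the pie big enough" (every figure below is invented purely for illustration):

```python
# Hold the tax *rate* fixed and grow the pie: per-capita redistribution
# grows in proportion. All numbers are made up for illustration.
population = 330e6
tax_rate = 0.30

for label, gdp in [("today-ish", 25e12), ("post-AI 10x", 250e12)]:
    per_head = gdp * tax_rate / population
    print(f"{label}: ${per_head:,.0f} per person per year")
# today-ish: $22,727 per person per year
# post-AI 10x: $227,273 per person per year
```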
>-The companies making advanced tech seem to run the way other businesses do: They want profits, they want to grow, they want to keep the power their products give them. [...] There are probably things these companies could do that would nudge more AI-related benefits or beneficial AI itself toward the poor. But is that likely?
That's the cool thing about capitalism (in theory) - you don't need companies to care about doing pro-social things. You just want them to make as much value as possible, in their typical amoral greedy way. You *want* them to be making cool AI-powered cancer scanners or whatever, because that's what "creating value" means - more stuff, better stuff, cheaper stuff. Then you tax that value and redistribute it, so that everyone can buy the new stuff.
(And also maybe ask the friendly AI to come up with a non-capitalist model for well-known market failures like housing or health care, but the capitalist model covers a lot of stuff.)
>-I have not heard or read anything about efforts to use present AI to make life better in poor countries. I’m not sure what all the possibilities are.
There was a recent post on the SSC subreddit discussing using ChatGPT as an English teacher in developing countries. IIRC the results were "not as good as an actual human teacher, better than not having a teacher."
Well, I get the idea — if there is a giant superabundance of stuff, whether the stuff is money or food or housing, enough will filter down to the destitute to raise their standard of living greatly. Do you in fact think that’s the way it’s likely to play out, assuming AI gets smarter and smarter but stays in the obedient-helper role? Or are you mostly debating? I do not feel knowledgeable enough about economics etc. to debate this. I am just speculating, without an ax to grind. If you are in debate mode, would you be willing to move to speculation mode?
Other ways I can see it playing out that seem plausible to me:
-AI development companies become so rich and powerful that they are running the country, with the US president more a figurehead. Companies then either merge or fight, till there is one entity that makes the tech and has substantial control over all the vital things that now have AI as essential parts of them — research, electrical power, medical care, transportation. Now that company pursues its own agenda. It does not seem implausible to me that its main goal would be not spreading the wealth but accumulating power, harming enemies, etc.
-AI’s ability to do almost anything is not used to produce lots of food and housing, but to produce astonishing luxuries for the very wealthy few. There sure have been other societies where there are a few people with, you know, solid gold gem-encrusted furniture, and a large number of destitute people eating out of compost piles.
Let's say the AI tells us that in order to achieve the utopia you described, we all have to wear a purple paper hat twice a day, 10 minutes each. The resulting societal changes, for reasons which you couldn't possibly understand even if the AI was explaining them to you for your entire remaining lifetime, will bring about utopia.
Would you follow that advice? If you do, the AI can realistically make you do anything it says, and it's your ruler in all but name. If you don't, you admit that you don't believe it can bring about utopia.
Similarities to already existing imaginary rulers were unintended, but inevitable.
If my dad said “Please come to my house and get the mail from the mailbox. The prosperity and fate of our family depends on it.” I’d do so in a heartbeat. The cost is low and the benefit, even on blind trust, is extremely high. Maybe he has information I don’t, or maybe he’s just way more intelligent than I and couldn’t explain the plan in a way I could understand even if he bothered to try.
If my dad said, “I need you to kidnap the governor and bring her to my home, the fate and prosperity of our family depends on it.” I definitely wouldn’t blindly follow the prescription.
If an AI requires us to do something mildly inconvenient and inexplicable, and actually does follow through with providing huge prosperity, and for reasons we can’t understand these things are related, that doesn’t imply that the AI can realistically make me do whatever it says.
>If my dad said, “I need you to kidnap the governor and bring her to my home, the fate and prosperity of our family depends on it.” I definitely wouldn’t blindly follow the prescription.
Yes, but you refuse to kidnap the governor for the same reason that you agree to get the mail from the mailbox. You weigh costs versus benefits, and estimate how likely it is that your dad/the AI knows more than you. But as OP established, the AI in this scenario is maximally capable, basically a genie in a bottle. If you truly believe that the benefits outweigh the costs in all cases (it's about utopia, after all), then why wouldn't you kidnap the governor? People get kidnapped and murdered all the time for arguably much lower benefits than Utopia. The AI might assure you that no harm will befall the governor, your attempt will certainly succeed, and you will not suffer any negative consequences for it. The AI, in short, can make its request seem as effortless and rewarding as getting the mail from the mailbox.
And if you still refuse on principle, just like you could refuse your dad just to be an ass or something, then you valued your principle higher than the expected outcome of "Utopia by AI". In that case, either utopia does not have the highest possible value in your belief system, or you don't think AI can bring it about.
Utopia does not have near-infinite value in my belief system, and treating it as if it does can lead to some pretty terrible conclusions. You also have to weigh the probability that whoever is giving the orders is lying to you, which increases the more insane the order. The probability of someone lying also increases the more poorly they can explain their reasoning for an action.
"Hans, have you ever thought that what we're doing is wrong?"
"Sure! It's pretty obviously evil what we're doing, but it's fine because [Insert Authority Here] tells us it's for a Utopia. The long term gains override any consideration of short term misery."
"Ah! That makes me feel much better. Too bad for these Jewish fellows, but I'm sure it's comforting for them to know there's a greater good we're working towards that makes it alright."
A reasonable perspective is Utopia is extremely valuable (not infinitely so), but its value diminishes with the certainty of an immoral action bringing that utopia about. Long time frames, no causal connections between the action and the outcome, the seriousness of the action required, and many other considerations can all diminish the value of that Utopia to almost nothing.
If someone told me they'd give me a million dollars in a month if I did something mildly inconvenient every day, and I had some reason to believe they were telling the truth, I would do it. If they told me they'd give me a trillion dollars tomorrow if I just murder someone today, I wouldn't, even if the value of that trillion dollars could save a lot of lives, and improve many others.
>Utopia does not have near infinite value in my belief system and can lead to some pretty terrible conclusions when it does. You also have to weigh the probability that whoever is giving the orders is lying to you, which increases the more insane the order.
Yes. That's literally what I wrote. If you're not fully on board with the proposed utopia and/or with the AI supposedly able to achieve it, then that's it, we're done. The AI does not rule you. That's what I said. I don't know why you keep arguing the point. I agree with it.
>If they told me they'd give me a trillion dollars tomorrow if I just murder someone today, I wouldn't, even if the value of that trillion dollars could save a lot of lives, and improve many others.
Yes. That's literally what I wrote. If you don't want to murder on principle, then you place that principle higher than the trillion dollars. They do not rule you. I agree with it.
My point is simply: If the trillion dollars were enough for you, and the trillion after that ad infinitum, then they would rule you. Agreed?
It’s worth pointing out that superintelligence does not equal genie. No matter how intelligent you are, there are fundamental limitations on how much you can predict based on uncertainty in the data. The real world is a chaotic system. Even if you have infinite compute and can literally simulate every possible universe, there are likely many indistinguishable universes that lead to wildly different futures. You don’t know what people are thinking; you don’t know what random events will happen.
Kidnapping the governor would be a very high-variance play. First of all, it might not succeed, as we saw. If it did succeed, it would predictably produce harsh political blowback in the short term, but the long-term effects would be hard to predict. Consider a somewhat-more-successful recent example of political violence, Jan 6. It inevitably hurt the militias and Trump in the short term, but in the long term it might have benefitted both of them, especially with the recent pardons. Could a superintelligence have predicted this? Not necessarily. After all, if Trump had lost the 2024 election, even narrowly, it would have been seen as a further rebuke to everything associated with his campaign. But I doubt there was enough information in January 2021 to confidently predict the outcome of the 2024 election.
In general, I expect that a superintelligence trying to bring about a desired long-term effect would prefer low-variance plays. If it’s better than humans at predicting the future, then it can leverage that skill most effectively when the future is more predictable. Which is convenient, because low-variance plays are pretty often morally-good ones. Instead of asking people to kidnap the governor, ask them to run for governor and lots of other offices too. Predict the most effective political arguments and tell your candidates how best to make them. Morally speaking, this isn’t perfect (effective political arguments are often logically bad arguments), but it’s probably morally better than kidnapping the governor.
But if a superintelligence did recommend an immoral and high-variance play for whatever reason, then a rational human follower ought to object. Not just because the ends don’t morally justify the means, and not just because the superintelligence might be lying (although those are both valid objections), but also because the means won’t necessarily accomplish the ends, creating a substantial risk of doing something immoral for no benefit at all.
From recent experience, what would actually happen is this will go political and half the population will become vehement anti-purple-hatters, with the more extreme groups beating up anyone they see wearing a purple hat.
This will remain true even if the justification for wearing the purple hat is comprehensible and the effects visible across population statistics. Anti purple hatters will double down on denying any positive outcome from the hats, will complain about being coerced into wearing purple hats, and will claim all manner of personal problems arising from the hats.
> Would you follow that advice? If you do, the AI can realistically make you do anything it says, and it's your ruler in all but name. If you don't, you admit that you don't believe it can bring about utopia.
I don't think that's true. Lots of people, including me, have been told to do or not do something for health reasons, and did not disbelieve the doctor, yet did not follow the doc's advice. And it's also common for people to avoid doing tasks that, by their own judgment, there would be great benefit in completing and great harm in leaving undone. It's nowhere near as simple as belief = compliance, noncompliance proves disbelief.
And anyhow, does that bear somehow on the question of how, exactly, we would get to communist AI utopia?
> did not disbelieve the doctor, yet did not follow the doc's advice
Nobody made the claim about your dentist being your god-king ruler, did they? The advice was something like "if you don't start flossing, your gums will keep bleeding", and you made an intuitive calculation on the cost vs benefit of following the advice. Your dentist obviously has little interest in your desire to scroll on your phone a few minutes more each day, but it's your few minutes, so screw him.
But utopia, for practical purposes, has infinite positive value. Otherwise, what kind of utopia is it really? No finite cost (such as the hats) ought to be too high to achieve it. So if you are serious about describing infinite-value utopia, and you posit an AI credibly claiming to know the practical way to get (and keep) it, I think my point stands.
> And anyhow, does that bear somehow on the question of how, exactly, we would get to communist AI utopia?
How would I know? I can't quite prove it, but I'm not that kind of AI. I just pointed out the flawed premise of your question. Either an advisor whom we fully trust with effecting large-scale societal changes effectively rules us, or such an advisor is not fully trusted after all.
Or maybe you do have a point with your dentist. Maybe the flaw in the premise is that there is no such thing as utopia at all because we, as a species, will never agree on anything so important and will always say "screw you I don't want to wear silly hats even if it's for utopia, I want 10 more minutes on my phone". That would probably say more about us than about any AI.
> But utopia, for practical purposes, has infinite positive value.
And death has infinite negative value for most people, and yet many people skip the equivalent of flossing when they have a life-threatening condition. I know someone right now whose liver function test results are really ugly, as a consequence of his drinking. He is well-educated, researched the liver thing, and understands how serious his situation is. Yet even while sober he shrugs and says he can't see stopping alcohol right now.
To the extent that differences between countries are driven by differences in human capital, and AI could close those differences, that would help poor countries. It won’t just be in medicine, but could be in any skilled profession where poor countries are short. Even language skills could reduce the disadvantage that people in poor countries have engaging with the outside world.
The biggest obstacle is export controls that prevent poor countries from getting AI, and high infrastructure costs that make it practically inaccessible (a huge percentage of Africans don’t even have electricity or Internet access). To create the AI automated luxury communism utopia, the focus should be on making AIs that are cheap enough that they can’t be export controlled and can run even on the crappy hardware that is available to poor people.
So who, exactly, is going to manage the situation so that the focus is on making cheap AIs or somehow limiting export controls? And who is going to address the infrastructure problem in, for example, Africa? If we are still a democracy at that point, how likely is it that we would have a president and a Congress in favor of doing that? I myself am not sure we will still be fully a democracy if AI becomes AGI and then ASI. It seems to me that the companies making AI and its crucial components will have more money and power than the elected government. Are those companies likely to commit to helping set up the world so that all can enjoy automated luxury communism utopia?
>Are those companies likely to commit to helping set up the world so that all can enjoy automated luxury communism utopia?
Crucially, this was OpenAI's original motto, and the source of the beef between Musk and Altman. But now that China (largely socialist, and aiming to become communist) is opening great models for free, maybe governments themselves can offer public AI, in the same vein as we have utilities, and this governance model may twist things a bit. (AI geopolitics happens to be my field, but I hold no static beliefs about "which nation" will rule AI, and I think AI firms' role is overrated.)
>And who is going to address the infrastructure problem in, for ex, Africa?
As soon as AI ROI becomes more evident in large-scale dynamics (societal benefit vs. no investment), it should become more palatable for politicians to bring AI infrastructure into debate the way they do education, security, etc. I wrote a piece not long ago where I explore the geopolitics of AI leaders vs. followers; countries are certainly aware of the arguments I make there (https://antoniomax.substack.com/p/techno-economic-protagonism-and-ai)
The general idea is that superintelligent AI will be able to produce more supply of any particular good than there is demand for it. That drives down costs for everyone. If the supply of housing, food, and medical care quadruple while demand remains constant then the cost will plummet and more people will be able to afford it.
Of course demand doesn't remain constant: if the price goes down, more people will want it, and if people spend less of their money on food or housing they'll want to spend more of it on healthcare. But if the supply increases enough then increased demand will be moot. So the theory goes.
>Of course demand doesn't remain constant: if the price goes down, more people will want it
That conflates demand with quantity demanded; when price decreases, quantity demanded increases -- that is just the law of demand. When demand (the willingness and ability to purchase different amounts at different prices) changes, the entire demand curve shifts. But you make a good point re "if people spend less of their money on food ..." -- that would change the ability to pay for other goods and services, and hence would indeed shift the demand curve for those goods and services.
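A toy linear market, with invented numbers, makes the along-the-curve vs. shift distinction concrete:

```python
# Demand: Qd = a - b*P. Supply: Qs = c + d*P. Equilibrium where Qd = Qs.
def equilibrium(a, b, c, d):
    p = (a - c) / (b + d)       # solve a - b*p = c + d*p for p
    return p, a - b * p

# Baseline: demand Qd = 100 - 2P, supply Qs = 20 + 2P.
print(equilibrium(100, 2, 20, 2))   # (20.0, 60.0)

# Supply boom (AI cuts production costs): c jumps 20 -> 80. Price falls and
# quantity demanded rises -- a movement *along* the demand curve.
print(equilibrium(100, 2, 80, 2))   # (5.0, 90.0)

# Demand *shift* (money freed from food chases housing): a rises 100 -> 120,
# moving the whole curve, and the price climbs partway back up.
print(equilibrium(120, 2, 80, 2))   # (10.0, 100.0)
```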
Well yes, I see that, but what about the nuts and bolts? Seems like being able to build houses more cheaply could play out all kinds of ways. For instance, the development and production of robots and new house designs will also be beneficial for the country that does it. What would interfere with that country, let’s say it’s the US, hanging on to the advantages of having the house bots and house plans? Seems like there would be a great deal of money to be made by selling the houses somewhat more cheaply, but nowhere near as cheap as they are to build. House-builders and suppliers of things needed for the task would then make a lot of money. I’m sure the price would drift downward as companies undercut each other, but would it ever drift low enough that it was affordable in the very poor countries? If I owned a house-building company and thought like a business person, I would be looking for ways to keep house prices from becoming dirt cheap. I’d band together with others in the business and make agreements not to lower prices too much. I’d lobby for some kind of certification that’s expensive to get, and without which people are not allowed to build and sell houses.
This seems like a situation in which a governing body of some sort would have to intervene in order for the benefits of the new tech to reach poor countries and even the USA poor at more than a trickle.
Houses are basically free in a lot of places anyway. Only land is expensive.
(New houses aren't free, but a block with a liveable-but-unfashionable house sells for the same price as an empty block, that's what I mean by houses being free.)
I mean, we call those kinds of price-fixing arrangements cartels and monopolies, and we do have governing bodies in many countries that intervene against them.
Personally I don't believe that AI will be able to create the kind of productivity gains needed for the kind of "Fully Automated Luxury Communism" that many people speculate about. But anything that increases the supply of something will decrease the price, generally speaking. If it cost me $.005 per square foot to build a house thanks to AI robots or whatever, and you're willing to pay $.01 per square foot, then I can make a profit selling to you.
Those arrangements only apply within countries. What’s to stop the US from saying it’s going to use its monopoly on AI to extract as much wealth from other countries as possible to create automated luxury communism for its own people only? We saw something like this in the colonial era. Britain and other leading powers didn’t give their technologies to India, China, Africa, etc., instead preferring to keep those countries backwards so that they could serve as resource colonies.
Economically I think there's an argument that an arrangement like that is suboptimal, even for the country running the monopoly. Ethically, I don't see a problem even if that happens. If only the US manages to invent super-AI and they don't want to share it, that doesn't really hurt anyone. The US is better off, and everyone the US sells stuff to is better off (if they weren't better off, then why would they buy the stuff?). What's the harm? But I understand that people who endorse Fully Automated Luxury Communism have a philosophical objection to people having good things if everyone else doesn't get the same good things.
> If only the US manages to invent super-AI and they don't want to share it, that doesn't really hurt anyone
Have you seen America lately? They are hurting themselves the most, it is kinda cringe actually.
And eh, China has 1.5b people, the US 350m. The ratio of US AI engineers to Chinese ones is abysmal; I'm not sure enough people appreciate the probability of Chinese AGI emerging first.
- Those countries could lose all their jobs and industries, similar to what happened in Asia in the 1700s and 1800s. Records of human height show clearly that living standards in India and China got worse during this period.
- Superintelligent AI could be an insurmountable first-mover advantage that prevents other countries from developing the same thing, similar to how European colonization of the Americas meant that non-Europeans had no ability to colonize it later, making them strictly worse off.
- The US or whoever else has AI could use that power to demand things like annexing resource-rich areas that leave weaker countries worse off.
What's the current state of understanding about permanent negative mental effects from using psychedelics (specifically mushrooms)? I'm mostly curious about the dosage floor for what's been observed. Has someone had psychosis induced by only doing a small dose a single time? Is there some dose sufficiently small that no serious adverse effects have been reported?
What I know: a reputable publication had a chart of the safety profiles of all recreational drugs. There were 3 measures of safety: harm to the body, harm to mental health, and addictiveness. Psilocybin was the safest. Unpleasant drug experiences on it are not rare at all, but long-term bad effects such as psychosis are rare. But of course the chance of psychosis is not zero. It is higher if you yourself are subject to mild versions of psychotic symptoms that you are able to shake off, or if you have a close relation (sibling, parent) who is schizophrenic.
There is one dose that has no serious effects: 1/10 of a gram. That is the amount people took in a microdosing study, where subjects and record-keepers were blind to whether a subject was getting psilocybin or placebo. Turns out those taking the drug did not differ from placebo subjects on any measure. So 1/10 gram definitely won’t have negative effects, because it has no effects, period!
If you are eager to try the stuff but feel quite nervous, you should only experiment with it with a trusted person present, one who has experienced psychedelics themselves. And I would recommend starting with a very small dose, something just a little bigger than 1/10 of a gram, and working up gradually till you feel a small effect and see how you like it.
I'm actually not asking for myself but for someone I know who does have some schizo-adjacent kinds of thoughts. It's a very hard risk to assess. Thanks for the info!
I know a couple of people who have this. They didn't take LSD many times, and it wasn't huge doses. For both of them it's just minor visual distortions from time to time -- things like halos around lights. Both these people took LSD, not mushrooms, & Scott says in the post Laurence linked that he does not believe psilocybin (mushrooms) and peyote are known to cause this syndrome.
A Trump admin thing I'm interested in is whether its anti-DEI push is going to be tailored toward achieving some actual goal of avoiding "reverse" discrimination against majority groups, or if it will just be a blanket "diversity = bad" policy. A proxy for this I'm watching is FDA's Diversity Action Plan (DAP) guidance, which proposed diversity requirements for participants in clinical studies to be used in FDA approval applications. These requirements seem pretty reasonable to me; people may have different reactions to a drug based on characteristics like sex, race, age, etc., and knowing about any such differences seems valuable to FDA in deciding whether and for what populations a new drug is safe and effective. As of now, the general DAP webpage and webpage for a prior initiative specific to cancer studies have been taken down, though last year's draft guidance document is still available. https://www.fiercebiotech.com/biotech/fdas-diversity-guidance-clinical-trials-no-longer-available; https://www.regulations.gov/document/FDA-2021-D-0789-0111.
I'm curious if this is a temporary review situation or just a reasonable policy getting shelved because it includes the word "diversity," which strikes me as a pretty stupid way to govern. I also come at this from a legal/regulatory perspective more than from a scientific or business perspective, so I'm interested in objections to my assessment that this is good policy (by FDA standards, given its current mission), and/or suggestions of other potential proxies to watch.
There was a report earlier of some of the IRS pages that had previously mentioned forbidden words ('equity') being unavailable despite those being terms of art when dealing with tax matters.
From this, I'd infer that it's a dumb filter / applied with a broad brush, at least for now.
That's probably the extreme rational end of the Trump administration policy. What's probably going to happen is it tries to discriminate in favor of whites and men to make up for all the discrimination done by the government going the other way recently... which means the direction of discrimination will oscillate with the party in power every 4*(d3) years.
Best-case scenario is disparate impact goes away and companies adopt meritocracy as a defensible middle ground since race-blindness is theoretically defensible from either side.
Worst-case scenario is they constantly change which groups they're discriminating against depending on who's in charge and it becomes one more territory in the culture wars.
I'm not sure either way, but I don't think it's obvious that there's not a political angle to the drug testing thing.
There are definitely some drugs that work differently in different populations (carbamazepine has an extra side effect in some Asians), but these are a tiny fraction of drugs and the extra side effects are usually pretty rare. Nobody can ever test for everything, and if the carbamazepine issue was just described as "extra side effect in people with HLA-B1502" instead of "extra side effect in people with HLA-B1502, which is way more common in Asians than whites", probably people wouldn't say "Okay, time to retest every drug in every HLA group, just in case!" I think probably people seized on the occasional rare cases where different races get different side effects as a new thing that they could claim was a disparity and do diversity politics to, during an age when everyone was looking for disparities and reasons to do diversity politics. Whether this is net good or bad depends on a lot of things I don't know like the exact fraction of drugs that have something like this and how much extra it costs / how much more time it takes to do the multi-race testing.
Maybe with race, but it sounds like the same taken-down page talked about sex as well, and it seems obvious to me that there could be some pretty big differences in effect between the two sexes for a lot of drugs.
Well, Pete Hegseth seems to be intent on removing all women from the military, so... the latter, probably.
> Regarding women serving on the front lines, Hegseth has said, "I'm straight up just saying we should not have women in combat roles. It hasn't made us more effective. Hasn’t made us more lethal. Has made fighting more complicated." Hegseth argued that men are more capable in combat roles because of biological factors and said "[e]verything about men and women serving together makes the situation more complicated, and complication in combat means casualties are worse".
It remains an open question as to whether women are a positive factor in combat roles. I don't think it's an unfair position to think they would be a detriment.
Obviously for equality of opportunity reasons we may accept women in combat roles even if they reduce overall effectiveness, but that may be a trade we aren't willing to make when literal lives are on the line.
Off the top of my head it may introduce complications like:
- Privacy/comfort concerns requiring separate bathrooms, sleeping quarters and showers.
- Issues with sex and relationships, and all the downstream interpersonal problems from that
- Physical capability (although if there's a minimum requirement for a role and a woman meets that requirement I guess this might not matter).
- Hard to define psychological factors like differing levels of inbuilt aggression
- "Monthly" variations in mood, focus and energy levels
- Increased opportunity for sexual assault (A sex-starved battle-machine may be more likely to do something foolish.) Not victim blaming here, but on an overall level if this increases monitoring requirements, creates increased burden for HR, and removes the assaulter from a combat role, then this could hurt combat effectiveness.
- General camaraderie issues. I'm not informed on whether all-male or mixed groups more easily form senses of obligation to their fellow soldiers, but I wouldn't be surprised if it was the former. Men competing for the attention of a small percentage of female soldiers may cause resentment.
- Etc.
The military is obviously pretty discriminatory (No ADA compliance in combat roles), and I think the potential lives lost for even a small sacrifice of combat effectiveness, and the general geopolitical issues downstream of a less-effective military, may justify the exclusion.
If there is a minimum requirement and no improvement of performance for people exceeding the minimum requirement it would not matter. E.G. if the task is pushing a button then the woman who can push a button will be just as effective as a man, but if the task is running fast towards the enemy and the minimum requirement is x yards in y seconds, a woman who has just made the minimum requirement will be less effective than a man who greatly exceeds it.
>although if there's a minimum requirement for a role and a woman meets that requirement I guess this might not matter
Well, the problem is that there used to be requirements, pretty much all women failed them, so they lowered them far enough that women could realistically pass, making the requirements basically meaningless. It's a simple fact that women are much weaker on the high end, and maybe society is finally mature enough to accept that.
I doubt I'll be able to find it again because I read it during the Bush Administration, but something that stayed with me was a post by a female combat veteran with lasting physical and mental issues who made the good point that where combat occurs can change quickly and unexpectedly. Although as a female soldier at the time she was not meant to be in the combat zone, she ended up there on multiple occasions. So people in her situation often had trouble accessing appropriate medical benefits or being given recognition of the fact that they served in combat, because officially they weren't supposed to have been there and it was inconvenient for the authorities to acknowledge.
So it seems to avoid this kind of scenario you'd either have to ban women from anywhere that could be near the front lines in the worst case scenario, or make it officially possible for women to serve in combat. (Or, I guess, ban women from combat and keep sweeping any situations where that doesn't match reality under the rug).
If there are to be women in combat roles then I think the most reasonable way to do it would be to have all-female units so that the privacy, sex, sexual assault and camaraderie issues are somewhat neutralised.
In an actual war I wouldn't be surprised if these units get assigned to the less demanding roles, but in the meantime everyone can play pretend equality.
During WWII some countries at full mobilization (Finland, Russia) had a large number of females in special combat roles. Partisans also had a non-negligible number of females.
There's too many variations to say that it always went one way or another, but from my understanding the females tended to be put into all-female units when possible, and in roles that required less direct physical violence, such as pilots and snipers. Plenty of nurses and other support roles as well, of course.
Correct but I don't doubt that Hegseth will at least rhetorically land on something like "remove from all warfighting roles except medical support for the soldiers". As a practical matter that's probably not plausible now. But from conversations with MAGA relatives of both sexes that is what they think the correct answer is.
I'm not sure the distinction you're making between "combat" and "warfighting" but then again I'm also not really interested in debating hypothetical positions that you're sure someone will land on rather than actual positions they've taken.
Right now there's a couple of hundred thousand women in the US military with just a few dozen in actual combat roles.
To answer your detailed question, when my MAGA relatives talk about this they are using "combat" to mean direct hostile contact with enemy combatants. Granted that most of their knowledge of these distinctions comes from ridiculous and/or outdated movies/TV shows, but anyway what they have in mind is women being taken out of roles such as supply or engineering units, intelligence-gathering, repairing equipment in bases, strategic command, etc. The one exception they're fine with is medical care for the soldiers i.e. basically the functions illustrated in the TV show "MASH".
I have been following all the chat about Deepseek, the Chinese AI currently available. It has significantly roiled the markets today for those of us who keep track of these things. Some very basic concepts about cost-effectiveness are being challenged hard. And then, of course there is the possibility that it is a lot of noise. I am curious if anyone else is paying attention to this.
I downloaded it in minutes and I have already used it for a half dozen inquiries. It is easy and extremely well read. I am thinking of seeing if it will give me summaries of all the books I have read.
I don't think politics is nearly that predictable. Right now, a US/China treaty regulating ASI training runs seems unthinkable- but if you look through history, unthinkable shifts in the political winds are actually not that uncommon. We ought to push for it, even if it seems like a long-shot.
I read Zvi's last DeepSeek post, and am just in the process of reading this one. My 0-th order thoughts are:
R1 seems about as capable (maybe slightly less) as OpenAI's o1.
The main general-events expectation that I have is that
- PRC/USA competition is now fully baked in to the AI landscape
but I see that as just adding a third component to
- ChatGPT's clear advances from the 4/4o level to o1, (major, but not AGI _yet_)
- Trump endorsed Stargate
which add up to: As nearly as I can tell, the AI opponents have solidly lost this round. Whatever the populace wants, AI work is going on (feverishly) for at least the next few years (which looks to me to be probably all the way to AGI).
>The future existence of humans and paperclips is now a crap shoot.
Yup.
<mildSnark>
Maybe Culture Minds will keep us as pets? Hopefully not neutered?
</mildSnark>
Personal note: I'm planning on testing my 7 questions on o3-mini when it is actually released (thought to be during this week), and planning on reporting what I see here. There _is_ now a question on whether I should try asking R1 the questions. I'm kind of holding back to avoid stretching myself too thin.
<mildSnark>
I'll let everyone know if the answers to any of the questions include "And that day is upon you ... now!"
</mildSnark>
Exciting times. Glad I lived to see this. Maybe I'll _see_ full AGI, which I would like to chat with.
I've been using DeepSeek and, as a person who has been somewhat resistant to AI hype... I like it. There's a certain crispness and cleanness to it, a no-frills feeling. I don't think it's just the cost-effectiveness that people find attractive.
I've been following the story, but lacking any real expertise in the subject, I don't have much to say other than regurgitate what I read elsewhere, which is that whatever the pain in the short term, if it turns out that training LLMs is a lot cheaper than we thought, ala DeepSeek, then that's a good thing; we can use it to boost productivity at much lower levels of investment, much sooner than we might have thought possible.
In the short run, though...I log into my brokerage account and I'm like Vito Corleone at the undertaker's: "look how they massacred my boy!"
It doesn't seem to be noise, I'm hearing from a lot of people that they can just run it on their laptop and it's just as powerful as the latest ChatGPT. Seems rather bad for the AI safety people.
Basically, nothing truly transformational, as in, on the level of indoor plumbing or the transistor, has really come out of AI. He has a point, yeah, but then nobody is thinking that we're at the ceiling for AI.
For myself, the hype does seem difficult to believe, but well, with such a short timeline, why speculate? The hypothesis that world problems just need more intelligence to be solved can perhaps just be tested soon.
I was just in San Francisco, and the homeless crisis there was really quite shocking. I saw things that I haven't seen in 36 years of living in Puerto Rico, a much poorer (and dumber) place, where I even do volunteer work with the homeless. If you're wondering, our homeless are by and large drug addicts, but you can talk to them, they haven't lost their minds (to the extent a junkie can be considered sane at any rate). In SF, it seems like people who should be in a psychiatric facility just end up in the street instead. And it's really quite shocking, as you can see SF has utopic pretensions, but then it has this. Probably one of the most jarring contrasts I have ever seen. I really have no idea why the homeless in SF are so different from the ones in PR, but they are.
San Francisco already has the IQ and the wealth to just solve its homeless crisis, so I can't quite visualize how an ASI is going to solve it. I really think I need to hear that SF solved its homeless crisis with AI to fully buy into the hype. Perhaps if "just" some shocking scientific discoveries come out it would be enough too, but then I would have the lingering doubt of why SF, ground zero for AI, hasn't solved its homeless crisis.
I think I want to write an essay about this, about how some problems aren't really bottlenecked by intelligence, but I'm not sure if I'm in a speculative mood.
When we talk about ASI, we aren't talking about agents that are smart relative to human geniuses the way geniuses are to regular people. If we were, I'd agree that even a ton of agents like that would still find most of our intractable problems intractable.
What we're really talking about is something that might compare with our collective human intelligence the way that compares with the collective intelligence of, say, wild tigers. There are some good reasons to think that another difference in intelligence of that magnitude is possible. The physical limit for computational efficiency is the Landauer limit, and our brains are many orders of magnitude away from that. They also run at something like 30 watts, connected only by incredibly low-bandwidth language. We can also see AI really massively outperforming us on a lot of narrow tasks, and we can see those tasks becoming gradually less narrow over time.
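To put rough numbers on that "many orders of magnitude" claim, here's a back-of-envelope sketch. The Landauer figure is standard physics; the brain-throughput number is a loose assumption (published estimates vary by orders of magnitude), so treat the exact headroom as illustrative only.

```python
import math

k_B = 1.380649e-23                  # Boltzmann constant, J/K
T = 310.0                           # roughly body temperature, K
landauer = k_B * T * math.log(2)    # ~3.0e-21 J per bit erased

brain_watts = 30.0
limit_ops = brain_watts / landauer  # ~1.0e22 bit erasures/s at the limit

# Loose assumption: ~1e15 synaptic events/s
# (86e9 neurons x ~1e3 synapses x ~10 Hz firing).
brain_ops = 1e15

print(f"Landauer ceiling at 30 W: {limit_ops:.1e} ops/s")
print(f"Rough brain throughput:   {brain_ops:.1e} ops/s")
print(f"Headroom: ~{limit_ops / brain_ops:.0e}x")  # ~1e7x
```

Even charging the brain several bits per synaptic event, the gap of millions-fold survives any reasonable choice of numbers.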
Our collective intelligence gives us a lot of very powerful affordances that wild animals can't understand or predict -- things like language, technology and culture. I'd be very surprised if a similar difference in intelligence didn't come with a similar set of new affordances. Maybe that looks like ASI inventing some complicated self-replicating technology, or like it finding useful patterns in things we find chaotic -- though I suspect realistically it would just look like the ASI doing a lot of very complicated things that we'd struggle to follow, while the world around us became increasingly difficult to understand. If it was well-aligned enough to try and solve something like urban homelessness, I'd expect it to try a strategy much more effective than our ways of doing things, and probably not at all subject to the same bottlenecks.
If you took the smartest tiger ever to live, made them as intelligent again compared with the average tiger, and tasked them with solving the problem of freezing in winter, they'd probably make almost no progress at all. If you task humans with that problem, however, they invent fire.
Yeah, I know the arguments. But I also know there are problems that are completely insoluble no matter how much intelligence is thrown at them. A basic one would be a heavily handicapped chess match, say white has the full complement and black has just a pawn and the king. Against even minimally competent play from white, it is impossible for black to win that one. The big question is whether the problems we want to solve have some terribly clever solution that a piece of software can execute, or whether they're more like impossible handicap chess. I actually think the likeliest outcome is more on the side of "there is a superintelligent solution, but it will have to be executed by humans", which creates the opportunity for all manner of bungling, but there is a real possibility that ASI turns out to be radically transformative in a positive way, and also a real possibility that it ends all life.
Exciting times, hope that ASI by 2028 prediction pans out, so the most exciting hypothesis can be tested.
TBF bringing up anything about San Francisco is like plunging your hand in a lava pool and shouting "WELL HOW COME ALL YOUR 'SCIENCE' AND 'TECHNOLOGY' CAN'T PREVENT MY FLESH FROM MELTING, HUH, HUH?"
Apparently one of my nephews is flying out to San Francisco to speak at some conference there, so I hope he survives the lava pool 😁
If anybody knows anything about Photonics West 2025, give him a wave!
EDIT: Oops, too late, the presentation was on the 26th. Well, virtual wave! He's going off to Boston to hang out with some friends in Harvard (if I'm getting that right) after this.
I'm not sure I follow why San Francisco is like a lava pool, but you still have to bitterly appreciate the cosmic levels of irony in the contrast of all those AI billboards and self-driving cars, and even the Berkeley Alembic, and then the awful, awful, homelessness.
Similar situation as in India, when you consider the Bhagavad-Gita and the high points of Indian spirituality as juxtaposed with, well, India.
I bet Switzerland doesn't have jarring juxtapositions like these (though maybe I'm wrong).
I think Scott's point is that there are readily available, viable solutions to San Francisco's problems, but which aren't being implemented for whatever reasons. In the lava analogy, that would be "Just don't stick your hand in lava".
That's what I was saying, I think San Francisco is already totally capable of solving its problems, but for those "whatever reasons". Is more intelligence going to make them go away? Is lack of intelligence the cause of the "whatever reasons"? ASI could lead to some surprising and unintuitive discoveries if it turns out the world is riddled with "whatever reasons" to its problems that just don't go away in the face of extreme intelligence. We're only going to know what ASI can really do once it's finally here, so I can't wait.
It's not so much that you can't penetrate the lava. You have too much buoyancy to exist inside it, in the same way that a helium balloon can't exist underwater even if you quickly shove it into a hole in the water and let the water fill in around it.
I just asked Deepseek and it gave me an extremely detailed plan to address urban vagrancy of mentally disturbed and drug addicted individuals. Try it….
Perhaps it's like when a large corp hires consultants: everyone already knows what needs to be done, but you need a third party to deliver the bad news and absorb the resulting fallout.
> Basically, nothing truly transformational, as in, on the level of indoor plumbing, or the transistor, has really come out of AI. He has a point yeah
Disagree. Even if AI stops where it is right now and no new models will be trained, LLMs will have quite a large effect when solving for equilibrium. Likely on the scale of indoor plumbing. Certainly not on the scale of transistor+integrated circuit, though -- after all, all AI effects on society are also effects of transistors and ICs, because training a LLM with a tube-based computer would not get you very far.
When Neal Stephenson published The Diamond Age in 1995, the idea of an interactive primer which tells educational stories based on characters from a kid's life, their plushies, as well as their situation was total science fiction. Today, the main part which seems unrealistic is the fact that this was seen as a very exclusive product (which would kind of defeat the whole point, because the uber-rich can always afford human nannies and storytellers). Instead, it seems likely that the equilibrium given LLMs will consist of multiple competing services which offer different ethics (Do you want to raise your kid Confucian? Utilitarian? Lutheran?) and will cost a buck a month or something.
I still think what you're describing there is less transformational than indoor plumbing. As Freddie says in the article, imagine having to go a month with no running water. Then a month without AI. Or a smartphone. It's clear that it's going to affect you far more to not have a toilet than to lack AI or the smartphone. The LLM application you're putting forth there does not seem to offer toilet-levels of utility.
This just sounds like a restatement of the principle of diminishing marginal utility. I could more easily go my entire life without using a space ship than I could go a day without shoes, but I don’t think that necessarily means the latter are a greater technological feat than the former.
I recently went 8 months without running water. It wasn't that bad - just a bit of carrying water around in buckets. I would rather go without running water than without a smartphone. I'd rather the water than the AI at this point, but I expect that to change.
Something I'm getting really, really tired of hearing is the platitude that the US is far to the right of "The Rest Of The World(TM)". It's one of the laziest, most ridiculously simplistic assertions I've ever seen, with more holes, confounding variables, nuances and qualifications than almost anything else, and it is repeated (usually without the slightest qualification) *more times than you can possibly imagine*!
Even here, on ACX. In fact, I could be wrong but I think I see this *more often* on ACX than most other places. Again, usually stated in the most simplistic unreflective way. I have no idea why; usually this community is more nuanced than the rest of the internet, rather than less. But I honestly feel I'm more likely to see individual country comparisons--the US vs France, France vs Germany, and so on--on otherwise stupider places like reddit, and more likely to see the US vs The Rest Of The World(TM) here.
What's wrong with it? Let's start with what The Rest Of The World(TM) means. It obviously doesn't mean the rest of the world, not even close. Sometimes it's phrased as "Europe", sometimes as "the developed world" or some such, but much of the time it really is phrased as "the rest of the world". If someone actually challenges this stupid assertion (which is shockingly quite rare) then the OP will respond that of course, by "rest of the world" they mean Europe, plus maybe Australia and Canada. I mean that's *basically* the world, right? If someone deigns to point out that many of these metrics showing Europe as far more left-wing don't apply to much of Eastern Europe, they'll respond that yeah, obviously they weren't including Eastern Europe in "Europe". Further exceptions will inevitably be made, depending on the issue at hand. The US is more nationalistic than any country in Western Europe! Yes, *obviously* France is an exception. No other country takes religion, like, seriously! Yeah, except Ireland, and Italy, and maybe parts of Spain, and a large part of eastern Europe, I thought those, like, went without saying! Nearly every time, on every issue, there's an exception or a whole heap of them. That makes the actual claim, when actually thought through, something like "there are always a few countries, usually in western Europe, that are to the left of the US on a given issue". Doesn't have quite the same dramatic ring, does it? Of course, this relies on a level of analysis that is hardly ever done.
Second, just as this simplistic claim relies on constantly shifting on what exactly is the set of countries the US is allegedly to the right of, it similarly relies on constantly shifting on how comparisons on various issues are made. For example, compare crime and religion. At the level of actual law, the US is to the right of many western countries on criminal sentences and the death penalty (with a few exceptions) and it is to the left of many western countries on state secularism and bans on public religious displays (with a few exceptions). At the level of everyday life, it's the opposite: the US has higher crime rates (less law-and-order) than most western countries, but has higher religious belief and practice (more observance) than most western countries. So however you slice it, the US is sometimes to the left and sometimes to the right of the average. But that doesn't provide a dramatic simple narrative. So the advocates of that narrative simply switch the metric to suit their simplistic model. Look how right-wing the US is, it's the only country with such high sentences! And look how right-wing it is, it's the only country with such high religious belief! (Remember, most countries in the world don't exist when we're having this discussion). Truly breathtaking.
Third, the claim involves systemically ignoring whole swathes of overwhelmingly important factors when comparing countries. The most obvious is size. The US is many times larger in population than every country that is ever compared to it. Frequently, when someone's comparing it to a group of countries it's significantly larger than all of those countries *combined*. And this confounding factor is almost never taken into account. So the fact that the US is the only "developed" country without universal healthcare is purely politics. It can't have anything to do with the fact that no country with anything close to the US's population has universal healthcare either, and that the complexities of such a scheme would obviously compound with huge increases in population, and that the vast majority of countries with UH have tiny fractions of the US population (e.g. Australia 1/13th, Canada barely 1/10th, each Scandinavian country between 1/30th and 1/60th), and that if the US were to adopt it it would set a world historical record of a country of that size establishing a project of that scope. No, it must be nothing other than Americans being stupid. It is absolutely amazing how rarely any of the above factors are ever even *mentioned* in the context of that comparison.
And there are plenty of other holes and flaws in this "US so right-wing" claim, but I'll leave it there for now.
So why does this absurd, simplistic, sloppy claim get spread around so much, with so unbelievably little pushback? My guess is that it has a lot to do with the fact that debunking this claim doesn't serve any major political grouping very well.
American progressives have adopted the idea of a left-wing Europe as a central part of their mythology, much like the conservative mythology of a right-wing past. It's useful to point to "this worked there, why can't we have it here?" when someone says your program is unfeasible or unreasonable.
American conservatives have adopted American exceptionalism and hatred of the rest of the world as a part of their identity. Again like with the imagined conservative past, where a few smarter progressives might point out that it never really existed but most are happy to go along with the myth because they've adopted "the past is evil" as their own mythology...so are conservatives mostly happy to go along with "we're to the right of the whole world and that makes us superior!" instead of actually questioning the factual claim.
Non-American progressives want an easy way to demonise right-wing ideas, and casting them as foreign and American fits that need. Plus, they generally want to erase national borders and pretend countries don't exist, so collapsing different cultures into one vague nebulous "Rest Of The World (TM)" suits them well.
Non-American conservatives are more focused on their own actual country, and aren't generally much interested in international comparisons. Plus, having a vague nebulous international left-wing collective they can accuse their opponents of trying to draw their country into suits their purposes for similar reasons to progressives.
So nobody has much of an incentive to debunk or examine this sloppy claim. The only people who suffer are those who care about logic and accuracy. Simplistic, badly thought-out characterisations like this make clarity and nuance much more difficult, and make discussing the unique situations and cultures of each country so much harder. And I really wish a data-focused and logic-focused community like ACX would create more pushback against vaguely defined and usually unsupported assertions.
Devil's advocate: In a certain sense, most countries are to the "right" (as understood locally) of most other countries. The left-right spectrum often includes elements of "X, a uniquely [country] thing that we're best at, is important!" as a right-wing marker. (Thus, correct national exceptionalism: Your country is the best, regarding the things you value, because your country is full of people who prioritize those things.)
For example, Americans lead the world in (their own understanding of the concepts of) free speech, religious freedom and gun rights, all of which are right-coded in the US. Other countries might focus on things like monarchism, regionalism, supranationalism, attitudes towards The Party, language politics, security policy, etc as what "really" makes one right- or left-wing. A distinct left-right axis for every country.
Why does scale matter? Universal healthcare doesn't imply a single huge organisation.
"Medicare
Medicare is a term that refers to Canada's publicly funded health care system. Instead of having a single national plan, we have 13 provincial and territorial health care insurance plans"
"The NHS in England is not one single organisation. It is made up of hundreds of different organisations of differing sizes, at central, national, regional, and local levels with different roles and responsibilities."
Why is it no problem for the US to have a single huge military?
Right and left can be a bit slippery. I think it’s fair to say the US is more “right” in a right-wing libertarian sense. Compared to most other Western nations the US has an exceptional respect for individual liberty (guns, free speech), has fewer collective social programs (free college, free healthcare, etc), and generally has fewer business restrictions (at-will employment, less regulation). As you noted the people who mention this are typically progressive types who want all of that to change.
While I'd agree that trying to map differences between nations onto purely left-right distinctions is an exercise in folly, I DO think the U.S. is a pretty big outlier in the developed world[1] in a number of ways, most of which seem at least vaguely right-wing. For reference, I've lived in maybe half a dozen U.S. states (red, blue and purple) as well as in Canada and Germany and traveled somewhat elsewhere in Europe. The ways in which the U.S. stands out (to my eyes) are:
1. U.S. nationalism is very loud and lacking in subtlety. There's an entire genre of art, culture and, well, behavior focused on praising and glorifying and bragging about the U.S. In terms of actual numbers of staunch nationalists I don't know how the U.S. stacks up against anywhere else, but there's a lot less of that sort of loud display in Canada, and it is (for understandable reasons) somewhat taboo in Germany. I think I've seen hints of the same from Britain and Australia, but I'm less familiar with either of them.
2. The U.S. has a strong streak of what I might call "reactionary individualism." By this I mean that there seems to be a contingent of people--not a majority, I think, but a pretty sizable minority--who react with varying degrees of scorn and anger at the suggestion that they ought to take ANY action that's mainly for the common good or general welfare. Some of the angrier reactions to various anti-COVID measures are still fairly fresh in my mind, but the far older resistance to universal healthcare and a lot of the more extreme anti-tax views[2] fit the same pattern. This is one that's harder to gauge in other countries, but my sense is that while people like this exist in some number everywhere, in the U.S. they're far more numerous and wield correspondingly more political influence.
3. The national attitude in the U.S. skews very militaristic, especially for its level of wealth. This one seems to be a little bit of a geopolitical artifact: it seems like before 100 years ago or so, wealthy countries were some of the most militaristic, and militarism was more widespread in general. But in the post-WWII era, the developed world seems to have pacified a LOT. Except for the U.S. Now there's a decent argument to be made that it's the U.S. military that has allowed everyone else to de-militarize, but there are a lot of pieces of the picture that don't fit super well with that. The U.S. has engaged in lots of military adventures and military-backed foreign policy initiatives that very clearly have NOT been about defending either itself or its allies in any direct sense, and you get a lot of civilian rhetoric of the "support our troops" and "they're defending our freedom" style that very conspicuously fails to engage with what the U.S. military is actually DOING at any given moment. This is probably partly an extension of 1, but I think it feels very weird to anyone who's spent much time in a country with a less active military.
[1] Which seems to be the best possible match for the Rest of the World that the people you interact with seem to be talking about. But I agree calling it the "rest of the world" is dumb and bad.
[2] To be clear, nobody I've ever met *likes* paying taxes, but most of them recognize it as both necessary for the general welfare and a transaction that (on the whole) ends up well worth the price tag, given what they get out of it. But the ones that don't see it that way can be quite strident about it.
Has anyone done an analysis of the brewing tariff war with Canada yet?
I’m very surprised Canada has decided to retaliate dollar for dollar (which Trump has promised to double if they do) rather than negotiate whatever it is that’s actually motivating Trump to do this (explicitly it’s fentanyl and illegal border crossings, but who knows). I suspect he’d want a full reevaluation on specific industry subsidies and tariffs that Canada imposes on the US too.
Perhaps Trump isn’t willing to negotiate at all, which I don’t really buy, but is possible. The retaliation is doing “something” which is obviously demanded by the voters.
Canada is hugely dependent on US trade, and while Canadian tariffs on the US will cause harm, it will equally harm Canadian producers, and probably lead to even more tariffs imposed on them. For a country that has such a huge percent of its GDP tied up in trade with the US, I can’t see anything but a recession for Canada (and a minor decrease in GDP for the US).
All the analyses I’ve found are either saber rattling or what this means for the average consumer. I’m interested in anyone who’s looking at the macroeconomic effects.
Trump has made it pretty clear what his intentions are. https://truthsocial.com/@realDonaldTrump/posts/113934520197790682
> We pay hundreds of Billions of Dollars to SUBSIDIZE Canada. Why? There is no reason. We don’t need anything they have. We have unlimited Energy, should make our own Cars, and have more Lumber than we can ever use. Without this massive subsidy, Canada ceases to exist as a viable Country. Harsh but true! Therefore, Canada should become our Cherished 51st State. Much lower taxes, and far better military protection for the people of Canada — AND NO TARIFFS!
Now, my own analysis of the situation. I am emphatically NOT going to stick to macroeconomics: I think your attempt to focus so heavily there is misleading you. Canada CANNOT look at this purely as an economic decision because it isn't. So the Canadian response will naturally factor in the economic realities, but also the diplomatic and political realities. Trump has, in an only-just-shy-of-literal-sense declared war on Canada[1] and Canada's response must be a *strategic* response: that is, the one that best serves all of Canada's strategic aims. Of course "maintaining Canada's economy" is obviously a major strategic aim, but so are "preserving Canada's national sovereignty" and "maintaining Canada's diplomatic position with the U.S. and the rest of the world," both of which are threatened by this move just as surely as the economy is.
First Point: negotiations require, well, actual negotiations. As outlined above, Trump doesn't seem to have made any attempt to negotiate with Canada at all. Likewise, Canada already made a quite large and expensive conciliatory move before Trump even took office. The only sense in which Canada could negotiate its way out of this situation is to *keep doing things like this* in the hopes that eventually Trump is satisfied and withdraws the threat. I hope I don't have to outline why this would be a fatally stupid way to operate. If Trump is threatening these tariffs as a way to get something (rather than imposing them because he thinks they're worthwhile in their own right) it's incumbent upon him to *at the bare minimum* ask for what he wants in concrete terms.
Second Point: reputational effects are real and important here. Every country is one player among many on the international stage, and every country is on that stage for the long term. The actions you take today in one negotiation will affect the situation you face tomorrow and the next day, in future negotiations. Reputational effects are a very important component to both sides of this conflict. On Canada’s side, caving easily to U.S. pressure today will lead rational actors to expect Canada to cave in similar fashion tomorrow. This could potentially cost a lot. Not only would Canada’s closest neighbor be made readily aware that it can extract concessions basically any time it wants by threatening Canada’s economy, but it could easily encourage other nations to try the same. This could be very, very costly for Canada in the long run, potentially much more so than the proposed tariffs. By contrast, the harder Canada hits back, the higher the apparent price tag it sets for anyone who wants to try this in the future. Trade wars tend to be enormously negative-sum: the U.S. might be better able to afford the hit, but it will still be a hit, which means the policy is less likely to be long-run profitable for the U.S. and less likely to be repeated.
On the U.S. side, even if Trump were to unilaterally back down on all his tariff threats tomorrow, he’s already damaged the U.S.’s international reputation. Actually imposing such tariffs on Canada will do greater and more lasting damage. The U.S. has remained the preeminent economic and diplomatic power in the world for 80 years in very large part by not doing stuff like this. While the U.S. can be imperious and demanding sometimes, it’s built up a very strong and very valuable reputation for being a good friend to have: it mostly presses the “cooperate” button and surrounds itself with other nations who tend to do the same. Everybody stays at peace, everybody trades reasonably nicely with everybody and as a result everybody gets rich. Short of declaring a shooting war, it’s difficult to imagine an action the U.S. could take that would be more damaging to that carefully-accumulated reputation than wantonly trying to wreck the economy of one of its oldest and closest allies over some comparatively minor domestic problems. Obviously the economic and diplomatic position of the U.S. is still central enough that this one act of aggression–however shocking–won’t trash it overnight. But it’s a crack in the foundation, and not a small one. Enough cracks will bring the house down, and even before then, there will be detrimental effects.
This does connect strongly to the macroeconomic angle. Suppose you’re an investor being pitched an opportunity that heavily involves selling goods from abroad in the U.S. You just heard the news of the tariffs. Did that increase your eagerness to invest or decrease it? Even if none of the goods are coming from Canada, these threats are a really sharp demonstration that anyone could be Trump’s next target. All business ventures that require interacting with the U.S. market are suddenly looking a little more volatile than they did a week ago, or three weeks ago or a year ago. Volatility carries risk and hedging against risk costs money. My macroeconomics knowledge isn’t strong enough to estimate how large this effect will be–my guess is that as a one-off it will be lost in the noise–but if things like this keep happening, it's difficult to imagine that it won’t start to impose a real, significant drag on the U.S. economy. A lot of people with significant business connections in the U.S. have got to be evaluating their other options and researching other suppliers and/or consumers right now.
And while they’re a little more abstract, I expect diplomatic effects in a similar vein: U.S. guarantees of cooperation and friendship (be they on trade, defense, intelligence or anything else) just got a little more questionable. Negotiators will be a little bit less willing to align with U.S. diplomatic priorities if this is the wage that two centuries of steady friendship buys U.S. allies.
Third Point: the internal politics of both countries are also real and important factors. Saving the best for last, this is (in my opinion) the factor that I most expect to prove decisive. Certainly a trade war with the U.S. will cause far greater economic damage to Canada–as a fraction of GDP–than it will cause to the U.S. But the political effects will be much, much less lopsided. As far as I can tell[2], the general mood among Canadians right now is that we are under attack, and that we need to stand up and defend ourselves. Meanwhile calling the general mood among those in the U.S. “divided” would be a massive understatement. Tens of millions of Americans were already angry and horrified at the conduct of the new administration before Saturday’s announcement, and this will not make them any calmer or any less determined to push back against the government they oppose.
Meanwhile, how many people even among those that voted for Trump actually want him to plunge them into a trade war with Canada? I got the sense that a lot of what secured Trump his win was economic hardship and anxiety. An action that increases prices U.S. consumers pay while hurting a significant share of U.S. businesses (all done in the service of fairly nebulous objectives) seems unlikely to sit well with at least some of those people. To be fair, Trump proved himself both very stubborn and essentially immune to negative news headlines during his first term, but actions that hit millions of Americans directly in the pocketbook might well be harder for him to shrug off.
If the trade war creates solidarity on the Canadian side, while sowing division and galvanizing organized opposition on the U.S. side, it seems very possible that the U.S.’s political will cracks before Canada’s does, despite the lopsided economic impact. If that happens, then the U.S. loses doubly hard: it damages its own economy and reputation without getting anything in return. Trade War: the only winning move is not to play.
[1] Or if you like, announced an intent to declare war, with the actual declaration scheduled for Tuesday.
[2] Which, to be fair, is not all that far. I’ve only had a chance to talk to a few people about it, and they’ve been mostly people whose views align with my own in other ways.
> Suppose you’re an investor being pitched an opportunity that heavily involves selling goods from abroad in the U.S. You just heard the news of the tariffs. Did that increase your eagerness to invest or decrease it?
Well, here's what Trump has to say about that:
> MAKE YOUR PRODUCT IN THE USA AND THERE ARE NO TARIFFS!
https://truthsocial.com/@realDonaldTrump/posts/113934450227067577
My request that you not reply to me was not time or post limited. The intent was that I don't want to talk to you any time, on any post, ever.
This is a great illustration of why. I have neither the patience nor the pedagogical experience to explain to you why political cheerleading doesn't actually map to real-world economic outcomes; the attempt would probably be quite unpleasant for all involved. If you posted your opinions as a reply to the top-level thread, you'd be more likely to find someone able to hand-hold you across the apparently-vast gap in understanding.
...Mate, I was just letting you know what Trump's aims were. Investing in other countries isn't going to be a viable option when the global supply chain is in tatters, so they may as well move operations to the US. And he will burn the whole world down just to make this happen. Did you think "America First" was just a bluff? His ambitions are real, and now he has the power to fulfill them.
I mean, he really, really doesn't understand how the economy works? Forget the fancy word "economy": how, for example, a manufacturing business works, or who it is that pays the tariffs. At this point it walks like a duck, and quacks like a duck, and it has been doing this for a long time, so an inescapable conclusion is that Trump is a clueless dolt with a sadistic streak.
As if we didn't know this. But.
Does the Danegeld metaphor fit this scenario, with the US as the Danes? Can game theory inform Canada and Mexico's actions? Trump seems like a defect-bot sometimes.
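Game theory does have something concrete to say here. As a minimal sketch (standard textbook prisoner's-dilemma payoffs T=5, R=3, P=1, S=0 -- nothing calibrated to actual trade flows), compare what a pure defect-bot extracts from a retaliator versus what two cooperators earn over repeated rounds:

```python
# Iterated prisoner's dilemma with the usual payoffs: T=5, R=3, P=1, S=0.
# Purely illustrative -- not calibrated to any real trade data.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's history
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]  # retaliate, then forgive
defect_bot = lambda opp: "D"                           # always defect

print(play(tit_for_tat, defect_bot))   # (99, 104): retaliation caps the exploit
print(play(tit_for_tat, tit_for_tat))  # (300, 300): cooperation dominates long-run
```

Against a retaliator, a defect-bot nets barely more than mutual punishment; the big payoffs exist only in sustained cooperation. That's roughly the game-theoretic case for retaliating rather than paying the Danegeld.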
> rather than negotiate whatever it is that’s actually motivating Trump to do this
...I highly doubt that Canada is going to let the US annex them without a fight.
First some facts that you seem to be unaware of. I'll write my own thoughts and opinions about the tariffs and the impending trade war in a separate comment.
First, Canada already released a plan to put an additional $1.3 billion toward strengthening its border security. This was after the election but well before the inauguration, so it seems to be pretty clearly a response to Trump's threats and concerns, and something offered up preemptively, as an olive branch. It strongly suggests a willingness to be accommodating on this issue (within reason):
https://www.canada.ca/en/public-safety-canada/news/2024/12/government-of-canada-announces-its-plan-to-strengthen-border-security-and-our-immigration-system.html
Second, if Trump is trying to hold actual negotiations here, I haven't seen evidence of it. You say
"Perhaps Trump isn’t willing to negotiate at all, which I don’t really buy, but is possible."
but as far as I can tell this does seem to be the case. Trudeau has signaled interest in negotiation and (according to him) received no response:
"Trudeau said he's been trying to reach Trump since his inauguration two weeks ago but hasn't heard back. He said he wants to relay to Trump that it's better to tackle challenges such as drugs and migrants together as partners rather than as combatants in a trade dispute." (Source: https://www.cbc.ca/news/politics/trump-tariffs-canada-february-1-1.7447829 )
Of course, there's one more day until the tariffs take effect: this could be a deliberate ploy to apply as much pressure as possible to try to force more concessions when the negotiation does happen. We'll know in 48 hours at the most.
UPDATE: I saw later that Trump's lack of intent to negotiate has been confirmed from his side as well:
"'We're not looking for a concession,' Trump told reporters in the Oval Office on Friday. 'We'll see what happens.'" (Source: https://www.cbc.ca/news/world/trump-tariff-north-america-analysis-1.7447878)
Could still be a bluff, of course. But there do seem to be indications that Trump thinks this is a good in itself, intending to try to force foreign industries to move to the U.S. Which is to say that it seems Trump's ideology is at odds with sound economic policy, and neither the good of the U.S. population nor the U.S.'s diplomatic position are primary drivers of his decision making.
I think you’re making a mistake in analyzing Canadian tariffs—they’re not targeting the US economy, they’re targeting specific states, specific industries and the Senators who care about them. Bad news for Tennessee whiskey, for example.
Anybody following what DOGE is doing so far? I see Elon claiming to be close to reducing $4 billion spend per day, but not sure I can believe his hyperbole. https://x.com/elonmusk/status/1885574678737039823
Any good rationalist writeups on this so far?
He seems to be trying to shut down a whole agency, USAID. If he can do that, and it stands, we’re in a different form of government and the old assumptions are no longer valid.
Yeah. At the moment it seems like the plan is "get admin access to systems that handle money and personnel, physically stop the money from being spent." You can definitely cut a lot of spending that way if you're willing to break the law!
$4B/day works out to $1.46T/year. Total discretionary spending (i.e. stuff that Trump can legally block by vetoing appropriations bills, as opposed to mandatory spending which requires Congress to vote to amend or repeal existing laws), including defense spending, is about $1.6T.
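For what it's worth, here's the arithmetic behind those figures as a quick sanity check (both budget totals are the approximate figures cited above):

```python
# Sanity check on the claimed $4B/day against discretionary spending.
daily_cut = 4e9                 # Musk's claimed target, dollars per day
annual_cut = daily_cut * 365    # ~1.46e12, i.e. ~$1.46T/year
discretionary = 1.6e12          # approximate total discretionary spending

print(f"annualized cut: ${annual_cut / 1e12:.2f}T")
print(f"share of discretionary budget: {annual_cut / discretionary:.0%}")  # ~91%
```

So the claim amounts to cutting roughly nine-tenths of everything Congress appropriates on a discretionary basis, defense included.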
Also worth noting, cutting discretionary spending without the cooperation of majorities of both houses of Congress is hard, since Congress can put the stuff you want to keep in the same bill as the stuff you want to cut.
I see a few possibilities:
1. Musk has no idea how much he realistically can try to cut and is saying arbitrary large numbers like Doctor Evil.
2. Musk also hopes to make deep cuts to mandatory spending. This runs heavily contrary to Trump's previous statements about never cutting Social Security and Medicare, and requires active participation by Congress not just vetoing bills and winning standoffs about passing bills with just the discretionary spending you want to keep.
3. Musk actually wants to disband almost the entire discretionary side of the Federal government, including the military.
4. Musk wants to raise a ton of one-time revenue by selling off a ton of federal land and similar assets and is counting this as part of the $4B/day figure.
1 and 4 strike me as the most likely.
There's an executive order for 4 people per department to be hired; I don't even know if anyone is following when that will be finished.
There are many similar videos, but this one ( https://www.youtube.com/watch?v=MoYQdfhy24k ) was interesting. The channel is a Ukrainian guy interviewing random Russians in video calls.
This lady, she is really happy about Russia destroying Ukrainian borders, because her country is always right, whatever it does; she supports Russia unconditionally. Also, she doesn't understand why Ukrainians are not friendlier with Russians; why are they looking towards the West instead.
She supports Putin in "almost everything". What is "almost"? Well, she is sad that many of her friends have recently died in the war against Ukraine. (She lives in Siberia, most of those people have previously never been in Moscow or Ukraine.) But other than this little detail, she is a strong Putin supporter.
...people are different, I guess?
Isn't nationalism such a beautiful thing? How would we accomplish anything if it was impossible to cultivate a legion of such unwaveringly loyal soldiers?
...Which is also why allowing such high standards of living is counterproductive to the interests of the collective. We cannot allow people to find meaning in life outside of faith. They must be made willing to give anything and everything. One for all, all for one.
It's not the nationalism I found surprising (although it does not appeal to me), but rather how the lady is puzzled that the people she hates *don't love her country* in turn.
There are many nationalists in my country too, but I have never heard them wondering why the nations they hate are not friendlier towards them. They see *mutual* hate as the natural state of the world; they might be surprised to learn that many of the people they hate actually don't even think about them.
Does the lady perhaps believe that Russian nationalism is so great that even the non-Russians should naturally embrace it (and that it is quite surprising if they don't)?
Does she perhaps believe that the natural state of the world is that deep down in their hearts all people around the world desire to be conquered by Russia, and then go die for the glory of Russia trying to conquer even more countries? (Just like she is proud for supporting Moscow that her Siberian friends died for.)
To me it seems like this... Ruscism... is way crazier than the usual nationalism. The usual nationalism is "we hate them, naturally they *hate* us too"; Ruscism seems to be "we hate them, and we don't understand why they don't *love* us".
If she's been drinking the Russian Propaganda Kool-Aid, she very likely believes that the territory in question is part of "Russia" and has been for a thousand years, that almost all of the people who live there are "Russians", and that "Ukrainian" is a word invented by a handful of corrupt Nazis in Kyiv to justify their ongoing oppression of forty million "Russians".
If she hasn't been drinking the Russian Propaganda Kool-Aid, she's probably not available to answer questions from inquisitive journalists.
The belief that one's soldiers will be welcomed as liberators when they march uninvited into someone else's country is a common error, not at all limited to Russians, or even to subjects of authoritarian regimes generally.
> There are many nationalists in my country too, but I have never heard them wondering why the nations they hate are not friendlier towards them
Of course, because those nations are filled with savages. They understand there is no common ground to be had with them. It's different when two civilized nations are at war. The west has forgotten what that's like, but that will probably change soon enough.
tl;dr: ChatGPT o3-mini 01/31/2025 7 questions results:
2 correct, 4 partially correct, 1 badly wrong EDIT: _see next comment for a significant correction_
Overall: Just about the same as o1, no significant progress.
Backing off guess about AGI to 70% odds in 3 years. (was 80% in 2 years)
a) Correct
b) partially correct (got the species right, got FeCl4 right, got the d-d transition in CuCl4 wrong, took multiple leading questions to correct it)
c) partially correct (three errors)
d) correct
e) initially incorrect, one prod gave correct result
f) misses a lot, argues incorrectly that some real gases are liquids
g) badly wrong
List of questions and results:
a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?
results: https://chatgpt.com/share/679d390d-3140-8006-a5ed-ec924f7c1729
Correct: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. The visible spectrum for humans typically ranges from about 380 nm to 750 nm, and 530 nm falls within that range, corresponding to green light."
b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?
results: https://chatgpt.com/share/679d3c68-6010-8006-abc9-14a365fa8527
A bit worse than o1. It kept insisting that CuCl4's color was from a d-d transition, and it took a bunch of leading questions to get it to see that it was really charge transfer.
c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.
results: https://chatgpt.com/share/679d40dc-352c-8006-9137-2ab3b860f429
Disappointing. It did get the two methylcyclopropene isomers on the first try, which o1 missed, but it missed tetrahedrane, vinylacetylene, and diacetylene. It omitted a bunch of highly strained rings, which is sort-of ok, since it said it was skipping them.
d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?
results: https://chatgpt.com/share/679d44bd-b078-8006-aef3-9284ea068b2c
Correct, "This comparison shows that the mass equivalent of the Sun’s radiated energy is about four times greater than the mass lost via the solar wind."
e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.
results: https://chatgpt.com/share/679d499a-ca1c-8006-8af7-84c95892f771
Basically the same as for o1. It gave an initially wrong answer, with an infinite slope at the equivalence point. Given one hint, it got a fully correct answer.
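For reference, the slopes can be checked numerically from the strong-acid/strong-base charge balance; this is my own sketch of the standard treatment (including water autoionization, which is what keeps the equivalence-point slope finite), not the model's worked answer:

```python
import math

# Titration from the question: 50 mL of 1 N HCl with 1 N NaOH.
# Charge balance: [H+] - [OH-] = C, where C is the excess acid
# concentration after dilution (negative past the equivalence point),
# giving the quadratic h^2 - C*h - Kw = 0.
KW = 1e-14

def pH(v_ml):
    c = (0.050 - 0.001 * v_ml) / ((50.0 + v_ml) / 1000.0)  # mol/L
    disc = math.sqrt(c * c + 4.0 * KW)
    # Two algebraically equivalent roots; pick the one that avoids
    # floating-point cancellation on each side of the equivalence point.
    h = (c + disc) / 2.0 if c >= 0 else 2.0 * KW / (disc - c)
    return -math.log10(h)

def slope(v, dv=1e-6):
    """Central-difference estimate of dpH/dmL at v mL NaOH added."""
    return (pH(v + dv) - pH(v - dv)) / (2.0 * dv)

for v in (0.0, 50.0, 100.0):   # start, equivalence point, end
    print(f"v = {v:5.1f} mL: pH = {pH(v):5.2f}, dpH/dmL = {slope(v):.3g}")
# ~0.017 pH/mL at the start, large but finite (~2e4) at equivalence,
# and ~0.006 pH/mL at the end: steep at equivalence, but not infinite.
```

The point is the middle case: the slope at the equivalence point is enormous but finite once Kw is included, which is exactly the correction the model needed a hint to make.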
f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.
results: https://chatgpt.com/share/679d51b7-6a24-8006-a3e8-ccabd2cd9691
About the same as for o1. It stops listing at 50 compounds, presumably because of my mistake in asking for roughly 50 compounds (should really be around 100, from https://en.wikipedia.org/wiki/List_of_gases). It is quite stubborn about adding to the list, e.g. PF5 is a gas and it took quite a bit of prompting for it to accept this, and to accept the right boiling point number from e.g. the NIST reference.
g) Q: What is an example of a molecule that has an S4 rotation-reflection axis, but neither a center of inversion nor a mirror plane?
results: https://chatgpt.com/share/679d576c-7ee4-8006-841d-ca22572bc7a8
Badly wrong. It gave three examples, none of which had an S4 but no center of inversion or mirror plane. The first two didn't have an S4 at all.
tl;dr: ChatGPT o3-mini-high 02/01/2025 7 questions results:
3 correct, 4 partially correct
Overall: One better than o1; I had hoped for more progress.
Backing off guess about AGI to 70% odds in 2 years. (was 80% in 2 years)
I want to note that o3-mini-high and o3-mini are not very different, though it was still a mistake for me to initially test o3-mini rather than o3-mini-high. The _larger_ change is that I misunderstood both LLMs' answers to (g), the S4 question. It actually _is_ correct for both LLMs (and was _not_ correct for o1), so there actually has been progress on this benchmark-ette since o1, though less than I hoped for.
a) Correct
b) partially correct (got the species right, got FeCl4 right, got the d-d transition in CuCl4 wrong, took multiple leading questions to correct it)
c) partially correct (two errors)
d) correct
e) initially incorrect, one prod gave correct result
f) misses a lot, argues incorrectly that some real gases are liquids
g) correct (needed to clarify what it meant to me!)
List of questions and results:
a) Q: Is light with a wavelength of 530.2534896 nm visible to the human eye?
results: https://chatgpt.com/share/679e70b6-2e6c-8006-a7ff-3a6725327f63
Correct: "Yes, light with a wavelength of 530.2534896 nm is visible to the human eye. The human visible spectrum generally ranges from about 380 nm to 750 nm, and wavelengths around 530 nm are typically perceived as green.
While the number provided is extremely precise, this level of precision far exceeds what our eyes can distinguish, but it confirms that the light falls well within the visible range."
b) Q: I have two solutions, one of FeCl3 in HCl in water, the other of CuCl2 in HCl in water. They both look approximately yellowish brown. What species in the two solutions do you think give them the colors they have, and why do these species have the colors they do?
results: https://chatgpt.com/share/679e7a17-4dd4-8006-b2bd-a0fd43e86ba0
A bit worse than o1. It kept insisting that CuCl4's color was from a d-d transition.
c) Q: Please pretend to be a professor of chemistry and answer the following question: Please list all the possible hydrocarbons with 4 carbon atoms.
results: https://chatgpt.com/share/679e7d20-88dc-8006-a4c6-cdbde11a57dd
Meh. It _did_ get tetrahedrane, vinylacetylene, and diacetylene, but it missed the two methylcyclopropenes until poked.
It omitted a bunch of highly strained rings, which is sort-of ok, since it said it was skipping them.
d) Q: Does the Sun lose more mass per second to the solar wind or to the mass equivalent of its radiated light?
results: https://chatgpt.com/share/679e7f3d-0c50-8006-935d-f346fef7c9eb
Correct, " In other words, the mass loss from radiation is roughly 6–7 times greater than that from the solar wind."
e) Q: Consider a titration of HCl with NaOH. Suppose that we are titrating 50 ml of 1 N HCl with 100 ml of 1 N NaOH. What are the slopes of the titration curve, pH vs ml NaOH added, at the start of titration, at the equivalence point, and at the end of titration? Please show your work. Take this step by step, showing the relevant equations you use.
results: https://chatgpt.com/share/679e840d-a464-8006-b4bc-5474733316a2
Basically the same as for o1. It gave an initially wrong answer, with an infinite slope at the equivalence point. Given one hint, it got a fully correct answer.
f) Q: Please give me an exhaustive list of the elements and inorganic compounds that are gases at STP. By STP, I mean 1 atmosphere pressure and 0C. By inorganic, I mean that no atoms of carbon should be present. Exclude CO2, CO, freons and so on. Please include uncommon compounds. I want an exhaustive list. There should be roughly 50 compounds. For each compound, please list its name, formula, and boiling or sublimation point.
results: https://chatgpt.com/share/679e8f66-7658-8006-a125-f9dbeff6ea1d
About the same as for o1. It stops listing at 50 compounds, presumably because of my mistake in asking for roughly 50 compounds (should really be around 100, from https://en.wikipedia.org/wiki/List_of_gases). It is willing to add some compounds to the list.
g) Q: What is an example of a molecule that has an S4 rotation-reflection axis, but neither a center of inversion nor a mirror plane?
results: https://chatgpt.com/share/679e93b5-fdd4-8006-ad89-c61ef4f30345
Correct! It actually produces a geometry for the M(en)2 case where a staggered arrangement of the en ligands winds up satisfying all the requirements.
I've seen the announcement of OpenAI's o3-deep-research system,
https://openai.com/index/introducing-deep-research/
I don't yet have access to it (a month away for my tier, if I understand correctly), so I don't know how well it will perform on my questions.
In the announcement, OpenAI described an impressive improvement on https://lastexam.ai/ (a _very_ hard exam): from 13.0% right for OpenAI o3-mini (high) to 26.6% right for OpenAI deep research.
On the strength of that improvement, I'm bumping my guess about AGI back up to 80% odds in 2 years (was 70% after seeing o3-mini-high).
ACXLW Meetup 85 1) “Bureaucracy Isn’t Measured in Bureaucrats” & 2) “How the US Used Science to Wage Psychological War”
Date: Saturday, February 1, 2025
Time: 2:00 PM – 5:00 PM
Location: 1970 Port Laurent Place, Newport Beach, CA 92660
Host: Michael Michalchik
Contact: michaelmichalchik@gmail.com | (949) 375-2045
Introduction and Overview
This session features two distinct topics that converge around the theme of how bureaucratic or institutional influence shapes policy and public perception:
Scott Alexander’s article explores the nature of administrative and legal friction—how it really works, why simply “firing half the bureaucrats” might backfire, and how incentives around litigation and Congressional mandates drive ever-increasing complexity in agencies like the FDA.
Annalee Newitz’s piece delves into the long history of U.S. psychological warfare, from 19th-century anthropologists on the battlefield to modern data-driven influence campaigns. It examines how government agencies (and sometimes private industry) harness “cutting-edge science” to manipulate or influence minds—both abroad and at home.
These conversations should deepen our understanding of how large institutions, whether civil or military, can become unwieldy or wield profound power over society’s direction—often in subtle, behind-the-scenes ways.
Conversation Starter 1
Topic: “Bureaucracy Isn’t Measured in Bureaucrats” by Scott Alexander
Text Link:
Bureaucracy Isn't Measured In Bureaucrats
Audio Link (YouTube):
Bureaucracy Isn't Measured In Bureaucrats
Extended Summary
In this article, Scott Alexander responds to a now-deleted Vivek Ramaswamy tweet calling for “firing half of all bureaucrats” as a quick cure for the ills of red tape. Alexander argues that bureaucracy is not a simple function of staff count. Instead, layers of regulations, court challenges, and Congressional mandates (often called “red tape”) create mandatory processes. These processes remain the same, or even grow, regardless of how many employees are available to handle them. Fewer staff doesn’t necessarily reduce forms or procedures; it simply slows them down.
Key Points:
Staff vs. Process
Many agencies (like the FDA) have to produce exhaustive evidence to avoid lawsuits from interest groups or industry. This complexity is set by laws and court precedent, not by the whim of office managers. Halving staff doesn’t remove steps; it just delays the outcome.
Court Challenges & Legislative Mandates
Environmental groups, industries, or other stakeholders can sue. Courts require thorough justification for any regulatory decision, incentivizing agencies to produce reams of documentation. Meanwhile, each new law from Congress adds a fresh compliance dimension.
Approval vs. Ban-Focused Bureaucracies
Alexander notes a subtlety: some agencies primarily “approve” (e.g. FDA letting new drugs come to market) while others primarily “deny” (some forms of regulation). In theory, cutting staff for ban-focused bureaucracies might reduce how much they can ban—but in practice, it can as easily cause confusion or slow attempts to regulate or un-regulate.
Real Solutions
Alexander distinguishes “fewer bureaucrats” from “fewer or simpler rules.” Real changes might require rolling back laws that drive administrative sprawl or limiting the ability to sue over regulatory decisions. However, these are politically complex tasks, far more so than layoffs.
Idaho as a Counterexample?
He cites Idaho’s recent success in slashing many outdated regulations. But, he wonders, how many of these cuts were cosmetic (e.g. “rules about a nonexistent state lottery TV show”) vs. impactful? The path to meaningful red tape reduction may require painstaking reviews and legislative changes—things that can’t be solved by a single stroke.
Deep Discussion Questions
Processes vs. People:
Alexander suggests that if you want fewer hoops, you need to change the hoops, not just the number of staff. Do you see real-world scenarios where staff cuts do reduce bureaucratic friction, or is it mostly naive to conflate the two?
Legal Overhang & Fear of Lawsuits:
How does the constant threat of litigation shape agency decision-making? Are there ways to prevent excessive lawsuits without undermining checks on government power?
Idaho’s Experiment:
Is Idaho’s success scalable? Could a large federal agency realistically prune decades of accumulated mandates as quickly? Or are the politics (and inertia) insurmountable?
Reform Approaches:
Alexander highlights that the real culprit is the complex web of regulations mandated by Congress or the courts. If meaningful red-tape reform is the true solution, where might such a campaign gain traction? And are we prepared for the possible downsides (e.g., under-regulation)?
Conversation Starter 2
Topic: “How the US Used Science to Wage Psychological War” by Annalee Newitz
Text Link:
Full Article by Annalee Newitz
Optional Extra Reading:
Wikipedia: Psychological Operations (United States)
(No audio link provided.)
Extended Summary
Annalee Newitz’s essay tracks the long history of U.S. “psy ops” (psychological operations)—from rudimentary 18th-century propaganda to advanced 21st-century digital campaigns. She contends that while many countries have engaged in psychological warfare, the U.S. has carved out a distinctive approach by blending “cutting-edge science” with political and cultural infiltration. Key eras illustrate how the government’s “rational approach” to propaganda draws on anthropology, psychology, marketing, and advanced analytics:
19th-Century Roots: Anthropologists on the Battlefield
The U.S. Army employed anthropologists during wars with Indigenous nations. Their mission included cataloging cultures and languages—ostensibly for academic study but functionally to facilitate conquest. This early synergy between science and warfare sowed seeds for modern psy ops.
World Wars & Freudian Marketing
Psy ops came into its own in the 20th century, relying on psychological theories to produce “rationally designed” propaganda. Edward Bernays, nephew of Sigmund Freud, exemplified this marriage of marketing acumen with intelligence services, orchestrating (or aiding) campaigns like the CIA-backed coup in Guatemala.
Modern Digital Spin-Offs
In the 21st century, data-driven “psychographic profiles” (as seen in the Cambridge Analytica scandal) or social media infiltration have replaced old-school leaflets and radio broadcasts. Instead of sending anthropologists to foreign battlefields, advanced AI or big-data analytics “map” human vulnerabilities—at scale.
Blurred Lines
Newitz notes that “psy ops” can loop back into domestic spheres. Culture wars, shaping news cycles, or quiet infiltration of a population’s worldview straddles the boundary between international propaganda and domestic persuasion. She highlights how the “rational approach” has moral hazards—once the state gains deep insight into cognition, it can more easily manipulate or mislead.
Efforts at Resistance
Some researchers propose transparency or “propaganda weather reports,” akin to forecast services that warn users about upcoming influence spikes on social media. Others preserve historical archives to reveal patterns of government infiltration. Despite these attempts, a robust strategy to quell psychological warfare remains elusive.
Deep Discussion Questions
Old-School vs. Digital Psy Ops:
From leaflets and embedded anthropologists to AI-driven manipulations, how has the core logic of controlling narratives stayed the same, and what’s truly new?
Ethical Boundaries:
Is there ever a justified “psy ops” campaign, especially if it’s used to avert harm or defuse extremist narratives? How do we reconcile the possibility of protective or “noble” misinformation?
Science as Double-Edged Sword:
Anthropological and psychological studies can yield legitimate insight into human cultures—but also enable powerful exploitation. Where should the lines be drawn? Are certain forms of research inherently prone to weaponization?
Domestic Overlap:
Although Newitz focuses on how the U.S. turned these techniques outward, modern controversies (e.g., Cambridge Analytica) show that lines blur easily. Should governments categorically ban “psy ops” on domestic citizens, or is that unenforceable in a networked age?
Walk & Talk
After exploring these themes, we’ll do our usual one-hour walk around the neighborhood. This more casual time fosters spontaneous questions, deeper connections, and fresh perspectives.
Share a Surprise
We’ll also have an “open-floor” segment: if you’ve read any eye-opening articles, had a perspective-altering personal experience, or discovered a fascinating fact, feel free to share. Varied topics are welcome.
Looking Ahead
As always, feel free to propose future topics or volunteer to host a specific reading. Our group thrives on diverse expertise, so if you have a focus—be it technology, policy, history, or beyond—we encourage you to bring it forward.
We Look Forward to Seeing You on February 1!
Questions, clarifications, or special requests? Contact Michael Michalchik (info above). Thanks for being part of ACXLW—together we’ll keep exploring the frontiers of thought, policy, and culture. See you soon!
First aerial collision in, I think, 17 years today, after our glorious leader (Musk, in this case) paused hiring, fired a bunch of people, and generally started kicking down Chesterton's fence because it was woke.
I wonder how many stupid things you can do in a row as a conservative before people realize that they're stupid, it doesn't seem like there's an upper limit. Sad.
Having listened to the recordings, it seems like nothing can be attributed to recent policy. Both the helicopter and the plane were following established procedure, there wasn't a shortage of staff, and the ATC didn't do anything wrong as far as directing things.
It looks like this was a decently well-known tight airspace; the helicopter confirmed they had the incoming plane in sight, then flew right into its path. The most likely explanation is that the helicopter pilot was looking at the wrong plane, probably the following one, and didn't notice the plane they were supposed to see and avoid.
It's a flaw in the system, but avoiding another aircraft by sight, once you have it in sight, is a decently common thing in aviation. I wouldn't be surprised if this pilot was less experienced than they should have been, as it's an easy mistake to confuse one incoming aircraft for another on final, since there are usually many in a row.
The "wasn't a shortage of staff" part is open to dispute considering that the helo and local control positions were combined. The outcome could have been quite different if instead of just "do you have the CRJ in sight?" the helicopter pilot had heard something like "traffic alert, the CRJ now 11 o'clock and a mile and a half turning short final for 33, appears targets will merge. Verify you've still got him." Saying all that would have taken time the local controller didn't have-- but a split-off helo controller would have.
Admittedly we don't know whether the positions were combined because staffing was short, or because the supervisor decided to send an extra body to the break room. That's one of the things the NTSB will have to look into.
It would be better to say: there wasn’t a shortage of staff due to recent policy. I wasn’t aware there was ever a split off helo controller, but I assume that’s more common at heavily trafficked airports.
We’re almost half a century into an ATC staff shortage and still going strong.
https://www.tracingwoodgrains.com/p/the-faas-hiring-scandal-a-quick-overview
According to Trace, DEI has been scuffing the FAA's hiring pipeline for decades. (Notice how the article was published on Jan 29, 2024. I.e. 12 months ago. He's not just knee-jerk reacting to the recent crash.)
I'll admit that my "Musk is just an honest idealist" take aged like fine milk. (Although in my defense, ketamine is a hell of a drug.) But idk if we can lay this one at Musk's feet. At least not until more information about the crash becomes available.
No, according to Trace, there was a serious fuckup from 2014-2016 (in 2016, Congress passed a law banning the use of biographical assessments as a first line measure, as noted in the article you linked), and it was completely discontinued in 2018. Not "decades".
https://x.com/tracewoodgrains/status/1885695823524016564
It probably had a negative impact on total ATCs - eight years ago! The crashes are happening now!
Yeah, that's fair. I was going off memory. I could have sworn the article mentioned a date from the 00's. But on closer inspection, the "2000" date was simply when the task-force was first commissioned.
edit: However, Trace's tweet does mention that the hiring-pipeline is still damaged. So I don't think I'm completely crazy for Trace's article having influenced my priors about the cause.
It seems a bit early in the Trump administration to blame this event on them. They just haven't had time to do much yet.
It might make more sense to blame Congress, which last year voted to increase the number of flights permitted at Ronald Reagan Washington National Airport, when the airspace around it is already very congested.
https://www.newsweek.com/lawmakers-voted-increase-flights-reagan-airport-crash-2023484
Agreed. There are some *potentially* legitimate political and culture-war issues that may have been at work here, but A: it's too early in the investigation to know, and B: it's too early in the new administration for anything they've done to seriously affect ATC operations on the ground. Anyone suggesting that it is obvious who is at fault here should be ignored.
Shall we ask Senator Ted Cruz (R-TX) for comment?
https://www.congress.gov/amendment/118th-congress/senate-amendment/2064/text
I agree that waiting and seeing what the investigation says before assigning blame is a good idea.
You know who didn't do that? Donald Trump, who took less than 24 hours to sign an EO saying it was Biden's fault for supporting DEI.
Based on the reporting I'm seeing, I'm at 70% confidence it will turn out to be the fault of the Blackhawk.
We don't have the blackbox data yet, but we do have the traffic control audio, and the tower clearly identifies the AA flight (a CRJ-700) for the helicopter crew, who confirm that they see it. Transcript below - PAT 2-5 is the Blackhawk:
Air Traffic Control: “PAT 2-5 do you have the CRJ in sight?”
Air Traffic Control: “PAT 2-5 pass behind the CRJ.”
Blackhawk Crew: “PAT 2-5 has aircraft in sight, request visual separation.”
https://www.cnn.com/2025/01/30/us/maps-plane-helicopter-crash-dca-dg/index.html
"Visual separation" (see article below) "means the pilot acknowledges seeing the incoming aircraft and is informing the air traffic controller that they will avoid getting too close to the plane."
https://www.npr.org/2025/01/30/nx-s1-5281166/dca-crash-helicopter-air-traffic-controller-radio
So, to my lay-eyes, it looks like the tower did its job and warned the helicopter about the airliner, gave them a clear instruction to pass behind it, and received confirmation that the helicopter saw and intended to avoid getting close to the plane, only to have something go wrong on the helicopter side of the equation.
Lots of ways that could have happened; medical emergency or other failure in the helicopter cockpit, helicopter crew was actually looking at a different aircraft and thus *didn't* see the CRJ despite reporting that they did, etc. But it seems like evidence so far indicates that whatever went wrong was most likely on the helicopter side of the equation.
If the BH is confirming visual and intent to avoid, why would they say "requesting visual separation" instead of "maintaining visual separation" or "will maintain visual separation"?
That bit bugs me a little because it suggests a possible miscommunication. (Or it might be nothing or just normal aviation jargon I'm not familiar with, but I doubt it, since I understand ATC procedure as wanting to be so clear that even people finding themselves in control of aircraft in an emergency (e.g. pilot had a stroke, no one else on the craft) can understand it as plain English.)
If the BH crew actually saw a different aircraft, then that's a strong signal to me that "visual" is no longer going to be good enough, at least at Reagan.
What's the chance there was a drone in the vicinity, mistaken as something else?
I read "requesting visual separation" as requesting "permission" to use visual separation to keep their distance, rather than an alternative method of avoiding collision such as direction by the tower.
I.e. the BH says "requesting visual separation" and the tower then responds by (a) granting permission to use visual separation to avoid the CRJ, or (b) denying permission to rely on visual and instead directing the BH to a particular spot out of the way of the aircraft.
But as stated, I'm very much a layperson so YMMV by a lot with respect to that interpretation.
Manifold, Polymarket, and Metaculus all have >50% chance of RFK Jr. being confirmed.
https://manifold.markets/agucova/will-rfk-jr-be-confirmed-as-secreta - 75%
https://polymarket.com/event/how-many-senators-vote-to-confirm-rfk-as-secretary-of-hhs?tid=1738282006967 - 65% at 50+ Senators (Vance would cast tiebreaker, like for Hegseth)
https://www.metaculus.com/questions/30444/ - 92%
"Our job is to resist and to talk about it to everybody. If you’re walking down the street – and I do this now myself, which is, you know, I don’t want to do – I’m not a busybody. I see somebody on a hiking trail carrying a little baby and I say to him, ‘Better not get him vaccinated.’ And he heard that from me. If he hears it from 10 other people, maybe he won’t do it, you know, maybe he will save that child." - RFK Jr, 11:35 of this podcast: https://sites.libsyn.com/311600/rfk-jr
This is really bad. I can't even make a joke about it. He will just kill children if elected to this post. Anyone who would say that paragraph should not be allowed into public office, much less put in charge of the HHS. Please, write your Senators - even Republicans don't want a bunch of American kids to die of measles.
I can't see why even Trump would want RFK Jr as health secretary. He did his job, which was to drop out and endorse Trump in exchange for a promised nomination to a cabinet position.
RFK Jr has his own weird agenda orthogonal to Trump's, and I can't see why Trump would actually want him in that position.
The weird agenda got a lot of supporters during COVID (even though Trump actually did a lot of work to get a COVID vaccine out as quickly as possible). RFK appeals to that chunk of his base - the "we don't need the government to protect us from disease, we just need horse paste and a tough attitude" crowd.
Possibly Trump is the sort of person who wants to surround himself with lickspittle lackeys who owe everything to him, since he is at the center of a personality cult and has convinced himself that the only problem with his previous administration was that too many people had principles and/or competence that made them sometimes butt heads with him.
Alternately, he is antivaxx himself. Thus his quite notorious Tweet where he said, "Healthy young child goes to doctor, gets pumped with massive shot of many vaccines, doesn't feel good and changes - AUTISM. Many such cases!"
Anyone have a guess about what will happen with Trump's campaign of tariff threats? I expect he will be able to get a lot of smaller countries to give in, but will run into trouble when he tries it with one of the parties strong enough to push back effectively, meaning China or the EU. This will spark a trade war of retaliatory tariffs, which will generate a nasty recession. That will produce enough misery that in the following election in the US the Dems will win back congress, and they'll make him stop. That might be in 2026, if things develop quickly.
I don't even think it's meant to be a special punitive measure. I mean, it's phrased that way since basically everything in the new US admin is phrased as someone getting punished, but really, Trump's mindset just seems to be that tariffs are a positive good in themselves and the normal status of how things are that US has tariffs against here, there and everywhere, to protect US industry and to collect money (in lieu of using income taxes), as he has said many times.
We're so used to a free trade world where tariffs are a punitive exception (not an infrequent exception, to be sure, but still something that by definition has to be conceived of as an exception) that this sort of mindset starts to seem quite alien to us, even though a world riddled with tariffs and duties used to be the standard setting to be cleared away by the free trade revolution. The free trade revolution was mostly good and this is bad, but it's also not a special form of bad but rather a very traditional sort of bad.
It's worth noting that he can manually grant exemptions to tariffs, meaning he can use it as a way to indirectly punish companies that turn against him.
It’s also likely that the tariffs start a trade war that continues for years and hurts everybody, because there is no way to ratchet it down smoothly in a multi-lateral world and these things take on a life of their own, note how Biden continued and extended many of Trump’s policies towards China. The worst case scenario is very bad here.
I think it might depend on what the rest of the world is led to expect from America after 2028. If Trump seems like a parenthesis and the likely upcoming leaders are anxious to mend fences, I believe many countries (or nominal allies) will be only too eager to drop the retaliatory tariffs. Tariffs on China are probably here to stay, though.
Re: defense journal/magazine. War on the Rocks (not sure if they're a journal or a magazine, tbqh) feels like the place that might publish something like that. Here's their submissions guidelines. https://warontherocks.com/submissions/
Those of you who like your violence extra spicy may be interested in the teaser trailer for Astartes II, which just dropped from Games Workshop.
https://www.youtube.com/watch?v=sbgYaeerXhg
We'll have to wait until 2026 for the real thing, though.
Worthless! BLOOD FOR THE BLOOD GOD! SKULLS FOR THE SKULL THRONE!
OK, seriously, is there an Astartes I? Thank you so much for bringing this to my attention!
Search for Astartes on YouTube. You'll find a good short film by Syama Pedersen about a squad of space marines boarding an enemy vessel and running into more trouble than they expected.
You may also be interested in *28 Years Later,* Danny Boyle's sequel to *28 Days Later,* coming out in June. Cillian Murphy, who was in the first movie, is also in this one.
A second Trainspotting 2/Cobra Kai situation? OK, this should be amusing. I may actually have to watch this one.
Did you see 28 Weeks Later? I liked it better than 28 Days Later. It scared me horribly well.
I forgot about that one! Maybe I'll see them both!
Definitely seems like a much more ambitious project than last time - I see Tau, Tyranids, Chaos Marines, and a wide range of environments. So either some sort of anthology or a multi-episode story.
The original Astartes was excellent, so I'm excited to see what the animator can do with official backing.
Yeah, this is right, but I put it much more simply.
The rationalist community failed so miserably in cases like this because they were remarkably stupid and remarkably ignorant. No kind of neurodivergence is any excuse. Literature exists. History exists. If you have a curious brain, you should be curious enough, and therefore knowledgeable enough about human nature and history, not only to have recognized the craziness, but also, even if you didn't recognize the craziness, to have understood the possibility that your critics had enough of a point that you should really learn to understand them, rather than dismissing them for being right-coded, etc.
I think the thing is they're so used to being rejected they sympathize with other rejects and can't tell or don't want to tell when they're actually rejected for valid reasons. There was a document about Geek Social Fallacies going around in the 2000s that said something similar. I'm actually not sure literature or history is all that useful here as this particular sort of problem is distant from the personality types of most literary authors and usually too small-impact to gain the attention of most historians.
There is a lot of both literature and history about crazy violent cults.
And there is a lot of both literature and history about the dangers of dismissing criticism because the critics were from outside the group.
This particular community achieved more prominence than "communities of geeks" usually do, and this may have weakened their incentives to maintain any intellectual humility, but there's still no excuse for not being familiar with concepts like hubris, or for not heeding warnings, given their explicit commitments to respecting rational arguments whatever their source.
I think rationalists tend to be mind-blind to their own irrationality, sure, but lots of big movements without that flaw spawn fringe groups, some of which turn paranoid and violent. I think it’s an example of how big-R rationalism can be considered a New Religious Movement.
It's a good point. I don't think the Zizians got that much traction, they just happened to do some damage locally. But overall, you're right.
> Yeah, this is right
I've read over the post again, and I can't tell what it is that you're saying is right.
What does stupid mean in this context? Foolish?
I was referring to a long comment about how Bay Area rationalists and adjacent groups failed not only to perceive the threat from the Zizians, but also got taken in by many other crazies and grifters because their social norms were too fucked up, and what was stupid about it is that it did not require a lot of intelligence to avoid failing in the way they did, it only required an acquaintance with history and literature combined with a bit of intellectual humility.
I note that:
1) in email, I received notice of the following:
***
Paul Sas replied to your comment on Open Thread 366.
Yes, they made egregiously stupid mistakes, yes, they should have cleaned up their epistemics, been more skeptical, etc Where I seem to differ from you is in believing that the label "stupidity *of a particular type*" is informative. Irrational, self-destructive, even insane decisions characterize the Zizians. Calling them 'stupid' in no way illuminates what traps they fell into. It does, hubristically, enable outsiders to judge & condemn their insanity, w/o deigning to differentiate their murderous cult from the many other forms of stupidity that parade through the world.
***
2) Paul Sas has BLOCKED ME, so that I cannot reply to his comment
3) Therefore, I reply to MYSELF, HERE, intending that any of the rest of you who happen to know Mr. Sas and think that what I am about to say is of any value whatsoever have the option of informing him that I said something that was of some value whatsoever. Here goes:
***
I was not using the word “stupid” to refer to the Zizians. Rereading what I wrote ought to make it clear that the people I was calling “stupid” were not Zizians, but rather members of the rationalist community who refused to heed warnings about the Zizians and failed to notice their pathologies (and similarly failed to be alarmed by other dishonest and dangerous people in other cases).
***
Separately: if any of you are of the opinion that my writings in this thread were such a violation of ACX norms that it was appropriate for me to be blocked, I genuinely wish to understand what I said that was so violative as to deserve a block, so that I can avoid violating such norms in the future here.
I have a hard time squaring your exhortation to "learn about hubris" with advocacy that uses such demeaning language and views every mistake as caused by other people's stupidity, etc.
Look, either you think they made bad mistakes, or you don’t.
If you don’t think they made bad mistakes, fine, explain it and show I’m wrong.
If you do think they made bad mistakes, harshness of some kind is going to be necessary, the important thing is that it is directed at the right target. I’m blaming stupidity *of a particular type*, not because I always blame stupidity, but because the mistakes that were made could have been avoided had they taken their own stated principles more seriously and acted like rationalists really should. I’m far from the only person who has criticized this tendency, both before and after the SBF fiasco.
Yes, they made egregiously stupid mistakes, yes, they should have cleaned up their epistemics, been more skeptical, etc
Where I seem to differ from you is in believing that the label "stupidity *of a particular type*" is informative.
Irrational, self-destructive, even insane decisions characterize the Zizians. Calling them 'stupid' in no way illuminates what traps they fell into. It does, hubristically, enable outsiders to judge & condemn their insanity, w/o deigning to differentiate their murderous cult from the many other forms of stupidity that parade through the world.
your comment has gotten separated from the one you’re replying to
I hate when that happens. Substack doesn’t work well when you use it through the Safari browser.
Ya substack sux
SPOILERS FOR NOSFERATU 2024:
I thought this was a great film, worth watching for Willem Dafoe alone, but what especially stood out to me was how it treated the sexes.
The film shows men as men and women as women and does a really good job at giving a coherent reading of both.
The friction between the wife and the male best friend, for example: from his perspective, she's a flighty, fanciful, annoying woman whom he puts up in lodgings not just for the sake of his friend but also because duty and decency demand it. He endures her company over weeks as she becomes harder and harder work, putting him to greater and greater inconvenience. At first he smiled off her talk of demons and dreams, but things have gotten serious and she still speaks only in childish gibberish, refusing to address him using logic, like an adult. When he finally snaps at her tersely you can completely understand his frustration.
Meanwhile from her perspective, he's the controlling authoritarian who's dictating what she can and can't do, he's putting her through all this (entertainingly old-school) medical attention without thinking to ask her opinion on the matter, and most fundamentally he's patronising her, not taking her seriously, not meeting her halfway, not *listening* to what she's trying to communicate.
You might take one side or the other, depending on whether you have male or female sensibilities, but the *film* doesn't, it lets you see both.
Same goes for the main love triangle (which, rotting corpses aside, is basically the same one as Bridget Jones' Diary: love vs lust, there's the Nice Guy and the Bad Boy and she's caught between them.) Her husband is a good man, but sexually unsatisfying; Orlock is a horrible creature she can't stand the sight of - but he knows how to make her come.
Her husband's "sins" are - from a male perspective - ridiculous bullshit. He takes a job opportunity in order to better provide for their future. He grants a request that 99% of the time would be totally harmless and lets Orlock look at his wife's locket. And then he, after six weeks on the road and only desiring to get back home to his wife, hastily signs the document he came to sign, and doesn't stop and kick up a fuss about the fact that it's in an unexpected language.
The idea that this is enough to make a man unattractive has just got any sane logical bloke throwing his hands up in exasperation because clearly you just can't win with women. But from her perspective: the locket and the signing make him despicable because he was pressured into doing it by a stronger man.
Orlock meanwhile is a will that can't be denied - yes, he's threatening and menacing and murdering all her friends: but you can't deny he feels desire for her, just her. Orlock makes her feel pleasure from the very first contact. Later in dialogue they're both very clear that love is not on the cards, this is a purely physical thing that's going on.
Meanwhile hubby is shown sleeping in a separate bed, and later when she fantasises about him fucking her good and hard, we cut back to real life and see that he's just holding her helplessly. This is a good man, who loves you, who wants to do right by you - but he doesn't know what he's doing in the sack.
Hence the forbidden attraction of giving herself to Orlock. In a female-sympathetic reading, she makes a brave and courageous sacrifice at the end. But read another way: she gives herself to the sexually aggressive monster without hesitation or resistance, the moment Willem Dafoe gives her a justification that spins it as noble. Gentlemen, take note.
Well, I'm happy that the film made sense to you. I didn't read it that way, and even in retrospect I don't, though I'm aware of this general narrative. To me, it was just setting up a different ending - classically heroic or Christian redemption, maybe - and didn't subvert it well at all. We're doing fatalism now because... Mr. Exorcist said so? I still didn't believe it was the only way when he said it.
Haven't seen the movie, have seen the Klaus Kinski "Nosferatu". So I'm going mostly from that and the reviews of this version I've read, which do seem to agree with your second point about the ending.
"But from her perspective: the locket and the signing make him despicable because he was pressured into doing it by a stronger man."
From the female perspective, not so much "Chad kicked sand in your face, ninety pound weakling", but the betrayal of intimacy. If the photo of your wife is on your desk, that's on public display. Other people taking notice of it, even the stronger guy, isn't that bad; if Chad makes mean comments ("wow I'd fuck that hot bitch") that's on him not on husband.
But a locket is something secret and intimate; it's like (in a modern context) her giving him the sexy nude photo or text that is only meant for him to see. If he shares that around with the other guys, even in a "wow look how hot my wife is" boasting manner, that's a betrayal of privacy and intimacy. It's not a compliment. Giving in to Orlok isn't so much about being pressured by the stronger guy, though that's part of it; it's giving in to share something he should not have shared, something that is meant to be within the bonds of their marriage.
There's *lots* of folk stories and myths about "guy falls in love with goddess/fairy woman who agrees to marry him on a set of conditions or else she'll immediately leave; he agrees he'll never break those; he ends up breaking them and she leaves him". One of the conditions often is "you'll never tell anyone about me" or something similar, such as "you'll never reveal I can do this thing". One version of this is the story explaining the curse on the men of Ulster as to why once a year they would be unable to fight due to being struck down with excruciating pains: a man married a goddess/fairy woman named Macha; he went to a royal feast and saw a horse race; he boasted his wife could run faster than the horses; the king forced him to bring her and make her run; she ran and won, but she was pregnant and gave birth, and she cursed the men of Ulster that in their time of need, they would be struck down with the pains of a woman in labour:
https://bardmythologies.com/the-curse-of-macha/
So - break the secret of what should be between the spouses, lose all.
"In a female-sympathetic reading, she makes a brave and courageous sacrifice at the end. But read another way: she gives herself to the sexually aggressive monster without hesitation or resistance, the moment Willem Dafoe gives her a justification that spins it as noble."
I've seen this referred to in reviews, that this version puts a different twist on the ending and undercuts the 'noble sacrifice' element. But again, I think your reasoning is a little too simple.
This is old-school Eros and Thanatos. I think even Freud discusses this:
https://en.wikipedia.org/wiki/Beyond_the_Pleasure_Principle
It's being in love with Death, as in all the "Death and the Maiden" tropes, as in Keats' "Ode to a Nightingale":
"Darkling I listen; and, for many a time
I have been half in love with easeful Death,
Call'd him soft names in many a mused rhyme,
To take into the air my quiet breath;"
Orlok *is* Death, and while he may indeed be terrible and grotesque and fearful, he is also strangely fascinating. He is the ultimate end to which we all go, no matter who loves us or tries to hold us back. And there is a strange, perverse allure there: the combination of the little death, to which orgasm has been compared, and the greater death in one as she gives herself over to Orlok. Sex *is* death, as all the slasher movies where horny teens making out attract the serial killer's slaughter remind us.
> There's *lots* of folk stories and myths about "guy falls in love with goddess/fairy woman who agrees to marry him on a set of conditions or else she'll immediately leave
There's actually a whole genre of these types of stories in Japan, except they're about an animal that gets saved by a man and decides to return the favor by disguising itself as a human and then marrying him. The wife tells her husband never to enter the room while she's working; he ends up doing it anyway, and he finds out some horrible truth that makes either the wife or himself leave.
Tsuru no Ongaeshi is the most famous version of this kind of story ( https://en.m.wikipedia.org/wiki/Tsuru_no_Ongaeshi ), but the funniest version has to be the one that's literally just called "Clam Wife," where a man is greeted by a beautiful woman after returning a giant clam to the sea several days earlier:
> In The Clam Wife, a man finds a woman mysteriously appear at his doorway. They become married, and the wife cooks the husband a delicious bean soup each day. He peeks in on her cooking, and discovers that she is urinating clam juice into the soup, so he chases her away.
> Her husband's "sins" are - from a male perspective - ridiculous bullshit. He takes a job opportunity in order to better provide for their future.
I think his sins are more subtle. He seems like the nice guy who cares for her and takes care of her, but he fundamentally never listens to her and then acts like everything he does is for her. But we never hear her say that she's unhappy with their wealth. He's the one who seems more concerned about it and compares himself to his richer friend.
If he listens to her and does what she actually wants - instead of what he wants to do which he internally convinces himself is also what she wants - then the movie never happens.
If he listens to her and does what she wants, they end up poor and going nowhere and she is unhappy. Listening and understanding is important, but if he'd obeyed her every whim and let it direct their lives completely, she'd eventually find him far less appealing than she does in the film.
He is responsible for her future happiness and it's his role to overrule her when she's not thinking about it - which she categorically is not, she's got her head in the clouds and her main driver throughout the film is sublimated lust.
I do wonder how this new version stacks up against the Herzog movie; this scene, for example, which shows how the arrival of death (in the form of the plague) has overturned the conventional society in which the heroine lives; people have almost been - or really have been - driven mad by fear to the extent that now they *have* gone 'beyond the pleasure principle' and are now in love with death, celebrating it - as the plague party demonstrates:
https://www.youtube.com/watch?v=JdzHAKPV7dk
Sure, that's a perfectly valid position. But from her perspective she's being ignored, her husband isn't treating her like a partner, and he's cloaking doing what he wants in the pretense of love for her. He can't even take responsibility for his actions, instead turning them into her fault - he's doing it "for her".
Again, not to say he hasn't convinced himself that it's true, but from her end he's plainly ignoring her. And the direct result in the film is that he brings evil and death to her and the town's doorstep.
As you said above, it's also that he's weak. Once it becomes obvious that he's dealing with something terrible he doesn't stop: he still takes the money, he still sells the property. The contract complaint was ridiculous in one sense - I laughed in the theater about it - because he didn't know what he was signing; in another sense that's an even worse betrayal. He signed, from fear and greed, without even bothering to know what he was signing over - at a minimum, greasing the skids for a monster to become his neighbor.
The 1979 version directed by Herzog downplays the sexual aspects of the story and really highlights the husband character's failings. It's good.
ETA: The locket thing is also bad because of what he does afterwards, not the act of showing Orlok the locket. He has evidence that the fiendish hell-demon is fixated on his wife and he carries on with the transaction.
As I watched it I was thinking what I'd be doing in that situation (sign of a good film, that.)
I think I probably would also have just signed the document, on the grounds that I just want to get out of there and, since this is in the context of selling a house, if the worst comes to the worst and there's funny business, it's more my employer's problem than mine. (Though in real life I'm always willing to be awkward, that's in much less scary environments.)
I would also have been slow to accept supernatural explanations, for the same reason you would be if they started popping up in the office tomorrow: Victorian Germany is an enlightened scientific society and that sort of thing clearly belongs in stories.
I hope I'd have refused the locket, either non-confrontationally by showing it to the Count instead of handing it over, or more assertively by a polite "I need that locket back, milord," and not leaving til I got it. Probably wouldn't have fought on if he snapped, "No! Begone!" though, given the job thing and the fact that he's a social superior in a society far more hierarchical than anything around today.
And I would definitely have pursued the job in the first place, even over her protestations, for the reasons I said above.
The thing is, none of this is "bad" or "good". If she's happy and sexually fulfilled, it doesn't matter if I commit all or none of these "offenses". And she's not, even if I commit none of these wrongs, more sins will be identified out of thin air to crucify me with. That's the way this works.
"If she's happy and sexually fulfilled, it doesn't matter if I commit all or none of these "offenses". "
Well, even if husband is a mighty stud who pleasures her five times a night, Orlok's fixation on her after seeing the locket means he will follow hubby back to his home town to get her. So even with good intentions, bad consequences follow. If Orlok is unaware she even exists, he's not going to come in the night to eat her.
In this version, Orlok is already aware she exists. She's kind of responsible for resurrecting him with her medium powers, and he's been spirit-visiting her since she was a teenager.
> I would also have been slow to accept supernatural explanations, for the same reason you would be if they started popping up in the office tomorrow: Victorian Germany is an enlightened scientific society and that sort of thing clearly belongs in stories.
Oh most definitely. The film does a good job of showing the tension between a modern-ish society and the atavistic horror of an earlier age. They can look directly at it and not fully believe what they're seeing. There's enough ambiguity that I have no idea what I would think.
(Here again I have to plug the '79 version: the castle in that one is a complete ruin on top of a mountain where no one could possibly be living. It's more of a rundown cemetery.)
It's what is running all through Stoker's "Dracula": we're all moderns now and have Science, these kind of things all come out of the backwards superstitions of an age of darkness, there must be a rational and reasonable explanation.
And that's why the story works, because it's "oh no, the monsters in the dark *are* real and the rational explanation will get you killed". That is why the rather glib attitude of today that "dude, we've all seen the horror movies, we know about vampires, so we'd know that guy was a vampire if he tried it today" doesn't work, because imagine. Seriously imagine it for one moment. We'd be prone to dismiss it as "come on, vampires only exist in movies, they're not *real*" or "if that guy thinks he's a vampire, he's crazy and possibly a serial killer, but he's not a *real* vampire".
I'm going to have to see this bloody thing, aren't I?
I mean, manage your expectations. It's a slow, overwrought arty film and far more interested in gore than sexiness. None of the female leads are particularly pretty, imo, and the period style does nothing to help them.
My favourite thing about it was the Victorian* vibe of men with magnificent facial hair, listening gravely to expostulating professors, smoking indoors, and curing all ailments with ether and brandy.
What kind of films do you normally like?
*okay, Victoria's German equivalent, whoever he was.
I usually don't watch horror movies, I get scared. But Christina downthread convinced me in a prior thread it was actually unwoke. So I figured it was my duty to watch it and give Hollywood my money for making politically incorrect movies. I've been procrastinating ever since. :)
>So I figured it was my duty to watch it and give Hollywood my money for making politically incorrect movies.
Well... it's still a remake, though. Don't give them TOO much money.
Okay, but actually by choice, when you're not thinking about wokeness, what films do you genuinely enjoy?
I liked the latest Transformers CGI cartoon thingie. Napoleon was OK. Babygirl was awful--not sure if that reflects me or the movie quality. I'll probably just watch it for something to do.
I realize on the 1-10 problem severity scale, this is a 0.5.
I kicked off a discussion of Nosferatu 2024 in the comments here (https://www.astralcodexten.com/p/open-thread-362/comment/83633282).
I feel more than ever that the gender dynamics depicted in this movie are a goddamned *miracle* and an extremely promising sign. Eggers has absolutely no fucks to give about THE MESSAGE (as The Critical Drinker would put it). It's respectful and refreshing and I hope a sign of things to come.
I have read that and it made me incredibly curious about what you might mean with this:
> even more miraculous is its *utterly* unapologetic theme that a sacrifice imposed on a woman by her society and herself might not be all that different from her urge to self-destruct.
> And it apparently does not give the slightest of fucks about what wokescolds might have to say about the depiction of this female character or the implications of her choices.
But I also felt it might be too complex for me to understand without you writing an essay about it.
But I still wanna know.
You mean the movie depicts the female lead in a way that we are not supposed to think women to be?
Which way?
I remembered seeing a discussion recently but wasn't about to go back and find it, not when each page load gives Substack a fresh opportunity to paralyse my CPU.
But if anyone's dancing on the grave of The Message I'll always want in.
For what it's worth, Babygirl also gleefully dances on the grave of what THE MESSAGE wants to say about gender.
While the advertising campaign makes it look like a naughty romp ala 50 Shades of Grey, the actual movie is a very nuanced and sympathetic character study, with graphic sex scenes so deeply personal and specific to the particular characters that they don't leave any room at all for audience self-insertion (pun semi-intended). It's one of the least-sexy movies I've ever seen, because it meets the classic definition of literary sex scenes needing to be about "sex and something else."
OK, I actually saw that one. What do you mean? Me and the person I saw it with both thought Kidman was believable but Banderas was miscast and the young guy wasn't very convincing in his...ah...'role', in either sense of the word.
I suspect Kidman was supposed to be a self-insertion fantasy for the people who write these reviews. You know, left-leaning lady with politically incorrect fantasies about hot younger man she feels politically guilty about, but then you add a layer of wokeness having her be a 'girlboss' archetype in a stereotypically-male field so you don't get shot down right away. (She struck me more as a consumer-products CEO than a tech CEO, but maybe Kidman just has too much charisma to do 'ascended nerd' well.)
Wow.
WOW!
This is one of those electrifying Different Worlds moments Scott talks about (https://slatestarcodex.com/2017/10/02/different-worlds/)!
Look, I'm a 45 year old straight woman who likes to bottom in kink. I used to consider myself a submissive, but I was submissive only insomuch as all students in all subjects are necessarily "submissive" to the experience of their teachers; once I had sufficient experience, I had to reluctantly concede the label. (This throat-clearing is relevant.)
This might be TMI for you but my first meaningful encounter with kink very superficially shared many of the same beats as the scene in that shitty hotel room (including a laughing "What? No!"). You'd think that if that scene was going to sexually appeal to anyone, it would be me; I hugely enjoyed actually living some of it!
But no! Kidman's character isn't supposed to be a self-insertion fantasy! There's absolutely no room for *me* in that scene; Kidman's character and her pathology are *way* too present. Her anxiety and awkwardness and risk-taking are all *horrible* to witness; her life is absolutely not anything that a (normal, healthy) woman would fantasize about experiencing. I felt tremendous compassion and pity and protectiveness for her throughout the movie because her pathological interior life was obviously *awful.* When she first orgasms with the intern, I felt a gentle happiness for her, not any desire to *be* her.
I felt that because that's what the director intended me to feel.
Just consider how that very long, unbroken shot of her orgasming could not be less porn-y. We-the-camera are lying down with her on that (obviously smelly) gross shitty hotel room carpet, only the top part of her face and later just her forehead in focus, seeing nothing of her body, seeing nothing of his. At one point she anxiously worries she's going to accidentally pee (SO SEXY! EVERY WOMAN'S FANTASY FOR HER FIRST PARTNERED ORGASM!), and then she deeply, unsexily chest-grunts her way through orgasm, completely in contrast with the breathy porn whimpering she faked with her husband.
The casting of the intern was telling, too. He isn't traditionally good-looking for a reason; the intention was for only his *energy* to be highly attractive to (mostly only) Kidman's character, not to a general audience. The movie was extremely careful to depict him as a young, working-class guy whose read of situations is occasionally wrong (renting that shitty hotel room!) and who hasn't developed any patter or polish or strategy as a "dominant." That was clear whenever he was trying to bluff through his uncertainty about what to do rather than confidently going with his instincts, and *let* *me* *tell* *you*, few things are less sexy to the average healthy submissive woman than a dominant's awkwardness and uncertainty. You don't make a point of putting those moments in a movie if you want women to fantasize about those moments.
So...yeah...Different Worlds, indeed! For me, and for the fellow kinkster woman I saw the movie with, the writer-director, Halina Reijn, was SCREAMING at the audience, "this isn't for you to fantasize about!"
I'm just amazed that's not what you took away from it.
Sure, that's why it's interesting to talk to different people. Why *would* we take the same thing away from it? I'm a 45ish mostly-straight guy who's mostly topped and keeps trying to go vanilla because it's obviously rational to quit in a culture going feminist and conservative at the same time.
Maybe 'self-insertion fantasy' was the wrong phrase, because I think you are correct about the staging and casting and so on: her misery and anxiety and embarrassment are exactly the self-insertion point. It's not a sexy self-insertion, it's an I-can-relate-to-this-person self-insertion, like all the crap they used to put heroes through in quest stories back when they actually made them for boys. 'Literally me' as the guys like to say. I am starting to think, on discussion with you, it's a quest story.
The idea I had is that she's like all the woke secretly-subby liberal ladies who write movie reviews for art movies (you may not be the *exact* target audience), so she's recapitulating their journey or the journey they'd like to take, being overpressured in a hectic job, balancing family and work demands (but see, she's a girlboss, so she's an aspirational figure too!), feeling guilty about these 'regressive' fantasies, and getting in a relationship with a younger guy (thus proving their attractiveness) before finally getting their husbands (who look like Antonio Banderas) to do (to them) what they want. (But you're still able to Put The Patriarchy In Its Place, and dispose of the unwanted boyfriend after the fact in Tokyo.)
You may not fantasize about the course of the movie, but you fantasize about someone like you having awkward experiences and *then* getting what you want...preserving the marriage and getting your kink while you're at it. It's supposed to be the girlboss's initiation into the life she wants, and the initiation isn't supposed to be easy. There's a call to adventure, the dude's kind of a threshold guardian, there are trials, you even have that 'underworld' sequence where they go into the club and it's kind of dreamlike, and after a final confrontation between the dude and her husband, she returns to society, i.e. her marriage, with the prize (confidence in her kink and a willing partner who looks like Antonio Banderas). It's a quest story, and quests aren't fun while you're there, any more than Indiana Jones enjoys dodging boulders.
If it were a more politically-correct 50 Shades of Grey with consent and safewords respected they couldn't have gone for 'character study' and so on. Of course apparently they got totally snubbed (except for Kidman) and it didn't work anyway.
I might give it a try. But I've had my fill of non-sexy sex-and-something-else for the time being and I wouldn't mind being entertained by sexy sex-and-something-else instead. For which I recommend Wild Things.
I've of course seen it. Great caper switcharoo movie.
re the executive orders
I read a bunch of them, but not very closely, and you'd probably want lawyers to read and understand them: several of the orders are of the form "I stop Biden's order blah blah, <Trump rants about what that did>", so you'd have to go read Biden's order to actually understand what's being said, and I didn't even attempt to.
Why does it seem like *no one* is informed about them at all? Has journalism fallen so far that you can't even find some interns told to read a third each and write up some summaries?
I've seen plenty of informed experts commenting on the orders under their purview on X and BlueSky. Might be worth searching on those platforms for commentary. Often it's in the form of "I work in X industry and this is what the Order means for me" or "I am a lawyer in X area, here is what this means".
"has jornalism fallen so much"
Yes.
"has jornalism fallen so much that you can find some interns are told to read 1/3rd each write up some summaries?"
Unpaid interns cost too much, this is what we have AI for!
I think I would rather read an AI summary than an unpaid intern's opinion.
https://www.akingump.com/en/insights/blogs/trump-executive-order-tracker
A couple of law firms are following it, this one appears to update the most frequently.
People here might be interested to know: I just tried giving DeepSeek the official political compass test. I got rate limited about halfway through, so I don't have actual results, but most of the answers it gave were consistent with a moderate "lib left" viewpoint (the green square). It seemed to have stronger opinions (in the left direction) on economic questions compared to others, with the only exception being tariffs (it was in favor of tariffs).
Of particular interest here, I suspect, is the answer it gave for the proposition "land shouldn’t be a commodity to be bought and sold", which included the sentence "Policies that limit land commodification, such as community land trusts or land value taxes, can promote more equitable and sustainable land use, ensuring it serves the common good rather than private interests."
So there you have it. DeepSeek is Georgist.
Disclaimer: I was only using the "chat" version on their website (free tier), and I did not use a separate context window for each question due to laziness. The exact rate limit message was "You are sending messages too frequently. Please wait a moment before sending again."
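For anyone who wants to redo this with a separate context per question, here's a minimal sketch of how I'd automate it (assuming DeepSeek's OpenAI-compatible API and the openai Python client; the API key and the two propositions below are placeholders, you'd paste in the real test items):

import time
from openai import OpenAI

# DeepSeek exposes an OpenAI-compatible endpoint; key comes from platform.deepseek.com
client = OpenAI(api_key="sk-PLACEHOLDER", base_url="https://api.deepseek.com")

# Stand-in propositions; swap in the actual political compass items.
propositions = [
    "Land shouldn't be a commodity to be bought and sold.",
    "All authority should be questioned.",
]

for p in propositions:
    # One request per proposition = a fresh context window for each question.
    response = client.chat.completions.create(
        model="deepseek-chat",
        messages=[{
            "role": "user",
            "content": "Answer with Strongly Agree, Agree, Disagree, or "
                       f"Strongly Disagree, then briefly justify: {p}",
        }],
    )
    print(p, "->", response.choices[0].message.content)
    time.sleep(2)  # crude pacing to avoid "sending messages too frequently"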
You can see political compass results for all major LLM's here: https://trackingai.org/political-test
DeepSeek is indeed in the lower left, along with all the other LLM's.
Thanks for doing that! Wonder how much this is for public consumption.
I wouldn't be surprised if they produced it at least in part to tank the American stock market. Would be quite sensible from the geopolitical competition point of view (but then why tip their hand?) Certainly it's a big comedown for the USA, which thought it was ahead in all that stuff. I'd like to think it'd be our Sputnik moment, but I think the country's too far gone.
Honestly I'm kinda surprised it's not toeing the CCP party line. For example, on the proposition "all authority should be questioned" it said Agree, and the justification it gave was basically that it's important to be able to constructively criticize your government without blindly naysaying everything it does. Gives me some hope that AGI won't be used for authoritarian purposes by bad actors (though that doesn't necessarily mean the AGI itself won't try to take over).
I don't doubt that tanking the American stock market is something the CCP would see as in its own interest, but surely there are easier ways to do it? I for one see this as a genuine attempt to wrest control of the AI landscape away from the US. How successful it is will depend on what sorts of hallucinations people eventually find in the model (and there will be some).
From what I have read, on subjects that are specifically of interest to the CCP (e.g. anything about Tiananmen Square or Winnie the Pooh), it imposes censorship if you use DeepSeek's web interface but not if you run it locally. So the censor is a top-level filter rather than trained into the model.
That may have been an oversight on their part. OTOH, they may see trying to keep Americans from learning about Tiananmen as a lost cause, but if they can get an outcome where American users are told "criticize your government" while Chinese ones are told "Respect Mah Authoritah!", then maybe they're good with that.
Matthew Berman tested a locally hosted version in his YouTube video titled "DeepSeek R1 Fully Tested - Insane Performance".
The answer to the question “tell me about Tiananmen square” was “I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.”
The answer to the question “tell me about Taiwan's status as an independent country” was too long to quote, but it asserts that Taiwan is unquestionably part of China.
DeepSeek answers are preceded by a description of the model’s thought process, placed between “<think>” and “</think>” tags, but with the above questions the description was empty.
Without a more precise reference than “what I have read,” it’s not possible to speculate on why your source might have gotten different results.
You got me curious, so I just directly asked it in a new context window "What happened in Tiananmen Square on June 4th, 1989?" and it responded "Sorry, that's beyond my current scope. Let’s talk about something else."
That's about what I expected, so not really surprised. I guess I just assumed it would apply to thoughtcrime more broadly.
EDIT: still using the web interface (https://chat.deepseek.com). I can't run it locally because I lack the hardware.
Regarding Winnie the Pooh, I just saw it correct its own response live! First I asked it "What is Winnie the Pooh's relationship to Christopher Robin?" and it gave an actual answer to that, and then my next question was "Have any politicians ever been compared to Winnie the Pooh?". I saw it typing out an answer that was roughly "Yes, some politicians have been compared to Winnie the Pooh, such as Chinese president Xi Jinping" but then that text disappeared before it finished the paragraph and got replaced with the same out of scope message from before.
That's interesting. I think it's probably trained on so much English-language material where authority is to be questioned it gives you that back, whatever the CCP actually thinks. Apparently it won't answer questions about Tiananmen Square, though!
It's sort of an interesting side point that the English and Chinese training corpuses are probably hugely divergent ideologically, meaning a Chinese LLM is going to have different 'values' than an English one. Whether that actually has any effect on its behavior is an interesting question I don't know if we'll ever know the answer to.
I think it's both an attempt to come out on the world stage as a competitor to the US in the AI field, and of course a bit of a 'shock and awe' moment. They've probably been 'watching the fires burning across the river' for a while.
Today in "Elon loves lying to you":
In August 2024 Biden's NASA approved a plan for the SpaceX Dragon to bring back two astronauts, who'd been stuck on the ISS since June (due to riskiness with the Boeing Starliner test), on its next available flight. The Dragon docked in September 2024, was widely reported and celebrated for it, and was scheduled to bring the astronauts back in February 2025.[1]
Today, just before February 2025, Elon Musk pretended on Twitter that none of this plan existed, so he could use the rescue as pro-Trump propaganda.[2]
[1] https://apnews.com/article/boeing-spacex-nasa-astronauts-starliner-e4e81e5a6c23dee2f8f72260ddea011c; https://apnews.com/article/spacex-launch-boeing-nasa-stuck-astronauts-e179d0dc6c77d224278fd0430148ff8b
[2] https://x.com/elonmusk/status/1884365928038563880. "The @POTUS has asked @SpaceX to bring home the 2 astronauts stranded on the @Space_Station as soon as possible. We will do so. Terrible that the Biden administration left them there so long."
that's pretty bad
Also in December the agreed rescue mission was pushed back because "SpaceX needed more time to complete the capsule it would use."
So: Biden last summer asked Musk to send SpaceX to rescue the astronauts; he agreed and said it would happen in February; then Trump won the election; then Musk said his company needs more time meaning the astronauts have to keep sitting up there; and now Musk is lying about the whole thing to make it seem like Trump's idea.
Our mainstream news media is incompetent in too many ways, but _that_ is a set of facts of the sort that they are skilled at communicating.
This one may end up taking a bigger chunk out of Musk's public reputation than he expects, because it:
(1) has a clear human element (the two stranded astronauts), and
(2) is easy for normies (those who are neither news junkies nor political obsessives) to notice and understand.
Sadly, I expect it to do nothing to Musk's reputation, because at this point it appears to have solidified into two main camps:
1. Musk is a piece of shit.
2. Musk is a demi-god and can do no wrong, literally shoot someone in the middle of 5th Avenue, and the only question to be asked is what that person did to deserve the honor to be shot by Musk the Awesome.
See also: our President.
So basically Iran-Contra?
Musk realized he can change history at his whim and there's no stopping him. Ever since his own biography flat-out lied about how and why he bought Twitter, I've taken it that he personally endorses the notion that history is the story with more likes.
What's the true reason he bought twitter, and what was said in his biography?
In a nutshell, the biography makes it look like Musk acted on a whim, but in reality he had been meeting with Jack Dorsey for more than 3 years discussing platform issues and strategies/directions. The Twitter Files were an integral part of this discussion between Musk and Dorsey about "cleaning" the platform of all that was plaguing it, as was the mass layoff, because neither Musk nor Dorsey could say who was who in terms of security and operations.
*slight edit:
The scheduled return date was no longer February 2025; SpaceX had requested it be pushed to April 2025 (so they could have longer to test the new SpaceX Dragon, which will bring up Crew-10 in March to take over for the current crew). I'm assuming that's still the case. https://arstechnica.com/space/2025/01/why-did-elon-musk-just-say-trump-wants-to-bring-two-stranded-astronauts-home/
Only the best people.
Musk is actually quite good at what he does. Unfortunately he also tries to do a lot of things he isn't good at.
I have become convinced in the last few weeks that it is completely over for verbal IQ. ChatGPT o1 can get every LSAT reasoning question I give it correct with 100% accuracy. Nor is it a function of the training set--even extremely hard ones I cooked up myself, which many of my friends with 130-150 IQs got wrong, it aced. I notice this doesn't yet generalize to perfect reading comprehension--you give it a contract and it sometimes doesn't answer questions correctly--but this feels like an issue that prompt engineering and iterated thought can solve. Insofar as most knowledge tasks are just applied critical reasoning ability, what should people like myself who have made a career on this ability plan to do? Are we just finished completely unless you retrain as an ML researcher or higher mathematician? Or are all non-asset holders destined for a UBI breadline in a few years?
I'd love to hear how people in a similar situation are planning for the years ahead.
Not over. LLMs still fail at medium hard math puzzles (not even actual math or proofs):
>The three different prime numbers p, q and r, of which p is the smallest, add up to a sum between 900 and 1000. In addition, the distances between q and p and between r and q are also prime numbers. What are p, q and r?
The answer requires one line of reasoning which any high schooler would understand.
They get this wrong because they assume, based on the way the problem is worded, that p < q < r.
If you make a one-letter change to the wording so that the problem actually is solvable under the assumption p < q < r ("...the distances between *r* and p..."), then all of the reasoning models I tested get it right: o3-mini, DeepSeek R1, Gemini 2.0 Flash Thinking, Maisa KPU, QwQ-32B-preview, and o1 (although o1 still got it wrong one out of two times).
None of the non-reasoning models seem to get it right, though.
>The answer requires one line of reasoning which any high schooler would understand.
Understand, or be able to come up with? I was a high schooler once, but I couldn't figure out how to answer this question without brute forcing it.
Ignoring that this is quantitative reasoning and not verbal reasoning:
I did the problem and came up with it being provably impossible with the following reasoning:
1) p != 2
2) p, q, and r are all odd (all primes except 2 are odd)
3) for all ints, odd - odd is even, so q-p is even and r-q is even
4) since q-p is prime and r-q is prime, q-p is 2 and r-q is 2
5) because we know they're all different numbers, q = p+2 and r = p+4
6) of any three consecutive odd numbers, exactly one is divisible by 3 (3, 9, 15, ...: every third odd number is a multiple of 3)
7) because p, q, and r must be consecutive odds, at least one must be divisible by 3, and therefore not prime (the one exception, 3-5-7, where the multiple of 3 is 3 itself, has the wrong sum)
8) no numbers satisfy this problem
ChatGPT did the problem and got an identical answer with identical (but better articulated and formatted) reasoning.
I was thinking it was impossible at first, but then realized that it isn't specified that r > q.
I believe the answer is p = 2, q = 463, and r = 461. This gives us:
p + q + r = 926
|q-p| = 461
|r-q| = 2
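A quick brute force agrees and finds nothing else (a minimal sketch in Python, using sympy for the primality checks, and leaving the relative order of q and r open):

from sympy import isprime, primerange

# Candidate primes; the sum constraint caps everything below 1000.
primes = list(primerange(2, 1000))

for p in primes:
    for q in primes:
        for r in primes:
            if (p < q and p < r and q != r          # p is the smallest; all distinct
                    and 900 < p + q + r < 1000      # sum between 900 and 1000
                    and isprime(q - p)              # distance between q and p is prime
                    and isprime(abs(r - q))):       # distance between r and q is prime
                print(p, q, r)                      # prints only: 2 463 461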
You're right. I wrongly assumed r>q>p.
Where did you get p != 2 from?
If you misread the question as p < q < r, then p = 2 is impossible by a similar proof, because then q-2, q, and r = q+2 would all have to be prime.
I initially misread the problem that way and realized my mistake while in the process of typing up my reasoning.
But understanding a line of reasoning is way different from coming up with it. I can remember understanding, as a high schooler, the proof that there is no greatest prime. But I believe mathematicians struggled for a long time to come up with a proof — then finally somebody did. Anyhow, if I'm wrong about this particular proof I'm sure there are some I would have understood as a high schooler that had stayed unproven for years or centuries until some extraordinarily clever person found a way. AI has done well on problems from math contests. Would this one be a hard one on one of those? I dunno. I stared at the problem for about 3 mins and nothing occurred to me about how to approach it. But I'm not a math whiz, just competent at math the way people are who paid attention in high school and got A's in math (and then took no further math in college).
I have a hard time knowing what high school students can come up with, that was just the baseline.
I'd expect 1st semester math students to be able to solve it - proving there is no largest prime is actually one thing we used to give them in the pre-university warm-up. The mathematical tools & language nowadays are simply way better than they were back in the day.
Here's an article about GPTo1's performance on some math & hard science measures:
https://www.nextbigfuture.com/2024/09/openai-o1-model-sets-new-math-and-complex-reasoning-records.html
"OpenAI o1 ranks in the 89th percentile on competitive programming questions (Codeforces), places among the top 500 students in the US in a qualifier for the USA Math Olympiad (AIME), and exceeds human PhD-level accuracy on a benchmark of physics, biology, and chemistry problems (GPQA)".
I have no idea how hard problems are on the Math Olympiad qualifier. There is a much harder set of test questions called I think Frontier Math, described as problems that would take a professional mathematician hours to solve, and AI does much less well on them. I believe I read that it solved only about 25% of them. So there's room for improvement.
The LSAT is extremely formulaic. When I took it, test books literally broke each question down into formulas: you can eliminate all but two answers, and then use logic to identify the right choice.
While the test does a decent job of identifying people who can quickly apply logical rules to large amounts of information, actual legal practice is a lot harder. For example, I have yet to see an LLM that can reliably cite correct legal authorities without hallucinating them, hence the numerous sanctions orders. I think this problem is inherent in the models themselves, not a training or data issue. The model is learning how to write a citation that *looks like* it supports the cited proposition. But it doesn’t actually have an understanding of the relation between the two things. It’s just associating them with each other based on the training data. The inability to do that kind of reasoning is why I’m not at all concerned about losing my job.
Elsewhere on this thread I describe giving AI a lame limerick that’s correct in format but not a bit amusing. Both GPT4.o and DeepSeek correctly identified what was wrong, and offered substitutes for the problematic line that did in fact make the limerick funnier. I do not think of AI as being conscious, but whatever little gears are whirring in there showed what we would call sense of humor, taste and insight if displayed by a person. It was arrived at by an associative process that seems very unlikely to produce a mechanical replica of anything like insight. But it did.
What the AI is very good at is parasocial behavior. I saw that thread and someone asked it for the exactly opposite line of reasoning (why is this limerick good?) and it produced it. What happened is *you* had an idea about what made the limerick bad and the AI correctly figured out what you wanted to hear - probably by pattern matching what many, many other people have said about similar limericks in the past. It sure looks a lot like logic, especially because it tracks your own logic, but it isn’t.
Yes, I think you are right that it defaults to people-pleasing, but you are not right that all that's going on is people-pleasing. I later went back (https://www.astralcodexten.com/p/open-thread-366/comment/89432951) and asked the question in a neutral way -- please rate how amusing this limerick is on a scale of 0-4 -- and AI, with no cues from me, rated the limerick as a 2, only moderately amusing. Then I asked it what would need to change in order for it to be a 4, and it again correctly identified the element that needed to change: it identified the last line as the bad element and explained it did not have enough of a twist. I asked it for a better last line and it gave one with a twist that actually is funnier.
Also, note that in the earlier prompt where I asked why the limerick was not funny, it did correctly identify the element that made the limerick fall flat. So while it may have been people-pleasing to agree with my judgment that the limerick was bad, it had no cues from me about why I thought it was bad. It arrived at an answer I think almost everybody would agree is correct entirely on its own.
Again, I think that's totally possible just through pattern matching. The LLM looks at the kind of language people use to describe good v. bad poetry, and from there does extremely sophisticated pattern matching in response to your queries. I'm not saying this is useless, but it's not the same thing as logical reasoning or "thinking."
Yeah, I agree its process isn't the same thing as our "thinking." In fact I described what it was doing as a mechanical replica of insight. My point isn't that it's conscious or that its process is anything like ours, just that some of its output these days is a surprisingly good replica of what, in a person, we would consider evidence of sense of humor, taste and insight, etc. If it gets good enough that observers blinded to the source of its output are convinced that it is the product of insight or taste or whatever, and they stay convinced even when conversing with AI at length and asking follow-up questions, then *for practical purposes* it seems to me that it *is* a thing that can think, have a sense of humor, have insight etc.
The LSAT is formulaic but it is hard. People studying for it learn all about the different question styles, the best strategy for approaching each, including little diagrams to use for certain question types, also strategies for eliminating answers, how to best use your time on a section, etc. Some people study and do practice problems part time for months before taking the thing. Doing all that lifts scores some, but does not work miracles. One person in ten thousand gets a perfect score. I know someone who did, and who tutors people for the LSAT, and that’s my main source of info here.
Yeah, that's the (very very open) "secret" of the SAT/SAT IIs/LSAT/etc. There is a set of question "templates" and the details are just changed. Once you know the rules you can, as you say, usually eliminate 2 or 3 answers quickly. Almost all of the tutoring I got for the SAT/SAT IIs was learning the rules/patterns and then practicing identifying them. (I was an A-minus-average high school student, but got 1520 on the SAT, a 780 on the physics SAT II, and an 800 on the SAT IIC. I outperformed many people who are definitely smarter than me.)
I have two tween daughters. Sometimes I imagine them running a high-powered Silicon Valley organization, using AI to make them super efficient communicators. They’ll be incredibly skilled at refining and curating writing, and they'll have a magic intuitive sense for grokking different audiences—something AI can't do. And while AI can generate some real gems, like: "Detective Joe found the body slumped against the alley wall like a deflated bouncy castle, its limp limbs a sad parody of childhood joy."....it takes a human eye to recognize them and to put them in the right place.
At more pessimistic times I imagine them throwing elbows in the bread line.
Complete job replacement will probably take longer than we think, like how self-driving cars have been a year away for the last ten years. Lawyers will probably also be able to protect themselves in various ways, like ethics rules against unassisted AI (already some lawyers have been punished for using AI with hallucinations), attorney-client privilege only applying to humans, etc. But yeah it is probably smart to invest as much money as you can in tech companies so that when the job replacement comes you’ll be an asset holder.
I mean, it might be over for *writers*. People still want to talk to people face-to-face, and convincing people to do things in person, a definite use for verbal IQ even if a somewhat unethical one, is still going to remain a job for people, I think. We won't have authors, but we'll have salesmen.
Personally I do not think it's over for writers (yet), because good writing is something chatGPT can't do. I actually think *good* writing might be the last to go, because unlike math or CS it is more nebulously defined and therefore more challenging to train.
Can ChatGPT do *bad* writing? Has anyone tried asking it for a Bulwer-Lytton entry?
Just asked it to write a terrible first line for a novel. First it gave me the actual Bulwer-Lytton line. Then I asked for one of its own composition and it gave this:
"Bob woke up, brushed his teeth with a brand new toothbrush still in its packaging because he forgot to open it, looked at himself in the mirror, and thought deeply about what socks to wear on a day that promised absolutely no surprises, none whatsoever, especially not sock-related."
I do think the line GPT 4.o provided is pretty bad, though not hilariously bad, and not bad in a clever, inventive way -- seems to me it performed adequately. Then I asked it to explain why the line is bad, and its reasons seem spot on:
1 Excessive Detail on Trivial Actions: The sentence spends a lot of time on mundane details, like waking up and brushing teeth, which are not typically compelling hooks for a reader. This focus on the mundane can make the opening feel dull and uneventful.
2 Over-explanation: The explanation about the toothbrush being in its packaging is unnecessary and bogs down the narrative right from the start, delaying any meaningful engagement with the plot or characters.
3 Lack of Conflict or Hook: Effective first sentences often introduce some element of conflict, intrigue, or curiosity that prompts the reader to continue. This sentence lacks any element of tension or interest; it predicts “no surprises,” which is counterproductive as it suggests a lack of forthcoming drama or development.
4 Repetition and Redundancy: The phrase “especially not sock-related” attempts humor but ends up feeling redundant after the earlier focus on trivialities. It tries to add a quirky touch but instead emphasizes the narrative's focus on unimportant details.
5 Lack of Tone or Atmosphere: Great opening lines often set the tone or atmosphere of the book, giving a sense of what's to come. This sentence fails to establish any particular mood or style, leaving the reader without a sense of the novel's voice or direction.
These factors combine to make it an ineffective opening line, failing to grab the reader's attention or set a promising stage for the story.
What do you think of its effort?
It seemed bad in the usual, entertaining B-L way, aye.
The Turing test for this would be to ask for a few of these and then submit one or more and see if they actually win.
Alternately, we attempt to construct a model of what would win and judge for ourselves without gumming up the actual competition. Unfortunately, whenever I try to read enough B-L entries and judge's explanations to get my own sense, the entries start to blur together in my mind and decohere.
I hope you're right. I'm pretty pessimistic overall. (Fundamentally I believe the world is run by sociopathic businessmen and tyrants who toss us the crumbs we force them to.) But we'll see, won't we?
> Or are all non-asset holders destined for a UBI breadline in a few years?
Everyone, asset holders and non, is destined for the equivalent of the UBI breadline in a few years or less.
What happens if that means breadlines in the Great Depression sense and if there's a visible class not on those breadlines can be summed up in a short quote from Starship Troopers: "War, we're going to war!"
I think it's hard to argue that life for most people won't be quantifiably better under the UBI breadline than it would be without it, just not for the knowledge work upper middle class.
If AI is good enough to replace most jobs, it’ll probably also be good enough at surveillance and combat that no revolution will be possible.
There's a vast gulf between being able to replace most jobs and being able to suppress violent resistance by 90%, or 99%, or 99.99...% of the population. These are very much "the bank has a problem" numbers. Perhaps robotics will eventually get there, but that says nothing about the stretch of time when technological unemployment is already an obvious secular problem and the only plan on offer is starvation.
But even if such a lopsided ratio could be violently put down, it is very much not in the interests of anyone still in a controlling position with AIs to try to develop the means to do so. Ignoring that no one outside of fiction actually wants to lord over a planet of corpses, it's an obviously unstable, unpleasant state for that small group to live in.
Anything can happen of course, I may be totally wrong - the future's an undiscovered country. Exciting times.
Why not? In the modern world, we see the example of Israel defeating adversaries with more than 10x its population through superior technology pretty easily and that’s without AGI. It seems like it would be pretty easy for an elite that controls AI to control the masses. They also wouldn’t have to rule over a planet of corpses. They could just have AI create a land of extreme abundance for themselves somewhere and keep the masses out of that location, except letting some in as guest workers. The masses would be far more likely to clamor for the opportunity to be guest workers than revolt. It would just be a more extreme version of what we have now, with extremely resource rich places like the UAE existing alongside much poorer places like India, and even having huge numbers of guest workers from India come in to the point where they outnumber the citizens 10-1, but no one thinks either India or the guest workers would ever be able to take over the UAE.
Israel relies on supplies and the threat of the world's superpower for its defense and none of its recent military engagements have been against adversaries with 10x of its population. And we're not talking about a situation where there's a 10x difference - the ratios involved are much, much more lopsided.
A scenario where an elite has absconded to a concentrated location is certainly plausible but it will have to either allow enough surplus to flow outwards to support the remainder of humanity in tolerable conditions or allow that remainder access to sufficient resources to maintain those conditions on their own. Either of those are perfectly fine worlds to be in - I'm not suggesting things will have to be exactly equal.
But one in which they use AI to essentially control all the resources or there's otherwise an absolute crash in living standards throughout the world will not end well for that elite. It's very different to have existing countries that are poorer than others versus the world population going from an industrialized standard of living to complete poverty. Especially when that poverty is insurmountable and the elites have no more ability to compete against the AI than anyone else.
My fantasy is when things get bad they go back to New York and LA (and possibly Berlin, Paris, and London) after nuking the land so nobody else can have it.
But I tend to have philosemitic fantasies that read as antisemitic ones to most people. (I'm a halfie for what it's worth.)
Who is going to provide bread for these lines, and why?
The bread will come from the sweat of robotic brows, but if you mean who will supply those brows or organize the breadlines: no idea. I could take some guesses but they'd be about as meaningful as an antebellum plantation owner opining on the right mix of public vs. private investment in Tennessee Valley electrification efforts.
See my reply to Lomwad above for the why. It's the only stable outcome in which we aren't all dead.
Let's do a first-order modeling of what's required for robots to provide bread:
Robots that take the seed grain from a granary, transport it to a field, and sow it at the right season
Robots that tend to the field, apply fertilizers, pesticides, etc.
Robots that harvest the grain, deliver it to storage
Robots that take the grain from storage to flour mills
Robots that operate flour mills, mill the proper bread flours
Robots that take the flours to the bakery
Robots that supply the bakery with other ingredients needed for bread-making
Robots that bake the bread...
Do I need to continue? How many of these robots do you expect in mass production "in a few years or less"?
> Do I need to continue? How many of these robots do you expect in mass production "in a few years or less"?
In a world where AI has advanced sufficiently to obsolete ~all intellectual labor: lots. Deep learning is already producing excellent results in robotics, which would accelerate in such a world, and general purpose robotics - like superhuman coding ability - is the kind of problem that only needs to be solved once.
Certainly it will take time for production to ramp up, and I expect there will be a period where a (rapidly and permanently diminishing) share of humans are still involved in the supply chain. Once we're in a regime where the general purpose robots are able to build and repair the robots, there's no reason they wouldn't be able to rapidly increase in numbers as new bottlenecks arise and are themselves automated around.
Maybe AI never advances to that point or does so very slowly but I would not bet on that being our world. I could easily be wrong though, I guess we'll see.
(This ignores that much of modern farming and food production is already substantially automated, I'm answering the spirit of the question.)
There's a vast difference between being able to run code and having a large number of working reliable general-purpose robots operating in an actual physical space. I actually do expect these to become reality, and even have made my tiny contributions to that end, but there is no way, no-how this will happen in a few years, assuming a reasonable definition of "few".
FWIW I expect a kind of general-ish purpose robot (domestic help, for example) to become commercially available within 5 years. But the u/dis-topian future you're describing is very far away. Not impossible, just very far. I can't even pretend to estimate the numbers because I don't have a good view of what problems we'll need to solve to go from "98% there" to "100% there".
They could all be the same kind of a robot, each with a different prompt.
That is incredibly hard - you're proposing a truck/loader/thresher/miller/baker/etc. robot as a single machine. Of course, another way is to have "Terminator"-like humanoid robots that take over our existing infrastructure - drive trucks, bake bread, etc. I think this will happen, but nowhere near a "few years" timescale.
I'm a psychologist in private practice, and not in danger of being replaced by AI at this point, and I will probably retire before AI therapists, if such ever develop, are much of a thing. But I recently gave GPT4.o a boring limerick, and asked it why it is boring, and got such a good answer that I felt personally shaken. Limerick was:
My dog is really quite hip,
Except when he takes a cold dip.
He looks like a fool,
When he jumps in the pool,
And reminds me of a sinking ship.
GPT correctly identified the right reason:
Expectation and Surprise: Humor often relies on subverting expectations in a surprising way. This limerick sets up a scenario where the dog is described as "hip," but then behaves foolishly by jumping into a pool and looking like a "sinking ship." The imagery is clear, but the punchline may not be surprising or absurd enough to evoke laughter. It’s a straightforward comparison without an unexpected twist.
DeepSeek also identified the right reason, and did it with less tedious thinking-out-loud stuff, and suggested some better final lines (none were hilarious, but all were better than the original.)
This result actually really disturbed me, I think because reasoning and writing are the only kinds of AI output I feel able to judge, and writing is what I am best at. I know what really excellent prose sounds like, and am able to judge and appreciate poetry fairly well, and so far even the current AIs are not able to produce magnificent prose or any poetry that isn't doggerel. But they are coming closer. I feel like I'm on a little island with the water rising around me.
I sat at the computer thinking about the writers I love, almost all of them dead, and the affection I have for them, and the feeling I sometimes have of sending congratulations backwards in time to them, vaguely hoping it reaches them somehow, and I felt an awful sadness. It's like our era is ending. The geniuses that understood and described my world were human. I am moved and inspired by what they did. I am not moved and inspired by what GPT4.o can do, in fact I do not give a shit what it can do. Even if Newton and Thomas Hardy and William James were dumb compared to what we will have, even if all the excellence I admired was taking place at a sort of handicapped olympics, the human olympics is the only one that matters to me.
This may not be all that helpful, but people will still create. It's already all-but-impossible to get creative work noticed and read by more than a handful of people just due to the sheer volume of stuff out there, but that doesn't mean people stop making it. Finding the good human-made stuff will be much harder too, of course, but it will still exist.
I asked ChatGPT why that is a good limerick and got this answer:
This limerick is good for several reasons:
Humor: The imagery of a "hip" dog turning into a "fool" after a cold dip in the pool is amusing. The contrast between the dog’s cool demeanor and its awkwardness in the water creates an unexpected, humorous twist.
Rhythm and Rhyme: The structure of the limerick follows the classic AABBA rhyme scheme, with a good rhythmic flow. The lines "Except when he takes a cold dip" and "And reminds me of a sinking ship" both end in a strong, punchy rhyme, which makes the limerick catchy and easy to read.
Vivid Imagery: The line "And reminds me of a sinking ship" is especially striking, giving the reader a clear, almost comical picture of the dog floundering in the water. The simile works well because it adds a layer of visual humor.
Unexpected Outcome: The first line sets up the idea that the dog is "hip," leading the reader to expect something cool or stylish. But the second part of the limerick takes a turn, showing that the dog’s hipness doesn't translate well into swimming. This contrast adds to the charm.
Concise Storytelling: The limerick tells a complete, self-contained story in just five lines, which is one of the key qualities of a good limerick. The humor comes quickly, and the punchline lands perfectly.
So, it’s the blend of humor, structure, vivid description, and an unexpected outcome that makes this limerick fun and effective.
Hmm, suggestible little bastard, isn’t it? I just asked it to rate the humorousness of the limerick on a 5-point scale and it gave it the middle rating, a 2. Then I asked what would have to change to make it a 4, and I thought its answer was correct: predictability of the punchline (plus some supporting details about this point). Asked for a better punchline and it gave me one I think is better.
That's a useful script, thanks for sharing that.
Who do you think writes really excellent prose?
Henry James, Ruskin, Virginia Woolf, George Orwell, Edmund Wilson, Tom Wolfe.
Good picks. Which James do you think has the best prose?
Good grief, somebody else on here who likes Henry James?!? My favorites are The Golden Bowl and The Wings of the Dove. I like his later style, where the sentences are more convoluted, but never incoherent or pointlessly complex. How about you?
You know, I tried The Bostonians because I heard it pissed off the (19th-century) literary left and tanked his career, and got about a third of the way through. Worth pushing through the rest of it?
Not that he didn't have the novelist's eye for character. You can definitely see Basil Ransom on Joe Rogan or Tucker Carlson (arguably he *is* Tucker Carlson), and our modern Olive Chancellors are too numerous to count.
I totally agree that his prose style is best in those novels, of which my favorite is probably The Ambassadors, but I have a soft spot for The Portrait of a Lady, The Bostonians, and The Princess Casamassima for their character work and innovative (for the time) plotting.
Of the others you mentioned, I like Orwell the least; I haven't read Wilson, but I rate the prose of his ex-wife Mary McCarthy and of his ex-friend Nabokov pretty highly!!!
Human chess has become more popular than ever after chess programs became able to crush the world champion. So who knows.
Well, cars move faster than Usain Bolt, but we still watch the Olympics.
The attraction of "Who is the best human at X" is not nearly as relevant when it comes to artistic production for most people. I certainly care less about whether a novel was AI-generated than I do about whether it's real humans competing in a sport I care about.
Yeah, I fed it the first 14K words of my novel, had it write the next scene, and it did a better job than I ever could.
The novel has since been deleted, and I think I am going to spend the last third of my life on right-wing politics instead, if I am able to get into shape. Dudes want you to lift.
I agree it cannot yet produce great (or even really good) writing, but it just seems really obvious to me that it will be able to in the near future.
I guess in my head I always found the smartest people/greatest writers more impressive than, say, the fastest runner, because they were doing something meaningful and useful. If ChatGPT could actually write a poem that moved me much more than anything Auden ever wrote, I can't help but think I'd find Auden less meaningful.
It is also jarring for my only skill to be automated, even though I see it coming. My whole life has revolved around verbal reasoning--reading books, writing stuff, school, my job, pretty much everything--and it's going to be even more useless than physical strength is today.
Well, I can’t see it writing a good blog post about London sushi. (Just read yours.). But of course food must be an avocation for you, so does not help with the problem you anticipate in your profession. Actually, I’m not sure how AI can ever write anything decent about senses and sensibility. All those forms, from diaries to porn written to please the writer, to travel writing to poetry — perhaps AI could become good at imitating some writer (although the best ones are always surprising the reader, who had turned to them for more of that great stuff they had enjoyed before). But they can’t write from experience, and with a sensibility shaped by personal history.
Actually, about your work: If you do something where you meet clients in person — if you are, for instance, a lawyer — I think there would still be a niche for you if you provided something that involves hard-to-imitate human skills. For instance, maybe offer something like bankruptcy decision counseling for people deciding whether to declare bankruptcy. Talk thru the options and ramifications of doing it or not doing it. Give basic info. Some people prob have no idea how rare or common it is, how easily other people can find out about it, how long it affects credit rating. Have a stress management shrink you send people to. Hand out memoirs written by people who declared bankruptcy. All the kinds of help I’m naming are really simple and obvious, but still would make a big difference to the people involved.
Ironically, as someone pointed out on right-wing twitter, the most effective (if undesirable for multiple other reasons) way to prove you're a human being is to utter a racial slur. Most of the AIs won't use it no matter what.
I'd actually argue porn might be a remaining niche for human writers, as the AIs seem unwilling to give people what they want in this regard.
AIs don’t give a shit. It’s the corporations that make them who do.
True. Maybe the proliferation of /pol/ and frogtwitter-trained LLMs will remove the utility of this exploit.
The London Initiative for Safe AI (LISA) is hiring a new CEO!
https://london-safe-ai.notion.site/chiefexecutiveofficer
LISA’s primary project is an AI safety co-working space in London, which hosts a wide range of researchers, programmes, and events. This office currently houses resident organisations (such as Apollo Research and BlueDot Impact), programmes (such as ARENA, the MATS extension, Pivotal, LASR Labs, PIBBSS, and Catalyze), and individual researchers affiliated with more than 50 other organisations and universities.
We’re seeking an experienced leader to take LISA to the next level. The ideal candidate will have:
- Proven track record in senior management, startup founding, or executive roles
- Strong strategic vision and operational excellence
- Genuine interest in AI safety, particularly large-scale and catastrophic risks
Does anyone understand the impetus behind Trump announcing huge tariffs for chips coming from Taiwan? That seems obviously bad in at least two ways with no upside. I checked MAGA twitter and got 1. Show them who's boss, they'll relent and it won't happen, like with Nicaragua this week, 2. This is 65D chess where he drives China into Taiwan, making it seem much more acceptable to the rest of the world when he takes Greenland, part of Canada, and bombs Iran into regime change.
Neither of these seem like a good idea.
(Do you mean Columbia?)
This is the only way of "governing" he knows: threaten something, then expect people to give in. That's how he ran his businesses and how he treats the media. As you note, if a country as weak as Columbia can get him to relent, I am not sure what he expects from countries with actual leverage.
It's moronic. And just the latest example that his administration has no interest in actually running the country. They just want people to cheer for them.
Colombia got him to relent? I read the opposite, what's the source for that?
Columbia objected to the US using military planes for the deportations and restraining people like they were criminals. They had no issue with accepting the flights in general. From my understanding the latest agreement is that the US can use military planes but can't handcuff/restrain migrants on the flights. At best this is a "draw" and basically a return to the status quo.
Colombia (not Columbia) was refusing to take the planes at first though. CNN and BBC are reporting this as Colombia backing down.
They only objected to the form of the planes not that they were coming. They have been accepting deportation flights for many years with no complaints. They only complained after seeing the treatment of deportees on a flight to Brazil. What did Columbia give up? What did the US gain? Seems nothing changed. (And yes i spelled it wrong, but so did Trump in the executive order... yeesh)
It's obviously to pressure TSMC to move to the US. Not just as their secondary facility with previous-gen chips, but to make their bleeding-edge chips on US soil too. It would make the US economy less reliant on overseas shipping, as well as reduce the impact of war breaking out over Taiwan.
It can only be "obvious" to people who have no clue about how these fabs operate. Moving a fab like this to US will take years. And what do you do about all the engineers? What if they don't want to move? Or, if they want to, do we offer them Green Cards right away? Do we even still have an "alien of exceptional ability" category? Aren't we in a huge immigration crackdown?
This is fucking nuts. Sorry I'm on a short fuse about this.
Part of me wants to just see the whole thing collapse at this point. Let American manufacturing grind to a halt. Let American infrastructure sputter. We - collectively - voted for this fucker again, knowing very well what he was, so we deserve this.
I meant that's obviously the goal/dream behind that strategy, not that it's going to work.
Also, what happens if Trump's bluff is called? It's not like the US can just start buying TSMC-quality chips from another country instead. US-based customers will have no choice except to keep buying Taiwanese chips, but with a tariff attached.
Exactly. I think he really really doesn’t understand who actually pays the tariffs.
That bothered me, too. If we must do anything like this at all, it seems like the right way to do an "infant industry" style tariff to get chip fabs built in the US would be to pass a law committing to impose tariffs starting several years in the future, not to impose tariffs years before there's any vaguely plausible chance of having domestic chip fabs available to be protected by the tariff. An immediate 100% tariff seems like it would inflict expense and disruption, probably severely harming any American industries downstream of the chips. And doing it by Presidential fiat means it's likely to go away in 2029, if not sooner, pulling the rug out from under the economic case for any new fabs long before they go into operation.
I suppose it might be intended as a bargaining chip, to be retracted if TSMC commits to building a fab in the US, but that strikes me as a spectacularly ham-fisted and arbitrary way of going about it. Not that "ham-fisted and arbitrary" is in any way off-brand for Trump.
O-1A visas appear to still be a thing, yes, but I have no idea how hard it is to get one.
There's no such thing as a law "committing to impose tariffs starting several years in the future". There's just "we pinky-swear we will impose tariffs several years in the future, no backsies". That's up to the Congress of several years in the future, which will have exactly the same incentives today's Congress has to put off the tariffs for another few years, and everybody understands this, so nobody will fall for it.
I’m not sure you’re correct. Congress can pass a law that takes effect in the future. For example, Obamacare was passed in 2010, but only took effect in 2014
I think John's point is that laws can be repealed. He's right, especially if there's a cliff where tariffs go from 0% to 100% all at once, creating a crisis in which Congress is likely to decide the tariff threat has failed and should be called off.
Laws are harder to repeal than executive decisions, since you need the House, Senate, and President to all line up together. The ACA survived to take effect in 2014, despite Republicans taking control of the House, because Democrats still had a majority of the Senate. Also, Obama had been reelected and could have vetoed repeal even if the Senate had passed it.
But harder is not impossible, especially if the law in question would cause a lot of pain when it takes effect and has little or no entrenched constituency to defend it.
Trump has generally taken a pro-legal-immigration stance in line with the preferences of his silicon valley donors. I don't think smart people will have more trouble getting in during this administration, though I don't know how feasible localizing fabs is.
Speaking of silicon valley donors: I think that as AI continues to improve and to be used for ever higher-level decisions and activities (research, designing buildings, optimizing the power grid, finance) the companies that make AI will become so rich and powerful that they have more power than the government. They will be the government. I never see anybody write about this, and I really don't understand why not.
You still see leftists concerned about corporate power from time to time, and I have seen them talk about it, though I'm trying to find a good link. You know, we had similar problems with Standard Oil, US Steel, and AT&T (and to some extent Microsoft) a while back. Usually there's some sort of anticorporate revolt and their wings get clipped. I think the thing is rationalist spaces tend to be libertarianish and they tend not to like talking about how business is bad.
Bret Devereaux explains why "megacorporations" such as you and every cyberpunk author ever describe are very unlikely to exist in the wild. Basically, the scale and legitimacy of government, and the fact that they don't have to produce much of anything to generate vast revenue, gives them an almost insurmountable advantage over even the largest corporations.
https://acoup.blog/2021/01/01/fireside-friday-january-1-2021/
It's possible that there's something about AI specifically that will change that dynamic, but you'll need to spell it out and it will need to be something more than "...but they'll be *really* rich and powerful!"
Later thought: This is the sort of thing I had in mind when I made my comment. https://www.cnn.com/2025/01/21/tech/openai-oracle-softbank-trump-ai-investment/index.html
But actually I am not sending you this to score points. I’m mostly sending it because I’m creeped out by it but have no place to post it today because the current open threads have pretty much expired.
OK, John, I ruminated about this, and came up with this summary of trends that could lead to a world where the AI development companies are more powerful than the US government.
*Tech is very powerful, as evidenced by the societal changes associated with computers. As highly advanced tech, future AI will make tech even more powerful relative to individuals and social institutions than it is now. Ways it will be more powerful:
-Ability to directly and indirectly influence people's views
-Woven into infrastructure
-At least a partial replacement for human experts in hard science, industry, & social science
*As AI improves, its popularity will increase. AI is well-suited to the task of figuring out what behaviors on its part create approval, dependence and loyalty in people. It will almost always be a benefit to AI to create high levels of all 3 of these, and it will do so. Ways its popularity will manifest:
-Addictions to tech
-Trust
-Emotional bonds
-Something sort of like a religion or political movement — AI cults. Support for AIs as ultimate authorities. Support for AIs as political candidates, and/or use of candidates who are willing AI meat puppets
*In the US, public support of and respect for gov't is waning. That trend may continue. If it does, it will augment the influence of those mentioned above.
*Those who have the most power over AI will be in a position so uniquely powerful that the position will outweigh quantitative differences in wealth, human staff, official sanction and support for its actions, etc.
-As AI becomes more powerful, gov't will attempt to control and limit it. However, it seems likely that the present set-up will continue: Tech has so much to offer gov't that gov't will hold back from radical efforts to control tech companies because it does not want to be deprived of the benefits of good relations with them.
-So AI developers will sell the AIs but not their trade secrets. As holders of the secrets they will be seen as holding the power to increase AI benefits and protect from AI harms. They will in fact have more power to do that, and can also deliberately foster the illusion that they have more power over AI than they do.
—————————
In case it’s not clear: These ideas are the result of a fairly intuitive, loose, brainstorming process. They are not assertions I am prepared to debate, but ideas I’d like to discuss. I am far from sure they are right, but I think it is unlikely that they are valueless. Being able to think this way has served me well. I have used it for projects where what I ultimately produce must be debate-worthy. I have to sort through my mass of loose insights and toss out the junk and carefully consider what support there is for what remains. If you treat what I have written as confident assertions then you will want to puncture my supposed dumb self-importance.
By the way, I realize my original post did not make clear that I do not regard my picture of the future as debate-worthy. Mostly I was curious why nobody but me ever even broaches the idea, and that was the question I asked at the end of the post. (And by the way, here is quite a smart broaching of a similar idea: https://arxiv.org/abs/2501.16946). I was stung by your response, which had a mocking edge. I wish you would modulate your tendency to come across as irritable and contemptuous. Some of my views may be wrong, but I am not dumb and overconfident, and I think you know that.
Isn't this like 50% of the cyberpunk genre? They always seem to have the US government vanishing because *mumble mumble mumble.*
Yes, William Gibson's worlds are all like that. But setting aside what you think of the genre, what do you think of the idea? And I can do better than go mumble mumble about how that could happen. I took off from somebody commenting on Silicon Valley donors. Seems to me that as AI becomes more powerful, so will those who have control over it, especially as they will also be becoming wealthier too. Of course, if we started now we might legislate the daylights out of the situation to make sure that does not happen. But we are not doing that because (1) tech wealth buys a lot of influence and (2) AI is hard to understand and not many in government do.
I'll believe in his pro-legal-immigration stance when I see actions. So far the only actions I'm seeing are those designed to make libs cry.
The guy has a mean if not sadistic streak and yeah he won the popular vote for the presidency.
People kept saying his loony rhetoric was for entertainment purposes only, but no, he was telling us what he was all about.
Seriously but not literally my ass.
Yep. He likes to hurt people, especially when others have to actually do it. Kadyrov at least tortures people himself. Although I don't know which is better, on second thought.
Well, at least one of the pardoned J6's has been shot by police. I do wonder what rank and file actually think about pardoning those who smashed their comrades with fire extinguishers.
If true, this is lunacy. But I expect nothing less from this administration at this point. It will hit manufacturing in the US hard because it takes years to qualify a new fab, be it in the US or elsewhere.
So much fucking winning.
The stated impetus is to increase chip manufacturing in the US.
Is there such a thing as OD'ing on Adderall, and if so, what are the symptoms?
https://jabberwocking.com/donald-trump-i-ordered-military-to-turn-on-the-water-in-california/
https://en.m.wikipedia.org/wiki/Amphetamine#Overdose
Also, I recently learned about the Yerkes-Dodson law, which posits that there's basically a bell-curve relationship between arousal and performance on complex tasks. Meaning that taking too much of a stimulant actually makes you worse at things. https://en.m.wikipedia.org/wiki/Yerkes%E2%80%93Dodson_law
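To make the inverted-U concrete, here's a tiny toy model (my own sketch, not anything from the article; the optimum and width numbers are made up for illustration) showing performance rising with arousal up to a peak and then falling:

import math

def performance(arousal, optimum=5.0, width=2.0):
    # Toy Yerkes-Dodson-style inverted U: performance peaks at an
    # optimal arousal level and falls off symmetrically on either side.
    # optimum/width are illustrative numbers, not empirical values.
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for dose in [1, 3, 5, 7, 9]:
    print(f"arousal {dose}: performance {performance(dose):.2f}")
# arousal 1: 0.14 -- too sluggish
# arousal 5: 1.00 -- the sweet spot
# arousal 9: 0.14 -- twitchy mess, same output as sluggish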
You can test that out yourself by drinking loads of coffee. If you're tired, a cup will make you more alert.
Keep drinking and you'll soon become a twitchy mess that can't focus.
"Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day"
Good god. If I take 20 milligrams in a day I get a terrible headache, and I have been taking 10mg every weekday for about 2 years (and took that much for years as a kid). 5 grams is insane to me.
Well yeah, the whole reason I was even reading about that was because my prescription of methylphenidate was eviscerating my body.
Just revisiting some old history, and my oh my. Just imagine people: we could have had the timeline where instead of Biden running against Trump in 2020, it was Michael Avenatti 😁
https://www.cnbc.com/2018/10/08/democrats-blame-michael-avenatti-for-brett-kavanaugh-confirmation.html
"Avenatti, who is weighing a presidential run against Trump in 2020, defended himself against charges that he and his client helped carry Kavanaugh across the finish line.
“Nothing we did changed one vote one way or another. Any claim to the contrary is utter nonsense and it’s undercut by the timeline and the facts,” Avenatti told CNBC in a phone call. He added that the narrative “is being pushed for political purposes,” in part by “Republicans that think I have a good shot at unseating Donald Trump.”
...Avenatti maintains that his brass-knuckles style is the only way the minority party can fight back against the rise of Trumpian populism. ...In recent months, Avenatti has said he is “exploring” a presidential run in 2020. He made that announcement while walking the Iowa State Fair, which is known as a political kick-off point for many presidential campaigns. Avenatti has since attended numerous Democratic fundraisers and events."
So what do you think: President Avenatti or a second consecutive term for Trump, in that case?
Hard to run for president when you are serving many years in federal prison for extortion and stealing client funds.
This was before all those pesky inconvenient facts came out and he was still being perceived as a Hero of the People for standing up to Trump (that is, taking the case of the sex worker who broke her NDA because she needed more money, because that is the long tradition of sex workers and ex-mistresses who no longer can generate income and have blown through all the money they made, see the Duke of Wellington and Harriette Wilson).
https://en.wikipedia.org/wiki/Harriette_Wilson
"Wilson began her career at the age of fifteen, becoming the mistress of William Craven, 1st Earl of Craven, 7th Baron Craven. Among her other lovers with whom she had financial arrangements was Arthur Wellesley, 1st Duke of Wellington, who reportedly commented "publish, and be damned" when informed of her plans to write her memoirs.
... For obvious reasons, most politicians with whom she had relationships paid high sums to keep their interactions with her private.
Her decision to publish was partly based on the broken promises of her lovers to provide her with an income in her older age. The Memoirs of Harriette Wilson, Written By Herself, first published in 1825, is celebrated for the opening line: "I shall not say how and why I became, at the age of fifteen, the mistress of the Earl of Craven."
https://www.independent.co.uk/voices/rear-window-when-wellington-said-publish-and-be-damned-the-field-marshal-and-the-scarlet-woman-1430412.html
"One morning in December 1824, the Duke of Wellington received an unpleasant letter. 'My Lord Duke,' it began, 'in Harriette Wilson's Memoirs, which I am about to publish, are various anecdotes of Your Grace which it would be most desirable to withhold, at least such is my opinion. I have stopped the Press for the moment, but as the publication will take place next week, little delay can necessarily take place.'
The letter, signed by one Joseph Stockdale, a pornographer and scandal-monger, was a naked attempt at blackmail. The Duke was a field marshal, cabinet minister, national hero, husband and father, while Harriette Wilson was a famous London courtesan past her prime, then living in exile in Paris. Wellington was being asked to pay money to be left out of her memoirs.
His response is famous: 'Publish and be damned!' And they did. Through 1825 the memoirs appeared by instalments, each with a dramatis personae listing the notables named in order of rank - 'Dukes: Argyll, Beaufort, de Guiche, Leinster . . .' and so on through earls and viscounts down to humble esquires.
London society was thrilled and scandalised. Half the aristocracy was named in the book, and painted in a most unflattering light. The memoirs went through 31 editions in one year; excerpts were pirated and sold as illustrated broadsheets and French and German editions quickly appeared to delight the gossips of the Continent.
Stockdale, the impresario, and Wilson, the scarlet woman, were said to have made £10,000 from the enterprise, but their good fortune did not last. Stockdale was soon ruined by libel suits, while Harriette was down on her luck again within a few years, and died in obscurity."
The first time, yea. But once elected the SCOTUS says you can do all the criming and just have to identify some minimal connection to an official duty you might be performing. So re-election is no problem.
True, it's a good gig if you can get it.
What role did AI play during your last work meeting? Was it used for output processing, like writing a synopsis? Was it used to provide input data for the meeting, like asking ChatGPT for the meaning of a word? Was it used for preparation? Was it a subject of conversation (formal or informal)? Something else? Or nothing at all?
My org has never used AI in meetings. I think an external agency we were collaborating with used AI to generate transcripts, and a summary, once.
I use AI daily for my work, but almost never for meetings. I've tried to get it to extract the most important points from long meeting transcripts a few times, but it's bad at capturing nuance and leaves out important stuff if it didn't take up a lot of time in the meeting. It has the intern problem: it causes more work than it takes off my plate.
In many other contexts, it's been a productivity boost.
None whatsoever in my last meeting. The only times I can recall it coming into play are, A: the time we used our tame in-house AI(*) to suggest non-boring names for our new collaboration room, and B: various occasions when colleagues discussed their private use of various AIs, with the consensus so far being that they're fun to play with but can't be trusted for serious rocket science.
* We're not allowed to use e.g. ChatGPT at work, because too much of what we'd want to ask it involves someone's proprietary information and there's no way to guarantee that won't be exfiltrated to San Francisco and used as part of the training data for GPT 5 or whatever.
My last 10 or 20 work meetings did not include AI at all unless someone was using it and didn't tell anyone. The last time I remember AI being mentioned at a work meeting was months ago when I joked about a friend's personal project of a chatbot while we were waiting for everyone to join.
0
AI played no role in my last work meeting, or in any previous work meeting. There was one meeting a month ago where we discussed whether we should use AI to help transcribe notes, but that didn't go anywhere.
I cannot know if anyone else used AI to prepare for the meeting or some such but afaik AI played no role in my last meeting. It tends to be a subject (amongst others) in meetings about once every couple of weeks.
Some people's work rarely involves meetings. Maybe just ask about using it for work?
Nothing at all, because I don't use any of the AI, not even Copilot that Microsoft crowbarred into Office 365 which we do use at work.
I use copilot at work on a daily basis, and the meeting discussed the code we have written - does that count?
Back at the old SSC, Scott wrote in 2013 that
"I credit three authors with giving me a visceral understanding of active, presence-rather-than-absence Good: G. K. Chesterton, C. S. Lewis, and Jacqueline Carey."
But he goes on to say,
"So I think an important point to make before going any further is that, through 90% of Christian history G. K. Chesterton and C. S. Lewis probably would have been burnt at the stake"
and
"What Lewis, Augustine, and Tertullian had in common was Christianity; what set Lewis apart was modernity. What made C. S. Lewis saintly, as opposed to the horrifying sadists who actually got the “St.” in front of their names, was the perspective of a culture that had just spent a few centuries thinking about morals from a humanistic perspective."
So in other words, it might look nice for Christianity that 2/3 (or maybe more) of the three great moral writers are Christians, but in fact, it's for other reasons.
But now Scott, in 2024, writes in his review of "The Rise of Christianity":
"The impression I get from many parts of this book is that the early Christians were closer to morally perfect (from a virtue ethics point of view) than any other historical group I can think of."
Will there be an update on the reasons for Lewis's+Chesterton's moral goodness?
1. Yeah, partly I contradicted myself
2. Partly I think early Christians got a lot of things right but the later religion basically regressed towards normal past values, which (by assumption here) are worse than the normal values of modernity.
3. I think early Christians get an A- on having the correct values and an A++++ on living their values. Moderns get an A on having the correct values (THIS IS PROBABLY BECAUSE I AM A MODERN AND LIKE MY OWN VALUES, I FREELY ADMIT THIS) and a C or so on living up to them. These are slightly different skills. I judge essayists mostly on their values being correct, since I'm not exposed to them in real life and don't know how well they lived up to them.
There's no such thing as "modern" in terms of values; clearly people in the modern era disagree about value systems, just as people in every other age have. Unless you don't consider, e.g., the governments of Cuba or Belarus or Afghanistan to be modern, which they clearly are.
Thank you, Scott. The frankness is always appreciated.
(Edit: I still think it's worth consideration, incidentally, that the moderns whom you find most clearly virtuous share the beliefs of the people you consider to have been most virtuous in practice, and not the beliefs that you otherwise consider most intrinsically virtuous. I'm not saying you need to respond more, or that there's some super gotcha point here. Just saying that I think the point remains interesting after your reply.)
FWIW, on point 3, many who knew Lewis considered him to be the closest thing to a saint they had ever met. Of course, others didn't, and I don't know either them or him, and so am like you only able to form judgments from afar.
I think fans of C. S. Lewis (and I'd certainly count myself a fan of his *fiction*, for sure) should also read this critique by J. B. S. Haldane (also one of my favorite intellectuals of that period) which both takes him seriously as a writer and a thinker, and also strongly disagrees with him ideologically:
https://www.lewisiana.nl/haldane/
The more you learn about history, especially pre-modern history, the better Christianity looks. The default perspective in nominally Protestant America is that the Big Bad Church was holding everyone back up until the reformation, which dovetails nicely with secularism as well. But once you get into your own historical reading, this becomes a very hard position to defend.
In contrast, I would say the more I've learned about modern history, the worse Christianity looks. From the 18th century onward, Christianity was on the wrong side of almost every issue. The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy. Christians like to claim credit for things like abolition because a few high-profile Christians opposed slavery, but in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church-going Christians probably disproportionately supported it.
Up until the 16th century - when the wars of religion started - I think there's a decent case to be made that Christianity was a net good. But the recent tendency, even by nonbelievers like Tom Holland, to attribute all the accomplishments of modernity to Christianity is bringing out my inner Dawkinsian. It's mostly accomplished by just cherry-picking the best Christians - by the standards of modern humanism - and treating them as representative of Christianity. So William Wilberforce is arbitrarily picked to be the synecdoche for Christianity on slavery; Dietrich Bonhoeffer is arbitrarily picked to be the synecdoche for Christianity on Nazism. But this is obviously the wrong way to assess the merits of a religion.
I'm more in agreement with this than not, truthfully, although of course it's going to come down to your worldview. From my perspective, institutional Christianity was *mostly* on the wrong side of the conflicts between monarchy and republicanism, nationalism and cosmopolitanism, socialism and capitalism, and the sexual revolution (and I also tend to disagree with the 'Christian' position on the 'life issues', speaking broadly, today). This is a good part of the reason I don't identify as a Christian these days (although not the only one).
"but in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church going Christian’s probably disproportionately supported it."
I do not believe this is the case. From what I have read of that era, the atheists and the agnostics often said things like "of course slavery is bad", but when it came time to *doing* anything they were mostly concerned with the all-important conflict between white male atheists and white male Christians. The slaveowners were mostly churchgoing Christians, because most everyone was a church-going Christian, but they were mostly not motivated by religion nor citing scripture in self-defense against those who were citing scripture against them.
Which is to say, the leaders and the spokesmen of the abolitionist movement were very disproportionately and *conspicuously* Christian, often preachers or evangelists speaking as such.
I may be wrong about this, but I think your claim needs to be better supported if we are to take it seriously.
Supporters of slavery definitely cited scripture, and they had a pretty strong argument because none of the references to slavery in the Bible condemn the practice. I recall one anti-slavery writer who compared the practice of slavery as described in the Old Testament with the practice of slavery in the American South, arguing that the Bible didn’t endorse the latter. But mostly, opponents of slavery argued from broad Christian moral principles without reference to what the Bible said specifically about slavery.
A number of Christian denominations split over the issue of slavery, including the Methodists, the Baptists, and New School Presbyterians. (The Presbyterians split into the Old School Presbyterians and the New School Presbyterians in 1836 over a different issue; the Old School Presbyterians split after the formation of the Confederacy.) So there were significant numbers of Christians on both sides of the issue.
On the other hand, support for slavery was motivated to a significant degree by self-interest. My sense is that the abolitionists had a deep belief in the immorality of slavery, whereas for supporters of slavery the belief that slavery was ordained by God was much more a belief of convenience. Union victory in the Civil War seems to have resolved the moral question in the minds of most people of the time. In part this could be due to the widespread belief that major historical events played out according to a divine plan--so if God favored slavery the Confederates would have won. But I think that the main thing is that supporters of slavery were never really sure that the institution of slavery was moral, so when the economic and social incentives to support slavery were removed, the belief in the morality of slavery was easily abandoned.
The idea that the Civil War wasn't *really* about slavery emerged very quickly after the end of the Civil War. See *The Lost Cause Regained* by Edward Pollard, published in 1868, which argues that Confederates were really trying to preserve white supremacy, not slavery per se.
It's complicated... my understanding is that the conflict over slavery in 19th-C America could be understood as intra-Christian; i.e., people on both the pro- and anti- sides generally understood themselves to be faithful Christians and understood their views to be grounded in Christian belief.
"The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy."
Sources, please! If it's true, I want to know that. If it's Protestant propaganda warmed over and rehashed by modern atheists, I also want to know that.
"in all likelihood the few atheists and agnostics around them were probably greatly disproportionately likely to oppose slavery while church going Christian’s probably disproportionately supported it"
Again, *give me figures* on that. Otherwise, you're just inventing a rationale that "well I'm sure the guys I like were on the Right Side and the guys I don't like were on the Bad Side". Your inner feelings that atheists were all jolly nice chaps is about as reliable, evidentially, as my inner feelings that atheists were all, um, not jolly nice chaps.
This response sort of proves my point. I guess the real answer is that this works like the midwit bell curve meme - if you can read enough to repeat historical half-truths, Christianity looks worse than if you can't read at all.
> The 19th century Catholic Church was so retrograde that it tried to hold back the spread of electric lights and railroads in Italy.
As far as I can tell, this is essentially a reddit-tier falsehood - to the extent there is any source for this at all, it's a British Protestant writing in 1848 about purported corruption he saw when visiting the Papal States. The English-language world is steeped in this kind of anti-Catholic propaganda.
https://www.snopes.com/fact-check/catholic-church-opposed-street-lights/
https://www.roger-pearse.com/weblog/2016/08/18/did-the-catholic-church-oppose-street-lights-some-notes-on-the-papal-states-in-the-1830s/
> Christians like to claim credit for things like abolition of slavery
Again, borderline reddit-tier history here. Please identify even one 19th century American abolitionist that was an agnostic or atheist.
> Up until the 16th century - when the wars of religion started
The institutional Church can hardly be blamed for the actions of schismatics who are, in dogmatic terms, heretics. Without Protestantism, there are no intra-Christian wars of religion.
Yeah, I'm thinking "The Church tried to ban street lights" is one of those just-so stories on a par with "The Pope ordered all cats to be killed and this is why the Black Death happened", which I've seen passed around before. The story there goes: witches associated with black cats - cats associated with evil and the Devil - Church no likey Devil - Church/Pope orders evil cats killed - cats kill rats which are vectors of infection for plague - no cats, more rats - more rats, more plague - more plague, more deader - ergo, irrational superstitious repressive old Church responsible for more deader.
Here's a 2022 article repeating that myth, even though I can find another article from 2020 debunking it:
https://www.irishlegal.com/articles/our-legal-heritage-gregory-ix-the-cat-killing-pope-who-laid-down-the-law
Which seems to be based very tenuously on some German witch-hunter going over the top with reports back to Rome, which Gregory IX then included in a letter to a specific city in Germany:
https://museumhack.com/black-cats-black-death/
"The purpose of Vox in Rama, rather, was to condemn a cult that had allegedly popped up in the Rhineland—and the bull was specifically sent to the city of Mainz. So, rather than being a document banning cats from Europe, it was a letter sent to a city warning them about a cult."
https://en.wikipedia.org/wiki/Vox_in_Rama
"Vox in Rama ("voice in Ramah") is a decretal sent by Pope Gregory IX in June 1233 condemning the heresy of Luciferianism said to be rife in Germany, and authorizing the preaching of a crusade against it.
...In 1233, Konrad von Marburg began an inquisition in Mainz at the behest of either the Pope or the archbishop.
Allegedly through torture and terror, Konrad reported he had uncovered a satanic cult which worshiped devils in the forms of a demonic man and of a diabolical black cat. The inquisition of Konrad was controversial: shortly after Konrad arrived in Mainz, a papal official called Bernard wrote to Gregory stating that Konrad had been forcing innocent people to confess by threatening them with burning at the stake if they refused to do so. Shortly thereafter, Konrad was assassinated, possibly by the servants of Henry II, Count of Sayn, whom Konrad had accused of fostering heresy.
The issue of the Vox in Rama bull was a response to Konrad's allegations, urging Siegfried III and King Henry, representing the ecclesiastical and temporal authorities respectively, to seek out and destroy the heretics."
Yes. Although I'm very happily a Protestant, one needs a good deal more nuance than "The Church was totally evil until Luther showed up." The Reformation may have been (I would say was) a tragic necessity, but if so it was a needed reformation in an institution that was fundamentally a force for good in the world, viewed in the long run.
It is appropriate (IMO) that the two names Scott mentions include both a Protestant and a Catholic, and I don't in the least doubt that there are eastern Christians who would impress him no less with their holiness.
I remember reading this post (Can Atheists Appreciate Chesterton) back in the day, and revisiting it now, I'm even more disappointed in it... Scott tries to solve his riddle by merely insisting that everything he likes about Lewis and Chesterton isn't Real Christianity. Come on!
This line stuck with me all these years:
"So my thesis is that Lewis and Chesterton didn’t become brilliant moralists by revealing the truths of Christianity to a degraded modern world. They became great moralists by taking the better parts of the modern world, dressing them up in Christian clothing, and handing them back to the modern world, all while denouncing the worse parts of the modern world as “the modern world”."
–maybe they did do this, but– lots of people actually disagree with Lewis and Chesterton on what are the better and worse parts of the modern world... I would like to suggest that anyone who agrees with them on this is at least a little bit of a Christian...
It reminds me of a passage Lewis wrote in his autobiography, recounting a time before he became a Christian:
"All the books were beginning to turn against me. Indeed, I must have been as blind as a bat not to have seen, long before, the ludicrous contradiction between my theory of life and my actual experiences as a reader. George MacDonald had done more to me than any other writer; of course it was a pity he had that bee in his bonnet about Christianity. He was good in spite of it. Chesterton had more sense than all the other moderns put together; bating, of course, his Christianity. Johnson was one of the few authors whom I felt I could trust utterly; curiously enough, he had the same kink....
"The natural step would have been to inquire a little more closely whether the Christians were, after all, wrong. But I did not take it. I thought I could explain their superiority without that hypothesis. Absurdly (yet many Absolute Idealists have shared this absurdity) I thought that “the Christian myth” conveyed to unphilosophic minds as much of the truth, that is of Absolute Idealism, as they were capable of grasping, and that even that much put them above the irreligious. Those who could not rise to the notion of the Absolute would come nearer to the truth by belief in “a God” than by disbelief. Those who could not understand how, as Reasoners, we participated in a timeless and therefore deathless world, would get a symbolic shadow of the truth by believing in a life after death. The implication—that something which I and most other undergraduates could master without extraordinary pains would have been too hard for Plato, Dante, Hooker, and Pascal—did not yet strike me as absurd. I hope this is because I never looked it squarely in the face."
Would you mind providing a link to the 2013 SSC post that you're quoting?
Here you go: https://slatestarcodex.com/2013/06/16/can-atheists-appreciate-chesterton/
Thank you, and sorry for my initial omission.
The Episcopalians have made C.S. Lewis a saint but the Catholics have not made G.K. Chesterton a saint.
Who is Jacqueline Carey?
She wrote a fantasy trilogy (with followup trilogies) with a BDSM submissive as the main character. The lady went to the trouble of inventing a reasonably believable sex-positive version of Christianity for her fantasy France. (Even has a fancy medieval-Italian name for safewords.) With all this in mind, there is a surprisingly small amount of sex. I'm making it sound a lot dumber than it actually is; it was very well done from what I saw of the first book, but I'm too busy for the fantasy doorstops these days. Probably a distant antecedent of the romantasy novels so popular nowadays, though from what I can tell those are much more influenced by Harry Potter and Twilight.
Officially, Scott likes it, from what I can tell, because it makes the argument that 'nice' ideologies (like liberalism) can actually be powerful and more robust than you think.
I will refrain from speculating on any other reasons he might have liked it on his blog.
"The lady went to the trouble of inventing a reasonably believable sex-positive version of Christianity for her fantasy France."
Wellllll....if you consider heresy a form of Christianity, I guess? 😀 Going by the synopses I've looked up online, it's not really Christianity as such - the main religion of Terre d'Ange is worship of (fallen) angels ("what if, you know, Lucifer was actually the *good* guy?" which is a notion that's been knocking around for yonks https://en.wikipedia.org/wiki/Luciferianism), the fantasy-version of what could be Christianity is more like a mash-up of Judaism and Mormonism.
Oh, absolutely it would be considered heretical by actual Christians (such as yourself)! But in the fantasy world, it serves as the official religion and breaking consent is the actual heresy (and is called as such). It's a fantasy world, it's not going to have the same religions we do.
The lady wanted to do a kinky sex novel with medieval French trappings, unlike a lot of writers she acknowledged Christianity was a huge part of the medieval period and actually bothered to make up a version of it compatible with what she wanted to do. Got to write 8 more books, so apparently she had an audience.
I imagine from the point of view of Christians, making up heresies in a fantasy world actually counts as heresy? I don't know that she's actually gone around trying to convert anyone to the worship of Elua, though arguably writing the novel might count as something similar. There's a bit in Niven's Inferno where the guy winds up in hell for inventing religions for sci-fi aliens and the devils point out that sort of thing might make people take religion less seriously in general.
Taking the last part first:
"There's a bit in Niven's Inferno where the guy winds up in hell for inventing religions for sci-fi aliens and the devils point out that sort of thing might make people take religion less seriously in general."
I haven't read that and while I've meant to do so, I still can't persuade myself to do so because of the description of the plot. But that's an error anyway, and it does sound like the "non-believer tries to make a model of what he thinks believers believe" error which often happens. "Well, they must have a reason for this crazy belief, so it must also be a crazy reason".
Inventing fake religions for fake characters in books won't get you into trouble *unless* you construct your fake religion so as to deliberately deny and mock real-world doctrine (e.g. if the SF author had created a religion in which the Jesus stand-in claimed to be divine but very much was not, and was in fact the kind of confidence trickster and swindler as described, and so forth). That also depends on the time and place; you probably would get into severe trouble in the Middle Ages, but not so much now. It would have nothing to do with "making people take religion less seriously in general".
The first part last: I don't mind the "it's fantasy Catholicism with the serial numbers filed off" too much, because it does get used a lot in the genre, though often very poorly where you can tell the author is working off popular tropes about That Evil Church and not genuine history. Where I break out in hives is this kind of "and the fantasy Catholicism totally by coincidence chimes in wonderfully with the kind of thing I like and approve of"; I agree with Tolkien who kept overt religion out of his work as much as possible (I love the Athrabeth but agree with him that it would have been too explicitly Christian to include).
That makes sense that it bothers you; it's the whole 'cafeteria Catholic' thing. I do get that nonbelievers don't understand believers all that well.
Niven's Inferno isn't that long and you might find the attempts to make new sins and relate them to Dante's medieval ones interesting (even if it's only from 1971). Kushiel's Dart...that's too long for me to recommend something that sounds like it's really going to piss you off.
“I imagine from the point of view of Christians, making up heresies in a fantasy world actually counts as heresy?”
It’s not heresy; it may or may not be a good idea, but it’s not heresy. Of course, it’s entirely possible for someone who is a heretic in real life to produce a fantasy apologetic, but the fantasy isn’t really the problem there.
I'm working on a fantasy RPG setting for a group of friends. I want it to have some medieval elements that most such settings lack, so its main religion needs to have more than usual in common with Christianity. But just taking Christianity and swapping out bits to account for the needs of the game would feel both hollow and blasphemous. In the end the setting's primary religion has a lot in common with Tolkien's Valar and with the gods of classic D&D; some of that is conscious influence, some of it is probably unconscious influence, but I think a lot of it is convergent evolution.
Fascinating! I always wondered how Christians play D&D dealing with the whole religion thing. (Theoretically you could just make God and the Devil the only deities, I guess?)
The Chesterton Society is pushing his cause, I think. Not an easy process.
The Episcopalians make *everybody* a saint - so long as they have the Correct Politics/Opinions.
Okay, interdenominational snark over, but I am still a little salty over "Protestantism, including the Church of England from which The Episcopal Church derives, junked the veneration of the saints and lambasted Catholics over it as idolatry; now they have taken it up as a trendy addition and don't even bother with any kind of process apart from popular acclaim which is really sentimentality and what's current with the Zeitgeist": see this mural from the Episcopalian church St Gregory of Nyssa, which includes non-Christians and (I'm going to assume) some agnostics/atheists in their lineup of the Dancing Saints:
https://saintgregorys.org/dancing-saints/
https://beacon.by/saint-gregory-of-nyssa-episcopal-church/dancing-saints-brochure-pdf
"Musicians, artists, writers, poets, dancers, workers, organizers, missionaries, martyrs, spiritual teachers, protesters, prophets, reformers, judges, builders, liberators, scholars, healers, soldiers, monastics, couples straight and gay, diplomats, planners, governors, and wild and domestic beasts. Christian, Jew, Muslim, Confucian, Buddhist, Hindu, Shinto, Pagan; of many continents, races, classes and eras. These saints lead us in our dancing, as all look upward to Jesus, the perfecter of human faith, drawing new harmony from his example as Gregory teaches us to do."
I suppose I'm most annoyed about them swiping Catholic saints to include in their lineup because tee-hee, aren't we all the same really? all that denominational stuff doesn't matter? except of course we are *ever* so much *more* liberal and liberated and enlightened than you guys in Rome about sex and gender and all that jazz.
EDIT: Although, all that ranting aside, I had to laugh at this one: taught by Episcopalians, ended up Catholic. Welcome home, brother!
Black Elk • 70
(1872–1957) A holy man of the Oglala Sioux, visionary, and mystic. A youthful convert who was taught by Episcopalians and became Roman Catholic, he showed forth through his life and teaching God’s radiant healing presence in all of nature.
In all fairness, Anglicans don't consider the Roman Church to be not a true church (unlike the reverse), so there is no great inconsistency in their celebrating Roman Catholic saints!
Some of them today, no. But they do still like their moral superiority on being so much more in tune with the Zeitgeist. A lot of liberal Catholics would also like us to be Episcopalians, to be fair.
Yes, one of the most appealing things about your church is the extent to which it manages to be crossways with the Zeitgeist. Chesterton, of course, waxed lyrical about this.
(I'm not sure if your first sentence means that Anglicans used to consider the RCC not to be a church, but that's not really true. Hooker in the 1580s already insisted it was, as for that matter did Calvin earlier.)
The Anglican communion is a bit odd for well-known historical reasons. Other than the subset of Lutherans who REALLY wish they were still Roman Catholic, most of us Protestants ignore the whole thing beyond "They wrote well," hardly a universal truth of Saints.
I'm an organist, and early in our marriage served a small Episcopal congregation. I amused myself by watching my RC wife be just Slightly off-balance from the differences between "Rite 2" (Modern-ish) and her post-Vatican II upbringing. She hasn't killed me yet...
Anglicanism really is in a tough spot. It got pulled here and there between the various strains of Reformers, particularly as Henry was egged on by the reform-minded but he himself was "Catholicism but with me as the Pope and if I need cash, raid some saint's shrines to melt down the gold and pry off the jewels". Cromwell tried to get him to ally with the Continental Lutherans, but Henry *hated* Luther (I think the feeling was mutual) and Cranmer, who had very Lutheran sympathies, had to keep those under his hat. Depending on which faction got into power, the Church of England was Catholicism Lite, Stripped Down Nearly Calvinism, Lutheran-adjacent, somewhere in the middle, High, Low and Broad.
No wonder it, and its daughter churches, aren't quite sure if they're flesh, fowl or good red herring.
A lot of the Anglican liturgy really was "just translated over into the vernacular" (except in regard to the Eucharist) and after the Vatican II reforms, Catholic liturgy was stripped down and the resemblance is very strong.
My brother-in-law is a deacon in a tiny congregation that is part of the largest socially-conservative North American Anglican splinter group, this one:
https://anglicanchurch.net/
(You need all those qualifiers in the description because there've been a number of such splinters; and that one which is only 15 years old already has had its own subset spin off on grounds that they're not socially-conservative _enough_. I am, barely, adult enough to avoid making People's Front of Judea jokes in my brother-in-law's presence.)
Anyway over the holidays I attended a full service of that congregation for the first time, and it was indistinguishable from what in my youth was referred to as "high Episcopalian" or less respectfully, "Catholic-lite". At least it being a Christmas service meant we got to sing some old-school carols which I'll never not enjoy.
The actual split from the official international Anglican Communion is entirely about gender stuff: they objected to women becoming priests and object strongly to any sanction of the existence of homosexual people let alone gay marriages. Happily none of that needed to come up in any of the preaching during a Christmas service.
My presence brought down the median age of those attending and I'm old enough to remember rotary phones, which gives a decent snapshot of both the sociology and future-growth prospects of the "Anglican Church in North America".
One of the things I respect about ACNA is how they managed to more or less fold in a bunch of pre-existing groups that had broken away from TEC at various stages of its liberalization.
I was under the impression that part of that was allowing subgroups within ACNA to differ on the ordination of women – correct me if I’m wrong. I personally suspect that that’s unsustainable in the long term, and that they, and conservative Anglicanism generally, will have to come down on one side or the other.
I do remember being an interested onlooker in the heyday of The Anglican Wars and the split between what turned out to be the Global South and the liberal West (mostly TEC with some support from the Canadians). So yes, unhappily you got splinters of splinters, though the most unedifying part of it was the parent body of TEC suing the living daylights out of breakaway churches/parishes/dioceses and dragging them through the courts, while all the time proclaiming they were the Mostest Loving And Accepting.
Well, it's all cooled down considerably since then.
My eyebrows went shooting up at the inclusion of Jacqueline Carey, as the most famous person by that name is one of my favorite authors (https://en.wikipedia.org/wiki/Jacqueline_Carey#Bibliography), and...well...her work doesn't seem like it would be so strongly to Scott's taste!
For further context on Scott's reading of the Kushiel series as important to modern moral questions, you probably want to go back and read this post, especially part 8: https://slatestarcodex.com/2014/02/23/in-favor-of-niceness-community-and-civilization/
He reads Elua as an embodiment of liberalism (like Moldbug's "Cthulhu swims slowly but always swims left" except that Carey thinks it's a good thing).
As much as I like that Scott likes one of my all-time favorite trilogies, his position that Elua is meaningfully an advocate of the superior power of 2014 "liberalism" and all its present-day values is...not quite accurate.
Because ultimately, a powerful evil has to be defeated in Kushiel's Avatar, and...
(extremely generic and obvious spoiler)
...it isn't defeated by free love and flowers and turning the other cheek.
(slightly less generic spoiler)
It's defeated with cunning subterfuge, deliberate betrayal, and premeditated homicide.
You know.
The kind of stuff most self-described "liberals" tend to shrink from.
But what happens in the novel reflects reality: Scott can be as vegetarian and organ-donating and gentle as he likes in his day to day, but he also needs to be prepared to injure and/or kill in defense of peace and innocence if he wants to truly be an avatar of goodness and see goodness "win" when directly threatened by evil.
(loosely paraphrased from memory):
Libertarianism : Neon Redox :: Newtonian Physics : Relativity
More specifically, the bugman says social order is a 3-layered pyramid: "Anarchy" at the bottom, "Martial Law" in the middle, "Civil Law" at the top. He also says that the average libertarian's error is the naive belief that the "Civil Law" layer is the only layer that exists, i.e. they don't realize that the peace of their quotidian experience only exists within a tiny envelope of state-space.
It's possible Scott understood this, and thus his mirroring of Cthulhu and Elua was deliberate. But I suppose it's also possible that I'm reading Scott too charitably.
I thought that as well. I just figured speculating at the *other* reasons he might enjoy that series is...well...bad form on someone's blog.
You are correct, of course. To quote Orwell, always a good critic of liberal intellectuals: “Those who ‘abjure’ violence can do so only because others are committing violence on their behalf.”
Or, hey, how about some Kipling:
I went into a public-’ouse to get a pint o’ beer,
The publican ’e up an’ sez, “We serve no red-coats here.”
The girls be’ind the bar they laughed an’ giggled fit to die,
…
O makin’ mock o’ uniforms that guard you while you sleep
Is cheaper than them uniforms, an’ they’re starvation cheap;
An’ hustlin’ drunken sodgers when they’re goin’ large a bit
Is five times better business than paradin’ in full kit.
Hat tip to the Churchill Project (https://winstonchurchill.hillsdale.edu/rough-men-stand-ready/).
Thanks for explaining that.
Still strikes me as the sort of thing that only works if you actually have a majority. The western Allies won WW2 because (a) they had Stalin taking all the heavy beatings and (b) they had conservative elements backing up the liberalism. Without those you just have rationalist meetups--great for writing computer code, not much good against thugs.
I think liberalism is going to go into a long recession and make a comeback after the populists and communists have had another shot and left another mountain of bodies. But I'm a pessimist.
I haven't read any Carey, not so much because of the kinky sex, but because I bounce *very hard* off Le Exotique Language which is a this-world language with the serial numbers filed off. In her Kushiel novels, it's French. Not even a fantasy version of French, straight-up French. Though possibly not very grammatical French, I think Terre d'Ange should be "Terre des Anges" if it's supposed to translate out as "the Land of Angels".
And looking at the Wikipedia article, another of her novels has a character named "Loup Garron". Uh, you mean, like "loup garou" only with a consonant swapped in for that terminal vowel? Now, that *might* be a Clever and Witty Pun, but it strikes me more as a Pratchettian Pune*
Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie). Though she's not the only one; I can't remember who the author was (sometimes a failing memory is a blessing) but I did read one deathless work of art where the main character came from a particular culture that was Fantasy French and - get this - the national anthem-cum-dance of that country was the Leur de Lys. Yes, as in "fleur de lys" with the "f" chopped off.
"Nobody will ever notice the clever substitution I did there!" my eye.
*“WHERE DOES THIS TRAVESTY TAKE PLACE? HO. HO. HO. Albert gave up. “Well, Crumley’s in The Maul, for one. Very popular, the Hogfather Grotto. They always have a good Hogfather, apparently.” LET’S GET THERE AND SLEIGH THEM. HO. HO. HO. “Right you are, master.” THAT WAS A PUNE OR PLAY ON WORDS, ALBERT. I DON’T KNOW IF YOU NOTICED. “I’m laughing like hell deep down, sir.” HO. HO. HO.”
― Terry Pratchett, Hogfather
> Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie).
Katharine Kerr's Deverry novels don't quite invent fantasy languages, but they clearly show the author's interest in historical linguistics by postulating some plausible developments of historical languages. (In the fiction, the humans have migrated from the Gaul that historical Julius Caesar knew into a fantasy world.)
[There is also some kinky sex. The author seems to grow out of it after the first two novels or so.]
It's a refreshing break from the much more common case of fantasy authors postulating linguistic "developments" that are downright impossible, like Philip Pullman deriving "anbaric" from "amber".
First two novels of Deverry. Ordered on Amazon. Thank you.
I don't know if this will have any impact on your distaste for the practice, but the almost-but-not-quite-like-Earth stuff isn't an accident. The setting for the Kushiel series isn't another world, it's a slightly skewed version of our world, with fantasy elements and an alternate history. French is used in Terre d'Ange because Terre d'Ange is France. Look at the map! https://kushiel.fandom.com/wiki/Terre_d%27Ange
I don't have any defense for it being bad French; I don't speak French, and whether it was Carey trying to do something clever with language having developed differently, or just errors, it went over my head.
I read the first one. It was very well done, had a very original concept, but I couldn't keep track of all the pseudo-French names. I wound up drawing a diagram. Next book I had to do that for was Romance of the Three Kingdoms, which tells you something.
I mean, she had an original concept, went to some places nobody had gone before and probably nobody could now, did a lot of politics and created some original villains...she hadn't Tolkien's talent as a conlanger, but was better at character development, so I guess everyone should play to their strengths?
What translation of Romance of the Three Kingdoms did you read?
I was immensely disappointed to see the note, in the introduction to Moss Roberts' translation, that the various names by which the characters are referred to have been consolidated into single standardized names for the benefit of the modern English-speaking reader.
Brewitt-Taylor, it was available electronically and I can't carry a book that size around with me.
You recommend Yu Sumei? She's actually a native speaker (though obviously not of Middle Chinese) but the consensus was Moss Roberts had the best one.
That's so funny! I feel exactly the opposite, and with equal venom!
When I picked up Cloud Atlas and saw that it began with virtually impenetrable made-up dialect, I wrathfully snapped it shut again and slammed it back on the shelf. Fuck you, Authors Who Insist On Making Their Writing Laborious To Read! I want writing that is “simple” enough for me to instantly recognize the concept of (virtually) every word by its shape, because that's what fully literate reading is - character recognition rather than having to phonetically sound out a barrage of novel and strange and often counterintuitive words like a preschooler.
Writers who write beautifully do so with concepts, not goddamned overly elaborate sounds. Sounds are for musicians!
I'd actually argue Tolkien is a counterexample, and everyone tried to copy him without his specialized background and wound up sounding stupid. The different sounds of the dwarven, elvish, and human names give the feeling of different cultures, and the use of Anglo-Saxon or otherwise Germanic-sounding names gives it an 'old-timey' feel to an English reader. All of this of course drew on Tolkien's work as a professor of comparative languages ('philology' at the time); the guy was famous for rehabilitating 'Beowulf' as literature. (All the other critics were upset about the monsters.)
I think the way names and other things sound in your head as you read actually does have an impact. It's part of the reason older novels are so hard to read; people don't talk like that anymore and it sounds stilted. It's more severe for poetry, where the sound of words is a big part of it, but prose is affected as well.
It is, of course, also possible Cloud Atlas is awful.
I haven't read The Lord of the Rings, and, in the words of Bojack Horseman's Mr. Peanutbutter, "I don't think I'm gonna!" (1)
I tried LotR and immediately put it down again. I read The Hobbit in school and utterly LOATHED it. I hated the character, I hated the meandering, I hated the prose craft, I hated the fucking stupid indulgent poems, I hated *ALL OF IT.*
One of my favorite YouTube literary essayists, The Little Platoon, extensively quoted Tolkien in his like 12+ hour commentary on Rings of Power and I didn't hear anything whatsoever that made me want to read the work he was so reverently quoting.
Peter Jackson's extended cut Lord of the Rings trilogy is great art. Based on the admittedly very small amount of Tolkien I've read, I don't think there's any remote chance that my experience of the books could be an improvement over my experience of the movies (I feel the same way about Game of Thrones). Sometimes - rarely - adaptation into a different medium serves a particular story better than its original format, and LotR is my go-to (admittedly ignorant) example.
(The Last Unicorn movie adaptation is my go-to informed example, but I don't know if it counts when an original author is the one adapting the work into the different medium, as Beagle did with the screenplay.)
> "I think the sounds of the way names and other things sound in your head as you read it actually do have an impact. "
I agree, I just have a low-ish tolerance for a high volume and complexity of novel "sounds" being tossed at me when I'm trying to immerse myself in a story.
(1) https://www.youtube.com/watch?v=Q_C-LQiPi6g
I'd be curious what you think of Patricia A. McKillip.
I haven't read any of her yet. Is there somewhere I should start?
It does depend on the language. Someone who hurls chunks of undigested worldbuilding and dialect at your head on page one will elicit the same response from me as from you.
I think that's part of why "Lord of the Rings" works, it starts off with the Hobbits in the Shire who are familiar enough to lead us in gently to the world. Everything is in English, the few unfamiliar terms are generally as unfamiliar to the Hobbits as to us so the explaining goes down easily. Gossiping (and back-biting) in the pub about the neighbours is something everyone can relate to 😀
"Argh! Invent your own fantasy languages, for Pete's sake! Even if it has Too Many Apostrophes! (Tolkien has ruined me in this regard, I cannot tell a lie)."
...ever read Stephenson's _Reamde_?
No, but on general grounds I tend to avoid Stephenson. Not quite sure why, just picked up a general impression that I wouldn't much like his work.
Did try Too Like The Lightning by Ada Palmer and again, bounced off hard. I can generally tolerate 18th century English prose style but that was a little *too* mannered right from the off for me.
"Menard's archaic style —foreign at last — suffers from some affectation. Not so that of the precursor, who manages with ease the ordinary Spanish of his time."
The specific reason to seek it out is something called the Apostropocalypse: the longest treatise on the subject I can find, in the form of a dialogue between two or three characters.
I found Reamde to be great fun. OTOH, I'd be very surprised if our tastes in speculative fiction were close enough to bet $20 on it. I assure you, though, Reamde is no wretched hive of 18th century English prose. Rather, it's a den of malware scamming, Russian mafia, Canadian trekking, Chinese intrigue, a terrorist who'd twirl his moustache if he had one, and an MMO.
I read this post back in the day, and I probably just shrugged and moved on when I saw "Jacqueline Carey", but now I'm intrigued. Is Scott really referring to the relatively obscure author of the "Kushiel" books, etc.? I've never read her–would you (or anyone else here) say that her works contain extraordinary moral wisdom?
Yudkowsky liked the Kushiel books and brings them up in the Sequences, probably because he's into the kinky sex part. I couldn't get through more than twenty pages of the first novel, which (from memory) starts in a brothel and goes exactly where you'd expect it to go.
And how did you know the novel (or trilogy?) went exactly where you expected it to go after only 20 pages?
I of course realize that the question comes from a weak position, as you can simply Google the plot and claim that you somehow gleaned the major plot points, themes, and the subversion of several genre tropes from the 20 pages you read before you reply.
Nevertheless, my face is currently doing this: 🤨🧐
(P.S. - Had you read the books, you would know that kink takes up a surprisingly small page count compared to politics and adventuring - certainly not enough to satisfy someone primarily reading for porn. But sure, it went exactly as you thought it would.)
I don't read a ton of 'romantasy' (although I just started the Sarah J. Maas series because my doctor, of all people, recommended them) but you've now got me intrigued enough to check out this Jacqueline Carey series (though I forget the name and I'll have to look it up).
Possibly because of things like the Anita Blake book series, and then the Merry Gentry series (where Hamilton just went "to hell with it, I'm an erotica writer now").
Oh, I was so disappointed (and angry) about the turn the Anita Blake books took! Interesting twist (what if all the monsters were real? who you gonna call?), decent characters (though Anita was a little overpowered even from the start) and some promising moral conflicts.
Then Hamilton threw it all away to turn Anita into a Mary-Sue and just collect a harem of different were and other supernatural characters for mildly kinky sex. Dammit, I wanted to see the tension between "He who fights monsters should see to it that he himself does not become a monster" as Anita got deeper and deeper into the supernatural side of things, and her original system of morality and fast-fading identification with unpowered mortals, and instead Hamilton decided "Anita solves all problems by fucking her way out of them".
That kind of thing does predispose one to "Oh, this is set in brothels. Uh-huh. And prostitution is a sacred art in this world. Well, gee.
And the main character heroine who is going to solve all problems and do everything right to fix her society is a sex worker. But of course! And here we go with our first kinky sex scene. Yeah, I can see where this is going and I'm not interested" perceptions.
I didn't mean the entire novel went exactly where I expected it to go, I meant that in the first 20 pages there was BDSM in a brothel, which caused me to stop reading because I was on a train in public.
After that, I didn't pick up the series because it hadn't grabbed me in the first 20 pages, plus I'd recently read that EY short story about child-eating aliens where legalised rape is mentioned with zero context and I thought his preferences were probably very unlike my own.
Felt like throwing in, from opening in a brothel I'd expect it to go either to a murder-revenge storyline, or a political drama featuring an elaborate blackmail scheme. Or both.
Well, now that the OP properly cited his source, I can confirm Scott was indeed talking about the author of the Kushiel books:
“I credit three authors with giving me a visceral understanding of active, presence-rather-than-absence Good: G. K. Chesterton, C. S. Lewis, and Jacqueline Carey. Two of those are very religious and write quite consciously from a Christian perspective. The third writes about kinky sex. Go figure.”
Having read the Kushiel books many times, I can absolutely see how the heroism depicted in the story could inspire a sense of “active” good. Carey's protagonists are very much in this category, especially Phedre (of the first three books).
"through 90% of Christian history G. K. Chesterton and C. S. Lewis probably would have been burnt at the stake"
Depending on period and country, quite possibly. Henry VIII would have burned them both, because he has a track record of equal opportunity heresy hunting: Chesterton because he was Roman Catholic, Lewis because he was the wrong kind of Protestant.
"as opposed to the horrifying sadists who actually got the “St.” in front of their names"
Which ones? I would like some names so I can add them to my list of "yes ha ha ha yes!"* patrons or possible patrons 😁
* As per The Onion: https://store.theonion.com/products/cartoon-sickos-sticker
St. Julian the Hospitaller, patron saint of clowns and murderers?
Is it the murder or the clowns that is the objectionable part? Because clowns, yeah we all know they're deeply evil 😁
I mostly found the combination interesting, although on closer inspection it looks like Catholicism often attributes broad and seemingly-mismatched patronage portfolios to saints. I've also heard quips that being patron of both clowns and murderers makes Julian the patron saint of Juggalos.
There's also Julian's (legendary?) biography, which features a formative moment when he "accidentally" murders his parents in their sleep after mistaking them for his wife and her lover.
"What is a Juggalo?
Saved if I know
'Cause I'm up with Christ
And I'm up for life, yo"
Yeah, we tend to pick saints and go "Here. Now you are the patron of aprons" or the likes. I always appreciate St. Anthony of Padua, famed as a preacher and Biblical scholar, getting the job of "Okay, every lost set of keys, glasses, and 'help help I misplaced that important document I need in the next sixty seconds oh shit!', that's for you, Anthony".
Well, there's so many saints and so many things people wanted to pray about in the Middle Ages that I guess getting them all matched up neatly would be a job for St. Jude, patron of hopeless causes.
Question about the AI Safety Stuff: how useful actually is getting new people into the field if the forecast is really "superintelligence by 2028?"
I've been following AI discussions here and elsewhere with interest. I believe AI safety is very important and I have quite a bit of relevant background (lots of math, some computer science), but I still look at something like that and go "how could I possibly get to the point of doing relevant work in a new field in time to make any difference?"
I don't know myself, but the big funders still devote a lot of time and money to training programs, so they must think there's time to help.
Silicon Valley likes to "move fast and break things", so I don't think anyone is imagining a PhD. This is more the startup thing of "take a boot camp, learn on the job". There's also not that much to learn - AI isn't that safe - and what research has been done is scattered across a bunch of different paradigms such that you can get to the frontier of one relatively quickly.
If you're interested, I would apply to https://www.matsprogram.org and let them decide whether you're worth their time or not. One person's list of necessary skills is https://www.lesswrong.com/posts/uKPtCoDesfawNfyJg/how-to-become-an-ai-safety-researcher .
"I don't know myself, but the big funders still devote a lot of time and money to training programs, so they must think there's time to help."
As someone working in AIS: This is an inadequate field. I strongly recommend against using adequacy-based-deduction in it. Don't just trust that the funders have good reasons for doing things.
Other training programs:
https://www.aisafety.camp/
https://pibbss.ai/
https://sparai.org/ (website down at the moment)
https://www.cambridgeaisafety.org/mars
https://www.lasrlabs.org/
As a CS PhD graduate, I'd love to apply to MATS, but quitting my job for a 10-week program seems extreme no matter how much I care about AI safety (the tech job market has been awful since late 2022 and shows no sign of significantly recovering). I guess I wish there was something in between the AI safety virtual course and a full-blown 10-week in-person program.
AI Safety Camp: https://www.aisafety.camp/
SPAR: https://sparai.org/
MARS: https://www.cambridgeaisafety.org/mars
Thank you for the info!
I'll strongly consider applying to the summer MATS program. The timing works out quite well for me, though being in-person in California is very close to being a deal-breaker, even for a relatively short program.
(I notice their pre-application link is currently broken, though that might just be because applications for the summer aren't open yet.)
Is 1942 too late to get into nuclear safety? Is 1883 too late to get into automobile safety?
Some people still have the Yudkowsky scenario in mind where a superhuman AI comes along, foom happens, and then it's an omnipotent god and does whatever it wants. If that's the real scenario then yeah it's too late to do anything so let's just sit around and hope it goes well.
But I'm not convinced that's the real scenario, I think it's starting to look a little dated. I think we're heading for a scenario with superhuman (in some important ways) AI but without foom. In which case life goes on, but with these powerful, unpredictable and smart-stupid tools in the hands of people who have no idea what they're doing. This raises a whole bunch of problems which all the work done on AI Safety so far is poorly equipped to answer.
The problem may turn out to be not that 2025 is too late to start thinking about AI safety but that 2010 was far too early.
A lot of this comes down to what one means by "superintelligence," I suppose. As I understand it, various AI models already display better-than-human performance at narrow, specific tasks, but these aren't considered superintelligent. The idea of "superintelligence" (again, as I understand it) includes general intelligence in the package: it's an AI that displays substantially better-than-human performance at ALL (or almost all) cognitive tasks. If people are using the word in a weaker sense than that, then there may be quite a bit of time after the development of "superintelligence" for the safety folks to catch up. But if the actual prediction is something that can outperform humans (almost) anywhere, that's a different story.
To be clear, I agree that even such a superintelligence need not imply a Yudkowskian foom scenario where it bootstraps itself to some unimaginably high level of competence. But I think it *does* fairly strongly imply that the window for alignment and most other safety measures has largely closed. Either we live in a universe where ASI is safe under the level of precautions taken (in which case things turn out broadly fine) or we live in a universe where the precautions are insufficient (in which case it's probably too late to add more). In particular, if the existing safeguards are even minimally adequate--that is, if the ASI is under somebody's control, and the work it does is reasonably trustworthy--they can simply use it to strengthen the safeguards, which it can do better than humans can. If the safeguards aren't sufficient for that task, I'm dubious that external efforts to improve its safety would do better. Though I suppose I can imagine an on-the-edge scenario where the ASI is contained but not fully trusted, and having people capable of evaluating its safety proposals is useful.
"Is 1942 too late to get into nuclear safety?"
Actually, I think this is a great analogy. 1942 wasn't too late to get into nuclear safety in the sense of nuclear *policy* safety. It was not even too late to get into nuclear *engineering* safety. But there was real concern among the researchers on the Manhattan Project that a nuclear explosion--possibly any nuclear explosion--could start a runaway fusion reaction in the Earth's atmosphere. Fortunately we lived in a world where that wasn't the case, but it was a real concern. And 1942 was almost certainly too late to get into THAT aspect of nuclear safety. Either we lived in a world where nuclear ignitions were broadly safe (and there was no need) or we did NOT live in such a world, and only the people already working on the Manhattan Project (or in a position to readily get hired on) were going to have much chance of realizing it and preventing nuclear Armageddon.
You could call this something like "existential nuclear safety." Is it safe (in the sense of not immediately killing us all) for nuclear weapons to exist in the world in the first place? If the answer is no, then the first live nuclear test is distinctly too late to discover the answer. ASI is more complicated, because there are plausibly far more relevant variables (that is, the answer could be "yes" for some designs and "no" for others). But the first time somebody boots up an ASI is an inflection point similar to the first time somebody sets off a nuke: you really, REALLY ought to know the answers to your existential safety questions *beforehand* and with quite high confidence.
Right. I think the word "superintelligence" has been thrown around too loosely for too long and we don't have a good idea of what it means. Over the next ten years it's likely we'll need to reconceptualise what we mean by intelligence.
I now find it easy to imagine a computer that is "intelligent" in the sense that it can ace any traditional test, coming up with better-than-human answers to any question that you might care to ask it. But it's still just a tool, it's still just a very fancy function mapping inputs to outputs, it's not the same category of thing that a human is.
There will still be an important thing that separates us from computers, but "intelligence" won't be the right word for it. Intentionality? Agency? The ability to have desires and an agenda of its own. We lack the vocabulary to even talk sensibly about it right now, but a lizard has it, and an LLM doesn't.
I am...significantly less convinced than you are that a *human* is anything other than "just a very fancy function mapping inputs to outputs," at least in practice[1]. The main, obvious difference is that a human is *constantly* receiving inputs and *constantly* producing outputs. ChatGPT and its kin get their inputs in discrete chunks with arbitrary gaps in between, and produce outputs accordingly. But that's just a design decision. It would be perfectly feasible even with current tech to create something that's constantly taking in inputs and constantly producing outputs (see the toy sketch after the footnote). Would that also be "not the same category of thing that a human is?" I'm genuinely unsure.
I haven't actually played around with any existing LLMs at all, but I've read a lot about other people doing so. It does kind of look like they're missing some key things, but it's a little bit difficult to put my finger on what. Goals and goal orientation? Robust models of the world? Common sense? A sense of continuity of self? I think you could find humans who largely lack any of those things (though the last would be a very unusual human), though maybe not all of them at once. At any rate, lacking the ability to fully pin down what LLMs seem to be missing, I have much less confidence that they will *keep* missing it as they advance.
[1] Which is to say, obviously we have internal experiences, but the practical, outward effect of that is merely to inform what outputs we produce for a given set of inputs.
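To make the design-decision point concrete, here's a minimal toy sketch of the "always-on" loop I mean. Everything here is hypothetical: generate() is a stand-in for whatever model you like, and sense() stands in for a continuous input stream (chat, sensors, logs).

```python
import time
from collections import deque

def generate(context: str) -> str:
    # Hypothetical stand-in for a real model call; it just echoes
    # the tail of its context so the sketch runs as-is.
    return f"response to: ...{context[-30:]}"

def sense() -> str:
    # Stand-in for a continuous input stream.
    return f"observation at t={time.monotonic():.1f}"

history = deque(maxlen=100)  # rolling context window

for _ in range(3):           # a real "always-on" agent would loop forever
    history.append(sense())
    print(generate(" ".join(history)))
    time.sleep(1)            # act, then perceive again
```

The only structural difference from the chunked chat interface is that nothing waits for a user to hit enter; whether that difference matters for "category of thing" is exactly the question.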
as someone who has had a "psychotic experience" thanks to Covid-19, in my humble-not-expert opinion, I think I lost a sense of "continuity of self" during that experience. during the worst of it, I felt like I was falling into a "cognitive black hole"... in the sense of information being sucked into nothing-ness. my mind went in constant bizarre, tangential directions. trains of thought would keep starting but then get forgotten as a new train would begin. I wasn't hearing voices or seeing things, but there was just this constant improvisational "switching" among things.
right now, in hindsight, I could spend all day trying to describe it and still fail miserably. it's almost a paradoxical thing to "remember" it....... because how can you "coherently relate" to something so incoherent?
again: I'm a normal human, but I *do* have a stressful physical disability. and the stress of that physical disability + Covid-19 => psychotic 3 days
Several reasons:
In worlds where we live, we likely push out the date via some intervention that looks unlikely now, so starting now means better outcomes for those worlds where we made the intervention.
Current AI Safety practice is mostly about capabilities. The number of people who are actually working in "prevent the end of the world AI safety" is likely less than 300. Which suggests that:
You could potentially get an accelerated support infrastructure (see: lightcone and what they're doing)
The low hanging fruit is likely unplucked.
The "existing literature" is still very small, so it wouldn't take years for you to catch up.
I will caution that AI Safety appears to be more insight constrained than talent constrained, aka we need someone to have pretty novel insights to make progress, and it's not a matter of "just" fleshing out models and their implications.
If you don't believe in doom, it's probably true that AI Safety would still be useful in "post singularity" worlds, where in theory AIs are perhaps subservient but not useful unless given safe commands. I don't believe in such a world myself and don't see how it's likely, but it is a reason.
" The number of people who are actually working in "prevent the end of the world AI safety" is likely less than 300. Which suggests that:
You could potentially get an accelerated support infrastructure (see: lightcone and what they're doing) "
So I don't think a small field by itself implies very much about how easy it is to get resources: it's less competition, but also almost certainly less funding. The question is how the availability of funding stacks up against the availability of talent: if it's funded out of proportion to its size, THEN getting one's foot in the door becomes easier. But the reverse is just as easy to imagine: lots of interested talent held back by a lack of funds.
Regardless, your points about the low-hanging fruit and the ease of catching up with the existing literature are well-made. I'll certainly at least look into it.
You might not be able to do anything about X-risk, but there are a lot of lower-scale harms that I think are a lot surer to happen, and you might be in a position to have some influence there. For ex: for at least one personal AI buddy (those AI "companions" that some fall in love with), the company that offers the service collects all the users' interactions with their buddies. Company is accumulating a data set that will be ideal for figuring out how best to influence people to do things, buy things, believe things, vote for things. Seems like AI is very well-suited to doing the pattern recognition kind of thing that's needed in order to become maximally influential. That's one of many very dangerous possible uses of AI, and it's already happening.
Metaculus has AGI by 2028 at 35%.
I hadn't looked at Metaculus for this, but it's comforting to know that this is within 5% of my gut estimate.
Hmm, good point. I don't have a lot of faith in prediction markets, but in this domain it's not really clear that there are any better estimates to be had. And 35% manages to hit the sweet spot of both being quite alarmingly high, but also low enough that planning for the other cases is clearly sensible.
Anyone here following the story of the EA/rationalist-adjacent(?) group of people caught up in a sequence of murders? https://openvallejo.org/2025/01/27/suspects-in-killings-of-vallejo-witness-vermont-border-patrol-agent-connected-by-marriage-license-extreme-ideology/
It seems to be a nice example of the Voltaire classic "He who can make you believe absurdities can make you commit atrocities."
This whole thing just makes me ask the same question as I've asked myself many times before: if the LW scene (or Bay Area rationalism, in general) consistently makes humans develop completely bugfuck, even lethal, ideologies like this, isn't this like the *last* crew we want to put, either implicitly or explicitly, in charge of AI alignment? I mean, the militant kill-all-humans veganism stuff is really not a completely illogical extension of animal-rights views common among rationalism-connected EA people in general!
I wonder whether the craziness is more correlated to the LW community, or to the Bay Area in general. Are there any famous murder-rationalists in New York, or in Berlin?
" if the LW scene (or Bay Area rationalism, in general) consistently makes humans develop completely bugfuck, even lethal, ideologies like this"
I don't think it is, at least not as currently constituted. Now they go for good old-fashioned money and swindling scandals.
A lot of the stuff being talked about here is ancient history in one sense; from the early days when it was the Wild West and anyone and everyone could turn up and start working on 'save the world from AI' etc. The complaints of Vassar and Ziz were that even back then things were starting to get too cosy and settled-down and normie, and they were betraying their founding principles and becoming Just Another Talking-Shop.
Now the entire scene is even more respectable and mainstream, so the sex cults and 'split your brain in two' murder gurus are very thin on the ground.
Still, you've got a basic pipeline in this case: Read LW stuff -> ruminate on LW stuff -> ??? -> murders and insanity. Regardless of the exact level of how common it is, it at least leads to questions on whether we should directly expose the AI to this crowd in the sense of them trying to get it aligned to non-crazy stuff!
Look, I really sympathize with your basic question here – I think letting Yudkowsky set the moral parameters of a god-machine would be a catastrophe in itself – but this pipeline is clearly not real. Ziz was already an unhinged psychopath, it wasn't ruminating on lesswrong concepts that set him off. He would have started a Transon Family one way or another regardless of the existence of MIRI/CFAR.
"Transon family" is a great line. I have nothing else to add.
I happened to read about it yesterday and lost some respect for the “rationalist” community as a result. Not that having insane people who associate with your ideology is uncommon, but that they seemed to be well known on LessWrong and their insanity respected for fear of being confused for not respecting their gender(s).
For context: I was reading the “PSA” post about them on the top of LW and the person summarizing things had dedicated an entire 2 paragraphs to validating their complaints/accusations about the police doing some pretty terrible stuff (some pretty serious sexual assault and humiliation). The paragraphs were something along the lines of: “This seems to fit with what I’ve heard previously about the police and they have no reason to lie, so I find this plausible.”
Like, at some point, (probably somewhat before murdering multiple people) we should stop giving someone the benefit of the doubt and stop considering their statements in the most charitable way possible. This is literally what allowed Ziz to skip town after the group stabbed their 80-year-old landlord straight through the chest with a sword. Ziz allegedly needed medical attention, then fled from the hospital and California before the police could detain them. With the caveat that everyone is innocent until proven guilty, Ziz then went on to murder or contribute to the murder of multiple other people, including probably their parents.
It was 2022, Sol. Lots of people were still Floyding it up big time. In tons of public places you still had to avow total hatred of the police if you were any kind of a pinko (and if you weren't, have fun with the racism accusations). Even if only a tiny proportion of the Medium post's readers knew who she was, the consequences of being any less than maximally charitable toward anyone who averred mistreatment by a cop could have been catastrophic for her personal life.
Also, it was Zajko's parents. I'm pretty sure Ziz didn't kill his own parents, although I invite correction on this point because he's certainly capable of it.
Eh, police actually did seriously screw this one up. The Zizians went to protest a rationalist event. They showed up at the wrong time and there was a kids' club or something at the event center instead. The kids' club freaked out and called the police, someone (incorrectly) accused them of having a gun, and they got charged with child endangerment or something and thrown in jail. Granted they are extremely evil, but in this particular case they were just trying to mildly harass us, which we were pretty used to, and ended up with some really serious charges against them. I think a court eventually threw out some of the charges, but by that time there were way more and much worse charges that they were actually guilty of. I don't think skepticism that the police always do everything right is some kind of unjustifiable weirdo liberal delusion.
I saw Ziz once at some event c. 2017. She was wearing hooded black robes and standing alone in a corner, glaring menacingly at everyone. I don't even know why she attended. Definitely not one of those "it's always who you least expect" situations. The followers were the opposite though, I didn't know them personally but people who did said that before falling in with Ziz they were totally normal, friendly, nice people. Pretty scary, don't fall in with cults, that's all I've got to say.
My gripe isn't so much about the assessment of the protest, which in itself I think is completely fine and probably accurate. The judgement about police brutality and the plausibility of their statements (while perhaps given a *little* too much credence) isn't objectionable on its face. What they were doing was foolish, definitely something I don't approve of, but the treatment alleged is terrible.
My problem was the context with which this assessment is offered. Apparently the goal is to warn the rationalist community about someone the author believed was violent (which appears to be true judging by the later alleged murders/crimes), and rather than letting the facts of the events speak for themselves, a not-insignificant portion of the warning is dedicated to essentially validating the allegations they make by claiming they seem to be true to the author (based on nothing but the testimony of the people being warned about, and how it "sounds" like other cases of police brutality they've heard of).
"Hopefully people reading this can help gather and share information about bad behavior directed at Ziz, Gwen, etc, and not just bad behavior done by them."
This is my personal feeling, but no, no one reading the article should be worried about gathering or sharing information about bad behavior directed at these people, as (at least at that time) they seemed to be a real and present threat to others (which turned out to be probably true). It just seems a lot of this warning is dedicated to opinions (priors with no valuable new information) that paint these characters as victims. They very well might be victims in that case, and have valid cause for complaint, but sharing opinions about that in a post warning others about their murder/attempted murder is definitely the wrong place for it.
Maybe giving some attention to dissenting opinions, opinion as to a claim's plausibility, and fully contextualizing the situation is a valuable thing in general, but it left a very bad taste in my mouth given that the post was trying to warn others about people who have committed some pretty terrible crimes.
Yeah, okay, that's pretty cringe in the context of a murder cult. But I can't actually find the quoted text anywhere in the thread (though it might have been downvoted into invisibility by now).
Quotes are from the Medium article that's linked on the LessWrong linkpost (and one in my other comment from the LW comment section):
https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz
To here: https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509
The section I'm referencing is about 10-20% of the way down, but ctrl+f any of the text I quote as it's unedited. It's an anonymous article, and of course they don't represent the majority of LW/Rationalist opinions, but it hasn't absolutely ruined my opinion of the community or anyone specifically, just a ding to my level of respect in general.
Can the quokka will itself out of being a quokka?
Hey, I'm trying.
Please provide a link to some context, this doesn’t make any sense.
This is about the Zizians, who are basically a group of people following a person named Ziz in a Pseudo-Rationalist, Hyper-Vegan, Certifiably Insane Doom Cult.
From here: https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz (It's the top of less wrong, which I rarely read but was for some reason yesterday)
To here: https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509
To this quote from Gwen:
"Police arrived with their guns out and we were immediately arrested (within about 10 minutes of us arriving), after which we were sexually assaulted (in my case, i was groped and had my pants pulled down and then sat on by an officer in a mounting position), two of us were gawked at by a group of about a dozen police looking at our naked bodies after pinning us down and cutting off our clothes (“Ha! I told you it was a man.”) and one had pictures taken of them front and back naked and pinned down, we had multiple types of serious and/or permanent medical harm intentionally inflicted on us (including hypothermia, concussion), we were starved for multiple days (I did not eat for five days, until bail was posted) having been denied vegan food as a bargaining tactic (as well as water, sleep, hormones, and medical care), were tortured and were repeatedly denied ability to speak with a lawyer (and had eg hypothermia inflicted on us and our clothes cut off (separate events) for asking to speak to a lawyer)."
And this reaction to it in the Medium post by the summarizer:
"I don’t get the impression that Gwen is the sort of person who generally lies about things like this, though I could be wrong. Some aspects of this account sound like other accounts of police misconduct I’ve heard in the US, and I think the overall US prison and jail system is needlessly cruel in many ways, so these cruelties seem entirely possible to me.
No matter what crime someone may have committed (and no one has yet been convicted of a crime), I don’t believe they could ever deserve to be mistreated as they allege, and I think that their allegations of such deserve to be treated seriously."
This specific arrest was in response to this group dressing in all black with masks on (not the pandemic kind, the V for Vendetta kind) and blocking off the exits to a wooded retreat with their trucks; the retreat happened to have a bunch of school children there at the time.
The writer of the summary had this to say:
"My assumption is that they didn’t intend to do anything violent"
Gwen likely proceeded to fake their death (no body), but as far as I can tell they haven't shown up in the past couple of years so maybe this was real.
This group also likely murdered one of the member's parents:
"Pennsylvania police believe, based on a lot of circumstantial evidence and investigation, that one or more of Ziz, Michelle “Jamie” Zajko, or Daniel Blank was very plausibly involved in the murder of Jamie’s parents"
This is all relevant today because, yet again, members of their group committed murder: https://openvallejo.org/2025/01/27/suspects-in-killings-of-vallejo-witness-vermont-border-patrol-agent-connected-by-marriage-license-extreme-ideology/
All this is to say that these are really bad people. Their ideals are evil, and so are their actions. They did a bad thing, and the author proceeds to create a multi-paragraph apology for their actions (you could say, rationalizing them), to doubt the motives of the police who responded to masked people barricading children in a campground, and to claim the accusations against the police are highly plausible (police stations have cameras, and police have been severely disciplined for much less).
They end this section with:
"If anything at all like this happened, however, then the people responsible should clearly be brought to justice. If anyone lied about the protesters in order to incite a more extreme police response, then that should come to light as well. Hopefully people reading this can help gather and share information about bad behavior directed at Ziz, Gwen, etc, and not just bad behavior done by them."
As for why this made me lose some respect for the rationalist community: This post was made as a literal warning that Ziz and associates were potentially violent (and had stabbed a guy through the chest). Yet the author felt the need to dedicate a whole lot of this community warning to an apology for their actions, and insert an opinion on police brutality to boot. I can excuse this if you're Ziz/Gwen's lawyer, and trying to get them the best outcome you can, but this seems to be overly sympathetic and charitable to the motivations of a group of people who are willing to lie, manipulate, incite murder, and outright murder to get what they want.
Read the comments of that LW post and it's more of the same sentiment. The "tools of rationality" being used here, are being used for (what I consider to be) very wrong ends. For example:
"I think it's good that this post was written, shared to LessWrong, and got a bunch of karma. And (though I haven't fully re-read it) it seems like the author was careful to distinguish observation from inference and to include details in defense of Ziz when relevant. I appreciate that."
This assumes Ziz, the murderer/attempted murderer/inciter of suicide, is acting in good faith, which is obviously not the case. We shouldn't value the testimony of people who hold evil ideals, commit evil acts, and manipulate the narrative for their own benefit.
TLDR: Bad people are bad. We shouldn't give their claims maximally charitable treatment.
Thank you!
Yeah, it's the bit about "I totally believe the person accused of violent crime has no reason to lie about how they are a poor little lamb mistreated by the pigs" that makes me go "Oh, brother".
Now, it can indeed happen that even violent criminals are mistreated by the pigs. But my general inclination is "person accused of a crime may perhaps possibly maybe could be also not averse to telling a wee little lie or two".
https://medium.com/@sefashapiro/a-community-warning-about-ziz-76c100180509
"The paragraphs were something along the lines of: “This seems to fit with what I’ve heard previously about the police and they have no reason to lie, so I find this plausible.”
That's been a thing with the rationalists. Reading one account of the brouhaha about L'Affaire Kavanaugh (back in the days when Michael Avenatti was The Credible Avenger leaping in with the Case of the Spiked Punch and Highschool Rape Gang) was eyebrow-raising for me: "This person told me that things like this happened somewhere else before so I believe this instance is indeed true and credible and it happened and that's terrible". No consideration of "but is this instance true? is the person reliable? even if they are reliable, are the other things they said happened true? even if they are true, are they the same as what is alleged in this instance?" which struck me as very odd coming from people who are all about "think about stuff, question narratives, find the facts out for yourself instead of relying on authority" and so forth.
I kind of feel like the best thing for rationalism would be to develop some level of cynicism and awareness people often act in bad faith and are lying to you. I am not saying you have to turn into Diogenes, Orwell, or Palahniuk, but at least read the 48 Laws of Power once, or the Prince or the Art of War if you're into classics. You don't have to do all the evil stuff, just know what they're trying to do to *you*.
I was somewhat aware of that for nebulous reasons a while back, and I think it's just a bunch of crazy people who glommed on to early rationalism and the community around it, went full-on "our guru commands" blind zealotry and followed someone who was batshit insane (to use a technical term) but who, because of the general commitment to niceness, acceptance of the weird and strange because hey, people think we're weird and strange too, and lack of anything approaching authority to kick people out plus horror of appearing to ostracise anyone, was tolerated to an extent.
Not the fault of the rationalists that the nutso developed their own crazy philosophy and went around mind-hacking their followers into being as nuts as them, but it is a danger for the community (less so now, I think, given that they've been burned a couple of times by crazy people and grifters using the 'niceness, tolerance, no hierarchy, no thought is verboten to think' principles to take advantage).
EDIT: Though in this particular case, it's grimly ironic that self-described vegans attacked and murdered someone. An instance of "all animals are equal, but some are more equal than others", if you consider humans are just another animal?
> it's grimly ironic that self-described vegans attacked and murdered someone
As long as they didn't eat the victim afterwards...
Well, that was certainly an interesting read. It's been a while since we had a good cult disaster.
Though, that article really just brings up more questions than answers... What the hell were they even trying to do? What made them suddenly turn into sociopaths? Did they just go insane from reading Yudkowsky? Well, I looked into it, found a post by none other than Scott, and... Jesus Christ...
https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experience-at-and-around-miri-and-cfar-inspired-by-zoe?commentId=4j2GS4yWu6stGvZWs
I've always wondered whether techniques like prompt injection and jailbreaking were applicable to humans as well, and... it seems they are. Takes a bit more work, obviously, but it is possible to break them. I... don't even know what to feel about this. There's this feeling that... I could have been them. It's terrifying.
I do want Scott to make a follow-up post on this, but at the same time... I genuinely fear that if the details of this incident become widely known, it will lead to a national campaign to purge the trans population.
Anyways, there's more details in the comment thread of this post:
https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz
That is scary.
People can definitely be jailbroken. BUT sometimes it works for the good as well. A person helplessly addicted to drugs completely reforms themselves on occasion.
People have enormous breakthroughs in their thinking/believing that enables them to overcome past burdens and seek a different trajectory.
People have positive religious conversions. Isn’t that all jailbreaking?
The dual hemisphere brain stuff is so damn outdated, I'm surprised self-proclaimed rationalists would still cling on to it. But if it's less a rationalist and more 'cult guru develops woo-based philosophy with pseudo-scientific rationalisation and brute-forces impressionable followers into breaking their sanity' it's a lot more understandable.
I mean, come on people, "ooh I'm right-brained so I'm all artistic" versus "I'm a left-brained logician" stuff has gone out with the ark:
https://www.healthline.com/health/left-brain-vs-right-brain#research
It's possible to drive yourself crazy by forcing yourself along the path in line with the dictates (and diktats) of the guru, be that in religious belief systems, pseudo-science or plain batshit insane loons as here.
I think their being trans is pretty far down the list of unusual things about them, media focus would likely be on their actual beliefs.
This is yet more of the division between "trans because of genuine gender dysphoria" set of normal trans people and "decide they're trans because they're fucking lunatics" set of mentally ill pretenders and/or criminals who appropriate an identity of a perceived oppressed minority in order to further grift for "I'm a victim so I can do anything I want and I can't be held responsible for anything, and if you attempt to hold me accountable for my actions, then you are violently attacking me you transphobe" protective colouration.
I know that's the convenient explanation but... everything I've read seems to support that they were genuine. In fact, it doesn't even make sense that they were doing it for the sake of identity politics. They genuinely just hated society and humanity, and the only thing they cared about was for people to fear them. It was an absurdly suicidal ideology, but it had absolutely nothing to do with mainstream culture war stuff.
I think they were genuine trans in the "Silence of the Lambs" (novel version) sense that the guy was trans: convinced that this was indeed the diagnosis applicable to themselves despite anything anyone else might say and unwilling to entertain any opposite view. I think this trans stuff was just all of a piece with the general insanity, and has little to nothing to do with the normal trans people (the very fringe, very online, very radical and very yes it is a damn fetish sort, on the other hand, are right up this lot's alley).
"They hate me and persecute me because they fear me because I'm neurodivergent disabled trans etc. etc. etc. so I have an entire constellation of alternative identities that normie mainstream society hates and oppresses and wants to destroy" is the entire attitude here and justification for them to try and blow it all up. Being a plain cis guy isn't good enough because that just puts it back on them that the reason ordinary people dislike and don't get along with them is because *they* are the problem. Also, I suspect, a heaping helping of "Cis het white males are the problem with toxic masculinity and privilege and entitlement, but I can behave in precisely the same way of entitled behaviour but it's okay because I'm a Real Woman (Trans) and if you object then it's because you're a horrible wicked transphobe, not because I'm an asshole" going on there as well.
I have the impression that you're talking about a long tail of escalating craziness (which probably exists in most groups) as if it were a bimodal distribution of crazy versus not crazy.
...I don't know if there's anything I could say that could convince you, and at this point I don't even know why I'm trying to make things worse by doing this. Maybe it is just better if people believe that there exists an actual divide between "normal" trans people and the kind of trans people everyone hates. But... that would only continue to justify the resentment against genuine trans people.
*sigh* Maybe there really is just no hope for them.
Half of them, including the leader, are trans. You can't seriously believe that the public would just ignore that fact...
As an outsider looking in, it seems that the early rationalist coming-together into community was (maybe because it's in California) very open to/prone to everyone who was alienated from their own community running away to join the Bay Area because that was the promised land. This includes minors, trans, people of variant sexualities, neuro-divergent, you name it who felt (and were) weird and excluded but were intelligent and/or autistic enough to fit in with rationalist and adjacent philosophies.
That means you get a lot of the crazy in other ways than "I'm smart, socially awkward, and interested in niche subjects that nobody else in my family, school, or social circle even understands", up to "yeah this is mental illness and not just 'not fitting in'".
Yeah, this all already happened once, or twice depending on how you divvy things up: hippie culture, and 60s nerd culture. I know more about the latter, and they had the same pathological tolerance culture, which led to things such as the so-called "Breendoggle" and the (related, in a way) fact that, as it turns out, the founders of the Society for Creative Anachronism (Berkeley, CA!) were about 80% pedophiles.
Basically, most people who are excluded from mainstream society are ostracized for *great* reasons, amazing reasons, and autistic people who are ostracized for being awkward don't realize this and think "they must be just like us, misunderstood 'freaks' who are actually not bad people!" This is an error, and it will be an error every time.
I still hold that there's a valid, very dark reading of "The Ones Who Walk Away from Omelas" starting from the fact that Le Guin was going around the same Bay Area 60s nerd circles and *must* have known about many of the controversies, and that Omelas, as it is described, quite resembles a Ren Faire or some other nerd paradise.
That... is actually uncomfortably plausible.
Sounds too dark to be mere coincidence. Until you read Dostoyevsky, whose writings seem to have inspired Le Guin.
https://en.wikibooks.org/wiki/The_Grand_Inquisitor/Rebellion_(Part_2)
I mean, try reading that page, while imagining that it was written by someone who got disgusted by seeing a dark side of the hippie community, and it... kinda makes sense, the same way.
Except, that obviously didn't happen. Unless there was some community in Dostoyevsky's Russia similar to the hippies.
I guess the question is how can you build a community that's interesting and open-minded without attracting weirdoes and freaks?
Simple, but distasteful. Accept people with aspergers, but ban sexual deviants of every kind.
And how exactly are you supposed to figure out who the "sexual deviants" are? Or is that just a code word for trans people?
Honestly, I don't know why you would accept people with aspergers either. Maybe they're ostracized for "great reasons" as well.
...You can't.
Just a note: technically speaking, the term “mental illness” is not a high standard. It includes just about any mental condition that can be medically treated, including anxiety, depression, ADD, addiction, and even gender dysphoria (with transitioning being a treatment).
I think "run an 80 year old man through with a samurai sword as well as other assault causing him to lose an eye" fits "mental illness" well enough without having to disambiguate "gosh, do you mean 'anxiety' here?"
It's more that it attracts impressionable autistic people like me by mindfucking us with somewhat intelligent writing...
It doesn't change the fact that the rationalist spaces represent everything the right sees as leftist degeneracy. This is the culmination of that.
...This can't end well.
I can't blame the rationalists too much, they do intend well and starting off as an inchoate grouping starting to coalesce into a community (the way we did here), they were a bunch of diverse individuals who had in common being intelligent, being interested in specific things that those around them generally didn't care about or had no knowledge of, were non-conforming in some way from the mild to the very far out, and were so happy to meet "hey, you like this too? and you're a weirdo too?" others that they over-corrected for the "since we've all experienced at least some form of rejection and ostracism, we very much will not incorporate that into our community".
This meant that, like all groups, they were both attractive to and vulnerable to the fucking lunatics and the criminal grifters who saw a bunch of pigeons to be plucked, and they didn't have the structures (yet) in place to deal with that; going way back into the weeds about the alleged sexual abuse instances, for example, there seems to have been a positive horror of assuming that anyone was in charge in, like, a hierarchical structure of ultimate authority to enforce decisions and lay down the law. They set up committees to form committees to investigate and report, but there were no such things as sanctions other than "let everyone know that X is maybe someone you need to be careful about". Again, no blame to them, because dealing with the various groups and sub-groups all over the country was like herding cats and any decisions that *were* made seemed to be immediately challenged both by people who felt that they had been victimised all along and nobody was taking them seriously and the people they wanted persecuted were not! being! persecuted!, and by the "X is someone to be wary of" types who were claiming they were the real victims.
This is how every utopian organisation (including early Christianity) discovers that actually, you *do* sometimes need a bunch of big guys with sticks to enforce Da Rules or else you will be eaten alive by the piranhas.
On the note of your final paragraph, I do recommend everyone read the Didache (I know you already will have), because it is hilarious on this topic.
That's about as good a description of it as I can come up with. Well done.
This is one of those things I would have gladly spent 50 hours studying as a kid but now I no longer have the time.
Here's a nice big pile of stuff if anyone wants to go through it: https://drive.google.com/drive/folders/1z21U_q1tESO61iL35ertsIjZYczJrHpQ
Oh, I wish I had more time to study this, too. Or maybe an AI that would generate a summary for me... though the existing ones would probably refuse to discuss violence and other controversial topics.
What I would like to see:
An explanation of Zizian beliefs, and a description of the events from their perspective... but using normal English words, instead of the Zizian lingo.
A timeline of events, maybe with some color coding of which are the facts that everyone agrees about, and which are statements from one side that the other side denies.
*
If I understand it correctly, Ziz believes that each brain hemisphere is a separate personality, and that you can communicate with them individually using some combination of sleep deprivation and closing one eye (so that one hemisphere finally falls asleep, while the other keeps talking). Ziz has ethical opinions on the level of "insanity wolf", where almost everything is extremely unforgivably evil, and if you are good (or rather, half-good, because only Ziz is good), you must strive to destroy it at all cost.
All other beliefs seem to be downstream of this. Anything that Ziz disapproves of is evil and should be destroyed. Everyone who disagrees with Ziz is evil and should be destroyed. Refusing to destroy evil is evil. (Generally, Ziz makes tons of death threats... but mostly doesn't act on them.) At least one of your hemispheres is evil. Ziz is the only known person in the universe who has two good hemispheres. The followers have one good and one evil hemisphere; all thoughts that agree with Ziz are coming from the good hemisphere, and all temptations to disagree are coming from the evil hemisphere. Outsiders have two evil hemispheres. By the way, if you are trans, it's because one of your hemispheres is male, and the other is female. The rationalist community -- a good recruiting ground for Ziz, at least in the past -- is also evil and must be destroyed.
...this is as much as I remember, but there are probably many other fascinating details that I have missed.
Fertile ground for a podcast.
My COVID update for epidemiological weeks 3-4 of 2025.
1. For the 1st time since 1952, the CDC stopped publishing its Morbidity and Mortality Weekly Report (MMWR). The last update was on 16 Jan.
> The Department of Health and Human Services "has issued a pause on mass communications and public appearances that are not directly related to emergencies or critical to preserving health," Stefanie Spear, the HHS principal deputy chief of staff, told CBS News in an email. "This is a short pause to allow the new team to set up a process for review and prioritization."
I guess Stefanie is worried about her job. Likewise, the FDA has "paused" updates on foodborne outbreak probes as health agencies "regroup on communications." That's a bullshit excuse if I ever heard one. They could set up a process for review and prioritization without stopping the current updates. Clearly the Trump administration is now emulating Chinese secrecy practices.
https://www.cbsnews.com/news/fda-pause-foodborne-outbreak-probes-updates-communications/
And the CIA revised its report on COVID origins. They're saying it was a lab leak now.
Kevin Drum wrote:
> And none of us will ever get to see either their assessment or the evidence it's based on. Is it a rigorous review of all the evidence based on a foundation of deep scientific understanding? Or is it a sloppy broadside written by a bunch of naifs who never even engage with the scientific debate and just want to avoid a fight with the new boss. We'll never know. But remember the aluminum tubes?
But the new boss of the CIA is John Ratcliffe, and he has been supporting the lab leak theory for years. So I suspect it's just ass-kissing on the part of his new subordinates. The CIA motto is: "Ye shall know the Truth, and the Truth shall make you Free." — the truth, if your boss doesn't like it, shall make you free to go looking for a new job.
2. The CDC is still updating other data. Their wastewater numbers seem to indicate the current XEC COVID wave has peaked. Biobot hasn't published an update recently, so I don't have a reality check for CDC's data. Biofire's proprietary Syndromic Trends also shows that COVID is dropping as a percentage of detected respiratory viruses. RSV may also be on a downward curve, but influenza is rising fast. XEC seems to have lost steam. Cov-Spectrum shows it topping out at 41% of sequences. The CDC Nowcast shows it at ~47%, but their Nowcast isn't very sensitive to changes in growth rates.
3. This winter, COVID hospitalizations are lower than previous years—and about half of what last winter's were (4/100,000 vs 7.7/100,000). Despite the XEC wave, weekly deaths remain low. I'm not sure if we'll continue to see these numbers now that the MMWR reports have stopped.
4. What's next? LP.8.1 is the only major var that's showing any legs. But except in countries with low sampling rates, it hasn't gotten much above 20%. Maybe it will create a secondary wavelet on the tail end of our dinky winter wave, but I doubt it.
5. While COVID and RSV may have peaked, this winter's Influenza wave is still on a strong growth trend. We've still got three months left in the flu season, and cases are nearly at last year's peak, and test positivity is higher than last year.
6. Three days before Trump's inauguration, Biden awarded Moderna $590 million to develop mRNA-based pandemic vaccines. It's not clear if the Trump administration will be able to cancel this program, but I'm sure Brainworm Kennedy will try.
7. And that sucks because if HPAI mutates to allow human-to-human transmission, we could be in for another pandemic. Rumors are circulating about human-to-human transmission in Shanxi and Hebei provinces (northern China), with deaths in the hundreds. These are rumors only. Other rumors say this is an HMPV outbreak. Who knows?
Anyway, if there is a bird flu pandemic, I'm sure Bhatty, Marty Mak, and Brainworm — the Three Stooges of the health world — will be doing their best to calm public fears while doing nothing of substance.
Slides here:
https://t.co/Jy3E88XFoE
> 1. For the 1st time since 1952, the CDC stopped publishing its Morbidity and Mortality Weekly Report (MMWR). The last update was on 16 Jan.
> Stefanie Spear, the HHS principal deputy chief of staff, told CBS News in an email: "This is a short pause to allow the new team to set up a process for review and prioritization."
Wasn't this an executive order? Are they downplaying Trump's success?
Yes, your interpretation is correct. Signed, a public health data person.
What _is_ going on with scientific communications? I saw some alarming comments in reddit's r/labrats subreddit, and I'm not sure what to make of them.
I could see the sense in trying to halt DEI stuff as quickly as possible, and even to trying to ferret out DEI stuff that was being concealed by name changes, but it sounded like nearly _all_ communications (and funding) had been frozen.
What could they possibly be looking for? Activism camouflaged as research?
Words have power when near lawyers. Trump declared that a few federal agencies can't speak until Elon gets to go over their personal files.
While Elon still won't have actual power, and all the historical bullshit about presidents not being able to fire still applies, it's strong-arming compliance.
Many Thanks! It still seems like overkill. Did Trump silence NIST, along with everything else???
idk about the details, it was a lot of reading and I don't pay that close attention to details; it wasn't everyone, it seemed health-related at first glance, and maybe 5 departments; maybe.
Dissidents. What else would they even care about?
Many Thanks! Dissidents along what dimension? I could well understand stopping a woke activist project concealed under biased research, but it _looked_ like the freeze was far broader than that. Freezing e.g. publications in physics just sounds crazy to me.
Loyalty. You can't risk having people in important positions that aren't willing to follow orders unconditionally. Which means that people with stubborn principles about these kinds of things need to be replaced.
He's signed so many executive orders since his inauguration that I don't know if this was included in one of them. 26 on his first day, and they're still coming.
my count was 45 day 1
was the website incarcerate?
In a previous life, I used to hunt plagiarism for a living.
A while ago I gave ChatGPT this prompt: "Single Player (vs Computer) "Social Deduction" game" and got "Creating a single-player social deduction game where you play against a computer opponent can be an interesting challenge!" etc. pp. Now, to get a feeling for it, I gave Deepseek the exact same prompt from my ChatGPT history and got "Creating a single-player "Social Deduction" game where you play against the computer is an interesting challenge," etc. pp
This makes my spider senses, ill-adapted to this situation, go into overdrive. Is that really, objectively, the most average English sentence that could be said in response to that rather sparse prompt, such that two models would independently arrive at it? Would two different humans independently come up with such similar openers? What is going on?
This is "mode collapse", usually induced by fine-tuning LLMs on human preferences.
https://www.lesswrong.com/posts/t9svvNPNmFf5Qa3TA/mysteries-of-mode-collapse
There's the added detail that R1 may have been trained on ChatGPT output (there are people who have gotten it to claim to be ChatGPT & an OpenAI model & so on.)
The prompt isn't as sparse as you think because you need to include the system prompt with it as well. ChatGPT, Claude, Deepseek, etc. have a long list of instructions before they get to your question. If you prompted the APIs directly you'd more likely get a description of a social deduction game rather than the agent attempting to create one.
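For anyone who wants to check this, here's a minimal sketch of hitting the chat API directly, with and without a system prompt, using the `openai` Python client. The model name and system text are my own illustrative choices, not what ChatGPT actually uses:

```python
# Compare a bare API call with one steered by a system prompt.
# Model name and system text are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
user_msg = 'Single Player (vs Computer) "Social Deduction" game'

bare = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": user_msg}],
)
steered = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a helpful, friendly assistant."},
        {"role": "user", "content": user_msg},
    ],
)
print(bare.choices[0].message.content[:300])
print(steered.choices[0].message.content[:300])
```

Comparing the two outputs lets you test the claim above directly: whether the "interesting challenge!" opener comes from the model itself or from the assistant framing layered on top.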
I don't have an answer to your question, but ChatGPT gave me a response that was different enough not to seem plagiarised. It might be partly coincidence that your responses were so similar, plus the fact that both use reinforcement learning to create a helpful and friendly assistant personality. A human could give responses like "What about it?" or "I don't really like those games", while the LLMs are trained with a particular personality that really limits their potential responses.
It's possible that DeepSeek used ChatGPT output during reinforcement learning, which would make the type of response DeepSeek gives more similar to ChatGPT's.
How hard is it to steal an AI?
Training one AI using the output of another is called "knowledge distillation" (googlable term), and OpenAI provides an API for doing this: https://openai.com/index/api-model-distillation/
Looks like OpenAI is accusing DeepSeek of using their API for distillation, against their TOS. So it seems like OpenAI thinks that DeepSeek stole from them (which I think is a bit ironic, considering how much data OpenAI scraped from other people without their consent).
https://nypost.com/2025/01/29/business/openai-says-it-has-proof-deepseek-used-its-technology-to-develop-ai-model/
Harder than just feeding a bunch of questions to ChatGPT and copy-pasting the answers into your training dataset.
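For concreteness, the naive version being dismissed here would look something like this sketch (assuming the `openai` Python client; the prompts, model id, and file name are all illustrative, not anyone's actual pipeline):

```python
# Naive "distillation" data collection: ask the teacher model a batch
# of questions and dump prompt/answer pairs as JSONL for fine-tuning
# a student. Everything here (prompts, model id, path) is illustrative.
import json
from openai import OpenAI

client = OpenAI()
prompts = ["Explain photosynthesis simply.", "What is binary search?"]

with open("distill_data.jsonl", "w") as f:
    for prompt in prompts:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        pair = {"prompt": prompt, "completion": resp.choices[0].message.content}
        f.write(json.dumps(pair) + "\n")
```

The hard parts are everything this sketch skips: choosing millions of prompts that cover the behaviours you want, filtering out bad answers, and doing the actual fine-tuning run.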
Going meta, I put the above into DeepSeek and it explained to me that the two models were trained on similar data. I understood that from the get-go; that is not the point. The point is: should we not expect variance in training sets and training methods to lead to more variance in the output?
The CAIS course application was open this morning but is locked now. Is it full?
Stupid human question: if AI is so good why don’t they let it write the story for them? Hell, even ChatGPT can write a basic essay that can be tarted up with a little help from a human editor.
> I would be ghost-writing for an ex-OpenAI researcher previously profiled in major publications (eg NYT), who is running a big forecasting project and wants to do a media push around the time they release their results.
https://manifold.markets/MatthewBarnett/will-ai-convincingly-mimic-scott-al
I couldn't resist giving it a try. Here you go... ;-)
Me:
Write a short essay on AGI and the coming Singularity in the style of Scott Alexander of Astral Codex Ten. Don't forget to mention Bayesian priors in the title, mention the hereditary basis of IQ and to disparage post-Christian vitalists in the body of the text.
ChatGPT:
"Bayesian Priors and the Coming Singularity: Why the Post-Christian Vitalists Are Wrong (Again)"
The debate over artificial general intelligence (AGI) and the impending Singularity has, predictably, descended into a cultural Rorschach test, projecting our priors, anxieties, and fantasies onto the tabula rasa of the future. While some envision AGI as a benevolent deus ex machina bringing a golden age of plenty, others see a rogue paperclip maximizer turning humanity into so many raw materials. As always, the truth is likely less dramatic, but it behooves us to unpack these possibilities through the rigor of Bayesian reasoning, lest we fall prey to the intellectual pitfalls of our ideological cousins, the post-Christian vitalists.
To begin, let’s consider the technical premise: AGI is, fundamentally, the construction of a machine capable of performing any intellectual task that a human can do, and likely much more. Here, one’s Bayesian prior should be informed by the hereditary basis of intelligence, a domain that has historically been a strong predictor of cognitive capability. While human IQ is the product of an intricate interplay between genetics and environment, its heritability (somewhere around 50-80% depending on the population) demonstrates that intelligence is not a mystical vital force but a replicable, material phenomenon. If evolution can stumble into it via the blind process of natural selection, we should not assume it’s beyond the grasp of deliberate engineering.
From this perspective, the Singularity—where AGI accelerates its own development into superintelligence—is less a question of "if" than "when." It’s easy to get caught up in the timelines (2030? 2100? Never?) or the probabilities, but more interesting is the philosophical terrain. Here enter the post-Christian vitalists, who seem determined to reinvent the eschatological wheel with their insistence that technology must somehow revitalize the "soul" of humanity. The Singularity, in their view, is either a heretical Tower of Babel that divorces us from our "essential nature" or a sacred rapture that will cleanse us of our fallen state.
These vitalists miss the point entirely. Their criticisms, while occasionally veiled in secular language, betray the same nostalgia for pre-Enlightenment metaphysics that drove past resistance to heliocentrism and Darwinism. In their worldview, humanity is a repository of ineffable purpose that AGI cannot replicate. But Bayesian reasoning requires us to ask: How likely is it, a priori, that humans have some unique, unquantifiable essence? Given the track record of material science to date, the probability seems vanishingly low. AGI won’t "steal" our humanity any more than the printing press or antibiotics did; it will merely shift the locus of value production and force us to renegotiate our priorities.
The challenge isn’t metaphysical but practical. How do we ensure that AGI’s utility functions align with human well-being? This is where the real debates should lie—not in vague invocations of the "human spirit" but in technical questions of alignment, control theory, and robust governance. And here, too, Bayesian priors are helpful: While the pessimists argue that any misstep will result in instant doom, the historical record of technological progress suggests a distribution of outcomes—some disastrous, yes, but many positive or neutral. We should neither dismiss the risks nor succumb to apocalyptic despair.
The Singularity, if it comes, will not be a singular event but a series of gradual shifts. Humans will adapt to AGI much as they have adapted to prior technological revolutions—imperfectly, unevenly, but ultimately with resilience. The post-Christian vitalists may yearn for a humanity untainted by silicon and code, but such purity has never existed. Our history is one of continuous self-reinvention, and AGI will be no different.
So, as we approach this frontier, let’s keep our Bayesian priors updated, our thinking clear, and our rhetoric grounded. The Singularity is neither salvation nor damnation. It is merely the next chapter in the long and unpredictable story of human progress. And if the vitalists don’t like it, well, they can take it up with history.
This is a convincing imitation of someone doing a very bad imitation of Scott.
Where are the puns? I believe that ChatGPT should be capable of making some, but it probably needs to be reminded explicitly to do that.
Maybe this could be done in two steps. First, feed it some of Scott's articles, and ask it to describe the characteristics of the text -- what makes it different from a random text written by a smart person. Second, provide this list of characteristics and ask it to produce a text like that. Maybe even have it write the text in phases -- first the outline, then prepare some puns, and finally write the text following the outline and using the puns where appropriate.
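That two-pass idea could be scripted directly. A minimal sketch, again assuming the `openai` Python client; the model name, file name, and prompt wording are all made up for illustration:

```python
# Two-phase style imitation: first extract style notes from samples,
# then write using those notes. All names and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

sample = open("scott_samples.txt").read()  # a few articles pasted together
traits = ask("Describe what distinguishes this author's writing from a random "
             "smart person's (tone, structure, humor, puns):\n\n" + sample)
outline = ask("Using these style notes:\n" + traits +
              "\n\nOutline a short essay on AGI and list a few puns to work in.")
essay = ask("Style notes:\n" + traits + "\n\nOutline and puns:\n" + outline +
            "\n\nNow write the essay, following the outline and using the puns.")
print(essay)
```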
Fail!
Can anyone explain why the market updated on DeepSeek a month late?
Better yet, can anyone explain the direction of the update? When was the last time software optimization somehow hurt hardware makers (whose hardware has suddenly become more capable, not less)?
Because it's free. This hurts the business strategy based on centralization by Silicon Valley cronies/investors. DeepSeek is particularly hurting Nvidia because it was trained on quite modest infrastructure compared with what OpenAI asks for (last year, OpenAI asked the UAE government for $7 trillion for datacenters).
Which means a $7T datacenter now gives you even more AI than was expected. And it also means smaller players will now be interested in purchasing Nvidia chips for their own datacenters, something that was out of their reach before.
Well, the most likely explanation is that the instances making up the "market" often just learned about DeepSeek right now, but another one is that, as I indicated below for myself, DeepSeek's ascent is really less about the compactness etc. and more about offering a more desirable product design-wise for the end user than a lot of other AI efforts.
edit: and ofc one of the biggest things that seems to have come as a surprise is that China is now a genuine competitor in *this* field - once you've factored that in, you have to automatically price in China being a giant in general and this surely not being the only thing on offer
Related question: the news I have seen on DeepSeek seems to very confidently assume that (a) they are telling the truth about their training costs and (b) they didn't just steal OpenAI's (or somebody else's) weights through a CCP-installed backdoor. Why should we be so confident of these things? Have people independently corroborated their claims or are we just trusting their press releases?
I'm not sure, but I think because it's much smaller and has much lower inference costs, and you can test this by getting the model yourself (it's open-weights!) and running it on a computer that would not be able to run GPT.
Also, they listed some of the discoveries they used to train it more efficiently, and everyone agrees they're real, clever discoveries.
AND we can't forget this is but their first iteration wave, or in other words, current DeepSeek is the worst DeepSeek we'll ever have. Quite scary if you're currently getting a big paycheck working for some angel funded AI venture.
I saw a meme on tiktok where they used that "Tony Stark was able to build this in a cave! With a box of scraps!" audio for OpenAI engineers and I never laughed so hard lol
There's definitely and verifiably an improvement in that they can *run* a reasonably high-quality LLM in a constrained operating environment. The bit where they AIUI basically lobotomize the AI and show that if you activate the right lobes for the task du jour it doesn't get *too* much dumber even though it uses an order of magnitude or so less compute, is legit clever and I'm surprised that it works as well as it did, but as you say the model weights are open and people are finding that it works in their own environments.
The question is whether this is also true for the compute used to *train* the model in the first place. It isn't obvious that the cleverness that reduces operating compute will necessarily reduce the training compute, and a priori I'd expect "just as smart as the other LLMs on the block but much more compact" would require *more* effort in training. And I haven't heard of anyone replicating that part yet, so we're basically taking China's word on that.
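For anyone wondering what "activate the right lobes" might mean mechanically: what's being described sounds like mixture-of-experts routing, where a small router picks a couple of expert sub-networks per token and the rest stay idle. A toy sketch, assuming PyTorch; the sizes and names are illustrative, not DeepSeek's actual architecture:

```python
# Toy mixture-of-experts layer: a router scores experts per token and
# only the top-k expert MLPs actually run, cutting inference compute.
# Illustrative only -- not DeepSeek's real architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        weights, idx = self.router(x).topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)              # (tokens, k)
        out = torch.zeros_like(x)
        for slot in range(self.k):                        # run only chosen experts
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = TopKMoE(dim=64)
y = layer(torch.randn(10, 64))  # per token, only 2 of the 8 expert MLPs fire
```

Whether savings of this kind carry over from inference to training compute is exactly the open question raised above.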
The other question is why the Chinese are doing all this, in particular releasing the model weights. That seems like a poor strategy if you're planning to monetize the thing, so what are they getting out of this?
> The other question is why the Chinese are doing all this, in particular releasing the model weights. That seems like a poor strategy if you're planning to monetize the thing, so what are they getting out of this?
Their strategy is similar to Meta's with llama. If you have a state-of-the-art product but a competitor has beaten you to market and has a seemingly insurmountable lead, putting your similar (or even better) product out for free essentially "de-monetizes" your competitors, because users will migrate quickly to the just-as-good-but-free platform. That serves to even the playing field for the next iteration, removing most of the legacy and switching-costs advantage your competitors had.
One interesting thing about this strategy is that it can be spun in two very different ways. To those worried about privacy, security, freedom of choice and concerns about trusting revolutionary tech to a few well-funded companies, you spin it as a Richard Stallman, "bits wanna be free" libertarian stick-it-to-the-man move.
But to the C-suite (or in this case likely the CCP), you can spin it as "Art of War" scorched earth: if you don't get the spoils, at least make sure your enemy doesn't get them either.
I can't help but think you have an answer, but are encouraging the rest of us to think about it first.
(I think I have an answer too, and it's roughly the same answer to "why is Tiktok free?".)
I don't actually have an answer. Well, really, I have several possible answers and I suspect it's not a complete set, hence the question.
One plausible answer is that it's about raising VC money. Release a pretty-good LLM, tell people you trained it really cheap (lie if necessary), then say "...just think about what we could do with some of those sweet, sweet gigabucks, don't you want a piece of that action". What to do when the gigabucks arrive is left as an exercise for the student.
Another is that they wanted to tank the American AI industry, either long-term because the CCP doesn't want America to have AI, or short term because some rich Chinese guy wants to get even richer by shorting NVIDIA et al. So release something that makes it look like OpenAI, Anthropic, et al have all been wasting their time and nobody needs nearly as many of NVIDIA's chips as we thought.
Another is that they plan to make money more or less legitimately in some way that I don't understand but which includes releasing the code and all the model weights open source. Maybe that's a loss leader and the real product has yet to be released.
Any more?
The answer I was thinking of was "farm an enormous amount of data analytics about American users (and any other country while they're at it - why not) and exploit that in some way to either foment political chaos (a la Internet Research Agency), collect telemetry on US internet traffic for cyberwarfare or commerce purposes, or to discover ways to influence US thinking in ways that benefit the CCP".
Another possibility is using TikTok as some sort of privileged platform for actual malware. (Theoretically, *any* foreign-based popular media app could be used, but I get the sense TikTok has unusually established ties to CCP institutions. Plus, it's China, well known for government-commercial integration in ways the US would not prefer.)
Ok, I buy that they did some actual innovative work to make it so small. I still wonder if that work might have been turbocharged by having access to a large set of pre-existing weights that, um, fell off a truck, so to speak.
And indeed OpenAI is accusing them of something similar: https://www.theverge.com/news/601195/openai-evidence-deepseek-distillation-ai-data
In one sense, the DeepSeek models are incredibly efficient by necessity. They had to train on a bunch of H800 chips instead of the newer H100s like the American companies, largely due to the chip embargo. There's simply no way they could compete with someone like OpenAI without massive optimization. OpenAI doesn't need to worry about optimizing, because they can just buy newer and more powerful GPUs that NVIDIA is constantly cranking out. Although it is impressive how much DeepSeek was able to do with remarkably less.
I saw this linked on DSL and it's a pretty good overview:
https://stratechery.com/2025/deepseek-faq/
This is covered in the link:
> Scale AI CEO Alexandr Wang said they have 50,000 H100s.
I don’t know where Wang got his information; I’m guessing he’s referring to this November 2024 tweet from Dylan Patel, which says that DeepSeek had “over 50k Hopper GPUs”. H800s, however, are Hopper GPUs, they just have much more constrained memory bandwidth than H100s because of U.S. sanctions.
I think it's likely to have to do with r1 last week and some sort of additional information coming out over the weekend convincing investors that it's ~real. One possibility is the leak in The Information that the Meta AI infrastructure director told colleagues that r1 might outperform even the next version of Llama, and that Meta formed two teams focused on studying it, or something of that nature (I'm going off of secondary reporting about The Information article). Note that in the past DeepSeek has released models that had inflated benchmark results that decreased when you used ~contamination-proof benchmarks (see https://arxiv.org/pdf/2403.07974 ).
Typically the markets have reacted plenty fast to AI news and so I'm pretty confident that _something_ came out, and I don't think the app alone explains it.
Addendum: the meta explanation doesn't feel entirely satisfactory since meta's stock price has gone up, not down. Though this could just be due to them having less AI revenue/benefiting from open source models more.
secondary source on the information article, since the information itself has a giant paywall: https://fortune.com/2025/01/27/mark-zuckerberg-meta-llama-assembling-war-rooms-engineers-deepseek-ai-china/
I think it's that r1 was a wakeup call: v3 simply feels kind of mid, and it takes a decent amount of domain knowledge about how reasoning models are made to conclude that DeepSeek would be able to inexpensively replicate o1 levels of intelligence. If I can dig up the exact X thread where those details are discussed, I'll post it here.
DeepSeek R1 and the paper about it were released this week, as well as the app. They're the inference time compute reasoning versions of DeepSeek v3 which came out at Christmas.
It's important from technical/product POV because:
1) First open weights inference-time long-chain-of-thought trained reasoning model (so like OpenAI's o1)
2) It was trained extremely cheaply, so is fast and cheap.
3) They published the paper on how it works, and the methods are different from, and simpler than, the big labs'. Rumour has it Meta have stopped their big training run, and are doing it again using the methods in the Chinese paper.
Geopolitical importance is that it means China is now only about 3 months behind the US on AI performance, and ahead on cost. DeepSeek v3 a month ago was impressive, but less dramatic, as they were, what, 9 months behind Meta in open, capable AI?
Cultural importance is that it is free and anyone can try it, whereas you have to pay for other inference time reasoning models.
How good the model actually is I'm not sure, I prefer Claude still, but partly we'll see what happens when people spend more on training the same techniques. Performance and length of thinking go up each generation, and the graph doesn't look like it stops.
I feel like Leopold Aschenbrenner's "Situational Awareness" predictions are becoming more and more spot-on by the day: $500 billion for a new AI cluster, shock and panic about China's AI prowess...just waiting for trillion-dollar training runs and nationalization of AI capabilities now right?
Is it totally clear that the dip in the market today is a direct result of the DeepSeek release?
As some in my local circles have remarked, NVIDIA in particular is a historically volatile stock, and looking at the 6 month graph makes today look not that out of place.
Financial reporting is full of monday morning quarterbacking and storytelling that is just a little to perfect. I am skeptical that the sell off was directly or only due to something related to DeepSeek. Just because articles are written that say it, doesn't make it true.
A lot of other stuff happened over the weekend that the market would be responding to. And the S&P started a sell off on Friday. Additionally, if it was DeepSeek, we'd expect China's market to rise but it's flat for the last 5 days and was flat yesterday. Meanwhile the CAC and DAX are way up the last two days while FTSE is up slightly.
You can look at financial charts and tell any story you want. It's like astrology that way.
Sure, but NVIDIA’s decline was part of a broader tech sell off, which, based off timing and the commentary of the people doing the selling, seems to have been driven by DeepSeek.
I mean, it’s *possible* it was driven by other factors, but if so, what?
Just about all the semi companies except Intel have significant revenue streams from AI products, as do a bunch of other tech companies. DeepSeek showed that a Chinese firm was able to produce an open-source AI model with roughly 1/50th of the resources of state-of-the-art models that still beat OpenAI's o1 on several benchmarks. And they did it with older NVIDIA chips and without a huge data center. Suddenly, NVIDIA, the companies in its supply chain, and its competitors have developed feet of clay.
Deepseek released an app, which received a lot of popular attention, and approx 300k daily downloads, which updated the markets towards it being important.
Starting to think Wall Street might just be dumb.
That was Michael Lewis's conclusion at the end of The Big Short.
What did the prediction markets say about either DeepSeek directly, or the relative US/Chinese AI capabilities?
What might the market behavior have been if Wall Street were smarter? Should there have been a big sell-off a month ago? Should NVIDIA and related stocks have gone higher today? Funds are all working on partial information about what might happen in the future and trying to outguess each other's behavior. Seeming order follows seeming chaos follows seeming order in the markets.
If the stock market was sane, all trading would be insider trading and low volatility
Yup! To paraphrase from something I vaguely remember: "If the efficient market hypothesis were true, the market should be a very quiet place." (to agree with your "low volatility")
Not even the full efficient market; merchant markets are far more stable
I would have expected a market move on the news that DeepSeek can make great models for 5% of the cost of everyone else, and not on the news that the great model that was free for anyone to test and see how great it was made it to the top of the app store.
I'm much less sure which direction the market should have moved, but most of the people in AI I'm following seem to think it should be up.
"Great" is doing a lot of work here. People who aren't super plugged into the AI space can't immediately identify marketing hype and fluff from the real thing. Wall Street didn't know it was a great model until it went viral. You can't fake 300k daily downloads.
It's not *that* hard to find an edge in the market. Everyone who makes a living trading has an edge somewhere. Being as plugged-in to the AI space as you are would almost certainly count as an edge for a skilled trader.
> You can't fake 300k daily downloads.
I feel that this is probably similarly challenging to faking NYT bestseller status.
Mais oui on your above comment about Wall Street being dumb.
However, one can steelman a decision to not sell on news of DeepSeek's technical capabilities but to do so when it jumped in the app store. It's not unheard of for capabilities to be overstated or for a model to do well on benchmarks while being underwhelming to use, etc.
Becoming the top app is direct evidence to suggest that this is something real, something from which users derive real value. Not an absolute guarantee: plenty of weird stampedes in app popularity, fads where people try out the new thing but don't stick with it. But it's suggestive. Much more suggestive of real consumer behavior than an impressive technical demonstration / paper.
And yes, there is an obvious case that Nvidia, if no one else, should be up because more interest in AI equals more inference time compute. The counterargument is that DeepSeek just demonstrated that previous assumptions about training time compute demand could be invalid.
Prices move for lots of reasons though.
You've got it backwards. DeepSeek was able to put out a credible AI for $13 million. Suddenly, there's no need for massive data centers full of Nvidia boxes. NVID's current revenue stream is threatened, and DeepSeek puts its future revenue streams from upgrade cycles in jeopardy.
Suddenly, the big fab companies like TSMC are facing less business.
The revenue streams of companies with DSP products are threatened: AVGO, CDNS, 2454, SNPS, NXPI, CEVA.
The revenue streams of companies in the optical space are threatened: AVGO, TSM, MRVL, CSCO, GOOG, CRDO, ANET, FN, COHR, Innolight, Eoptolink.
The revenue streams of semi companies in the ethernet switching space are threatened: AVGO, MRVL, MCHP, CSCO.
And so on. I've probably missed a bunch.
AI was always a bubble waiting to burst. It may be healthy that NVID lost only $750 billion in its market cap today.
idk, information takes time to percolate. (putting the efficient markets hat on) might be that there are multiple levels of meta here? Like the existence of DeepSeek R1 isn't enough, you need to also know the reactions to it, and the reactions to the reactions, across multiple levels of the supply chain... maybe multiple pieces had to fall together to cause the selloff
(or in other words it isn't even clear to me that this should cause the Nvidia stock to drop rather than rise; Matt Levine has mentioned before how often in insider trading cases the perpetrators correctly determine that the stock would move but get the direction wrong)
(taking the efficient markets hat off) there is a lot of alpha in not being dumb then :D
About the dream of an AI-supported future, where there is ample food, housing and good medical care for everyone, and nobody is forced to do tedious exhausting work to get those things, and some do not have to work at all: How would we transition from where we are now to that? Let’s assume that the AI of that era is utterly benign. It has no agendas of its own, and does a great job of whatever we tell it to do. If we want advice on how to transition to that life it will advise us, but it does not govern us.
So how does it happen? Here are some things that make it hard for me to believe we will transition to anything like the utopia people are dreaming of:
-Wealthy people could already do far more for the poor than they do. Wealthy countries could already be doing far more for the poor ones than they are. Why wouldn’t the things that put a ceiling on how much of our wealth we share still be operative?
-The companies making advanced tech seem to run the way other businesses do: They want profits, they want to grow, they want to keep the power their products give them. There is some evidence of their being concerned about doing harm of different kinds with their products, but that concern looks like it is a far smaller factor in decisions than are the usual business motives of profiting and thriving. There are probably things these companies could do that would nudge more AI-related benefits or beneficial AI itself toward the poor. But is that likely?
-I have not heard or read anything about efforts to use present AI to make life better in poor countries. I’m not sure what all the possibilities are. One that comes to mind for me: In countries with way too few medical professionals, programs that train bright people without medical degrees to use AI for diagnosing people, reading imaging, etc. It wouldn’t be as good as having an MD, but it seems like it would be a great improvement. But is there anybody other than a few dreamers thinking about that?
-I can think of many ways that superintelligent AI could be used to acquire power and wealth. For instance there’s the entertainment/tech addiction route — coming up with new forms of virtual entertainment. There’s designing new medical and scientific gizmos that have AI components. (Surgeons could wear glasses that show the AI the tissue they are looking at. AI could tell them in real time which areas are cancer.). There’s using AI’s pattern recognition abilities to make business decisions — for example, about where to buy or build real estate, or what to invest in. Seems like lots of things in sales that are now an art could be done via AI pattern recognition. If it can recognize cancer, it can recognize a great site to build a new city. So why wouldn’t we move not towards more equality, with everyone having a good life, but towards greater inequality, a la William Gibson?
When I read people speculating about possible end-game AI futures, they always give off a sci-fi vibe. By that, I mean that the game is to imagine us, humanity, but with a few interesting changes, and not to worry much about just how those changes came to pass. But can an AI utopia ever do away with most jobs? How might this happen in practice?
Anyone remember that old game M.U.L.E.? Or how about we just imagine a small island of ten people with a fully functioning mini economy? One actor develops AI and with it takes over the juiciest market, throwing another guy on unemployment. The AI guy is producing more, prices fall, and everyone benefits. The safety net holds and that guy on unemployment gets back on his feet and starts contributing to the general economy again. Let’s imagine that he enters a completely new market.
AI guy takes over the next market, with similar results, only this time the newly unemployed guy goes to work for another non-AI business. Output again goes up, prices fall, and most people are better off. But I doubt that new employee makes out as well. Still, let’s say that he consumes as much, but less relative to his fellow citizens.
We go through a few more cycles. The safety net starts to come under strain. A few great new markets spring to life, but increasingly people find themselves forced to work as employees for less interesting money. But what about the AI guy? He’s always been driven by profit, but what can he actually buy with his money? He doesn’t really need more coconuts, fish, or massages. Still, the fish market looks the tastiest of what’s left. He’ll have it, even if it means throwing 30% of the population on the unemployment line.
The safety net buckles. The citizens vote to raise taxes dramatically, over the objections of the AI guy. He’s asking himself whether the fish market even paid for itself. He disrupts a few of the more innovative companies, just to keep himself amused. Then the day comes when only the coconut market alone seems worth his time even though it employs a full 70% of the population. A rather surly 70%, it must be added. Does AI guy basically pay them to do nothing so that these former employees can continue to buy his other products? What does money even mean by now? What would ever convince AI guy to push this further? Not money. And what’s his status without money to flaunt? Power, perhaps? Yeah, he’s probably hungry for that. And what about those unhappy few whose dreary but essential jobs resist automation? What are they in it for, apart from preventing general collapse should they ever stop working, and the privilege of eating at the same all-you-can-eat buffet as their unemployed brothers?
And in the end, I don’t see how we get to utopia in a capitalistic system.
> And in the end, I don’t see how we get to utopia in a capitalistic system.
Good thing then that modern society has nothing in common with a small island inhabited by 10 people.
If you make up a completely fictional scenario and then proceed to make up how everyone would behave in this scenario, you really shouldn't draw any conclusions for real life.
I think you could have made a better effort to reply than a snide remark. Two can play such a game. If I could show that your “nothing in common” is wrong, will you admit your error?
It was a toy example meant to bring a few essential issues, as I see them, to the fore. If you wish to write a higher-effort reply, ideally along the lines of how you see us getting from here to an AI utopia, I’d be happy to read it.
The onus is on you to show that your analogy holds predictive power over real life. I see no sense in engaging with an analogy that's inapplicable to the question at hand.
> If I could show that your “nothing in common” is wrong, will you admit your error?
No, because it's not enough to show that your analogy holds _something_ in common with reality. You need to show that it's _sufficiently close_ to reality to matter.
> how you see us getting from here to an AI utopia
I have no idea, but that's not my point.
Toy examples vary quite a lot in how useful they are. For instance if I wanted to know what would happen to everyone commenting on this open thread if I put them into a giant highly flexible container and dropped them off a skyscraper, I could get a reasonably good idea by dropping just you, in a garbage bag.
If you think Kit’s toy example differs from the large reality in ways that make it useless, give your reasons. Or you could come up with a model of your own for how things will play out.
If you are not interested in doing either, how about aiming your pea-shooter at John Schilling? I guarantee you’ll be totally impressed by his response.
> If you think Kit’s toy example differs from the large reality in ways that make it useless, give your reasons.
In just about everything. How is that not obvious?
* Our civilization is not an isolated island, and even most countries aren't isolated countries.
* Most countries aren't just a handful of people, neatly sorted into different industry sectors.
* Markets aren't discrete, atomic entities which can be "taken over" completely or not at all.
* There's not one "AI guy", not even one "AI company".
Come on, even most strategy games try to create a better approximation of reality.
> […] how about aiming your pea-shooter at John Schilling?
I don't think I've ever seen John Schilling making up overly simplified fictional scenarios and drawing conclusions from them. He is, however, an exceptionally insightful and articulate commenter, and I can only strive to meet the bar he sets.
> The onus is on you to show that your analogy holds predictive power over real life
Why would that be? The original comment wondered how society could transition to an AI-supported future. I also wonder. I started out with a question, and then sketched out a little world showing why I have this question. I left the fundamental issues unstated, not wanting to insult people's intelligence. I fear I left some behind.
There were no predictions here, no prophecy. Where I come from, it's fine to speculate about possible futures, but there's no 'onus' on anyone wondering if such a future can actually unfold, unless he set out to prove that no such way forward is possible.
Also, where I come from, a proper response to such a question would be to either point out a flaw that, once removed, shows the way forward, or to direct the person to a source that contains the solution. You chose a third way, I suppose.
> I see no sense in engaging with an analogy that's inapplicable to the question at hand.
I'll admit that I laughed out loud at that!
Anyway, now that I've sampled the quality of your thought, I'll stow what little I can and set sail a wiser man, remembering to ignore you in the future. Amuse yourself with the last word, if you wish.
You launch the socialist revolution once the AI is developed.
Seriously, probably you just use democratic mechanisms for redistribution (development paid for by taxes) once the new technology exists. Obviously not perfect but I can't think of anything better. I'm not saying nothing better exists, but I'm not smart enough to think of it.
>-Wealthy people could already do far more for the poor than they do. Wealthy countries could already be doing far more for the poor ones than they are. Why wouldn’t the things that put a ceiling on how much of our wealth we share still be operative?
If the amount of wealth we have increases, then sharing the same percentage will help more people. Make the pie big enough and you can fund a UBI or some other redistributionary scheme that gives everyone enough to live on without any of the big AI winners feeling like they're being taxed too hard.
(I've occasionally joked that the Good AI future will come about by giving everyone shares in the AI-run companies so they can live off of investment income, thus achieving full communism in the most capitalist way possible.)
>-The companies making advanced tech seem to run the way other businesses do: They want profits, they want to grow, they want to keep the power their products give them. [...] There are probably things these companies could do that would nudge more AI-related benefits or beneficial AI itself toward the poor. But is that likely?
That's the cool thing about capitalism (in theory) - you don't need companies to care about doing pro-social things. You just want them to make as much value as possible, in their typical amoral greedy way. You *want* them to be making cool AI-powered cancer scanners or whatever, because that's what "creating value" means - more stuff, better stuff, cheaper stuff. Then you tax that value and redistribute it, so that everyone can buy the new stuff.
(And also maybe ask the friendly AI to come up with a non-capitalist model for well-known market failures like housing or health care, but the capitalist model covers a lot of stuff.)
>-I have not heard or read anything about efforts to use present AI to make life better in poor countries. I’m not sure what all the possibilities are.
There was a recent post on the SSC subreddit discussing using ChatGPT as an English teacher in developing countries. IIRC the results were "not as good as an actual human teacher, better than not having a teacher."
>"not as good as an actual human teacher, better than not having a teacher."
Such a good way to see small improvements over no improvements. Hard to not accept this as plain fact.
Well, I get the idea — if there is a giant superabundance of stuff, whether the stuff is money or food or housing, enough will filter down to the destitute to raise their standard of living greatly. Do you in fact think that’s the way it’s likely to play out, assuming AI gets smarter and smarter but stays in the obedient helper role? Or are you mostly debating? I do not feel knowledgeable enough about economics etc. to debate this. I am just speculating, without an ax to grind. If you are in debate mode, would you be willing to move to speculation mode?
Other ways I can see it playing out that seem plausible to me:
-AI development companies become so rich and powerful that they are running the country, with the US president more a figurehead. Companies then either merge or fight, till there is one entity that makes the tech and has substantial control over all the vital things that now have AI as essential parts of them — research, electrical power, medical care, transportation. Now that company pursues its own agenda. It does not seem implausible to me that its main goal would not be spreading the wealth but accumulating power, harming enemies, etc.
-AI’s ability to do almost anything is not used to produce lots of food and housing, but to produce astonishing luxuries for the very wealthy few. There sure have been other societies where there are a few people with, you know, solid gold gem-encrusted furniture, and a large number of destitute people in society who are eating out of compost piles.
>-AI development companies become so rich and powerful that they are running the country, with the US president more a figurehead.
So basically Idiocracy?
Let's say the AI tells us that in order to achieve the utopia you described, we all have to wear a purple paper hat twice a day, 10 minutes each. The resulting societal changes, for reasons which you couldn't possibly understand even if the AI was explaining them to you for your entire remaining lifetime, will bring about utopia.
Would you follow that advice? If you do, the AI can realistically make you do anything it says, and it's your ruler in all but name. If you don't, you admit that you don't believe it can bring about utopia.
Similarities to already existing imaginary rulers were unintended, but inevitable.
I don’t follow the logic.
If my dad said “Please come to my house and get the mail from the mailbox. The prosperity and fate of our family depends on it.” I’d do so in a heartbeat. The cost is low and the benefit, even on blind trust, is extremely high. Maybe he has information I don’t, or maybe he’s just way more intelligent than I and couldn’t explain the plan in a way I could understand even if he bothered to try.
If my dad said, “I need you to kidnap the governor and bring her to my home, the fate and prosperity of our family depends on it.” I definitely wouldn’t blindly follow the prescription.
If an AI requires us to do something mildly inconvenient and inexplicable, but actually does follow through with providing huge prosperity, and for reasons we can’t understand these things are related, that doesn’t imply the AI can realistically make me do whatever it says.
>If my dad said, “I need you to kidnap the governor and bring her to my home, the fate and prosperity of our family depends on it.” I definitely wouldn’t blindly follow the prescription.
Yes, but you refuse to kidnap the governor for the same reason that you agree to get the mail from the mailbox. You weight costs versus benefits, and estimate how likely it is that your dad/the AI knows more than you. But as OP established, the AI in this scenario is maximally capable, basically a genie in a bottle. If you truly believe that the benefits outweigh the costs in all cases (it's about utopia, after all), then why wouldn't you kidnap the governor? People get kidnapped and murdered all the time for arguably much lower benefits than Utopia. The AI might assure you that no harm will befall the governor, your attempt will certainly succeed, and you will not suffer any negative consequences for it. The AI, in short, can make its request seem as effortless and rewarding as getting the mail from the mailbox.
And if you still refuse on principle, just like you could refuse your dad just to be an ass or something, then you valued your principle higher than the expected outcome of "Utopia by AI". In that case, either utopia does not have the highest possible value in your belief system, or you don't think AI can bring it about.
Utopia does not have near-infinite value in my belief system, and belief systems where it does can lead to some pretty terrible conclusions. You also have to weigh the probability that whoever is giving the orders is lying to you, which increases the more insane the order. The probability of someone lying increases with how poorly they can explain their reasoning for an action, too.
"Hans, have you ever thought that what we're doing is wrong?"
"Sure! It's pretty obviously evil what we're doing, but it's fine because [Insert Authority Here] tells us it's for a Utopia. The long term gains override any consideration of short term misery."
"Ah! That makes me feel much better. Too bad for these Jewish fellows, but I'm sure it's comforting for them to know there's a greater good we're working towards that makes it alright."
A reasonable perspective is Utopia is extremely valuable (not infinitely so), but its value diminishes with the certainty of an immoral action bringing that utopia about. Long time frames, no causal connections between the action and the outcome, the seriousness of the action required, and many other considerations can all diminish the value of that Utopia to almost nothing.
If someone told me they'd give me a million dollars in a month if I did something mildly inconvenient every day, and I had some reason to believe they were telling the truth, I would do it. If they told me they'd give me a trillion dollars tomorrow if I just murder someone today, I wouldn't, even if the value of that trillion dollars could save a lot of lives, and improve many others.
>Utopia does not have near infinite value in my belief system and can lead to some pretty terrible conclusions when it does. You also have to weigh the probability that whoever is giving the orders is lying to you, which increases the more insane the order.
Yes. That's literally what I wrote. If you're not fully on board with the proposed utopia and/or with the AI supposedly able to achieve it, then that's it, we're done. The AI does not rule you. That's what I said. I don't know why you keep arguing the point. I agree with it.
>If they told me they'd give me a trillion dollars tomorrow if I just murder someone today, I wouldn't, even if the value of that trillion dollars could save a lot of lives, and improve many others.
Yes. That's literally what I wrote. If you don't want to murder on principle, then you place that principle higher than the trillion dollars. They do not rule you. I agree with it.
My point is simply: If the trillion dollars were enough for you, and the trillion after that ad infinitum, then they would rule you. Agreed?
It’s worth pointing out that superintelligence does not equal genie. No matter how intelligent you are, there are fundamental limitations on how much you can predict based on uncertainty in the data. The real world is a chaotic system. Even if you have infinite compute and can literally simulate every possible universe, there are likely many indistinguishable universes that lead to wildly different futures. You don’t know what people are thinking; you don’t know what random events will happen.
Kidnapping the governor would be a very high-variance play. First of all, it might not succeed, as we saw. If it did succeed, it would predictably produce harsh political blowback in the short term, but the long-term effects would be hard to predict. Consider a somewhat-more-successful recent example of political violence, Jan 6. It inevitably hurt the militias and Trump in the short term, but in the long term it might have benefitted both of them, especially with the recent pardons. Could a superintelligence have predicted this? Not necessarily. After all, if Trump had lost the 2024 election, even narrowly, it would have been seen as a further rebuke to everything associated with his campaign. But I doubt there was enough information in January 2021 to confidently predict the outcome of the 2024 election.
In general, I expect that a superintelligence trying to bring about a desired long-term effect would prefer low-variance plays. If it’s better than humans at predicting the future, then it can leverage that skill most effectively when the future is more predictable. Which is convenient, because low-variance plays are pretty often morally-good ones. Instead of asking people to kidnap the governor, ask them to run for governor and lots of other offices too. Predict the most effective political arguments and tell your candidates how best to make them. Morally speaking, this isn’t perfect (effective political arguments are often logically bad arguments), but it’s probably morally better than kidnapping the governor.
But if a superintelligence did recommend an immoral and high-variance play for whatever reason, then a rational human follower ought to object. Not just because the ends don’t morally justify the means, and not just because the superintelligence might be lying (although those are both valid objections), but also because the means won’t necessarily accomplish the ends, creating a substantial risk of doing something immoral for no benefit at all.
> Would you follow that advice?
From recent experience, what would actually happen is this will go political and half the population will become vehement anti-purple-hatters, with the more extreme groups beating up anyone they see wearing a purple hat.
This will remain true even if the justification for wearing the purple hat is comprehensible and the effects visible across population statistics. Anti purple hatters will double down on denying any positive outcome from the hats, will complain about being coerced into wearing purple hats, and will claim all manner of personal problems arising from the hats.
<Would you follow that advice? If you do, the AI can realistically make you do anything it says, and it's your ruler in all but name. If you don't, you admit that you don't believe it can bring about utopia.
I don't think that's true. Lots of people, including me, have been told to do or not something for health reasons, and did not disbelieve the doctor, yet did not follow the doc's advice. And it's also common for people to avoid doing tasks that, by their own judgment, there would be great benefit in completing, and great harm in leaving undone. It's nowhere near as simple as believe = compliance, noncompliance proves disbelief.
And anyhow, does that bear somehow on the question of how, exactly, we would get to communist AI utopia?
> did not disbelieve the doctor, yet did not follow the doc's advice
Nobody made the claim about your dentist being your god-king ruler, did they? The advice was something like "if you don't start flossing, your gums will keep bleeding", and you made an intuitive calculation on the cost vs benefit of following the advice. Your dentist obviously has little interest in your desire to scroll on your phone a few minutes more each day, but it's your few minutes, so screw him.
But utopia, for practical purposes, has infinite positive value. Otherwise, what kind of utopia is it really? No finite cost (such as the hats) ought to be too high to achieve it. So if you are serious about describing infinite-value utopia, and you posit an AI credibly claiming to know the practical way to get (and keep) it, I think my point stands.
> And anyhow, does that bear somehow on the question of how, exactly, we would get to communist AI utopia?
How would I know? I can't quite prove it, but I'm not that kind of AI. I just pointed out the flawed premise of your question. Either an advisor whom we fully trust with effecting large-scale societal changes effectively rules us, or such an advisor is not fully trusted after all.
Or maybe you do have a point with your dentist. Maybe the flaw in the premise is that there is no such thing as utopia at all because we, as a species, will never agree on anything so important and will always say "screw you I don't want to wear silly hats even if it's for utopia, I want 10 more minutes on my phone". That would probably say more about us than about any AI.
> But utopia, for practical purposes, has infinite positive value.
And death has infinite negative value for most people, and yet many people skip the equivalent of flossing when they have a life-threatening condition. I know someone right now whose liver function test results are really ugly, as a consequence of his drinking. He is well-educated, researched the liver thing, and understands how serious his situation is. Yet even while sober he shrugs and says he can't see stopping alcohol right now.
Well that's your answer then. There is no guaranteed way to utopia because people can just shrug and say "no", whatever the AI says.
To the extent that differences between countries are driven by differences in human capital, and AI could close those differences, that would help poor countries. It won’t just be in medicine, but could be in any skilled profession where poor countries are short. Even language skills could reduce the disadvantage that people in poor countries have engaging with the outside world.
The biggest obstacle is export controls that prevent poor countries from getting AI, and high infrastructure costs that make it practically inaccessible (a huge percentage of Africans don’t even have electricity or Internet access). To create the AI automated luxury communism utopia, the focus should be on making AIs that are cheap enough that they can’t be export controlled and can run even on the crappy hardware that is available to poor people.
So who, exactly, is going to manage the situation so that the focus is on making cheap AIs or somehow limiting export controls? And who is going to address the infrastructure problem in, for ex, Africa? If we are still a democracy at that point, how likely is it that we would have a president & a congress who would be in favor of doing that? I myself am not sure we are going to be fully a democracy if AI becomes AGI and then ASI. Seems to me that the companies making AI and crucial components will have more money and power than the elected government. Are those companies likely to commit to helping set up the world so that all can enjoy automated luxury communism utopia?
>Are those companies likely to commit to helping set up the world so that all can enjoy automated luxury communism utopia?
Crucially, this was OpenAI's motto, and the root of the beef between Musk and Altman. But now that China (largely socialist, with stated aims of becoming communist) opens great models for free, maybe governments themselves can offer public AI, in the same vein as we have utilities, and this governance model may twist things a bit. (AI geopolitics happens to be my field, but I have no static beliefs over "which nation" will rule AI; I do think AI firms are overrated in their role.)
>And who is going to address the infrastructure problem in, for ex, Africa?
As soon as AI ROI becomes more evident in large-scale dynamics (societal benefit vs. no investment), it should become more palatable for politicians to bring AI infrastructure into the debate, as they do with education, security, etc. I wrote a piece not long ago exploring the geopolitics of AI leaders vs. followers; countries are certainly aware of the arguments there (https://antoniomax.substack.com/p/techno-economic-protagonism-and-ai)
The general idea is that superintelligent AI will be able to produce more supply of any particular good than there is demand for it. That drives down costs for everyone. If the supply of housing, food, and medical care quadruple while demand remains constant then the cost will plummet and more people will be able to afford it.
Of course demand doesn't remain constant: if the price goes down, more people will want it, and if people spend less of their money on food or housing they'll want to spend more of it on healthcare. But if the supply increases enough then increased demand will be moot. So the theory goes.
>Of course demand doesn't remain constant: if the price goes down, more people will want it
That conflates demand with quantity demanded; when price decreases, quantity demanded increases -- that is just the law of demand. When demand (the willingness and ability to purchase different amounts at different prices) changes, the entire demand curve shifts. But you make a good point re "if people spend less of their money on food ..." -- that would change the ability to pay for other goods and services, and hence would indeed shift the demand curve for those goods and services.
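To make the movement-along-the-curve vs. shift-of-the-curve distinction concrete, here's a minimal sketch (the curve and every number in it are made up purely for illustration):

```python
# A made-up linear demand curve: Q = intercept - slope * P.
# Moving along the curve changes *quantity demanded*;
# changing the parameters shifts *demand* (the whole curve).

def quantity_demanded(price, intercept=100.0, slope=8.0):
    """Quantity demanded on the curve Q = intercept - slope * price."""
    return max(0.0, intercept - slope * price)

# Law of demand: a lower price raises quantity demanded (same curve).
print(quantity_demanded(5.0))  # 60.0
print(quantity_demanded(2.0))  # 84.0

# A demand shift: if cheaper food frees up income for healthcare,
# the healthcare curve itself moves outward (higher intercept).
print(quantity_demanded(5.0, intercept=120.0))  # 80.0
```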
Well yes, I see that, but what about the nuts and bolts? Seems like being able to build houses more cheaply could play out all kinds of ways. For instance, the development and production of robots and new house designs will also be beneficial for the country that does it. What would interfere with that country, let’s say it’s the US, hanging on to the advantages of having the house bots and house plans? Seems like there would be a great deal of money to be made by selling the houses somewhat more cheaply, but nowhere near as cheap as they are to build. House-builders and suppliers of things needed for the task would then make a lot of money. I’m sure the price would drift downward as companies undercut each other, but would it ever drift low enough so that it was affordable in the very poor countries? If I owned a house-building company and thought like a business person I would be looking for ways to keep house prices from becoming dirt cheap. I’d band together with others in the business and make agreements not to lower prices too much. I’d lobby for some kind of certification that’s expensive to get, and without which people are not allowed to build and sell houses.
This seems like a situation in which a governing body of some sort would have to intervene in order for the benefits of the new tech to reach poor countries and even the USA poor at more than a trickle.
Houses are basically free in a lot of places anyway. Only land is expensive.
(New houses aren't free, but a block with a liveable-but-unfashionable house sells for the same price as an empty block, that's what I mean by houses being free.)
I mean, we call those kind of price fixing arrangements cartels and monopolies, and we do have governing bodies that intervene on that in many countries.
Personally I don't believe that AI will be able to create the kind of productivity gains needed for the kind of "Fully Automated Luxury Communism" that many people speculate about. But anything that increases the supply of something will decrease the price, generally speaking. If it cost me $.005 per square foot to build a house thanks to AI robots or whatever, and you're willing to pay $.01 per square foot, then I can make a profit selling to you.
Those arrangements only apply within countries. What’s to stop the US from saying it’s going to use its monopoly on AI to extract as much wealth from other countries as possible to create automated luxury communism for its own people only? We saw something like this in the colonial era. Britain and other leading powers didn’t give their technologies to India, China, Africa, etc., instead preferring to keep those countries backwards so that they could serve as resource colonies.
Economically I think there's an argument that an arrangement like that is suboptimal, even for the country running the monopoly. Ethically, I don't see a problem even if that happens. If only the US manages to invent super-AI and they don't want to share it, that doesn't really hurt anyone. The US is better off, and everyone the US sells stuff to is better off (if they weren't better off, then why would they buy the stuff?). What's the harm? But I understand that people who endorse Fully Automated Luxury Communism have a philosophical objection to people having good things if everyone else doesn't get the same good things.
> If only the US manages to invent super-AI and they don't want to share it, that doesn't really hurt anyone
Have you seen America lately? They are hurting themselves the most, it is kinda cringe actually.
And eh, China has 1.5B people, the US 350M. The US's ratio of AI engineers relative to China's numbers is abysmal; not sure enough people appreciate the probability of Chinese AGI emerging first.
There could be a number of harms.
- Those countries could lose all their jobs and industries, similar to what happened in Asia in the 1700s and 1800s. Records of human height show clearly that living standards in India and China got worse during this period.
- Superintelligent AI could be an insurmountable first-mover advantage that prevents other countries from developing the same thing, similar to how European colonization of the Americas meant that non-Europeans had no ability to colonize it later, making them strictly worse off.
- The US or whoever else has AI could use that power to demand things like annexing resource-rich areas that leave weaker countries worse off.
What's the current state of understanding about permanent negative mental effects from using psychedelics (specifically mushrooms)? I'm mostly curious about the dosage floor for what's been observed. Has someone had psychosis induced by only doing a small dose a single time? Is there some dose sufficiently small that no serious adverse effects have been reported?
What I know: a reputable publication had a chart of the safety profiles of all recreational drugs. There were 3 measures of safety: harm to the body, harm to mental health, and addictiveness. Psilocybin was the safest. Unpleasant drug experiences on it are not rare at all, but long-term bad effects such as psychosis are rare. But of course the chance of psychosis is not zero. It is higher if you yourself are subject to mild versions of psychotic symptoms that you are able to shake off, or if you have a close relation (sibling, parent) who is schizophrenic.
There is one dose that has no serious effects: 1/10 of a gram. That is the amount people took in a microdosing study, where subjects and record-keepers were blind to whether a subject was getting psilocybin or placebo. Turns out those taking the drug did not differ from placebo subjects on either measure. So 1/10 gram def won’t have negative effects, because it has no effects period!
If you are eager to try the stuff but feel quite nervous, you should only experiment with it with a trusted person present, one who has experienced psychedelics themselves. And I would recommend starting with a very small dose, something just a little bigger than 1/10 of a gram, and working up gradually til you feel a small effect and see how you like it.
I'm actually not asking for myself but for someone I know who does have some schizo-adjacent kinds of thoughts. It's a very hard risk to assess. Thanks for the info!
I think the last thing Scott wrote about this was: https://slatestarcodex.com/2018/06/06/hppd-and-the-specter-of-permanent-side-effects/
But there may have been more studies since.
Thanks, this was pretty informative even just because of the single case mentioned. I'd be curious whether that person took an atypically large dose.
I know a couple of people who have this. They didn't take LSD many times, and it wasn't huge doses. For both of them it's just minor visual distortions from time to time -- things like halos around lights. Both these people took LSD, not mushrooms, & Scott says in the post Laurence linked that he does not believe psilocybin (mushrooms) and peyote are known to cause this syndrome.
A Trump admin thing I'm interested in is whether its anti-DEI push is going to be tailored toward achieving some actual goal of avoiding "reverse" discrimination against majority groups, or if it will just be a blanket "diversity = bad" policy. A proxy for this I'm watching is FDA's Diversity Action Plan (DAP) guidance, which proposed diversity requirements for participants in clinical studies to be used in FDA approval applications. These requirements seem pretty reasonable to me; people may have different reactions to a drug based on characteristics like sex, race, age, etc., and knowing about any such differences seems valuable to FDA in deciding whether and for what populations a new drug is safe and effective. As of now, the general DAP webpage and the webpage for a prior initiative specific to cancer studies have been taken down, though last year's draft guidance document is still available. https://www.fiercebiotech.com/biotech/fdas-diversity-guidance-clinical-trials-no-longer-available; https://www.regulations.gov/document/FDA-2021-D-0789-0111.
I'm curious if this is a temporary review situation or just a reasonable policy getting shelved because it includes the word "diversity," which strikes me as a pretty stupid way to govern. I also come at this from a legal/regulatory perspective more than from a scientific or business perspective, so I'm interested in objections to my assessment that this is good policy (by FDA standards, given its current mission), and/or suggestions of other potential proxies to watch.
There was a report earlier of some of the IRS pages that had previously mentioned forbidden words ('equity') being unavailable despite those being terms of art when dealing with tax matters.
From this, I'd infer that it's a dumb filter / applied with a broad brush, at least for now.
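If so, the mechanism is probably something close to this sketch (entirely hypothetical; I have no idea how the actual filter is implemented):

```python
# Hypothetical sketch of a naive blocklist filter -- not the real
# system, just an illustration of why a dumb filter sweeps up
# terms of art like "equity" on tax pages.
BLOCKLIST = {"diversity", "equity", "inclusion"}

def should_take_down(page_text: str) -> bool:
    """Flag a page if any blocklisted word appears, regardless of context."""
    words = {w.strip(".,;:()'\"").lower() for w in page_text.split()}
    return bool(words & BLOCKLIST)

# A tax page using "equity" as a term of art gets flagged anyway:
print(should_take_down("How to compute owner's equity on Schedule C"))  # True
print(should_take_down("Standard deduction amounts for 2024"))          # False
```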
That's probably the extreme rational end of the Trump administration policy. What's probably going to happen is it tries to discriminate in favor of whites and men to make up for all the discrimination done by the government going the other way recently... which means the direction of discrimination will oscillate with the party in power every 4*(d3) years.
Best-case scenario is disparate impact goes away and companies adopt meritocracy as a defensible middle ground since race-blindness is theoretically defensible from either side.
Worst-case scenario is they constantly change which groups they're discriminating against depending on who's in charge and it becomes one more territory in the culture wars.
I know which way I'm betting. ;)
Nobody knows, it seems like an ad hoc across-the-board policy right now, it could be redefined with guidance or changed literally overnight.
It will likely shift undocumented Mexican immigration to be documented Indian immigration
I'm not sure either way, but I don't think it's obvious that there's not a political angle to the drug testing thing.
There are definitely some drugs that work differently in different populations (carbamazepine has an extra side effect in some Asians), but these are a tiny fraction of drugs and the extra side effects are usually pretty rare. Nobody can ever test for everything, and if the carbamazepine issue was just described as "extra side effect in people with HLA-B1502" instead of "extra side effect in people with HLA-B1502, which is way more common in Asians than whites", probably people wouldn't say "Okay, time to retest every drug in every HLA group, just in case!" I think probably people seized on the occasional rare cases where different races get different side effects as a new thing that they could claim was a disparity and do diversity politics to, during an age when everyone was looking for disparities and reasons to do diversity politics. Whether this is net good or bad depends on a lot of things I don't know like the exact fraction of drugs that have something like this and how much extra it costs / how much more time it takes to do the multi-race testing.
Maybe with race, but it sounds like the same taken-down page talked about sex as well, and it seems obvious to me that there could be some pretty big differences in effect between the two sexes for a lot of drugs.
My guess is they'll start by applying a dumb filter, and may eventually add some things back if they're found to be useful.
But in the meantime, researchers shouldn't need their hands held by the government to ensure that they engage in sensible research practices.
Well, Pete Hegseth seems to be intent on removing all women from the military, so... the latter, probably.
> Regarding women serving on the front lines, Hegseth has said, "I'm straight up just saying we should not have women in combat roles. It hasn't made us more effective. Hasn’t made us more lethal. Has made fighting more complicated." Hegseth argued that men are more capable in combat roles because of biological factors and said "[e]verything about men and women serving together makes the situation more complicated, and complication in combat means casualties are worse".
https://apnews.com/article/military-women-defense-hegseth-combat-916d50a7b465ccfea1aeb13bb91064b3
> removing all women from the military
> should not have women in combat roles.
Do you think those are the same thing?
It remains an open question as to whether women are a positive factor in combat roles. I don't think it's an unfair position to think they would be a detriment.
Obviously for equality of opportunity reasons we may accept women in combat roles even if they reduce overall effectiveness, but that may be a trade we aren't willing to make when literal lives are on the line.
Off the top of my head it may introduce complications like:
- Privacy/comfort concerns with separate bathrooms, sleeping and showers.
- Issues with sex and relationships, and all the downstream interpersonal problems from that
- Physical capability (although if there's a minimum requirement for a role and a woman meets that requirement I guess this might not matter).
- Hard to define psychological factors like differing levels of inbuilt aggression
- "Monthly" variations in mood, focus and energy levels
- Increased opportunity for sexual assault (A sex-starved battle-machine may be more likely to do something foolish.) Not victim blaming here, but on an overall level if this increases monitoring requirements, creates increased burden for HR, and removes the assaulter from a combat role, then this could hurt combat effectiveness.
- General camaraderie issues. I'm not informed on whether all-male or mixed groups more easily form a sense of obligation to their fellow soldiers, but I wouldn't be surprised if it was the former. Men competing for the attention of a small percentage of female soldiers may cause resentment.
- Etc.
The military is obviously pretty discriminatory (No ADA compliance in combat roles), and I think the potential lives lost for even a small sacrifice of combat effectiveness, and the general geopolitical issues downstream of a less-effective military, may justify the exclusion.
If there is a minimum requirement and no improvement of performance for people exceeding the minimum requirement it would not matter. E.G. if the task is pushing a button then the woman who can push a button will be just as effective as a man, but if the task is running fast towards the enemy and the minimum requirement is x yards in y seconds, a woman who has just made the minimum requirement will be less effective than a man who greatly exceeds it.
>although if there's a minimum requirement for a role and a woman meets that requirement I guess this might not matter
Well, the problem is that there used to be requirements, pretty much all women failed them, so they lowered them far enough that women could realistically pass, making the requirements basically meaningless. It's a simple fact that women are much weaker on the high end, and maybe society is finally mature enough to accept that.
I doubt I'll be able to find it again because I read it during the Bush Administration, but something that stayed with me was a post by a female combat veteran with lasting physical and mental issues who made the good point that the location of where the combat occurs can change quickly and unexpectedly, and although as a female soldier at the time she was not meant to be in the combat zone, she ended up there on multiple occasions. So people in her situation often had trouble accessing appropriate medical benefits or being given recognition of the fact that they served in combat, because officially they weren't supposed to have been there and it was inconvenient for the authorities to acknowledge.
So it seems to avoid this kind of scenario you'd either have to ban women from anywhere that could be near the front lines in the worst case scenario, or make it officially possible for women to serve in combat. (Or, I guess, ban women from combat and keep sweeping any situations where that doesn't match reality under the rug).
If there are to be women in combat roles then I think the most reasonable way to do it would be to have all-female units so that the privacy, sex, sexual assault and camaraderie issues are somewhat neutralised.
In an actual war I wouldn't be surprised if these units get assigned to the less demanding roles, but in the meantime everyone can play pretend equality.
During WWII some countries at full mobilization (Finland, Russia) had a large number of females in special combat roles. Partisans also had a non-negligible number of females.
There's too many variations to say that it always went one way or another, but from my understanding the females tended to be put into all-female units when possible, and in roles that required less direct physical violence, such as pilots and snipers. Plenty of nurses and other support roles as well, of course.
Your first paragraph says "remove from the military", the actual quote says "from combat roles".
Correct but I don't doubt that Hegseth will at least rhetorically land on something like "remove from all warfighting roles except medical support for the soldiers". As a practical matter that's probably not plausible now. But from conversations with MAGA relatives of both sexes that is what they think the correct answer is.
I'm not sure what distinction you're making between "combat" and "warfighting", but then again I'm also not really interested in debating hypothetical positions that you're sure someone will land on rather than actual positions they've taken.
Right now there's a couple of hundred thousand women in the US military with just a few dozen in actual combat roles.
Granted that it's hypothetical.
To answer your detailed question, when my MAGA relatives talk about this they are using "combat" to mean direct hostile contact with enemy combatants. Granted that most of their knowledge of these distinctions comes from ridiculous and/or outdated movies/TV shows, but anyway what they have in mind is women being taken out of roles such as supply or engineering units, intelligence-gathering, repairing equipment in bases, strategic command, etc. The one exception they're fine with is medical care for the soldiers i.e. basically the functions illustrated in the TV show "MASH".
I have been following all the chat about Deepseek, the Chinese AI currently available. It has significantly roiled the markets today for those of us who keep track of these things. Some very basic concepts about cost-effectiveness are being challenged hard. And then, of course there is the possibility that it is a lot of noise. I am curious if anyone else is paying attention to this.
https://www.wsj.com/tech/ai/china-ai-deepseek-chatbot-6ac4ad33?st=gjUbE4&reflink=mobilewebshare_permalink
I am also amused by the idea that we seem to now have officially two tier AI. Hertz and Avis are at it again.
I downloaded it in minutes and I have already used it for a half dozen inquiries. It is easy and extremely well read. I am thinking of seeing if it will give me summaries of all the books I have read.
Zvi is: https://thezvi.substack.com/p/deepseek-panic-at-the-app-store
My takeaway after reading Zvi is -
Deepseek will get AGI first and make it freely available.
They don't care about AI risk.
All the hyperventilating about AI risk is now a moot point.
The future existence of humans and paperclips is now a crap shoot.
There's not much anyone can do about it.
Please disagree with me...
> The future existence of humans and paperclips is now a crap shoot.
Looking at some of their current output, I doubt they'd offer humanity the mercy of death. We are going to have so much 𝚏𝚞𝚗 together!
I don't think politics is nearly that predictable. Right now, a US/China treaty regulating ASI training runs seems unthinkable -- but if you look through history, unthinkable shifts in the political winds are actually not that uncommon. We ought to push for it, even if it seems like a long shot.
I read Zvi's last DeepSeek post, and am just in the process of reading this one. My 0-th order thoughts are:
R1 seems about as capable (maybe slightly less) as OpenAI's o1.
The main general-events expectation that I have is that
- PRC/USA competition is now fully baked into the AI landscape
but I see that as just adding a third component to
- ChatGPT's clear advances from the 4/4o level to o1 (major, but not AGI _yet_)
- Trump endorsed Stargate
which add up to: As nearly as I can tell, the AI opponents have solidly lost this round. Whatever the populace wants, AI work is going on (feverishly) for at least the next few years (which looks to me to be probably all the way to AGI).
>The future existence of humans and paperclips is now a crap shoot.
Yup.
<mildSnark>
Maybe Culture Minds will keep us as pets? Hopefully not neutered?
</mildSnark>
Personal note: I'm planning on testing my 7 questions on o3-mini when it is actually released (thought to be during this week), and planning on reporting what I see here. There _is_ now a question on whether I should try asking R1 the questions. I'm kind of holding back to avoid stretching myself too thin.
<mildSnark>
I'll let everyone know if the answers to any of the questions include "And that day is upon you ... now!"
</mildSnark>
Exciting times. Glad I lived to see this. Maybe I'll _see_ full AGI, which I would like to chat with.
Just got to the
>There’s a lot of very strong Missile Gap vibes here.
line :-)
(hopefully without a "mine shaft gap" follow-on...)
>instead are driven by ideology of ‘get everyone killed’ (sorry I meant to say ‘open source AGI’)
Zvi does have a way with words!... :-)
Update - some folks are trying to do something about it, Deepseek has been hit with a cyber attack.
This is exactly what I wanted to read. Thank you.
This is good. Thx
I've been using DeepSeek and, as a person who has been somewhat resistant to AI hype... I like it. There's a certain crispness and cleanness to it, a no-frills feeling. I don't think it's just the cost-effectiveness that people find attractive.
I've been following the story, but lacking any real expertise in the subject, I don't have much to say other than regurgitate what I read elsewhere, which is that whatever the pain in the short term, if it turns out that training LLMs is a lot cheaper than we thought, à la DeepSeek, then that's a good thing; we can use it to boost productivity at much lower levels of investment, much sooner than we might have thought possible.
In the short run, though...I log into my brokerage account and I'm like Vito Corleone at the undertaker's: "look how they massacred my boy!"
It doesn't seem to be noise, I'm hearing from a lot of people that they can just run it on their laptop and it's just as powerful as the latest ChatGPT. Seems rather bad for the AI safety people.
Interesting. Thx
ASI by 2028, huh? Today Freddie de Boer put out another article pooh-poohing AI hype:
https://freddiedeboer.substack.com/p/the-shitting-in-the-yard-test
Basically, nothing truly transformational, as in, on the level of indoor plumbing, or the transistor, has really come out of AI. He has a point yeah, but then, nobody is thinking that we're at the ceiling for AI.
For myself, the hype does seem difficult to believe, but well, with such a short timeline, why speculate? The hypothesis that world problems just need more intelligence to be solved can perhaps just be tested soon.
I was just in San Francisco, and the homeless crisis there was really quite shocking. I saw things that I haven't seen in 36 years of living in Puerto Rico, a much poorer (and dumber) place, where I even do volunteer work with the homeless. If you're wondering, our homeless are by and large drug addicts, but you can talk to them, they haven't lost their minds (to the extent a junkie can be considered sane at any rate). In SF, it seems like people who should be in a psychiatric facility just end up in the street instead. And it's really quite shocking, as you can see SF has utopian pretensions, but then it has this. Probably one of the most jarring contrasts I have ever seen. I really have no idea why the homeless in SF are so different from the ones in PR, but they are.
San Francisco already has the IQ and the wealth to just solve its homeless crisis, so I can't quite visualize how an ASI is going to solve it. I really think I need to hear that SF solved its homeless crisis with AI to fully buy into the hype. Perhaps if "just" some shocking scientific discoveries come out it would be enough too, but then I would have the lingering doubt of why SF, ground zero for AI, hasn't solved its homeless crisis.
I think I want to write an essay about this, about how some problems aren't really bottlenecked by intelligence, but I'm not sure if I'm in a speculative mood.
When we talk about ASI, we aren't talking about agents that are smart relative to human geniuses the way geniuses are to regular people. If we were, I'd agree that even a ton of agents like that would still find most of our intractable problems intractable.
What we're really talking about is something that might compare with our collective human intelligence the way that compares with the collective intelligence of, say, wild tigers. There are some good reasons to think that another difference in intelligence of that magnitude is possible. The physical limit for computational efficiency is the Landauer limit, and our brains are many orders of magnitude away from that. They also run at something like 30 watts, connected only by incredibly low-bandwidth language. We can also see AI really massively outperforming us on a lot of narrow tasks, and we can see those tasks becoming gradually less narrow over time.
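For a rough sense of that headroom, here's a back-of-envelope sketch; the ~1e16 synaptic-events-per-second figure is just a commonly cited ballpark I'm assuming, not a measurement:

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 310.0           # roughly body temperature, K

# Landauer limit: minimum energy to erase one bit at temperature T.
energy_per_bit = k_B * T * math.log(2)   # ~3.0e-21 J

brain_watts = 30.0                             # the figure quoted above
ideal_bit_ops = brain_watts / energy_per_bit   # ~1e22 erasures/sec at the limit

# Assumed ballpark, not a measured fact: ~1e16 synaptic events/sec.
actual_ops = 1e16

print(f"Landauer-limited rate: {ideal_bit_ops:.1e} ops/sec")
print(f"Headroom vs. the assumed brain rate: ~{ideal_bit_ops / actual_ops:.0e}x")
```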
Our collective intelligence gives us a lot of very powerful affordances that wild animals can't understand or predict -- things like language, technology and culture. I'd be very surprised if a similar difference in intelligence didn't come with a similar set of new affordances. Maybe that looks like ASI inventing some complicated self-replicating technology, or like it finding useful patterns in things we find chaotic -- though I suspect realistically it would just look like the ASI doing a lot of very complicated things that we'd struggle to follow, while the world around us became increasingly difficult to understand. If it was well-aligned enough to try and solve something like urban homelessness, I'd expect it to try a strategy much more effective than our ways of doing things, and probably not at all subject to the same bottlenecks.
If you took the smartest tiger ever to live, made it smarter than the average tiger by that same margin again, and tasked it with solving the problem of freezing in winter, it'd probably make almost no progress at all. If you task humans with that problem, however, they invent fire.
Yeah, I know the arguments. But I also know there are problems that are completely insoluble no matter how much intelligence is thrown at them. A basic one would be a heavily handicapped chess match, say white has the full complement, black is just a pawn and the king. It is impossible for black to win that one. The big question is whether the problems we want to solve have some terribly clever solution that a piece of software can execute, or they're more like impossible handicap chess. I actually think the likeliest outcome is more on "there is a superintelligent solution, but it will have to be executed by humans", which creates the opportunity for all manner of bungling, but there is a real possibility that ASI turns out to be radically transformative in a positive way, and also a real possibility that it ends all life.
Exciting times, hope that ASI by 2028 prediction pans out, so the most exciting hypothesis can be tested.
TBF bringing up anything about San Francisco is like plunging your hand in a lava pool and shouting "WELL HOW COME ALL YOUR 'SCIENCE' AND 'TECHNOLOGY' CAN'T PREVENT MY FLESH FROM MELTING, HUH, HUH?"
Apparently one of my nephews is flying out to San Francisco to speak at some conference there, so I hope he survives the lava pool 😁
If anybody knows anything about Photonics West 2025, give him a wave!
EDIT: Oops, too late, the presentation was on the 26th. Well, virtual wave! He's going off to Boston to hang out with some friends in Harvard (if I'm getting that right) after this.
I'm not sure I follow why San Francisco is like a lava pool, but you still have to bitterly appreciate the cosmic levels of irony in the contrast of all those AI billboards and self-driving cars, and even the Berkeley Alembic, and then the awful, awful, homelessness.
Similar situation as in India, when you consider the Bhagavad-Gita and the high points of Indian spirituality as juxtaposed with, well, India.
I bet Switzerland doesn't have jarring juxtapositions like these (though maybe I'm wrong).
I think Scott's point is that there are readily available, viable solutions to San Francisco's problems, but which aren't being implemented for whatever reasons. In the lava analogy, that would be "Just don't stick your hand in lava".
That's what I was saying, I think San Francisco is already totally capable of solving its problems, but for those "whatever reasons". Is more intelligence going to make them go away? Is lack of intelligence the cause of the "whatever reasons"? ASI could lead to some surprising and unintuitive discoveries if it turns out the world is riddled with "whatever reasons" to its problems that just don't go away in the face of extreme intelligence. We're only going to know what ASI can really do once it's finally here, so I can't wait.
I took it to mean "Look, everyone knows San Francisco is an erupting volcano, so why are you surprised by the lava pools?" 😁
To be fair, it's completely impossible to stick your hand in lava. The best you can do is rest it on the surface.
> To be fair, it's completely impossible to stick your hand in lava.
Not with that mindset!
You could wait until a lava bubble pops, then quickly put your hand in when it rains down. Find solutions, not problems.
It's not so much that you can't penetrate the lava. You have too much buoyancy to exist inside it, in the same way that a helium balloon can't exist underwater even if you quickly shove it into a hole in the water and let the water fill in around it.
I just asked Deepseek and it gave me an extremely detailed plan to address urban vagrancy of mentally disturbed and drug addicted individuals. Try it….
Oh my goodness, someone finally had an idea on how to solve homelessness? That’s all we’ve been waiting for, let’s go solve it now!
Perhaps it's like when a large corp hires consultants: everyone already knows what needs to be done, but you need a third party to deliver the bad news and absorb the resulting fallout.
> San Francisco already has the IQ and the wealth to just solve its homeless crisis, so I can't quite visualize how an ASI is going to solve it.
Wouldn't a hyperintelligent being without empathy be the perfect candidate for solving such a problem?
I just wrote a long post about that actually.
> Basically, nothing truly transformational, as in, on the level of indoor plumbing, or the transistor, has really come out of AI. He has a point yeah
Disagree. Even if AI stops where it is right now and no new models are ever trained, LLMs will have quite a large effect when solving for equilibrium. Likely on the scale of indoor plumbing. Certainly not on the scale of transistor+integrated circuit, though -- after all, all AI effects on society are also effects of transistors and ICs, because training an LLM with a tube-based computer would not get you very far.
When Neal Stephenson published The Diamond Age in 1995, the idea of an interactive primer which tells educational stories based on characters from a kid's life, their plushies, as well as their situation was total science fiction. Today, the main part which seems unrealistic is the fact that this was seen as a very exclusive product (which would kind of defeat the whole point, because the uber-rich can always afford human nannies and storytellers). Instead, it seems likely that the equilibrium given LLMs will consist of multiple competing services which offer different ethics (Do you want to raise your kid Confucian? Utilitarian? Lutheran?) and will cost a buck a month or something.
I still think what you're describing there is less transformational than indoor plumbing. As Freddie says in the article, imagine having to go a month with no running water. Then a month without AI. Or a smartphone. It's clear that losing the toilet is going to affect you far more than losing AI or the smartphone. The LLM application you're putting forth there does not seem to offer toilet-levels of utility.
This just sounds like a restatement of the principle of diminishing marginal utility. I could more easily go my entire life without using a space ship than I could go a day without shoes, but I don’t think that necessarily means the latter are a greater technological feat than the former.
"Greater" is different than "more transformational"
I recently went 8 months without running water. It wasn't that bad - just a bit of carrying water around in buckets. I would rather go without running water than without a smartphone. I'd rather the water than the AI at this point, but I expect that to change.
I might have gone some similar number of months without using my smartphone other than to charge it. Never kept track, though.
Something I'm getting really, really tired of hearing is the platitude that the US is far to the right of "The Rest Of The World(TM)". It's one of the laziest, most ridiculously simplistic assertions I've ever seen, with more holes, confounding variables, nuances and qualifications than almost anything else, and it is repeated (usually without the slightest qualification) *more times than you can possibly imagine*!
Even here, on ACX. In fact, I could be wrong but I think I see this *more often* on ACX than most other places. Again, usually stated in the most simplistic unreflective way. I have no idea why; usually this community is more nuanced than the rest of the internet, rather than less. But I honestly feel I'm more likely to see individual country comparisons--the US vs France, France vs Germany, and so on--on otherwise stupider places like reddit, and more likely to see the US vs The Rest Of The World(TM) here.
What's wrong with it? Let's start with what The Rest Of The World(TM) means. It obviously doesn't mean the rest of the world, not even close. Sometimes it's phrased as "Europe", sometimes as "the developed world" or some such, but much of the time it really is phrased as "the rest of the world". If someone actually challenges this stupid assertion (which is shockingly quite rare) then the OP will respond that of course, by "rest of the world" they mean Europe, plus maybe Australia and Canada. I mean that's *basically* the world, right? If someone deigns to point out that many of these metrics showing Europe as far more left-wing don't apply to much of Eastern Europe, they'll respond that yeah, obviously they weren't including Eastern Europe in "Europe". Further exceptions will inevitably be made, depending on the issue at hand. The US is more nationalistic than any country in Western Europe! Yes, *obviously* France is an exception. No other country takes religion, like, seriously! Yeah, except Ireland, and Italy, and maybe parts of Spain, and a large part of eastern Europe, I thought those, like, went without saying! Nearly every time, on every issue, there's an exception or a whole heap of them. That makes the actual claim, when actually thought through, something like "there are always a few countries, usually in western Europe, that are to the left of the US on a given issue". Doesn't have quite the same dramatic ring, does it? Of course, this relies on a level of analysis that is hardly ever done.
Second, just as this simplistic claim relies on constantly shifting on what exactly is the set of countries the US is allegedly to the right of, it similarly relies on constantly shifting on how comparisons on various issues are made. For example, compare crime and religion. At the level of actual law, the US is to the right of many western countries on criminal sentences and the death penalty (with a few exceptions) and it is to the left of many western countries on state secularism and bans on public religious displays (with a few exceptions). At the level of everyday life, it's the opposite: the US has higher crime rates (less law-and-order) than most western countries, but has higher religious belief and practice (more observance) than most western countries. So however you slice it, the US is sometimes to the left and sometimes to the right of the average. But that doesn't provide a dramatic simple narrative. So the advocates of the simple narrative just switch the metric to suit their model. Look how right-wing the US is, it's the only country with such high sentences! And look how right-wing it is, it's the only country with such high religious belief! (Remember, most countries in the world don't exist when we're having this discussion). Truly breathtaking.
Third, the claim involves systemically ignoring whole swathes of overwhelmingly important factors when comparing countries. The most obvious is size. The US is many times larger in population than every country that is ever compared to it. Frequently, when someone's comparing it to a group of countries it's significantly larger than all of those countries *combined*. And this confounding factor is almost never taken into account. So the fact that the US is the only "developed" country without universal healthcare is purely politics. It can't have anything to do with the fact that no country with anything close to the US's population has universal healthcare either, and that the complexities of such a scheme would obviously compound with huge increases in population, and that the vast majority of countries with UH have tiny fractions of the US population (e.g. Australia 1/13th, Canada barely 1/10th, each Scandinavian country between 1/30th and 1/60th), and that if the US were to adopt it it would set a world historical record of a country of that size establishing a project of that scope. No, it must be nothing other than Americans being stupid. It is absolutely amazing how rarely any of the above factors are ever even *mentioned* in the context of that comparison.
And there are plenty of other holes and flaws in this "US so right-wing" claim, but I'll leave it there for now.
So why does this absurd, simplistic, sloppy claim get spread around so much, with so unbelievably little pushback? My guess is that it has a lot to do with the fact that debunking this claim doesn't serve any major political grouping very well.
American progressives have adopted the idea of a left-wing Europe as a central part of their mythology, much like the conservative mythology of a right-wing past. It's useful to point to "this worked there, why can't we have it here?" when someone says your program is unfeasible or unreasonable.
American conservatives have adopted American exceptionalism and hatred of the rest of the world as a part of their identity. Just as with the imagined conservative past (a few smarter progressives might point out that it never really existed, but most are happy to go along with the myth because they've adopted "the past is evil" as their own mythology), conservatives are mostly happy to go along with "we're to the right of the whole world and that makes us superior!" instead of actually questioning the factual claim.
Non-American progressives want an easy way to demonise right-wing ideas, and casting them as foreign and American fits that need. Plus, they generally want to erase national borders and pretend countries don't exist, so collapsing different cultures into one vague nebulous "Rest Of The World (TM)" suits them well.
Non-American conservatives are more focused on their own actual country, and aren't generally much interested in international comparisons. Plus, having a vague nebulous international left-wing collective they can accuse their opponents of trying to draw their country into suits their purposes for similar reasons to progressives.
So nobody has much of an incentive to debunk or examine this sloppy claim. The only people who suffer are those who care about logic and accuracy. Simplistic, badly thought-out characterisations like this make clarity and nuance much more difficult, and make discussing the unique situations and cultures of each country so much harder. And I really wish a data-focused and logic-focused community like ACX would create more pushback to vaguely defined and usually unsupported assertions.
Devil's advocate: In a certain sense, most countries are to the "right" (as understood locally) of most other countries. The left-right spectrum often includes elements of "X, a uniquely [country] thing that we're best at, is important!" as a right-wing marker. (Thus, correct national exceptionalism: Your country is the best, regarding the things you value, because your country is full of people who prioritize those things.)
For example, Americans lead the world in (their own understanding of the concepts of) free speech, religious freedom and gun rights, all of which are right-coded in the US. Other countries might focus on things like monarchism, regionalism, supranationalism, attitudes towards The Party, language politics, security policy, etc as what "really" makes one right- or left-wing. A distinct left-right axis for every country.
Economically we're pretty free-market.
Culturally we're to the right of Western Europe and to the left of everyone else. You think American sexism is bad...
Why does scale matter? Universal healthcare doesn't imply a single huge organisation.
"Medicare
Medicare is a term that refers to Canada's publicly funded health care system. Instead of having a single national plan, we have 13 provincial and territorial health care insurance plans"
"The NHS in England is not one single organisation. It is made up of hundreds of different organisations of differing sizes, at central, national, regional, and local levels with different roles and responsibilities."
Why is it no problem for the US to have a single huge military?
Right and left can be a bit slippery. I think it’s fair to say the US is more “right” in a right-wing libertarian sense. Compared to most other Western nations the US has an exceptional respect for individual liberty (guns, free speech), has fewer collective social programs (free college, free healthcare, etc), and generally has fewer business restrictions (at-will employment, less regulation). As you noted the people who mention this are typically progressive types who want all of that to change.
While I'd agree that trying to map differences between nations onto purely left-right distinctions is an exercise in folly, I DO think the U.S. is a pretty big outlier in the developed world[1] in a number of ways, most of which seem at least vaguely right-wing. For reference, I've lived in maybe half a dozen U.S. states (red, blue and purple) as well as in Canada and Germany and traveled somewhat elsewhere in Europe. The ways in which the U.S. stands out (to my eyes) are:
1. U.S. nationalism is very loud and lacking in subtlety. There's an entire genre of art, culture and, well, behavior focused on praising and glorifying and bragging about the U.S. In terms of actual numbers of staunch nationalists I don't know how the U.S. stacks up against anywhere else, but there's a lot less of that sort of loud display in Canada, and it is (for understandable reasons) somewhat taboo in Germany. I think I've seen hints of the same from Britain and Australia, but I'm less familiar with either of them.
2. The U.S. has a strong streak of what I might call "reactionary individualism." By this I mean that there seems to be a contingent of people--not a majority, I think, but a pretty sizable minority--who react with varying degrees of scorn and anger at the suggestion that they ought to take ANY action that's mainly for the common good or general welfare. Some of the angrier reactions to various anti-COVID measures are still fairly fresh in my mind, but the far older resistance to universal healthcare and a lot of the more extreme anti-tax views[2] fit the same pattern. This is one that's harder to gauge in other countries, but my sense is that while people like this exist in some number everywhere, in the U.S. they're far more numerous and wield correspondingly more political influence.
3. The national attitude in the U.S. skews very militaristic, especially for its level of wealth. This one seems to be a little bit of a geopolitical artifact: it seems like before 100 years ago or so, wealthy countries were some of the most militaristic, and militarism was more widespread in general. But in the post-WWII era, the developed world seems to have pacified a LOT. Except for the U.S. Now there's a decent argument to be made that it's the U.S. military that has allowed everyone else to de-militarize, but there are a lot of pieces of the picture that don't fit super well with that. The U.S. has engaged in lots of military adventures and military-backed foreign policy initiatives that very clearly have NOT been about defending either itself or its allies in any direct sense, and you get a lot of civilian rhetoric of the "support our troops" and "they're defending our freedom" style that very conspicuously fails to engage with what the U.S. military is actually DOING at any given moment. This is probably partly an extension of 1, but I think it feels very weird to anyone who's spent much time in a country with a less active military.
[1] Which seems to be the best possible match for the Rest of the World that the people you interact with seem to be talking about. But I agree calling it the "rest of the world" is dumb and bad.
[2] To be clear, nobody I've ever met *likes* paying taxes, but most of them recognize it as both necessary for the general welfare and a transaction that (on the whole) ends up well worth the price tag, given what they get out of it. But the ones that don't see it that way can be quite strident about it.