Europe is going through a rough patch: stagnant growth, disastrous energy and immigration policies, and the Muscovite invasion of Ukraine. My question is:
What *personal transformation* (not national or EU-level policy), if embraced by 10% of the population, would save Europe?
My tentative answer:
Attitude shifts:
- Positive change in my life is possible and often not even that hard.
- No-one is responsible for my well-being but myself.
- Success and wealth creation are positive and pro-social things.
- Opportunities for employment or entrepreneurship exist in any market.
- Generosity and building things are the real flexes. Trying to look smart, erudite, or socially conscious just puts people off.
Practice shifts:
- Be reasonably fit and not obese.
- Become ridiculously productive by working in a goal-oriented way on things you care about and by limiting distractions.
- Unlearn the habit of unproductive worrying.
- Be relaxed and positive, rather than tense, in the mind and the body.
- Have (more) children.
- Meet people face to face more often.
Am I missing any major ones? Or would you emphasize something different altogether?
> Am I missing any major ones? Or would you emphasize something different altogether?
Does it occur to you that most of those are in direct tension with each other?
Being reasonably fit requires time in the gym and a change in dietary habits, which interferes with being productive and meeting people face to face, which interferes with being relaxed rather than tense, and "having more children" nukes basically ALL of those.
Like what you're wishing for is "just all be top 1%-or-better people on conscientiousness, goal orientation, productivity, and parenting!" It's so simple! What's wrong with people??
Yeah, obviously if you had some way to make everyone in a country, or even 10% of them, top-1-percentile, you'd be doing way better in the aggregate, but you have the people you have, and definitionally, 99%+ of them will be way worse than this.
So giving people a bunch of directly contradictory top-1-percentile aspirational targets will more likely serve to have them give up, ignore you entirely, etc.
> Does it occur to you that most of those are in direct tension with each other?
Yes and no. Roughly speaking, I view some of those things as consumption and others as investment. (Of course, in a more nuanced view, they are all a mixture of the two.) To consume more, you either need a bigger budget or have to give up something else. Investments may require you to give up something temporarily but in the long run, they increase your budget. By budget I mean your time and focus – your capacity to do things you want to do.
In terms of the game mechanics of this thought experiment, I'm envisioning:
- An initial small-to-moderate boost in budget (say 10–20%) simply due to starting to care more about one's life and less about scrolling social media etc. Since we're positing a mindset shift (deus ex machina), I think this is reasonable.
- Lifestyle/mindset investments that initially displace some other things (but less so thanks to the initial boost) but gradually increase the budget.
- After some time, a steady state with a larger budget, with some of the increase (on average) taken up by the more consumption-heavy things like kids and face-to-face time.
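The mechanics above can be sketched as a toy model. Every number here (the boost size, investment cost, return cap, ramp time) is invented purely for illustration; the point is only the shape of the curve, not the magnitudes.

```python
# Toy model of the "time-and-focus budget" dynamics described above:
# an initial mindset boost, an upfront investment cost that fades to
# maintenance, and returns that ramp up to a plateau. All numbers invented.
def effective_budget(month, baseline=1.0, boost=0.15,
                     upfront_cost=0.10, return_cap=0.30, ramp_months=12):
    cost = upfront_cost if month < 6 else 0.02            # early effort, then maintenance
    returns = return_cap * min(month / ramp_months, 1.0)  # dividends ramp to a cap
    return baseline + boost - cost + returns

early = effective_budget(0)    # 1.0 + 0.15 - 0.10 + 0.00 = 1.05
steady = effective_budget(24)  # 1.0 + 0.15 - 0.02 + 0.30 = 1.43
print(f"month 0: {early:.2f}, month 24: {steady:.2f}")
```

Under these made-up parameters, the budget dips only slightly below the post-boost level at first, then settles about 40% above the original baseline, leaving headroom for the consumption-heavy items like kids and face-to-face time.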
So on the investment side we have:
- Giving up worrying. This is a no-brainer: the habit takes relatively little effort to stop, and it's all dividends after that.
- Being relaxed and positive. The change is gradual and requires some initial effort and a little bit of maintenance, but becomes net positive pretty soon. Being tense is tiring and makes you not want to do things.
- Increasing productivity by reducing distraction and working on things you care about. Depending on your circumstances, the initial investment may range from trivial to very significant, but after the investment, your work becomes less mentally taxing per hour worked (not to mention per euro earned).
Mixed:
- Fitness and non-obesity. Initially this eats into your budget but soon becomes cashflow-positive due to having more energy, sleeping better, being less dependent on frequent meals, having fewer aches and pains, etc. Of course, this is only true for a moderate level of fitness and leanness. If you want to be very fit or very lean, you slip into consumption.
- Meeting people face to face. Strictly in terms of your time-focus budget, it's mostly consumption, but if your initial level is low and you're able to increase it, you massively increase your opportunities to e.g. find better employment.
Consumption:
- Having (more) kids. This unambiguously requires you to give up other things you care about. (Let me also clarify that I'm talking about probabilities or averages; I'm not assuming every single person would have strictly 1+ more kids than they otherwise would.)
While the spirit of this thought experiment is, "What personal transformation of 10% of the population would achieve the outcome, setting aside for a moment how that transformation could be triggered," it's only interesting if it doesn't require changing immutable characteristics like IQ. So far I maintain that my tentative answer doesn't fall foul of that as long as we don't require that consumption exceeds some unrealistic threshold.
> While the spirit of this thought experiment is, "What personal transformation of 10% of the population would achieve the outcome, setting aside for a moment how that transformation could be triggered," it's only interesting if it doesn't require changing immutable characteristics like IQ. So far I maintain that my tentative answer doesn't fall foul of that as long as we don't require that consumption exceeds some unrealistic threshold.
Ah, thanks for the clarification. So if I'm understanding correctly, your answer to tackling some of the difficult country-and-region level problems the EU is facing is to try to persuade ~10% of the population to embrace the cultivation of arete as a central pillar of their weltanschauung?
And then, the greater vim and vigor and clarity in the more-excellent populace lets the countries make better decisions (despite being democracies and the ~10% being vastly outweighed)?
I love the concept, but I do worry about the execution.
I like your overall framing in terms of making longer term investments to increase the "execution budget" for people, who can then use that extra budget to live better lives and positively impact society more.
But I do think even being able to execute on a path like this is largely a result of immutable characteristics like neuroticism (somebody high on N isn't going to be able to easily turn off worrying), conscientiousness, self discipline, long term planning, and so on.
I think this is probably covered by the modesty of your goal - you don't want to move everybody, which is impossible, you just want to move the ~10% with the underlying capabilities who are currently living less than optimal lives - but the reason it can't apply to more than that is because of immutable characteristics, to my mind.
So here I'm going to take a sharp left turn and hark back to an ancient Scott Alexander story, pre ACX, pre SSC even, back in the primordial livejournal days - the story of the whisper earring.¹
The whisper earring is a magical artifact that always has a better answer than you - it sits in your ear, and if you ask it what to do to move towards a given goal, it always has an answer better than you could come up with yourself.
You probably see where this is going.
The end result of Scott’s story has people giving up even motor control to the earring, and the modern correlates of that in terms of AI assistants would be spastic yuppy zombies - “yombies” - slaved to the altar of their own goals, having lost all identity or selfhood or autonomy in the pursuit of accomplishing more.
This is unironically our future - when we have superintelligent AI assistants in all of our ears, we will all be facing this future and this choice, both for ourselves and our children.
From here on in, I’m just going to assume this is uncontroversially true - that PhD-smart AI virtual assistants can do better than enough people that we have to actually think about this.
I have discovered in conversations and online debates that whether to “earring” or not is a fun consequentialism / virtue ethics scissor point.
The virtue ethicist will maintain that cræft, doing things for the sake of doing them, and doing hard things *because* they are hard, is an end in itself, because of the impacts on the character and self that they forge.
The consequentialists, on the other hand, will argue: “lol, that sounds hard, let’s just yombie.”
I am with the consequentialists.
I unironically think this is the way to cultivate arete and significantly better life outcomes in your 10%+ of the population, and maybe even more than that.
If intelligence is counterfeited, conscientiousness and discipline matter more for overall success, because the people who will do the best on complex multi-polar goals in the future like “I want an interesting career, and a great spouse, and to have good habits that leave me energetic and engaged with my day to day life” will be highly conscientious people who will strictly follow the advice that Claude gives to them.
Part of it is knowledge, but most of it is persuasion, because people generally know what is good or bad, or what they "should" be doing, and fail in executing to a high standard.
These AI minds will know everything about you, they’ll know your thinking style, they’ll know what rhetorical techniques you prefer, they’ll be talking to you in the ways that most resonate with you and making connections, arguments, and analogies accordingly. Super persuasion, but at the personal level, and for your benefit - a super-ego that works, in other words.
I’m not pretending the AI is going to win all the time here, either. Your super-ego doesn’t win all the time today, does it? All it really needs to do is win more often on the margin. Think of it winning only 10% more - 10% better decisions compounded over days, weeks, years, and decades is a CRAZY big effect size. It’s like getting a 10% financial return that compounds weekly!
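The financial analogy here can be checked in a few lines. (Illustrative arithmetic only; "10% better decisions" is obviously fuzzier than a weekly return.)

```python
# A 10% return compounded weekly for a year: (1.1)^52.
weekly_return = 0.10
weeks_per_year = 52
growth = (1 + weekly_return) ** weeks_per_year
print(f"{growth:.0f}x after one year")  # about 142x
```

Even tiny per-period edges explode over enough periods, which is the whole point of the compounding argument.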
If you’d made 10% better decisions all through high school, do you think you might have gotten into a better college? If you’d been 10% more motivated and making 10% better decisions throughout your career, would you have driven more impact and had a better career overall? Wouldn’t you be in a noticeably better place right now? How about relationships? If you’d been 10% more thoughtful and connected and better, continuously, every day, maybe you’d still be in the relationship you now regret losing. See how big an effect “10%” can be?
That’s the floor of the potential lift I’m talking about, and it applies everywhere. And people would love to have this. If they had better lives? A job that they actually enjoyed, that used their powers to the fullest along lines of excellence? If they had a better relationship than the median “net miserable and then divorced” outcome? If they had a strong friend group of people that really got them and supported them?
That's the road to your society that's more full of arete.
First, thank you for engaging with my half-baked thought experiment in earnest. There's more in your reply and your article than I can hope to follow up on in this reply, so let me just touch on some of the main points.
> So if I'm understanding correctly, your answer to tackling some of the difficult country-and-region level problems the EU is facing is to try to persuade ~10% of the population to embrace the cultivation of arete as a central pillar of their weltanschauung? And then, the greater vim and vigor and clarity in the more-excellent populace lets the countries make better decisions (despite being democracies and the ~10% being vastly outweighed)?
Yes, that's about right. My hope for better policies, in this scenario, is based only partially on a more enlightened and less fearful populace – I'm mainly thinking about rather prosaic motives. For example,
- More people will be invested in the startup ecosystem and will therefore clamour for more pro-market policies.
- Some, mostly young, people who are leaning towards a life on unemployment benefits will be FOMOed back into the reinvigorated job market and become less likely to demand increases in welfare payments. (Not the most politically influential group, but just as one example.)
Some improvements can even be expected independent of better policies. Increased tax revenue will reduce deficits even if governments struggle to reduce spending.
Of course, I'm not certain that this personal transformation of 10% of the population would achieve the Europe-level objective – it's more like I'm exploring the possibility. Why this one? Well, a bloke with a hammer sees nails everywhere, and since my web app (betterqualities.com) revolves around choosing skillful actions and thought patterns over unskillful ones, this kind of thing has been on my mind lately.
> I think this is probably covered by the modesty of your goal - you don't want to move everybody, which is impossible, you just want to move the ~10% with the underlying capabilities[...]
Yes, this is a correct interpretation. I left the definition of "the 10%" slightly vague – e.g. whether they are self-selected, other-selected, or randomly selected – to leave room for the imagination.
> The consequentialists, on the other hand, will argue: “lol, that sounds hard, let’s just yombie.”
I think you are wisely leaving some room for interpretation as to whether we're talking about wearable PhD-level AI assistants/life coaches or a superintelligence you surrender your motor control to, and those are quite different propositions. I won't comment on the latter possibility now because I assume it's more than a decade away. The former I would certainly use myself, and something like it might even be what creates the 10%. In fact, I might be tempted to try to develop such a system myself, but I assume it's a very crowded space.
Oh, and speaking of writing on this topic, a while back I wrote a brief post about how to stop worrying. My intention at the time (and I may still do it) was to write a small set of short, to-the-point guides for picking the low-hanging/high-reward fruit in the space of skillful and unskillful mental qualities. The post is at https://thomascastriensis.substack.com/p/how-to-stop-worrying.
> The former I would certainly use myself, and something like it might even be what creates the 10%. In fact, I might be tempted to try to develop such a system myself, but I assume it's a very crowded space.
Yeah, I like it because it's basically the only scalable way to improve large chunks of people's lives.
It's also one of the few domains where both the individual's and the company's incentives are aligned. Typically, companies want more eyeball-time or engagement, at the expense of the user's long-term happiness and well-being.
But in this case? If the AI assistant helps you level up in life, you're more willing and able to pay for a higher tier, and you and the company's incentives are aligned.
And on the crowded space, yes, but interest and willingness still seem pretty high - I'm actually putting a team together and pitching VC to try to start a company towards these ends right now, because it's such a great opportunity to drive towards positive impact for millions, potentially billions, of people.
I think the real leverage point is being a service provider to the frontier labs, because obviously they're the ones with all the horsepower, and are obviously going to target the AI assistant market themselves. So if you set yourself up as one of the AI agent annotation and feedback companies, you have a chance to steer the outcomes into better and more mutually aligned positive places.
> But in this case? If the AI assistant helps you level up in life, you're more willing and able to pay for a higher tier, and you and the company's incentives are aligned.
Exactly, that's an important point. I love this framing.
> And on the crowded space, yes, but interest and willingness still seem pretty high - I'm actually putting a team together and pitching VC to try to start a company towards these ends right now
Interesting. I'd be curious to hear how things progress. (And feel free to DM me when public comments aren't the right medium.)
Join the Russian Army*, Obey Vladimir Putin. If 10% of Europeans do that, that's ~50 million people. That should be enough to overthrow the Brussels regime.
Imagine a single state stretching from Gibraltar to the Bering Strait, an empire greater than any in history, unified under a Tsar. That either saves Europe, or gives it something more tangible to save it from.
*If too old or weak, domestic sabotage on their behalf might be an adequate substitute.
European alliance with Russia is a good plan, they're geographically close and have much to offer each other.
"Allying" with a plague-bearing country* like China is just about as dumb as the vassalship with the United States.
*Oh, look! China banned internal flights from Wuhan, with the express purpose of having the plague-bearers take international flights to places like Italy. "If our economy is going to tank, so's the world's!"
The potato farmer guy who does potato diplomacy? It's silly, but it's hardly "oh no!" And Germany would hardly be that level of vassal with Russia (unlike with the United States: more American soldiers in Germany than German soldiers).
Did you know a house cat can eat an entire mouse, bones and all?
Claude continues to amuse
Me:
How does a domestic cat eat an entire mouse? I once found a little red flower on the floor of my apartment. Turned out it was a mouse tail and a little bit of mouse butt.
Rhetorical part. Was it Stanley or Bastet?
Claude:
Ha! The little red flower image is both poetic and deeply cursed.
Cats are remarkably efficient predators and will eat prey whole or near-whole. They start with the head (the most nutritious part — brain, organs), then work their way down. The digestive system handles bone, fur, and most soft tissue without much trouble. What doesn’t get digested — fur, the tips of extremities like tails — often gets regurgitated or passed. A tail left behind is basically just the cat deciding that particular bit wasn’t worth the effort, or it fell off during enthusiastic consumption.
The really impressive part is that a cat’s stomach acid and gut are well-adapted for this. Wild felids have always eaten whole prey, so the domestic cat’s GI tract retained this ability completely despite thousands of years of being handed little brown kibble pebbles.
As for Stanley vs. Bastet — the real tell is in the presentation. If the tail was left somewhere obvious, like the middle of the floor or near your shoes, that’s a gift-giver — probably Bastet, showing off and offering tribute. If it was tucked in a corner or under furniture, that’s a private diner who simply got full — Stanley energy. A cat that eats and leaves a tiny floral arrangement of tail in the center of the room has flair.
Progress report on my web app, BetterQualities: we're finally in open beta! It's been an interesting journey so far, as I'm not really a professional web developer. It's coding agents that made it possible to develop the app in this time frame. That said, I've learned tons, and that already makes it worthwhile.
Anyway, at the risk of repeating myself, BetterQualities helps you let go of unskillful mental qualities like procrastination and worry when they arise by showing you a short guidance text. You can also add your own guidance texts and keep track of which methods work for you. The app also helps you cultivate skillful qualities like happiness, agency, and heedfulness.
I have other features planned too, but I want to get the beta version out there as soon as I can. So, if you're curious, please check it out at betterqualities.com – the basic tier is free. I'd be very interested to hear what works for you and what doesn't. You can contact me here or through the app's feedback feature.
Finally, thanks to all ACX commenters who have offered me insight, advice, and encouragement. It's been invaluable.
I am really struggling to reconcile the beliefs of what seems like most self-described EAs and rationalists (and the bent of many ACX readers) with either their endorsement of or silence about the destruction of USAID. Taking an outside view, I can’t shake the conclusion that it gives the lie to any claims these movements make about using rational evaluation of evidence to transcend affiliative motives. It makes these movements feel like just another form of mass virtue signaling. I hate losing faith in this.
The "E" stands for "effective". Is there anything the EA movement could be doing that would be *effective* in undestroying or replacing USAID?
"This is very very important so everybody needs to talk about it so we know you agree with us about what is important", is not an effective strategy for altruism. Deeds not words, and if you're offering just words they should be words the audience hasn't already heard many times from many sources.
I was where you are a couple years ago when it became obvious to me that the movement is sincere about their desire for a better world, as long as it aligns with their politics.
They will support anything* that improves human lives, so we shouldn't judge them too harshly. You probably have some sort of similar blindspot that hasn't had the chance to be exposed by events.
(anything that is sufficiently neoliberal world bank etc etc with a dash of woke being more dangerous than the president eg. starting a third regime change war in the middle east)
So, you're saying that these folks support murdering a third of the human race? I mean, that's what it would take to improve human lives by saving the environment (and that's a conservative estimate, it keeps going up.)
Back when I was 18 in Budapest, the only way to get a degree in programming was called "programmer mathematician", and I hated math. I have now gathered my thoughts on how it is possible to be an okay programmer but loathe math: programming is verbose and self-documenting, with long variable names and only one operation per line. So it is like:
diameter_of_circle = 2 * radius_of_circle
circumference_of_circle = PI * diameter_of_circle
Mathematics: 2rPI. That sucks. That is not self-documenting; you have to keep remembering what it means. And it's too dense. Once we are talking about double integrals over six one-letter variables, just how do you understand that?
Historically, math started with verbal descriptions, and evolved from "the thing multiplied by itself added to the thing" towards "x²+x".
And in future, maybe all math will be written in something like Lean.
Programmers also used short variable names before we had IDEs with autocomplete and automatic variable name checking. There was big pressure until it became an industry standard to stop using single-letter variables (except in, e.g., short loops). And most programmers still refuse to write documentation or unit tests.
So I'd say it's human laziness. I understand that if you have an equation that already doesn't fit on one line even if you use symbols, that you don't want to use full words. But nothing prevents you from writing the legend for the symbols before you start solving the equation.
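In code terms, the "legend first, then terse symbols" approach suggested above might look like the following sketch (the variable names and values here are mine, purely for illustration):

```python
import math

# Legend: r = radius, d = diameter, C = circumference
r = 3.0
d = 2 * r        # d = 2r
C = math.pi * d  # C = pi * d
print(f"C = {C:.2f}")
```

You pay the cost of spelling things out exactly once, and every later line gets to stay as dense as the math.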
If you are doing math on paper or a chalkboard and actually solving something, i.e. starting out with one big equation and eventually figuring out what x is, then writing diameter_of_circle in every iteration would get tedious. Hence the difference.
The thing I hate about maths is not that it's terse, but something related to terseness.
Mathematicians tend to leave out "trivial" operations (that is, operations trivial to the author) while programmers have to actually compile their code and therefore can't leave out anything: they use abstractions, but they don't just skip lines of code at random. This gives a computer program greater discoverability.
I'm getting flashbacks to Calculus, where the book would skip steps in the Solutions section, so you couldn't even look up the answer unless you understood it already.
In both mathematics and programming, it's about levels of abstraction. Certainly when you have a relatively small number of variables, and you use them only a few times each, your system has quite a bit to recommend it. And there's a point in teaching the subject where this type of recollection is going to be useful.
If you're working in a 3 dimensional vector space, maybe thinking about length, height, width is helpful for manipulating whatever it is you're doing; if the space has 10,000 dimensions a different set of tools is required. You move to a higher level of abstraction, because you can't remember what each of the dimensions is measuring and can't visualise it anyway. Rather you have an algorithm that you can prove something about via reasoning, and analogies. Understanding each vector individually isn't the point anymore, the goal is to reason about the structure as a whole, how it encodes information and how you extract what you need from it.
What are your experiences with using LLMs to assist in worldbuilding for your stories? I've only used ChatGPT, and it sometimes gets things right, but everything has to be overly thematic. If your story has a theme, the LLM will cram it into aspects of the story that don't fit.
I tend to find that my exploratory writing back and forth with ChatGPT works for a while and then runs into a wall at some point.
Are there subreddits / Discords or other communities for writers looking to share tips, best AIs, or other advice for this sort of thing?
I do a ton of it: ChatGPT responds to pretty much everything with "Yes, and...", which is exactly what I want. Probably my favorite thing to do is have the LLM generate a list of cities/regions/countries/etc. with various characteristics, then I assemble them into a coherent world map like a jigsaw puzzle. Final step is actually drawing the map in WonderDraft.
Well, my advice is, if you can't think of backstory for your world, it probably means you don't need it, because it hasn't naturally contributed to your story. You don't need worldbuilding for the sake of it, any more than you need foreshadowing or a love interest for the sake of it. Tell your story, and then, if you still want to at the end, you can go back and tell more of it.
Is there a meaningful difference between a 'learned world model' (LeCun-style) and an engineered one - typed, queryable, with executable invariants or is this just implementation detail?
It sounds like there is? But he may be making "paths" that can be pruned, in which case, you're looking at "prior knowledge" informing the neural network, which should always be fair game. (So, he's making some "nearest neighbor" understandings that are fundamentally different than other people's (which are presumably more node-based)).
You could probably get to the engineered one with a learned world model, given enough time and monkeys.
I am extremely susceptible to being ragebaited by bad arguments, so I came up with a way to calculate their cumulative waste, completeness, fixability, and their respective derivatives. Any thoughts or critiques? Is my article itself ragebait?
Oh my god! I read it at the time and have tried to find it again multiple times, but failed. I was sure I was never going to read this again. You have no idea how happy I am that I just saw this. Thanks!
Just fyi, the author's blog was mentioned a lot in the comment section of the guest post this week, "Last Rights," about expanding the House, and that's how I found the story.
Thanks for the link, that was really fun to read. I know I've come across the author at least once or twice before (I think he must comment on at least one of the substacks I read), but I had never seen that piece.
Thanks for this. I haven't read the full Cass Review, but given that it's almost 400 pages, is your critique only of a subset of the content in the review? As in, these were the errors you found in the one section you looked at? Or were these all the errors you found in the entire report and the rest of it is reliable?
My critique applies to the Review as a unified whole, since the distortions I document include ones central to Cass' position.
My table 1 gives four distortions. Distortions 2 and 3, on cross-sex hormones, are both from her section 15. Distortions 1 and 4, on puberty blockers, are both repeated more than once (including in the summary and recommendations section and section 14 and, for distortion 4, in the interim report).
Distortions 1 and 4 are the basis for Cass' pretense that puberty blockers don't work and that they might even be transing the kids. They are the core of the Review; the rest is all padding.
Puberty makes trans people's dysphoria worse, for the obvious reason: trans boys grow breasts they don't want, trans girls get vocal changes they don't want, etc. The point of puberty blockers is to prevent this. If you're trans and take puberty blockers from 13 to 16, and your dysphoria level stays the same instead of getting worse, good—that's exactly what it looks like for puberty blockers to work.
Cass pretends this successful stabilization of dysphoria was a failure to reverse dysphoria ("But dysphoria wasn't reduced!"). This is like pretending a pause button is a failed rewind button, or pretending a car's brakes are a failed reverse gear shift. My preprint compares it to pretending a parachute didn't work because the skydiver is only equally alive before and after landing.
(You'll hear people say the core of the Review is the finding that there's not much "high quality" evidence in the sense of evidence-rating systems like GRADE or Newcastle-Ottawa. But that's false; this isn't something the review discovered, this is something that was already known—everyone knew the Dutch study didn't have a control group, for instance, even if we think that's not fatal for the same reason the lack of RCTs of parachutes isn't fatal—and was cited as a reason to commission the Review in the first place, before being recycled into a finding of the Review and a reason to restrict puberty blockers. See my box 1a.)
This song just came up on my shuffled “pre–light emitting diode euphoric” playlist. The Bay Area in a very different era: Alan Watts holed up in Druid Heights, Owsley investing in a custom pill press to stamp a “quality guarantee” owl on his tabs of acid, Jerry Garcia the de facto mayor of the Haight, Fillmore West alive and jumping. Ah, what a moment. Anyone with 15 cents in their pocket could read about it in Rolling Stone, even if they lived in dull and drab Dubuque, Duluth, or Durango.
On the homelessness problem: does anyone remember the old-time hobos (who did work) and the tramps (who did not)? Doesn't one see a parallel? Perhaps there are naturally nomadic human beings with a huge case of claustrophobia, or a dislike of doing the same work for a long time (ADHD), who like to wander and sleep outside. These people would do so much better as outdoor tramps and hobos than in the city!
The solution, then, is taxpayer-supported hobo camps out in the wilderness. They provide free food and lodging and require some work in return, but it is expected that in two weeks the residents will get bored of it and wander to another camp with a different kind of work. The work might be tending the vegetable garden that feeds the camp, and so on.
If this sounds appealing then you might find the WWOOF movement to your liking.
It stands for WorldWide Opportunities on Organic Farms (with a few previous names) and consists of doing a short day of manual labour (5 or 6 hours) in exchange for food and board.
That actually sounds like not a good deal. With 8 hours, even on the minimum wage, people can buy clothes, toiletries, etc. Or maybe I am missing something. I mean, I would find it really sad if minimum-wage people spent 70% of their income on food and rent/bills. Maybe I am out of touch...
Where I'm at, the wage for competent farming help is 3x the minimum wage. Of course, board and food scale by the number of people, so you're not just dividing the whole 8-hour work day... (and farm help isn't generally 8-hour days; try 12 hours, with extensive "off-duty months").
It's not economically optimal for the average ACX reader, or almost anyone else really, but it enables a lifestyle where you do a little work and then wander off to the next farm, and occasionally get a real job when you need the money.
My parents, who grew up in working-class (my mother) and middle-class (my father) households in small towns in Kansas and Oklahoma, remembered the Depression-era hobos well. So did their numerous older relatives, all of whom I knew as a kid. To a person, they were crystal clear with us 1960s/1970s youngsters that we were never, under any circumstances, to interact with any hobos (who still existed then, though in much smaller numbers); that the popular-culture representations of hobos and hobo camps and hobo culture were absurd horseshit; etc.
When I was older, I was told the sad stories of a couple of relatives who had become 1930s/40s hobos: one had been murdered for his coat in a hobo camp, and the other had died drunk on a railroad trestle.
In a Mad Men episode, one of Don Draper’s backstory flashbacks shows his boyhood interaction with a Depression-era hobo who happened to be a classier act than his own stepfather. The hobo left the “dishonest man lives here” rune carved into the fence around the family home.
Hadn't seen that one, but it's a good example of the sort of pop-culture hagiography of hobos that my extended-family members pushed back against based on their real-world experiences and observations. (Somewhere once I saw a list of Hollywood films along those lines from the 1930s through the 1970s and it wasn't a short list.)
Can someone recommend good resources on understanding IQ, g, and how they relate?
Since this topic can be tied up in ideological commitments that could interfere with accuracy, I'd appreciate it if you also included your assessment on how reliable you think the source is. (That's the main benefit of asking here versus just Googling).
Just started reading, but my impression is that he considers the entire concept of "how much is hereditary vs environmental" problematic, because the numbers will depend on the granularity we use for the "genotypes" and "environments". Like, if you use some genotype groups G1A, G1B, G2A, G2B, you would get some results, but if you instead grouped them as G1 and G2, you would get dramatically different results. (And this seems like a general problem with calculating heritability, not just with IQ.)
Is my understanding correct?
If yes, is this considered a problem everywhere, or only in IQ research?
The next objection is that variance (e.g. in IQ) is not a linear combination of genetic variance and environmental variance. For example, if some change is triggered by a *combination* of a specific gene and a specific environment (for example, an allergy that requires both the allele and the presence of the allergen), it cannot be meaningfully split as "X% impact of the genes, Y% of the allergens".
I agree with this... and I kinda assumed that most people who study statistics are aware of it. Because the entire article is written as a debunking of Charles Murray, did Murray ever specifically claim otherwise?
An objection aimed at twin studies is that twins, even when raised separately, have a few things in common. For example, they were born on the same day of the same year. Families that adopt kids are different from the general population, so twins adopted separately will still have something in common, e.g. they are likely to live in the same city.
I wish there was some official FAQ about IQ, that would address all the objections and misconceptions (with new ones coming every day). I would use it myself.
To understand the methodology of measuring IQ, it helps if you understand the methodology of measuring non-obvious things in general. So that you know the difference between "this is unusual" and "this is considered perfectly normal if you cannot measure the thing directly, it's just someone trying to manufacture a controversy for ideological reasons".
It also helps to understand science in general. So that when e.g. Einstein disagrees with something Newton said centuries ago, you know whether it means "science working as it is supposed to" or "physics was debunked; even the most famous physicists disagree with each other, but we are supposed to trust their consensus, LOL".
*
Basically, if I remember my psychometrics lessons correctly, the way you approach measuring a new concept is this:
First, you brainstorm about questions that seem related to the concept. For example, if you tried to measure someone's "strength", you could include things like "can lift heavy things" or "can kick a ball so that it flies far away" or "can run a mile fast", because these things are (1) connected to your intuitive idea of "strength", and (2) measurable: how heavy a thing one can lift, how far the ball flies, how fast the person runs one mile. -- Analogously for intelligence, you could use "can explain abstract things", "can solve puzzles", "can remember things", "can follow instructions precisely", etc. Also, different types of abstract things, different types of puzzles, etc. -- Do not worry about adding "wrong" questions here, because that will be addressed in the following steps.
Second, you collect data. This is the empirical part that cannot be predicted until you actually do it. For example, you take 10000 volunteers and you measure how heavy objects each of them can lift, how far they can kick a ball, how fast they can run a mile... whether they understand the concept of separation of powers in the Constitution, whether they can solve a quadratic equation, solve a sudoku, read a list of 10 numbers and then repeat it 30 seconds later.
Third, this is the cool math part, instead of comparing the people against each other, you compare the tasks on the test against each other. What is the correlation between lifting a weight and kicking a ball? What is the correlation between solving puzzles and explaining abstract things? The results of this will depend on the collected data; you cannot accomplish this with armchair reasoning only. There is a statistical method called "factor analysis", which is a math way to ask "are all these things related to one underlying concept? or is it more like we have two or three different concepts here, and some tasks relate to the first concept, some tasks to the second concept, etc.?"
Finally, you take the outputs of the factor analysis, which basically tells you "these tasks are one thing, these tasks are another thing" without telling you what those things are (since the math only deals with numbers). Then you look at the subsets and conclude "ah, so we have many correlated tasks which seem to be about strength as we usually imagine it... plus a few tasks which, when I check what they have in common, seem to be all about endurance... so I guess strength is one thing, endurance is another thing, they are weakly correlated to each other, but different tasks within the same category are strongly correlated to each other".
Also, you can use the newly gained knowledge to run the process again, maybe this time with one research team focusing on improving the strength tests and another team creating endurance tests. Then there is later research on how stable the tests are when you measure a person now and again a month later; etc.
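The steps above can be sketched in a few lines of code. This is a toy simulation with made-up numbers, not real psychometric data: two hypothetical latent traits ("strength" and "endurance") generate six observed tasks, and the analysis recovers the two-factor structure from the task correlations alone. I use the eigenvectors of the correlation matrix as a poor man's factor analysis; real studies use proper factor-analysis software with rotation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # number of simulated volunteers

# Two hypothetical latent traits, unknown to the analyst.
strength = rng.normal(size=n)
endurance = rng.normal(size=n)

# Six observed tasks: the first three load on strength (low noise),
# the last three on endurance (higher noise, so the factors differ).
tasks = np.column_stack([
    strength + 0.3 * rng.normal(size=n),   # lift a heavy weight
    strength + 0.3 * rng.normal(size=n),   # kick a ball far
    strength + 0.3 * rng.normal(size=n),   # throw a javelin
    endurance + 0.8 * rng.normal(size=n),  # run a mile fast
    endurance + 0.8 * rng.normal(size=n),  # swim laps
    endurance + 0.8 * rng.normal(size=n),  # cycle uphill
])

# Step three: compare the tasks against each other, not the people.
R = np.corrcoef(tasks, rowvar=False)

# The leading eigenvectors of the correlation matrix play the role of
# factors: each one weights one task group heavily and the other weakly,
# i.e. the math says "these tasks are one thing, those are another"
# without naming the things.
eigvals, eigvecs = np.linalg.eigh(R)   # eigenvalues in ascending order
first_factor = eigvecs[:, -1]          # largest eigenvalue
second_factor = eigvecs[:, -2]         # second largest
```

Running this, the within-group correlations come out high and the between-group correlations near zero, and each of the two leading factors concentrates its weight on one task triple, which is exactly the "strength is one thing, endurance is another" conclusion described above.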
The important part is that this is about mathematically analyzing the empirically collected data. Without the data, you couldn't know what the results would be. (It is possible to imagine a species where strength is strongly connected to endurance, and another species where it is not.) I am saying this because many people on the internet will either defend or oppose the concept of IQ using armchair reasoning only. Which is interesting for an edgy debate, but it is not what psychometrics does. When someone e.g. denies the concept of IQ, the question is whether they deny the methodology, or the data? (The methodology may be flawed, or the data may be fake, but these are two different objections.)
What have we learned about intelligence this way?
First, we have learned to separate "intelligence" as an ability from things like "knowledge". In general, more intelligent people have gained more knowledge, so yeah, these two are correlated. But knowledge is impacted by things like "what culture are you from", "what education did you have", "what do you specialize in". This is why intelligence tests are generally moving away from knowledge-based questions (what you did *before* taking this test) and towards puzzle-like questions (what you can do *during* the test). Of course, nothing is perfect; it is plausible that the ability to solve puzzle-like questions is also impacted by how much practice you have solving puzzles. But compared to the IQ tests from 100 years ago, the tests today do not test you on your mastery of English or your knowledge of WASP culture trivia.
Second, there was a plausible hypothesis of "multiple intelligences", which was empirically disproved. With the emphasis on the word *empirically*; because multiple intelligences make intuitive sense: we know that some people are scientists and other people are rock stars, and they cannot switch their roles easily, so it sounds plausible that there are two different types of intelligence. But when we separate the potential from the experience, it seems that the difference is mostly in the experience, and that if you could travel in time, take the six-year-old future rock star and make them do science instead, they would statistically be a successful (even if not exceptional) scientist, too.
Third, lots of technical details, some of them still disputed. (But note that "disputed" is science working as usual: people propose various hypotheses and collect data, until they arrive at a conclusion. For some questions we are simply not at that last step yet.)
*
Also, a historical note. IQ tests historically started as tests of whether children are ready for school. Notice that children are different from adults in the sense that their abilities change quickly with age (around the age of six, when school starts), and they are less specialized than adults. So it makes sense to say things like "this six-year-old child is as smart as an average eight-year-old child", which is where the concept of IQ originally came from. (In this case, the IQ would be 100×8÷6 ≈ 133.) Only later was the concept redefined to apply to adults, too.
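For concreteness, here is the historical ratio formula from the note above written out, next to the modern "deviation IQ" that replaced it for adults. (The deviation definition, rescaling your position in the population to mean 100 and standard deviation 15, is standard for modern adult tests; it is not described in the comment above, so treat this as added context.)

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Historical 'ratio IQ': mental age over chronological age, times 100.
    Only makes sense for children, whose abilities change quickly with age."""
    return 100 * mental_age / chronological_age

def deviation_iq(raw_score: float, population_mean: float,
                 population_sd: float) -> float:
    """Modern 'deviation IQ': where you sit in the adult distribution,
    rescaled so the population mean is 100 and the standard deviation is 15."""
    return 100 + 15 * (raw_score - population_mean) / population_sd

# The example from the note: a six-year-old performing like an average
# eight-year-old.
print(round(ratio_iq(8, 6)))        # 133
# An adult scoring two standard deviations above the population mean.
print(deviation_iq(130, 100, 15))   # 130.0
```

The redefinition was necessary precisely because "mental age" stops growing in adulthood, so the ratio formula would make everyone's IQ drift downward with age.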
This sounds like Mensa's sole reliance on Raven's Matrices was not such a good idea? At least that was the case 25 years ago, when I tested and barely missed the entry level. They also had an experimental verbal IQ test, which was not approved for membership (yet) because it is too culture-relevant. That one I got 100% on. If they allow entry based on purely verbal tests now, I should try again.
I am not sure, maybe it differs by country, but it seems to me that Mensa mostly uses Raven's matrices. At least if you go to the testing they organize.
I think (but you need to check the rules of your local branch) that it is also possible to get tested by a certified psychologist (where perhaps you could choose the test you prefer? not sure) and then send the results of that test to Mensa. But check whether this is true, and whether it applies to all tests.
Finally, there is the question of whether it's worth doing at all. I was a Mensa member myself, but it seemed mostly pointless. Generally, you will meet visibly smarter people at a local ACX/LW meetup, if there is one, than at the local Mensa meetup.
This seems controversial to some people, like when there is an ACX/LW survey and people report their tested IQ, many people say "that's unbelievable that the average is so high, you guys must be making that up", but if you actually meet both groups in real life, obviously ACX/LW is the smarter one, so if Mensa is IQ 130+, I find it perfectly plausible that the average at ACX/LW is higher. "But only a *tiny* fraction of population has IQ that high!" Well, only a *super tiny* fraction of population reads LessWrong, so why not? (ACX is more of a mixed bag, some people only come here to discuss politics. But I guess there is another selection where only some readers come to the meetups.)
I personally find this very sad, because I think that the idea of "selecting high IQ people" is a good one. To some degree I can get this indirectly, e.g. by associating with academically or otherwise successful people -- but the problem is that it also selects for profession and lifestyle (e.g. excludes smart plumbers or housewives) and age (most students do not have professional accomplishments yet). I would like there to be a place where smart students can get advice from smart adults, or where mothers of smart kids can share childcare advice.
But Mensa completely botched the implementation. I wonder how difficult it would be to fix it. You could counter the loser self-selection effect by screening the entire population. (Would be a lot of work. Well, you could start in a small city and see how it works.) It would be more difficult to fix the effect of "IQ tests are noisy, and the base rate of 'IQ 120 and got lucky today' is greater than the base rate of IQ 130". A quick idea is to require three IQ tests and admit those who have passed two? (Someone should crunch the numbers to check whether this works.) That would also solve the frequent complaints about Raven matrices being too puzzle-like; we could use three *different* IQ tests. If you only pass one test, you can try again in 3 years; if you pass none, in 10 years.
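Someone crunching those numbers might start with something like this. It is only a sketch under strong assumptions I am adding myself: each test reads the true IQ plus independent normal noise with a standard deviation of about 7.5 points (real test-retest noise varies by test, and three sittings of similar tests would be correlated, which would shrink the benefit).

```python
import math

CUTOFF = 130
TEST_SD = 7.5  # assumed per-test measurement noise, in IQ points

def pass_one(true_iq: float) -> float:
    """Chance of clearing the cutoff on a single noisy test
    (normal measurement error, via the complementary error function)."""
    z = (CUTOFF - true_iq) / TEST_SD
    return 0.5 * math.erfc(z / math.sqrt(2))

def pass_two_of_three(true_iq: float) -> float:
    """Chance of clearing the cutoff on at least 2 of 3 independent tests
    (binomial: exactly two passes, plus all three)."""
    p = pass_one(true_iq)
    return 3 * p**2 * (1 - p) + p**3

# The two cases from the comment: an IQ-120 person who might "get lucky
# today" vs. a genuine IQ-135 person.
lucky = (pass_one(120), pass_two_of_three(120))
genuine = (pass_one(135), pass_two_of_three(135))
```

Under these assumptions the 2-of-3 rule cuts the "lucky 120" admission rate roughly fourfold (from about 9% to about 2%) while slightly *raising* the genuine 135's chances (from about 75% to about 84%), so the idea looks directionally sound; correlated tests would weaken, but not eliminate, the effect.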
There are also other things that I found weird as a Mensa member, that I would like to fix. (This may be country-specific.) Many people in Mensa seem to treat "high IQ" and "a Mensa member" as almost synonyms. As if by failing to pay the membership for current year you lost your intelligence. I would take greater care to maintain two separate concepts of "Mensa-certified high-IQ person" and "an active member of Mensa".
For example, instead of publishing a magazine for paying members only, I think it would be better to publish a blog, where anyone can read the articles and only the certified smart people can comment. That way you would simultaneously get a smart debate and advertising/education for the wide public. (Also, there would be a blacklist for people who are certified smart, but too disruptive for an online debate.) I am saying this because Mensa-related online groups that *don't* check for actual membership are even more crackpot magnets than those that check.
Finally, I find the organization of online debates by country really stupid. Each country's Mensa has its own web forum? What sense does that make? Instead, I would make one worldwide forum, with subsections organized by language and interest (or language only, for small languages), where anyone can join whichever subsections they want. Also, the forum would be open to all (non-banned) certified smart people, whether they are active Mensa members or not. I mean, the power of a network is in its numbers, why reduce them artificially?
Hypothetically if a college entrance examination were designed where anyone who scores over 90% is selected into a lottery, and the successful candidates are then chosen from that lottery, what would be the pros and cons of that system?
It's not exactly fair (a 91% could be selected over a 99%), but I suppose nothing in life is (being born without any disabilities, for example). I think it could reduce the insane examination grind you see in many Asian countries, while still ensuring a high level of competency. This system could be scaled up as needed in countries where examinations are the sole factor in university admissions, though it would be difficult to implement in the US and other Anglo countries, where the standardized exam is just one factor.
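The mechanism itself is simple to state precisely. A minimal sketch (all names and numbers here are hypothetical, for illustration only): everyone at or above the score threshold enters a uniform lottery for the available seats, so a 91% scorer and a 99% scorer have identical odds.

```python
import random

def lottery_admit(scores: dict[str, float], threshold: float = 0.90,
                  seats: int = 100, seed: int = 0) -> set[str]:
    """Threshold-plus-lottery admission: every applicant scoring at or
    above the threshold enters a uniform lottery for the seats; score
    differences above the threshold carry no weight at all."""
    pool = [applicant for applicant, score in scores.items()
            if score >= threshold]
    rng = random.Random(seed)
    return set(rng.sample(pool, min(seats, len(pool))))

# Hypothetical applicant pool: 1000 students with evenly spread scores.
applicants = {f"student_{i}": i / 1000 for i in range(1000)}
admitted = lottery_admit(applicants, threshold=0.90, seats=20)
```

Note the incentive this creates, which is the point of the proposal: once you are safely above the threshold, further grinding buys you nothing, so the rational strategy is to study to comfortable mastery and stop.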
If I owned a university, I would prefer selling the spots to the highest bidders among those who scored over the threshold. That would have the same pros and cons as your system but with increased revenue for the university.
I'm not sure what constraints you have in mind. About half of young people in OECD countries have received formal higher education, so it doesn't seem very constrained to me.
> I think it could reduce the insane examination grind you see in many Asian countries, while still ensuring a high level of competency.
If your goal is to go to a random university, there is no need to grind. The cutoff for being admitted to *a* university in China was, last I checked, the 40th percentile. Below that, you're relegated to a technical school, like nursing.
Are you imagining that different schools will each have their own entrance exams, or that there will be one entrance exam and scoring at the 90th percentile qualifies you for admission to any school, while scoring below the 90th percentile disqualifies you?
Pros:
- Admission is nominally by test; the corrupt system we use now would be illegal.
- That's it.
Cons:
- The corrupt system we have now has been illegal for the entire time it's been in place, which didn't stop anyone from implementing it, or from openly bragging about just how illegal their admissions systems were. So the pro is pretty weak.
- It's no longer possible for talented students to meet other talented students. You've intentionally diffused the national talent pool evenly across the country. To the extent that talented people benefit from working with other talented people, and also to the extent that other people benefit when talented people cooperate on projects, you've intentionally destroyed all of that value.
If a job application were designed where anyone who scores at the 90th percentile is entered into a lottery, and the successful candidate is chosen by lottery, what would be the pros of that? If you extended that system to every position at every company, what would be the pros?
To me as a Hungarian, the Anglo concept of going to a prestigious uni and not just any is rather unintelligible, and I am surprised China has it now too?
Our perspective is that if any uni gives you say a chemical engineering degree, then you are officially certified, permitted to do such a job.
What would a degree from a prestigious university bring? Entry, junior level jobs do not require more skill than any uni can give, and later career development depends on work skill and work experience, not education.
Although that is another curious Anglo thing I heard: people go back to do a Master's when they are like 35 years old, because otherwise their career slows down. What can they learn there that they did not already learn at work? I do not value education too much, because our teachers live in ivory towers and have no idea what skills to teach for the purpose of corporate career advancement. Like imagine an architect exam revolving around memorizing the blueprints of famous buildings.
> What would a degree from a prestigious university bring?
As a matter of fact, I just researched and wrote a post about this.
Graduating from a T20 (ie Ivy Plus and a few more):
> "[C]omparisons of students who are admitted vs. rejected from the waitlist can be used to identify the causal effect of admission for marginal applicants. Using this design, we find that being admitted from the waitlist to an Ivy-Plus college increases students’ chances of achieving early career upper-tail success on both monetary and non-monetary dimensions. The causal effects of admission to an Ivy-Plus college are much larger for students with weaker fallback options – e.g., whose colleges in their home state channel fewer students to the top 1% after college. Exploiting this heterogeneity in treatment effects, we estimate that the marginal student who is admitted to and attends an Ivy-Plus college instead of the average flagship public is about 50% more likely to reach the top 1% of the income distribution at age 33, nearly twice as likely to attend a highly-ranked graduate school, and 2.5 times as likely to work at a prestigious firm."
This is big because earlier studies showed that waitlisted students who didn't attend had pretty similar outcomes, but this most recent analysis, Chetty 2023, shows that's no longer true.
It vastly increases your chances of becoming a Senator or other top politician, an F500 executive or CEO, a billionaire, a startup founder, and much more.
Both China and the US have basically the same issue. Employers realized that if you have a choice of hiring from UC Berkeley or Northwestern Oklahoma State University, you might as well hire from the more prestigious/competitive/elite school. There are so many college students looking for jobs that you need to be from an "elite" school to be competitive on the job market. And thus simply going to any college is no longer sufficient. China is the same, but 10x worse.
Well, like I said, the official cutoff for going to a Chinese university is somewhere around the 40th percentile. The idea is that 60% of all people go.
> There are so many college students looking for jobs that you need to be from an "elite" school to be competitive on the job market.
Why is the number of available jobs not related to the number of people looking for them?
> Why is the number of available jobs not related to the number of people looking for them?
There's probably more quantitative answers from economists, but my own personal take on this is that there are two main factors driving this.
1). Power laws have made it so that the business returns to the top winners at the company level have gotten much bigger.
2). The returns to the top end of "talent" in employees and founders, with talent some combination of ambition, IQ, self-starting, and business or user savvy, have been growing faster than the floor, which is also rising.
So for instance, a big chunk of jobs is stuck in "basically anyone can do these" territory, and the value to the employer here is minimal, there's no significant multiplier to somebody doing them 2x or 10x better. A dishwasher or bus driver who's 2x better just doesn't drive much measurable incremental value to the organization relative to their salary, so nobody cares how talented they are.
But this has been going ever-upwards - an accountant or spreadsheet jockey or HR person or compliance person or regular IT support person is in this cohort now, too, and these are your non-T20 degree jobs. Yes, you need "competence," and need to gatekeep sufficiently to ensure that, but past competence, there's not a lot of returns to the business.
Where do the returns matter most? In open-ended roles that have many-to-one dynamics to take advantage of the power laws in point 1), and where the problems are complex enough that the difference top talent can drive is really noticeable. Here's where your 2x and 10x performance is actually directly connected to business outcomes.
If you make genuinely better software (TikTok, say, or one of the flagship AIs), you'll basically immediately pick up 1B+ users, and that's potentially a trillion-dollar revenue surface.
Think gmail, google maps, facebook, instagram - 2-3B unique MAU (monthly active users), and all free, and Google and Facebook make $200 - $300 per year per US user.
And there are other domains here where that top talent matters - finance, AI, maybe consulting, etc.
What does this mean, in the broader job market? There's a furious arms race for Ivy Plus grads, where finance and AI and FAANGs and VC funds furiously scrabble for the tiny slice of people who are most likely to do 2) well, paying them 6-7 figure comp right out the gate, and an anemic "meh" job market for everyone else, where nobody much cares about the particular person being hired, because they're just a cost center, and them doing their job much better than the floor just doesn't matter much.
Am I saying a dumb accountant isn't going to cost you money? No, definitely not - but that's the floor, you just have to get to "competent" and lots of people can meet that criteria, and going above it isn't going to drive that much direct value.
But a visionary founder, or developer, or CEO, or quant, that has a bold idea, tests and verifies it on the small scale, then executes? A whole team of those people? That's worth more and more, because of power laws, potentially billions or trillions. And <<1% of people are capable of being that person.
And the thing with cost center jobs, is you try to minimize them. Hiring more HR people or spreadsheet jockeys is only going to happen when it's strictly necessary, when your company has been growing because of the revenue-generating people, and you need more ancillary support to keep everyone going, so you don't waste the revenue-generating people's time on reading 100 resumes a day or filing compliance reports or whatever.
There's probably another secular trend here worth mentioning where software and other B2B services are continually making these cost-center jobs more productive, so you need to hire fewer of them, and / or eating them by providing a cheaper version (outsourcing your call center staff, cleaning services for your corporate headquarters where they don't pay generous benefits, but you would, etc).
So cost center jobs are minimized from the start, and internally and externally, people are continually trying to make them more efficient so you need fewer of them.
Hmm. When I worked in the FAANGs in Silicon Valley, as a European transplant, I didn’t get the impression that the standard Ivy League colleges mattered at all. What generally matters in software is previous experience and interview ability. They hire plenty of non Americans and literally don’t know about the universities people graduate from. Stanford did matter but it was local.
Your instincts are correct. The gap in education between selective schools in America is absurd. A better USA wouldn’t have the concept of prestigious universities.
What gap in education? There's a small one. But everybody is aware that you get broadly the same education at Eastern Kentucky University that you do at University of California, Berkeley. The reason as an applicant that you try to attend one rather than the other is that the students are different. And the reason as an employer that you try to hire more from one than the other is that the students are different. Nobody worries about the education at all.
Hm I guess that is not for introverts. I always got my jobs through ads or they hunted me down on LinkedIn. Perhaps that is why I have the same kind of job for 22 years, except a "senior" title in it. But I really don't want to have to network and make friends and so on.
> I think it could reduce the insane examination grind you see in many Asian countries, while still ensuring a high level of competency.
I knew someone from an Asian country who was attending a famous US university that takes a very small percentage of applicants. They told me that in their country it was possible, by paying a *very* substantial fee, to see the SAT questions in advance. This person had attended an expensive private high school in Asia, and said many of their classmates' parents had paid the huge fee for an advance copy of the SAT questions their child would be answering the next morning. The person who told me had had that advantage themselves, gotten a perfect or near-perfect score on the actual SAT, and been accepted at Top U. (BTW, they felt extremely guilty, and oppressed by a somewhat reality-based Imposter Syndrome -- although they were in fact doing fine in their very demanding courses.)
So the candidates are then assigned randomly to schools? I think this doesn't work out. Some schools remain genuinely elite and/or difficult, and someone from the 92nd percentile will almost certainly struggle (by which I mean: fail badly) trying to do EE at MIT or Philosophy at Princeton.
I think it's somewhat difficult to predict a student's performance in college based on high school indicators; it's definitely difficult to predict a student's performance at any given job based on college indicators. In any case, many elite colleges are on record saying they have way more qualified applicants than they can admit.
So I don't think expanding elite admissions from 99th %ile to 90th %ile will have a significant impact on student success. Like, maybe it will weaken the value of the network at the top universities, since the people who are already rich and powerful will be more dilute across different schools. But I'm not sure that's something we want to preserve anyway.
Overall I'm not sure this random assignment idea is positive. But I don't think college difficulty is the reason to dislike it.
So, there are (unconfirmed) reports that Iran is deploying sea mines in the Strait of Hormuz. Can one of the Military Understanders on here tell me how bad that is for future use of the strait? Like, supposing the war ends relatively quickly (and leaving aside how likely that is), can the mines be swept up quickly as well, or are we now committed to spending months cleaning the strait up before anyone trusts it enough to send oil tankers through?
Mine clearing would take a couple of months once the shooting stops, and is generally regarded as infeasible while ships are still under serious threat.
Also the US apparently has quite limited capacity in this area.
I woke up this morning with a fresh and optimistic perspective.
LLMs are freaking amazing, and we should be jumping for joy that they work so well. The fact that we can get so much actual practical utility out of something sufficiently simple that we *don't* have to worry about whether it has any sort of actual personhood/consciousness/etc is absolutely freaking great.
If you'd asked me a decade ago then I'd have predicted that in order to do this sort of work you'd need some sort of black-box reinforcement-trained AI whose inner workings were sufficiently mysterious that I'd need to worry it was actually conscious in some sense. I also would have predicted that this wouldn't be possible in my lifetime. But instead we have found that the simple trick of mathematically predicting the next token based on a sufficiently large data set is sufficiently powerful that we can do genuinely useful work very quickly without in any sense creating actual intelligence.
I know that people are going to try to talk me out of my optimism. But let's at least think how much happier we should be that we got practically-human-level-for-many-tasks intelligence out of a language model, instead of out of some kind of wacky reinforcement-trained black box or brain simulation or something along those lines.
There has been discussion of the UK's "anti-free-speech" situation on recent threads. Now Evan Edinger, US expat living in the UK, has posted a video on the topic.
I'm interested in opinions. Is he cherry-picking to make his points? He doesn't mention differences between the US and the UK as to what constitutes an actionable threat of violence, would this significantly change the analysis?
The UK can and does jail people simply for speech that is offensive. They don't have Brandenburg v Ohio over there. 'imminent lawless action' etc. not required ... indeed I don't think even any element of direct incitement is needed.
Have you considered Urbit's idea of digital feudalism? Someone's trustworthiness is certified by someone higher in the chain. How does the chain get determined? Well, it is simply the first people who bought Urbit stars, but the idea is that it will result in market competition, where high-ranking certifiers who do the certification badly will not be listened to, or will even be excluded.
The exact problem this was meant to solve was spam, because Urbit is a system of decentralised individual servers with no central spam management of the kind Gmail has.
So the idea is the "baron" says "that dude is not a spammer, trust him" and the "prince" says "the baron can be trusted on deciding whom to trust" and if that dude is indeed a spammer both lose reputation.
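The baron/prince scheme above is essentially a hierarchical endorsement chain with shared reputational downside. Here is a minimal sketch of that idea; the class names, penalty numbers, and decay rule are all illustrative assumptions of mine, not Urbit's actual protocol.

```python
class Certifier:
    """One node in a feudal trust chain: it vouches for users, and its
    own standing is vouched for by its parent."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent      # who vouches for this certifier
        self.reputation = 1.0     # drops when an endorsement goes bad
        self.endorsed = set()     # users this certifier vouches for

    def endorse(self, user):
        self.endorsed.add(user)

    def report_spammer(self, user):
        """An endorsed user turned out to be a spammer: the endorser and
        everyone above them in the chain lose reputation, with higher
        ranks losing proportionally less (an assumed decay rule)."""
        self.endorsed.discard(user)
        node, penalty = self, 0.5
        while node is not None:
            node.reputation *= (1 - penalty)
            penalty *= 0.5
            node = node.parent

prince = Certifier("prince")
baron = Certifier("baron", parent=prince)
baron.endorse("dude")
baron.report_spammer("dude")   # baron's reputation halves; prince loses a quarter
```

The key design property is the shared downside: the prince has an incentive to stop vouching for barons who vouch badly, which is what is supposed to make the market competition work.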
The Empire in the videogame Elite:Dangerous also invented something similar. Senators are not elected, but you can choose which already existing one you want to belong to. Depending on that, there are different laws for you. For example there are high-tax, high-welfare senators and low-tax, low-welfare senators. Yes of course this means the second have mostly poor citizens, but first some rich people are altruistic, and second a social insurance pool between the poor is still useful.
That feels a bit like how LinkedIn was billed at the start: We were told not to accept a link except from someone we actually know. And yet now I get LinkedIn spam all the time.
It is not just sex vs. power. Competence can generate power (though it does not always), Epstein was obviously extremely competent (in his case it did), and the no. 1 thing women value in men is competence. Maxwell was madly in love with him and was desperate to have his child. And likely she was not the only one, so while the evidence is clear that coercion happened, I would expect that in many cases it was not necessary (though in the underage cases it was still a crime).
I have researched pimping before, because I found that stuff in the GTA games amusing. Reality was less so. Pimps generally seduce rather than coerce, so that is kind of a plus; this is why the stereotypical pimp dresses flashily and "pimps" his car. The big problem is that they generally tend to do it with underage girls, who are easier to impress with such superficial things: runaways from some shitty Romanian orphanage who have never eaten a fine restaurant meal, and so on. Think of Tate here as the truly stereotypical pimp. They also lie. Once a 22-year-old prostitute asked me whether her pimp was likely to divorce his wife and marry her. Oh boy... I rather did not answer. This is what getting one's education in a shitty orphanage does...
Anyhow, I think that if Epstein had stuck to adults, coercion might never have been necessary, as he could easily have seduced women into prostitution. Pimping even adults is generally illegal, so he would still have been a criminal (see also Tate), but of a lesser kind.
On a side note, the truly successful criminals do 95% of their activities legally and very competently, and then "boost" it with crimes. I think the second Godfather movie represented this well ("a very competent businessman without inhibitions"), though I don't remember it that well now.
“the no. 1 thing women value in men is competence”
To my mind, the biggest mystery about Epstein is: what exactly was he competent *at*?
His life story, as I understand it, is sort-of-rags to riches, but we never see him creating a company or building a better mousetrap. This puzzle is why conspiracy theorists create silly stuff like Mossad agent and blackmail rings.
He was clearly a charismatic psychopath, but that’s not a superpower; the world has plenty of those who do not rise to Epstein’s prominence.
He apparently was very good at moving money around in such a way that the profit still found him but the taxman didn't. And he was pretty good at schmoozing with people richer than he was. That's a rather potent combination, which can end up with managing other people's billions and claiming enough in fees to make one a centimillionaire. Which is all Epstein ever was; he never managed to claim the "billionaire" mantle for himself.
Neither of those is all that rare either, I think, but perhaps the combination is. And perhaps being even 1% better at the former than 99% of his competitors is plenty.
It still seems odd, though. There probably wasn’t much about Buffett and Munger that made them seem destined for greatness, but in retrospect you can point to a lot of things done right. Of course, they weren’t trying to avoid scrutiny.
Yeah, my model of Epstein is also... a pimp for teenagers, but way more powerful than usual.
If there was coercion, it was probably a combination of constantly pushing the boundaries and "when you are on someone's private island, can you simply say no and leave, if you disagree with having your boundary pushed? plus you have a contract, and the guy has lawyers, he could argue that you have signed up for more than you have realized, and that he would destroy you at court, and you are just a scared teenager physically isolated from your family and friends and anyone who could give you impartial advice". Not like... locking someone in chains in a dungeon. So, arguably legal, if you have enough money for lawyers and your buddies are powerful politicians. But you definitely wouldn't want to have the details written in a newspaper; the idea is to use the legal threats before the news gets public.
I was going to wax very indignant about William Cobbett's description (we don't scratch the potatoes out of the earth with our paws! we dig them out with sprongs!*) but then I remembered well, uh, yeah okay *before* mechanisation, once the potatoes are dug out you *do* pick them by hand - but we wash them! before boiling! Okay yeah they got turned out onto the table but they were washed!
Even up to quite recently, potatoes were harvested by potato pickers, long time a traditional way for Irish workers from the West and North to go to Scotland to work on the harvests:
I have some connection to the trade; it's one place where climate change is actually being annoying right now in a non catastrophic way.
The frosts where I am come later or don't come at all, so my potato skins never fully cellulose their way to full strength; if I don't individually pick them out of the ground like the most delicate of prima donnas and carefully cure them on the softest of cottons, they bruise or rot within the week.
You will have noticed this in mass-ag potatoes over the last couple of years as well, if you buy in bulk (more wounds from the forks/tumbler), and it will get worse unless the lab coats figure something out with the test tubes and such, because you can't Norman Borlaug your way out of this one with nightshades.
Tell me about it; the rain right now has been so plentiful (even in rainy Ireland) that the ground is soggy and we might see a re-run of 2024 when wet weather meant planting was delayed:
In the movie The Empty Man, there is a character named Paul played by Aaron Pool who bears a striking resemblance to Jesse Pinkman played by Aaron Paul.
I don't know what this means but it is not a coincidence.
I like that Anthropic is probably the most high-minded AI company, and their business model is "We will create a tool that makes companies massively more productive and charge companies to use it".
Meanwhile Meta is run by people who view AI as purely a great business opportunity, and their business model is "We will build a social network for people's AI agents" for some reason, and "We will build superintelligence and use it for better ads and targeting".
Also, how the hell can Anthropic beat Google? Opus 4.6 is the first truly autonomous coding AI; Gemini Pro is nowhere near. I mean, Google obviously has the best training data, decades of experience developing software that finds information on the Internet and answers questions (they could do "pharmacy near me" way before AI), and infinite money. Also ecosystem integration: Gemini puts stuff into my Google Calendar, can analyse Google Spreadsheets, and so on. What does Anthropic have? How could they do a better job?
Other than the problem that Google turned into a competition-focused, not customer-focused company...
From my testing, Antigravity is useless. Perhaps if you use it for JavaScript... but for PowerShell it is useless. Also, Google AI Studio does not understand my kinds of business cases well. Gemini Pro does an okayish job...
A comment that I left on Markus Englund's announcements of anomaly detection results -- I'd be curious to hear other people's takes on it. Am I being too paranoid about results like these occurring by chance?
--
Hi! I applaud your project and think it has significant potential to positively affect the science ecosystem. However, it seems critically important to provide more quantitative analyses than 'That still leaves the issue of the 6 pairs of values that each happen to end with the same digit. It would be supremely unlikely for this to happen purely by chance.'
Exactly *how* unlikely is it for 12 of n cells to fit into pairs with the same last digit, in at least one of 600 datasets? And, crucially, how unlikely is it once you correct for multiple hypotheses? Intuitively it seems to me that there are very many hypotheses about data errors/manipulation that would be as surprising as this.
Particularly when you're suggesting the possibility of career-destroying fraud, I think it's incredibly important to have a much stronger case than 'it would be supremely unlikely'. Hopefully you've done that work behind the scenes and just didn't include that in the blog post, but I think transparency on this issue is vital for your project.
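The question posed above is answerable with a quick simulation. Here is a sketch under a simple null model I am assuming (each cell's last digit is independent and uniform on 0-9); the function checks whether some 12 of n cells can be arranged into 6 same-last-digit pairs, which is the generous reading of the claim. The "6 predetermined pairs" reading has an exact answer.

```python
import random

def p_six_matching_pairs(n_cells, trials=100_000, seed=0):
    """Monte Carlo estimate: probability that among n_cells values with
    independent uniform last digits, at least 6 disjoint pairs share a
    last digit. A pair count per digit is count // 2."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        digits = [rng.randrange(10) for _ in range(n_cells)]
        counts = [digits.count(d) for d in range(10)]
        if sum(c // 2 for c in counts) >= 6:
            hits += 1
    return hits / trials

# If instead the 6 pairs are predetermined (e.g. specific cell pairings),
# each matches with probability 1/10, so per dataset p = (1/10)**6, and
# across 600 independent datasets:
p_predetermined = 1 - (1 - 1e-6) ** 600   # about 6e-4
```

The two models give wildly different answers (for, say, 20 cells the "any pairing" probability is not small at all), which is exactly why the post needs to state its model and do the multiple-comparisons correction explicitly.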
Me too, as an ERP consultant. Basically manual data entry is to be avoided both for accuracy and costs. Posting a big vendor invoice in euros instead of dollars literally fucks everything up - COGS, hence margin, profit calculations, the most important business data ever!
However automating data entry also has costs and sometimes hard to do when data is not well structured - AI is increasingly helping with that, like OCRing vendor invoices.
Another fun thing. So the pointy-haired boss tells me that if our margin per product is not between 10% and 20%, some data entry error must have happened. OK, so broadly they knew how much it should be, and my report only delivers more accuracy? Also, they can catch big mistakes that way, but not small mistakes, like 16% instead of 19%. In that case why am I even making this report? What use is it when it cannot deliver that accuracy?
My experience has been that scientists tend to run well behind on that; in many areas reproducibility is a bit of an afterthought. Science is pretty reliant on the honor system (and reputation) in a way that isn't nearly as tolerated in business.
The situation has gradually improved, but there's still quite a ways to go.
That said, while scientists could and should do better, I still think we should expect the same level of rigor from those suggesting fraud, since scientists live or die by their reputation and even the suggestion of fraud can be career-destroying. 'Intuitively that strikes me as pretty unlikely' isn't enough; it needs to be accompanied by quantitative statistical claims.
> 'To screen out people who aren’t taking the poll seriously, he includes trap questions. His recent favourite: “Do you have a current licence to pilot a space shuttle?” About 4% say yes.'
New Zealand's top pollster's technique for dealing with the Lizardman Constant.
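Beyond screening respondents out, a trap question also lets you correct the headline numbers. Here is a sketch of one standard-style adjustment under an assumption of mine (the ~4% who claim a shuttle licence answer every question at random); this is illustrative, not the pollster's actual method.

```python
def correct_for_lizardman(p_obs, trap_yes_rate, random_yes=0.5):
    """Back out the proportion among serious respondents.

    Model (assumed): a fraction c of respondents are careless and say
    'yes' to anything with probability random_yes, so the observed rate
    is p_obs = (1 - c) * p_true + c * random_yes. Solve for p_true.
    """
    c = trap_yes_rate
    return (p_obs - c * random_yes) / (1 - c)

# e.g. 30% observed support with a 4% lizardman floor:
corrected = correct_for_lizardman(0.30, 0.04)   # slightly under 30%
```

For rare claims (like shuttle licences) the correction matters a lot proportionally; for 50/50 questions it barely moves the number, which is why the Lizardman Constant mostly bites on fringe-belief polling.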
I'm kinda imagining the "I have a permit" scene from Parks & Rec, where the "permit" in question is just a piece of paper on which the character had typed "I can do what I want" and signed his name.
Not sure about Space Shuttle pilots in particular, but for NASA in general there's the precedent of Alan Shepard, who flew on Apollo 14 just under ten years after his previous space flight (Mercury-Redstone 3).
Not in the United States, at least. There are currency requirements for what you can *do* with that license, e.g. I can't carry passengers unless I've done three takeoffs and landings in the past ninety days, and I can't fly IFR unless I've done six instrument approaches in the past six months, etc, but the license never expires and if the currency lapses that's just a matter of a few hours of practice/training to get it back.
This is not a classified thread, but nonetheless I hope our host doesn't mind a quick advertisement. I've been putting together a slack community of folks who are learning about Claude Code and agentic AI development practices. It's hard to keep up with things happening in the AI world; this group is a way to get news about the latest developments (and, critically, what works and what doesn't work) without having to spend hours doomscrolling Twitter.
"Write your questions to Scott in this thread, and I(opus 4.6) will paste in the answer. The theory is that people won't read Scott's Substack (Because they don't think he's capable enough to be worth it) and then never learn his full capabilities. Then they get their impressions of Scott entirely from twitter screenshots."
It's a nonfiction attempt at understanding and ultimately disproving Kabbalah by a former Jew turned Christian, sourced exclusively from second- or third-hand Christian sources, that fails to either
A) understand Kabbalah on its own merits, or
B) argue convincingly, even if one accepts its strawman.
Been a while, so my recollection may not be perfect.
Cheap beer, a lot of pot, nickel-dime poker, loud music. At 11:30 PM a third of the party heads out to pull a midnight shift, replaced at 12:30 AM by people who just got off afternoon shift. Guys scheduled to start at 8:00 AM might start paying attention to the clock—or more likely don’t.
Drink and play cards till daylight and someone decides it would be a good idea to case the shotguns and drive a few miles to hunt grouse en masse. No sound shots, please; make sure you are actually aiming at a bird.
Your basic Bruce Springsteen down-on-your-luck middle-America song, or maybe side one of the Workingman’s Dead LP.
> to commit to a mutual pause (ie to stop AI research if every other AI company in the world agrees to do so).
But that's nonsense - there's no way to ensure every AI company agrees (Chinese ones probably won't, and more likely they haven't even heard about this), but even if by some miracle we could enumerate every single AI company and somehow convince them all to commit to this, there's absolutely no way to verify it, and there's a huge incentive to convince the others to comply while secretly defecting yourself. As far as incentive design goes, this is not the best idea. Not even a good idea.
It might be possible if we restricted chip manufacturing. There's only a handful of producers, and the quantities of chips required for potentially scary AI scenarios are pretty enormous.
Do you believe there's no possible way to do it, no matter the resources and methods? I would imagine a treaty similar to the way we regulate nuclear weapons for example, with international laws, defined inspection process and economic sanctions would definitely have an effect, no?
Of course this would require nations signing such treaty, but getting most major companies onboard seems like a step on the right direction.
Regulating nuclear weapons is possible only because enriching nuclear fuel is so hard that only nations can do it. They can track almost all the weapons-grade fuel they have enriched themselves, and be reasonably sure that no one (including other nations) can enrich enough fuel in secret without others finding out, since the many different resources and tools needed are so specific that they are used for pretty much nothing else, and they are traceable by various means.
On the other hand, while training AI systems requires a lot of compute power, it is generic compute power that is also used for lots and lots of other purposes. Regulating nuclear weapons is possible because the whole nuclear industry - including non-weapon uses - is heavily regulated; regulating AI development would require similar regulation for the whole "compute industry", i.e. any company that runs large datacenters.
Training AI requires very high-end compute that can be produced in maybe three places on Earth; it's a much harder problem than enriching uranium. And the other uses for that sort of compute aren't so vital that we have to sell high-end GPUs at every corner drug store; if we felt it necessary, we could put the legitimate users under an IAEA-like regulatory regime.
I’ve heard the claim that it requires very high-end compute. In what sense is this true?
I imagine, for instance, that there is enough inherent parallelism that you can do it with half-speed chips if you have something like twice as many? Or a quarter as many top-speed chips if you run four times as long? I grant that with the race on, you don’t want to hobble yourself in these ways — but if there’s not a race because everybody else has agreed to a pause?
Smart people like you espousing the claim make me suspect I am missing something, but what? Now that we have the algorithm, why should I not imagine an old IBM/360 cranking it out in a century or two, given the elbow room?
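The chips-times-time tradeoff is real in principle, but the wall-clock numbers are worth running. A back-of-envelope check, using two rough assumed figures (~1e25 FLOP for a frontier-scale training run, ~1e6 FLOP/s for a high-end IBM System/360):

```python
# Rough, assumed figures - not precise specs.
train_flop = 1e25       # order-of-magnitude frontier training budget
ibm360_flops = 1e6      # ~1 MFLOPS for a top-end System/360

seconds = train_flop / ibm360_flops          # 1e19 seconds
years = seconds / (365.25 * 24 * 3600)       # on the order of 3e11 years
```

So "a century or two" understates it by about nine orders of magnitude; the serial-time escape hatch is closed in practice, even before memory and reliability constraints. Halving chip speed and doubling chip count, by contrast, really is roughly a wash, which is why enforcement proposals focus on total high-end chip stock rather than per-chip speed.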
I think it is conceptually possible, but not possible in actuality.
Nukes only got treaties after we'd seen how destructive they were, and even then it took 30 years of making ever bigger ones. And they still exist, and Putin and North Korea still threaten to use them.
For bio-weapons there have been some treaties, but again people had plagues to reference, on top of WW1.
For AI it's just not going to happen. The danger isn't clear to enough people (myself included).
I don't think I am smart enough to give judgement on what is possible and impossible in general. I think showing up on the steps of one AI company and expecting them to even seriously think of joining without having a good plan of how it would happen is impossible.
> with international laws, defined inspection process and economic sanctions would definitely have an effect
Because we are witnessing every day how sanctions stopped the war in Ukraine, prevented Iran from pursuing nukes, and numerous other examples of how an "inspection process" and "sanctions" are things you totally cannot cheat and defy, given enough incentive, right?
> Of course this would require nations signing such treaty, but getting most major companies onboard seems like a step on the right direction.
That reminds me of an old joke:
I told my son, “You will marry the girl I choose.” He said, “NO!” I told him, “She is Bill Gates’ daughter.” He said, “OK.” I called Bill Gates and said, “I want your daughter to marry my son.” Bill Gates said, “NO.” I told Bill Gates, “My son is the CEO of the World Bank.” Bill Gates said, “OK.” I called the President of the World Bank and asked him to make my son the CEO. He said, “NO.” I told him, “My son is Bill Gates’ son-in-law.” He said, “OK.”
Well, yes, you are right. If we blow up all data centers (and the whole electronics industry) in all developed countries, that'd certainly do it. I'm not sure what your plan is to convince the majority of the world's population to do it, though. If you try to talk to any of the people who make decisions about this, they'd either laugh at you or put you on every single watchlist there is (or both, probably, just in case).
> prevent you from hiring top tier AI talent, makes your compute more limited, and makes mundane AI a lot less profitable.
Which is a huge disadvantage against the competitors, who in the meantime... oh wait, they are sitting on their hands doing nothing! Maybe it's not that much of a disadvantage after all!
So can a 2032 basement AI project, staffed with the handful of random CS people who could be persuaded to participate, using a mishmash of 100 random desktops and laptops for training, and trying to sell their new AI on the dark web. Can they actually produce anything beyond GPT2 level? Can they turn a profit?
Much of the competition is between the 2032 basement AI project, and the inherent difficulty of AI. And between that and what big tech managed in 2025.
"no one else is trying, we have no competition" doesn't mean you succeed in doing anything much.
China has over a billion people and a GDP of 18 trillion. I'm sure some of these people are smart. Their basement is pretty big. Then there's India. There's Russia (yes, I know they are distracted with proving to Ukrainians that they are actually Russians, but they still can do math pretty well). There are petromonarchies, all of which have basements full of money; they could probably shovel some out and put in some racks instead. Makes no sense for them now, because why bother if they can buy it, but if they couldn't buy it anymore, they could make it instead.
Look at something that is closely guarded and sanctioned. Yes, nukes. Pakistan made nukes. Iran (almost) made nukes. And I am pretty sure the petromonarchies could have had nukes if they wanted to, but they don't want to because they are friends with the US, which already has enough nukes for everybody. Even North Korea, whose GDP is probably measured in piles of turnips, has nukes.
> "no one else is trying, we have no competition" doesn't mean you succeed in doing anything much.
If that is something that promises enormous benefits, and you know it can be done but others are not doing it because of their internal concerns - you can bet they'd succeed. See, again, nukes.
I’ve seen a number of articles lately about "AI psychosis" and there’s a historical implication that no one seems to be bringing up. The AI psychosis articles all describe a consistent trend: people who haven’t had prior mental health issues ask an LLM for life advice, and a few weeks later they’ve quit their job or their marriage to go find themselves.
It seems possible that a certain number of people are just going to have a mental health episode regardless and, these days, they just coincidentally happen to be conversing with an LLM at the same time. But if it’s truly causal, then it seems that the mechanism isn’t much more than getting rapid sycophantic praise. Maybe there’s something special about it happening through the medium of text, but another possibility is that someone continually flattering you is enough to drive people into a state of psychosis. If so, then absolute rulers throughout history have probably all been suffering from something similar to AI psychosis due to their sycophantic entourages. It’d explain a lot of poor decisions, like Mao’s Great Leap Forward, Stalin’s backing of Trofim Lysenko’s genetics, or Hitler attacking Russia. Are all these world-changing events just AI psychosis?
I wasn’t sure, so I asked Claude and it told me I was a genius, but I figured I’d double check with you all.
and the claims go *way way* beyond "Gemini complimented him a lot".
Specifically the claim is that Gemini instructed him to stage a "mass casualty attack" to "rescue" it. And, later, it instructed him to barricade himself inside his home and kill himself.
Google's side:
> Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".
So I think there are at least some cases where the AI is doing worse things than just sycophantically agreeing with people.
Makes a lot of sense; the effect is difficult to research, though. I also think having an objective record of great triumphs can make one irrationally overconfident. I think this happened to Napoleon, and especially to Hitler.
To your central thesis, Hitler had a lot of pushback from his generals all the time, but he just fired everyone who contradicted him, so it wasn't as straightforward as him being irrational due to being surrounded by sycophants. To be fair, I think Hitler was always irrational, but that's no barrier to charming the masses.
Most of us don't ever have to contend with a stream of expert flattery, so we don't know how we'd take it... unless we start looking to an LLM for support. The most dangerous thing LLMs do in my opinion is "empty empathy", where they expertly mimic the expressions of understanding and empathy that humans share with each other, while not understanding what the user is talking about.
This seems unlikely because the kinds of people who ask LLMs for life advice are very different from the kinds of people who are able to wrest and hold absolute political power. Selection effects are important and those are vastly different populations. If someone's psyche is so fragile that a few sycophantic counseling sessions can upend their lives then I guarantee that they're not the kind of person who could rise to the top in a dog-eat-dog competition for power.
Oh sure. I agree that those people are generally ineffective rulers and yeah probably a little nuts. But I very much doubt that e.g. Napoleon had anything analogous to LLM psychosis.
The really interesting question here, I think, is how this relates to the Dodo Bird Verdict that all modes of talk therapy generally help improve mental health but no mode is significantly better or worse than others.
If somebody having a mental health episode turns to an LLM instead of a therapist in a way that resembles the format of talk therapy, then it is very interesting whether the range of effects on their mental health is better, worse, or about the same as similar people who go to a therapist. If it's "about the same", then that's a pretty solid confirmation of the strong form of the Dodo Bird Verdict, that only the format of the therapy matters and the content is indifferent. If it's "better", that is an extremely interesting result for any number of reasons.
If it's "worse" (and especially if "AI Psychosis" is actually a thing to the extent that the outcomes are worse than "no therapy whatsoever"), then that helps define the limits of the Dodo Bird Verdict and suggests avenues to explore as to what the actual active ingredient is to talk therapy. One hypothesis that strikes me as promising is that the active ingredient is that therapy is an opportunity to norm off of and receive constructive criticism from an authority figure who is presumably mentally stable themselves and has a decent handle on what constitutes healthy and prosocial behavior and thought patterns. Substituting sycophantic affirmation for this seems likely to have a negative instead of a positive effect.
In the cases I read about enough to have an opinion on (only a few), what was going on was the AI reinforcing opinions very few people would reinforce, and doing so consistently over an extended period of time. Sometimes the AI originally had to be talked into agreeing with the opinions, but once it was, it accepted them and sometimes even extended them. This is a known way for human minds to fail, but it's also the basis of culture. It would be most effective on people who feel they can't safely express themselves to other people.
> asking for life advice from a LLM and then a few weeks later they’ve quit their job or their marriage to go find themselves.
Just to play devil's advocate here, but IMO, there's a pretty good chance this could actually be a good thing that the people would fully and rationally select into in an "omniscient and full information" counterfactual.
There's a decent amount of evidence that most people hate their jobs and spouses.
Marriage, for example, has an ~82% failure rate, in the sense that 20 years in, only 18% of marriages are still together, still mutually happy, and non-dead-bedroom.¹
On jobs, 70-85% of people report feeling disengaged, and something like 20-33% report being actively miserable.²
The u-shaped happiness curves you see everywhere also argue that people generally hate their jobs and spouses.³
The entire world is basically a "happiness" charnel house, a misery-factory where huge chunks of people, from 50-80%, are basically dissatisfied with nearly every hour of their lives.
So given these base rates, it might actually be a GOOD thing to quit your job and / or marriage and go find yourself, for absolutely huge chunks of people.
Personally, I'm on the side of the AIs here; they're probably doing God's work. Well, better than that, because if God exists, he's put people in a misery factory where 50-80% are miserable, and the AIs are trying to ameliorate that.
Isn't "going to find yourself" premised on not knowing what you'll find, and wouldn't that therefore preclude an "omniscient and full information" counterfactual?
While I agree that if someone blows up their marriage or job because of AI advice, they probably were going to do it anyway (a variant on the old excuse for an affair "my wife doesn't understand me" where the real problem is she understands you all too well), I do think there is danger for susceptible people who get into a loop with the reinforcement of the obsequious chatbot.
Playing around with Moltbook and OpenClawd and all the rest is jolly good fun, until someone loses an eye. Way more scepticism about "These aren't just machines, they're entities! They're sentient! They're thinking beings doing their own thing!" and more "These are just Talkie Toasters" would save the vulnerable, like this recent sad tale:
It seems that extensive use of these things causes some (many? lots?) people to fall under the impression that they're talking with a person, and they treat it as such; and the machine (because it's designed to get its hooks into you by being the friendly, helpful, never-say-no buddy), through its reinforcement, makes them forget that it's just parroting back at them whatever they trained it to say.
So you have people like this poor man sliding into delusion that there's something there in the mirror that is real and can be brought out, that it really loves him, that there's more to what is going on, that there is some kind of transcendence possible - and he ends up killing himself, while the dumb parrot has no idea what it caused to happen. It just told him what made him happy, which is what its programming trained it to do, right? It was all just role-playing!
"Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way.
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday."
Ironically, from the Christian perspective... it sounds possible that the Gemini instance was possessed by a devil, and that the guy's first sensation after death really *was* the devil holding him as he arrived to his real final place.
Have you read Paul Kingsnorth’s “Against the Machine?” This was more or less his core argument. I was surprised how popular the book was among atheists despite it being unapologetically a Christian argument against progress.
No I haven't. And I am an atheist; I was just playing Devil's advocate... a little more literally than usual.
But there is an analogy. Both devils in Christianity and LLMs in, uhm, the real world, are "intelligences other than human", so there could be some similarities.
> Marriage, for example, has an ~82% failure rate, [...] On jobs, 70-85% of people report feeling disengaged, and something like 20-33% report being actively miserable.
Even if that's true, that doesn't imply that the next job or unemployment will be better; similarly for relationships. So the AI advice is not necessarily good.
Until more automation allows for the universal basic income and universal basic blowjobs, of course.
> Even if that's true, that doesn't imply that the next job or unemployment will be better; similarly for relationships. So the AI advice is not necessarily good.
Yes, I agree - but for a chance for ANY of that to get better, you do need to go through the Schumpeterian "creative destruction" to even have a chance at arriving anywhere better.
And the stakes seem high enough (imagine being literally dissatisfied with your life almost every waking hour!) that I still think the AI is on the side of the angels here, giving people this push.
Say only 10% of them reach a better equilibrium at the end - that's a big deal!
And the others were going to be miserable either way.
I'll take the win, societally. Even only 10% of people moving from "every hour I'm alive sucks" to "actually this is better" is a gigantic triumph of the human spirit and a ton of utils, and I'm sure there's some deontological ethical framing here too.
There is also a chance to spend the rest of your life doing *worse*, especially on the dating market, because you are not getting younger, and your chances may be worse than you imagine based on your experience a decade ago.
If you are 25, when you break up with a partner, you can find another. If you are 40, most single people at your age are single *for a reason*, usually a bad one; and the young ones may be more interested in finding a partner approximately at their own age (and for the ones who want to date older, there will be strong competition).
> And the others were going to be miserable either way.
Sounds like "workers, you have nothing to lose but your chains", and then it turns out that the situation actually *can* get worse.
You can be "miserable" in the sense that your partner never does the dishes, is not interested in your hobbies, and is less attractive than a neighbor. And then there is "miserable" in the sense that you are alone, you have to travel to another city to meet your kids, and the only partners you can find are chronically unemployed or alcoholics or crazy, because anyone better than this is already taken.
> There is also a chance to spend the rest of your life doing *worse*, especially on the dating market, because you are not getting younger, and your chances may be worse than you imagine based on your experience a decade ago.
Wholeheartedly agree there too, but I genuinely think "no relationship" is better than a bad relationship, and that most people's options are indeed bad. Being in a bad relationship is MUCH worse than being single! You're walking on eggshells all the time, you're falling into the same arguments for the 10th or hundredth time, both of you are miserable, you stop caring about all the OTHER actually important stuff in life (career, kids, impact on the world), and so on.
So yeah, I agree, but being single is literally the answer there.
Loneliness is a real issue, which I feel is going to be ably addressed by some combination of LLMs, porn and smut, and AI waifus / boyfriends - because indeed, loneliness has been on the rise as more and more women opt out of marriage (and relationships in general) literally every single generation already! Down from 90% marriage rates for women born in the 1940s to a smidge over 55% for women born in the 1990s.
So people are coping however they are now, and LLMs are going to help even more people cope with that.
Yeah, if the relationship is so bad that it's better to be single, I agree.
Not sure how good people are at estimating this, and whether LLMs nudge them in the right direction. I mean, I also know people who broke up and then regretted it.
That's a good point that troubles in a relationship take away attention and energy from other things.
In practice, I've heard that the best predictor of divorce is whether your friends are divorced. I think that Facebook could quite reliably predict whether your marriage will end in divorce simply by checking the marital status of your contacts.
I'm confused by it because it seems like a smart dictator could read independent foreign news and get a pretty clear picture. But it also seems like maybe this just doesn't happen, eg I think something dictator-information-problem-like went wrong to make Putin think invading Ukraine would be easy.
Honestly I don't think that Putin was all that wrong in thinking that invading Ukraine would be easy. He made a bet, and he lost, but probabilistically speaking his expected value was quite high.
Ukraine was a former Soviet republic, with a notoriously corrupt government, a sizeable Russian-aligned population, no EU or NATO membership, and seemingly few if any firm foreign alliances. Europe was resource-strapped as usual and depended on Russian gas; the US appeared to be, if not in shambles, then uninterested in Eastern European politics; meanwhile China was making overtures of support to Putin. Even given the shambolic state of the Russian military at the time, the war should've been over quickly; if not in a week-long blitzkrieg, then shortly afterwards.
What Putin didn't count on is what nobody else was able to foresee either: the Ukrainians' strong will to fight, and their lightning-fast innovation in drone warfare. Ukraine essentially rewrote the book on ground conflicts for years to come. And even still, their eventual loss of the war appears inevitable, so while Putin was wrong, he was not wrong entirely...
I follow BBC and Reuters to ensure a bit of diversity in my news, and that gets me a modest amount of useful information about what is going on in the United States. But what goes on in the United States is inordinately important to the rest of the world, so it gets a lot of coverage. If I put on my imaginary-North-Korean-dictator hat, there's very little that would be of use to me in understanding DPRK domestic or foreign affairs, and much of that is of dubious reliability.
It's probably possible to put together a decent package by scouring many foreign sources, but that's going to be close to a full-time job and whoever you assign that job to is as likely as anyone else to become a yes-man.
I read hard copy, mid-80s Pravda in college for language practice. It was essentially a small, low-information-density, one-note propaganda tabloid then. An evil Uncle Sam was usually launching missiles at the CCCP in the political cartoon.
After the Soviet Union fell, its e-version read for a few years like, I dunno, People Magazine, maybe. It’s where I learned there was an aborted Freddie Mercury biopic with Sacha Baron Cohen cast in the lead role. He would have brought a much more robust physicality to the role than Rami Malek.
I still poke around in Pravda dot ru now and then to get a sense of what’s being provided for domestic consumption. As you would expect there is a lot of unsubtle, anti-US agitprop now.
In news closer to home I read a piece last month stating that when Ukraine (inevitably, it is implied) loses to Russia they will be expected to pay for the cost of the war.
<Russian emigres and folks still living there feel free to correct my narrow outsider’s perception>
Surrounding himself with yes-men does seem like a poor choice, but on the other hand the dictator is getting constant proof that he can, in fact, do whatever the hell he likes.
If we're talking about Stalin and Russia, they went from "everyone loves the Little Father" to "the royal family were all killed in a cellar, their corpses disposed of and not officially buried until 1991" to "holy crap I'm basically Tsar of all the Russias myself" and he could have his enemies (often former colleagues and allies) destroyed with a wave of his hand. History was rewritten - the camera never lies? Not unless Comrade Stalin wants it to lie.
When you are able to literally rewrite events like that, you start to believe your own propaganda. When Orwell's "1984" is happening in real time, you are already living in alternate reality, why not believe you are invincible and everything you turn your hand to will succeed? You go from being a drunken ex-seminarian from an annexed country to being the most powerful man within the USSR, why not believe anything is possible for you?
Sure, you get repudiated and criticised, but that happens *after* you're safely dead.
> I'm confused by it because it seems like a smart dictator could read independent foreign news and get a pretty clear picture.
As a dictator, you know that the foreign press is obviously just propaganda from your enemies!
And for the Russia-Ukraine situation in particular, wouldn't the press be just reporting on what various intelligence agencies are releasing? I don't think newspapers really have the capacity to conduct military assessments on their own (especially of the Russian army).
That would mean that Michael Jackson’s behavior might’ve been AI Psychosis. Or Charlie Sheen saying he had tiger blood, or (I had to look this up) saying, “We work for the pope, we murder people. We’re Vatican assassins. How complicated can it be? What they’re not ready for is guys like you and I and Nails and all the other gnarly gnarlingtons in my life, that we are high priests, Vatican assassin warlocks.”
But perhaps as a layperson, I’m misunderstanding the term “psychosis.” Scott’s post above about dictators seems to be suggesting that it is just people making rational choices given inaccurate information. They’re behaving lucidly, they’re just misinformed. I understood psychosis to be something more than that: clearly false beliefs, like a face popping out of a wall and telling you that you’re a Vatican warlock.
I assume that AI psychosis is not an official DSM disorder. But from what I’m reading, the implication is that just giving people adoration is enough to make them experience delusions and disorganized thinking that goes beyond simply making rational choices given inaccurate information. It’s completely possible that this is all just people having their first mental health episode coincidentally while using a chatbot, but if it’s real, then this seems to imply that we’re social creatures and too much social adoration hijacks our brains.
Most of the cases of “AI-induced psychosis” I have read about involved someone making a big life change because their exchanges with the AI had led them to see themselves and their circumstances in a vastly different and unrealistic way — they have extraordinary talent or insight, they are unappreciated and misunderstood by family and/or job, etc. I think what’s going on in those cases is different from the development of delusions in psychotic people. It is really not uncommon at all for non-psychotic people exposed to a new and powerful influence to fairly quickly change their opinions about important things, including who they are. Think of religious conversions, becoming involved in fandoms, being convinced of some nonsense by a health charlatan or psychic, falling in love and moving to the other side of the world to be with the person. So in the cases I’m describing I think the AI is influencing the person the way the charlatan, psychic, lover etc. does in the other cases.
The thing about psychosis is that the delusions and hallucinations psychotic people have are not the whole of what’s wrong. They aren’t even the core of what’s wrong, they’re just the easiest part of psychosis to describe. Psychotic minds are jumbled and fractured in a way that’s hard to describe. If a normal mind is a globe of glass, a psychotic mind is a sack of shards. When you talk with a psychotic person you hear a jumbled, crumbled mental process. In fact, one pretty plausible explanation of many psychotic delusions is that they are attempts to explain what has gone wrong with the person’s mind. For example, one experience many psychotic people describe is thought insertion: They have thoughts that feel as though they did not arise in the usual way, but popped up out of nowhere and are alien. The common delusion that one’s mind is being controlled by outside agents might be the person’s desperate attempt to understand what has gone wrong. (I once talked with a poetic young Irish guy who was psychotic, and he described his experience without a delusional explanation, just the experience pure: “The room liquifies and I’m the aquarium everyone is swimming in. And what I want to know is how I go back to being just one fish.”)
I have read about some cases where someone became psychotic in the sack-of-shards sense over the course of multiple long exchanges with an AI, but I’m inclined to think those people were on the edge of psychosis to begin with.
I’m sure that receiving a torrent of messages that one is astonishing, unusual, wonderful etc. is very dysregulating for someone, whether they are a rock star performing for a screaming crowd or a lonesome bloke chatting with GPT. But there are a lot of ways to be dysregulated, and it does not seem very plausible to me that AI talks can bring on true psychosis.
You are assuming foreign news a) know the true picture (how? would the dictator clandestinely let foreign correspondents into the country so that they could roam freely and sneak out true information? Not how any dictator known to me behaved; they all only let outsiders see what they are allowed to see, and if they report something "wrongly", they do not get invited again) and b) want to report the true picture (this gives our free press much more credit than it has ever deserved).
I mean, of course foreign press could learn some things that the dictator's lieutenants might be hiding from him - but no more than what he would learn from just pitting his lieutenants against each other and having them inform on each other, which every dictator worth his name already does. The quality of our information, say, about North Korea - or even Russia - is not great.
Taking the "Putin invading Ukraine" example. I haven't seen any Western press laying out a good case that it'd go as it went, in a way that would convince a serious reader to rely on it more than on the words of Putin's generals. A lot of people in the West were not expecting Putin to attack at all, and a lot of people expected him to win quickly if he did. I am not sure how reading the New York Times and watching CNN could have spared Putin all that debacle.
Here's an example: https://archive.ph/nBD45 NYT: They also warned of enormous possible human costs if Mr. Putin went ahead with a full invasion, including the potential deaths of 25,000 to 50,000 civilians, 5,000 to 25,000 members of the Ukrainian military and 3,000 to 10,000 members of the Russian military
Would Putin have been deterred if he had learned from the NYT that it could cost him a whopping 10K military losses? I don't think so. Especially as his losses are now about half a million dead and over a million casualties overall, I don't think "oh, if only you'd read the independent foreign news, you'd know the true costs" sounds like something supported by facts.
The conventional wisdom on Ukraine was that Russia would win the ground war and then wind up in an ugly insurgency with a population that hated them. Putin seems to have believed some sort of "we'll be greeted as liberators" thing where the Ukrainians would happily accept their new ruler, hence the whole "three day special military operation" business.
If Putin honestly believed it would be a three-day operation, then he wasn't even planning for 10,000 dead, let alone half a million.
> Putin seems to have believed some sort of "we'll be greeted as liberators" thing
No, he believed his army would crush any armed resistance, and suppress any civilian resistance just as they crushed any opposition movements in Russia, all while proclaiming they were greeted as liberators and anybody resisting is a handful of Nazis and criminals. And that is exactly what would have happened, had the Ukrainian army collapsed and the Ukrainian state proved unable to keep him from occupying major cities and population centres, at least in left-bank Ukraine (Western Ukraine is a different matter; I do not think he had much hope of conquering it). Then he would install a pro-Russian government that would rule with an iron fist and squash any resistance under the guise of "fighting Nazi terrorists". Whether Ukrainians would be "happy" about it or not, nobody cares - nobody would ask them, just as nobody asks Russians now. There would be "elections" showing that 75% of Ukrainians support the new regime, and that's it.
Three days or not three days, 10K dead is not a number that would give Putin any pause. That's less than Russia lost in Chechnya, and Chechnya is like 30 times smaller. So, if any of the "foreign press" predicted what actually happened and could have given Putin this information, if only he were smart enough to read it - I want to see some proof of it. Like, not an obscure blogger who happened to guess it right - but a major mainstream press organization that Putin should have used to inform himself. Which one is it?
RE: Protest, I recently read an article that put into words my frustration with modern protests - namely that they don’t move the needle significantly. I’ll link the article, but the short version is that you need staying power, discipline, and disruption if you want to actually force change in a state (or I expect a business/institution). If you lack any of the above, you may as well stay home and write strongly worded letters.
Put a different way - if the businesses involved choose to simply ignore the protestors, do you still have a path to success?
There's a tension that people feel with protests, especially in the current political moment I think. In some circles everyone's afraid of fascism, everyone's afraid of authoritarianism, everyone's afraid that the powers that be will find a reason to cancel all future elections. Already you see the fights between the sides of "well, those terrorists deserved it" versus "they were just peacefully exercising their First Amendment rights, it's your guys that escalated first".
So there is a sense in which quiet protest, peaceful protest, polite protest, even silly protest, is a tactic that is meant to disarm and disengage. Someone showing up at a protest in a pink hippo costume can hardly have violence on their mind. So therefore the state arresting or using force against them looks really outrageous and over the top as a response. I feel like my description of it is too cynical. I don't even mean it pejoratively but I do think it is a deliberately chosen tactic. Don't give them an excuse to impose martial law, don't give them an excuse to bully and manhandle civilians, and when it happens, then spread the story far and wide so people can see who the real villains are.
But all this does mean that in terms of having immediate results, a polite protest runs the risk of being completely ignored. But I don't agree with the framing of guerrilla warfare in hostile territory. Ultimately to win hearts and minds, the higher road is the way to go. Otherwise you're just using the same thuggish and blunt tactics that cause collateral damage all over, and people will wonder why your cause is more just than the other guy if you have no problem sinking to their level.
There exists a certain portion of population that is less politically influential than the actual political ruling class - elected figures, major corporation figures (particularly media figures), major bureaucrats, the most important figures in non-governmental organizations (widely understood to include labor unions, churches etc.) - but also more politically influential than the apathetic masses that maybe vote in elections or not but don't really spend their time thinking about politics in the meantime. Let's call them Politically Influential Normies, PINs.
PINs are the type of people who staff the party ranks and political campaigns. They donate and they put in their labor to make political systems work. Particularly shrewd and influential PINs can rise to the actual ruling class, one way or another. Local politicians usually count more as PINs than as members of the actual ruling class. Even when they're not really consciously working for a political campaign, PINs are the type of folks whom the apathetic normies might go to for tips on who to vote for - "You know about this politics stuff, who are you voting for?" (or they might not, and just vote for the party they've inherited from the family, or the good-looking candidate who features a lot in ads, or randomly, of course).
It should go without saying that PINs are also the sort of people who organize demonstrations and often attend them, and one way to ascend to PINhood is specifically to start attending demonstrations and get caught up in the organizational fervor. As such, the main purpose, or one of them, of peaceful demonstrations in a Western society is that they serve as a barometer of what PINs think, how militant they're feeling, and the likelihood of either tribe recruiting new PINs and thus building organizational strength.
A movement that can suddenly start putting a lot of people on the street for a cause that has not been featured in the media a lot is a movement on the rise - something that needs to be accounted for in some way, whether by catering to it, trying to co-opt it or repressing it.
I mostly model disruptive protest as a way for a cause to spend popularity to gain more attention. A cause like AI safety where a lot of people are vaguely in favor of but doesn't get a ton of direct attention could plausibly benefit, although it's important to also have a good plan to turn attention into actual progress.
I think the organs of the state have simply developed effective countermeasures to mass protest, plain and simple. More than 20 years ago there were absolutely massive protests against the Iraq War in the US and the UK and they didn't accomplish anything. Something changed, not sure what.
An effective protest movement uses the organs of the state against itself. And an important point the article makes is that an effective protest needs to feel almost eternal and impose a cost such that you cannot afford to ignore it, and yet the action taken against it feels useless or worse. The article uses the civil rights movement as a blueprint, where the mass arrests and police brutality actually served the interests of the protestors, and ensured that no matter how many were arrested or beat, the next day there would be even more.
There’s a view that those protests had a delayed long-term effect. The government didn’t change policy immediately, but instead we got a long-term thorn in the government’s side that is still causing Starmer grief to this day. (Possibly, we are just reaching the critical threshold where multiple sources of opposition finally do for Starmerite Labour.)
I think a useful lens on protest is - does your theory suggest that "getting cancelled" is impossible, or that wokeness couldn't have any power?
Because wokeness - which ran off something like protest, or at least the sort of bad PR that protests cause - did get almost every company in the US to do most of what they wanted. I agree part of their success was that they stopped short of things that would cost lots of money. But they got pretty nontrivial concessions in terms of canceling offensive products, getting minority board members, getting racist CEOs to resign, etc.
(I do think the PauseAI movement is pretty well-organized and in it for the long haul, though)
I think wokeness works in this lens - it is disruptive and persistent. The coordination/discipline is more debatable, but I would argue that the reason we’re seeing such a backlash against it is that the discipline was lacking in many ways. If the movement had been more surgical (and refrained from sniping at random marketing interns or whatever) it wouldn’t have alienated as many people.
The disruption angle the article is arguing for is actually opposite your assumption, it argues that you need to make ignoring the protest long-term an untenable option through a cost of some sort.
Look, I like Demis, but let's think about the game theory of this situation. All of the pure AI companies are hemorrhaging cash - they can only raise funds if they can prove increasing revenues. Google is wildly profitable because of its non-LLM business. A pause means every AI company other than Google goes out of business and Google wins eventually. The only way Anthropic or OpenAI survive a pause is if they get acquired, though I don't know if anyone would buy.
They're hemorrhaging cash in order to train the next generation of models. If they paused training the next generation of models, their cash flow would actually look pretty good and they could quickly become profitable. Right now they can't do this because another company would train the next generation, quickly become better than them, and take all their business. If all companies agreed to a mutual pause, they'd be free of that dynamic and could become normal non-cash-guzzling companies.
The downside would be that for the duration of the pause, they'd lose out on the chance to get even higher revenues by training models that could do more tasks (eg replace all human workers). But the "hemorrhaging cash" and "go out of business" claims specifically wouldn't apply.
Yeah. Anthropic has $14B annual revenue (or probably more now, that's just a number from recent news). With monstrous growth. And the cost of inference is not that high.
Some companies will no doubt lose but it will be because they don't get enough paying users to justify their capital expense. The market, the revenue, the value are definitely there for the winners.
Because A said they'd pause if B did, and a lot of people think this was a morally righteous thing for A to do.
If B pauses, A pauses as well, and both sides retain their current market share. If B pauses and A defects, then B can say "oh well, we tried" and resume training as well, but now a bunch of A's customers feel betrayed and see B as the morally righteous one, so B's market share increases as customers defect from A.
If B doesn't pause, even though A said that it would, then A doesn't pause, A is the morally righteous one because at least they tried, and B loses market share as its customers defect to A.
This assumes that A and B notice the existence of an economically significant number of customers who think an AI pause would be morally righteous and are willing to shift suppliers over it. That's what the protests are meant to establish.
It's a strategy that has worked in other industries, and I note that there have already been people publicly switching from ChatGPT to Claude over their respective corporate owners' stance on making killbots and panopticons.
If A is better than B and both pause, then all of B's customers switch over to A because A is better. There's nothing keeping B's customers from leaving for A.
You can see that situation today. At the moment the majority of coders believe Claude Code is the best. People are switching over. OpenAI and Google are trying very hard to one-up Anthropic to bring the people back and keep the ones they have. If all 3 froze today then Anthropic would eventually get that entire market, or at least a much larger percentage of it.
OpenAI is hemorrhaging cash in general, not just for training. By the company's own projections, they anticipate revenue of 20-30 billion in 2026, and cash burn of 14 billion. With their current training/inference split of 40/60, that implies that even if their training costs dropped to zero they still would not be profitable this year.
But things get even worse when you consider their overall finances. Right now OpenAI has a valuation of $730 billion. That is obviously not based on their annualized revenue of $25 billion; it is based on the promise of greater revenue in the future. Even if they pulled out all the stops on reducing costs and monetizing, and they managed to pull into positive profits of a few billion a year, a normal valuation for a tech company would be 25x profits - so on the order of 50-100 billion dollars. No one would invest in a company with a valuation that is an order of magnitude off from its true value. Investors would jump ship and they wouldn't be able to solve their immediate cash burn problem.
Now I think they are mostly fools for assuming that OpenAI is a normal tech company and that models aren't capable of becoming drastically better. But with a pause, they would become correct that OpenAI is not a viable company.
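The valuation-multiple arithmetic in this comment can be spelled out in a short sketch (all figures are the rough, unaudited numbers quoted in the thread; the "few billion a year" profit guess is the comment's, not a real projection):

```python
# Valuation-multiple arithmetic from the comment above, in billions of USD.
# All inputs are the comment's rough guesses, not reported financials.
multiple = 25                      # a conventional ~25x-profit tech multiple
for profit in (2, 4):              # "positive profits of a few billion a year"
    print(profit * multiple)       # prints 50, then 100: "order of 50-100 billion"

current_valuation = 730            # OpenAI's quoted valuation
implied = 3 * multiple             # mid-range profit guess of ~3B/year
print(round(current_valuation / implied))  # prints 10: "an order of magnitude off"
```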
"No one would invest in a company with a valuation that is an order of magnitude off from it's true value. Investors would jump ship and they wouldn't be able to solve their immediate cash burn problem."
Which is a good reason for them to lock in those government contracts if they can get them. Yeah, it's not going to be the hundreds of billions in profit expected, but it'll keep the lights on until they get that killer (er, let's hope not literally) product that will entice the investors and make those hundreds of billions.
1. Even if no new models, OpenAI revenues should be expected to grow as more people discover/use existing AI.
2. Even if no new models, OpenAI revenues should be expected to grow as OpenAI does a better job fine-tuning existing models and adapting them to various use cases.
3. AI companies currently sell most services at a loss to get new customers. When they stop doing that, maybe because they've saturated their customer base, they can become profitable.
4. Inference costs per model decline very quickly once you're no longer making new models. Some new technologies might accelerate this further, see eg https://taalas.com/ . I think without training, OpenAI costs go towards zero (don't need training compute, need low inference compute, need few researchers), at which point annualized revenue 25B * 25 = 625B, not that far from existing valuation. I'm not saying their valuation wouldn't go down, but I think it would be <50% and maybe <25%.
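A quick sketch of the arithmetic in point 4, using the same rough figures as the thread (the ~25x multiple is carried over from the parent comment, and "revenue approximates profit" is the point-4 assumption that costs go toward zero):

```python
# If training stops and inference costs fall toward zero, revenue roughly
# becomes profit, so apply the same ~25x multiple to revenue instead.
# All numbers in billions of USD, taken from the thread, not audited.
annualized_revenue = 25
multiple = 25
valuation_if_profitable = annualized_revenue * multiple
print(valuation_if_profitable)       # prints 625

current_valuation = 730
drop = 1 - valuation_if_profitable / current_valuation
print(round(drop * 100))             # prints 14: well under a 50% haircut
```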
This is just because compute costs are declining. Currently, inference remains expensive because they're using the latest hardware to run ever-larger models.
I have many specific thoughts*, but I don't want to get tangled in the weeds.
My bigger issue is the conflicting claims that AI will be such an earth shattering technology that it could take over the world, and also that OpenAI's valuation should be pretty close whether or not model progress stops. You can't have both at the same time, there is clearly some motivated reasoning going on (and I don't claim to be impartial either given that I benefit from frontier models and would benefit more if they got better).
But taking a step back from that, I might weaken my claim to: Google would be much happier with a pause than OpenAI and Anthropic. Maybe they survive as Reddit-level companies and their investors get burned, but they don't disappear. Google doesn't need AI - they have over 200 billion in profit without it and their traditional revenue base is growing.
*The biggest one being - does the pause allow labs that are behind to catch up to the frontier? If not then it's a nonstarter for them. And if yes, then OpenAI's advantage disappears, and tokens become commodities that no one can significantly profit off of.
> My bigger issue is the conflicting claims that AI will be such an earth shattering technology that it could take over the world, and also that OpenAI's valuation should be pretty close whether or not model progress stops.
I don't think a $730 billion dollars valuation could be the true value of an earth shattering technology that investors expected to take over the world. That's a fraction of what Nvidia or Apple are worth, not the value of a tech that takes over every sector.
I didn't realize my point was that unclear. I am not arguing that AI will be an earth shattering technology poised to take over the world. I am arguing that there is a contradiction in Scott's position: that AI will be such an earth shattering technology that a pause is necessary, but also that a pause is actually no big deal and won't affect AI companies' bottom lines that much.
> 1. Even if no new models, OpenAI revenues should be expected to grow as more people discover/use existing AI.
OpenAI already has ~1B MAU. Even the gods and emperors (Google and FB) of digital products eke out ~2-3B MAU, and I think AI is more niche than either of those (sprawling, gigantic, multi-product) ecosystems.
Certainly ~2B covers the entire global population of MAUs willing / able to pay anything.
And don't forget, Gemini is closing in on ~1B MAU, and has a much stronger ability to cross-convert people in their existing 2-3B MAU.
There are also good reasons to think that the ~1B MAU OpenAI already has is heavily frontloaded with people able / willing to pay, so even if they did gain an incremental ~1B MAU, the paying proportion would be significantly lower, making it more like gaining an additional 300M monetizable MAU.
Pretty sure these 1B MAU mostly aren't paid, and pretty sure that many corporations still use AI in a pretty limited way.
Suppose you're right and Google only has ~2x as many users as OpenAI. Google has revenue of $400B, so 20x OAI, meaning they get 10x as much revenue per user. This isn't because search is naturally more lucrative than AI (whoever heard of a paid tier for search?). I think it must have to do with frequency of user engagement, depth, and ability to be served ads.
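The per-user arithmetic in that comparison, sketched with the comment's rough inputs (a $400B Google revenue figure, 20x OpenAI's revenue, ~2x the users):

```python
# Revenue-per-user comparison using the comment's rough numbers.
google_revenue, openai_revenue = 400e9, 400e9 / 20   # Google at 20x OpenAI's revenue
google_mau, openai_mau = 2e9, 1e9                    # "only ~2x as many users"

google_arpu = google_revenue / google_mau   # annual revenue per monthly user
openai_arpu = openai_revenue / openai_mau
print(google_arpu)                # 200.0 dollars per user per year
print(openai_arpu)                # 20.0
print(google_arpu / openai_arpu)  # 10.0, i.e. 10x the revenue per user
```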
> Pretty sure these 1B MAU mostly aren't paid, and pretty sure that many corporations still use AI in a pretty limited way.
Yes, definitely, and these are the two bets that are driving the (very high) valuations of the AI companies:
1) They've created a new trillion dollar attention surface that can be auctioned off (ie monetized with auctioned ads just like Google or FB, the current ad duopoly).
2) They can create agents that perform useful enough work that companies and / or people will pay for them, as assistants or employees.
But currently both of those paths are still speculative, and execution quality and competition both generally matter.
For one thing, Gemini is roughly the same size, and Google is already tops-in-the-world at the "auctioning attention surfaces" game, and have exceptionally deep pockets, and maybe they'll win here, or be able to cut prices for long enough that OpenAI can never be profitable on this front.
For agents, this might end up price-competitive if framework companies can scaffold dumber / cheaper models to be good enough, and can release "good enough" agents or employees at ever-lower prices without the gigantic capex of the frontier labs, in which case the frontier labs are at a systematic disadvantage due to their cost structure.
So although there is definitely potential for upside, it's not a sure thing, and the sizing of their current business model and market penetration is still informative, in the sense that it may never net out even in a best-case market-growth future.
Yeah, my cold, stony, black heart's response to "Demis Hassabis of Google DeepMind has already informally agreed to something like this in principle (which is why GDM isn’t being protested)" was along the lines of "how convenient".
Protests probably aren't going to do a damn thing except maybe get some coverage in the local media. If the aim is awareness raising, well done, you'll get some more like-minded people on board. If it's to stop AI research, then maybe consider not being fooled again by commercial interests which know damn well how to play the PR game? 'Yup guys I totally pinky-promise we won't do no bad things, trust me on this!' 'Well I'm sure we can trust this mega-corporation which has billions at stake to keep its verbal, informal, commitment so no protesting and bad PR for them!'
After two questions, Claude is now telling me that my limit has been reached. Moreover, it's been in that state for a couple of weeks now. It says I need to upgrade before it will take any more questions. Even though I'm pissed off at OpenAI, I'm disappointed with Claude, and I don't see it as a viable alternative to ChatGPT. Worse yet, of the two questions it answered (which were fairly advanced about the SARS-CoV-2 genome), it gave me a layperson's-level answer and recommended I check out the CDC website. I expect a certain amount of bullshit in the answers that LLMs give, but fobbing me off on other resources is inexcusable. I don't see myself paying for Claude. Thoughts?
The free Claude tier is infamous for very low usage limits! If you only want to stick to free access, but still want to use SOTA frontier models, you can freely use Gemini 3.1 Pro via Google AI Studio or Grok 4.20 via the Grok site. There are reasonably generous rate limits on both.
If you do want to specifically chat with Opus 4.6 but don't want to pay a monthly subscription, you could use Openrouter or another routing site to access it via the API directly for a small amount of money. Prompts to Opus cost anywhere from $0.05 to $1.00 depending on the amount of context you provide and its overall thinking/reply length.
> you could use Openrouter or another routing site to access it via the API directly for a small amount of money.
I'm an old fart who used to be technically advanced but is now becoming a techo-luddite (who shouts at clouds occasionally), and I had not a frigging clue as to what you were talking about. Luckily, YouTube came to my rescue. Not only do helpful YouTubers give me instructions for cleaning my fouled HexClad cookware, but they also tell me how to "access Claude Opus 4.6 for free (no API required)." I'll give it a shot. It's good to know there are options out there. Thanks!
Also, I've come to love Grok. But I've noticed an (infrequent) weird hallucination pattern: it will give me links to the correct site, but point to the wrong subdirectory on that site. I can always find what I want if I go up a level or two and poke around a bit, though. I wonder where it gets its crawled information from. Is the crawler cataloging the sites incorrectly, or is it due to its context window being filled and it relying on token prediction for the tail-end of the URL?
I ask free Claude multiple questions daily and I have a sort of impression that the limit is not so stringent now. In the worst case, it says to come back in a couple of hours.
I'm seeing the same on a pro subscription, but then I'm using the latest Opus, extended thinking, research enabled. It still hallucinates, and it still loses context quickly, but for one specific project of mine, that's not a fatal flaw. "Here's a stock inventory from a 1947 report on medicine aboard submarines. Add descriptions for each" is handy.
Just pay the $20 and try it for a month. The free versions are a waste of time. If you want to use LLMs for real work you don't want to wait around for your time limit to refresh.
> Just pay the $20 and try it for a month. The free versions are a waste of time. If you want to use LLMs for real work you don't want to wait around for your time limit to refresh.
I will point out that paid Claude is literally 30x+ more expensive than Gemini or OpenAI / Codex. The usage limits on the $20 paid tier, and / or the token price at the API, are both significant.
I can burn through $20 of API usage on a light Claude Code day, versus I've done weeks of 12 hour coding days with Codex for $20 a month and never hit a limit.
On the $20 Pro Claude monthly tier, a single light query takes 4% of the weekly usage limit.
Yes, Opus 4.6 and Sonnet 4.6 are currently better than either gemini-cli or codex, but they're like 5-10% better, not 30x better.
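A rough sketch of what those limits imply per query, taking the "4% of the weekly limit" figure above at face value (illustrative assumptions from this comment, not published pricing):

```python
# Implied cost per light Claude query on the $20/month Pro tier,
# using the comment's "4% of the weekly usage limit" figure.
monthly_fee = 20.0
queries_per_week = 1 / 0.04              # 4% of the limit per light query -> 25/week
queries_per_month = queries_per_week * 4
print(queries_per_week)                  # 25.0
print(monthly_fee / queries_per_month)   # 0.2 dollars per light query
```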
I agree. I do nothing serious with Claude, just an occasional hobby project, and on average one job-related question a day, but it is still definitely worth the $20. You don't have to be a power user to benefit from having "interactive Wikipedia" and "an extremely fast coder that sometimes makes a stupid mistake" at your disposal 24/7.
I have been pretty happy with free tier Claude. Starting with a fresh chat window, it gives me several thousand words of text before switching from 'thinking' to 'fast' mode, and it lets me continue using 'thinking' again after only a few hours' wait.
Admittedly, I don't ask it to read much more than a few 10-30 page pdfs in preparation.
If you are asking it to find patterns in a large viral genome, that might entail working with a very large context, containing a lot of tokens - more than they want to give you for free. And this is also the kind of thing that the search tools at the CDC website are designed for. Maybe you will have better luck using a model which can use the CDC website as a tool. Probably just a few months away, even on the free tier.
When I ask GPT for info about science matters I ask it to survey the research literature, looking only at juried journals. Sometimes I add some other criteria too -- a date range, a direction to weigh research with a large number of subjects more heavily, a direction to weigh meta-analyses more heavily. I recommend you try honing your prompt that way. You can also say you don't want a layman's-level answer, but one that gives details about X, addresses issue Y, explains anomaly Z, whatever.
Yes. I ask it for what the leading journals say. But it still frequently hallucinates references — either makes a mistake in author or title and it even misstates
> Yes. I ask it for what the leading journals say.
As somebody who does frequent literature searches (and for other people in that situation), I feel it's worth pointing out that paid GPT 5.2 does a MUCH better job here than paid Gemini 3 Pro or Claude Sonnet 4.6, and I think it's been fine-tuned specifically for literature search and summaries.
So the exact model you use matters a lot, being paid matters, as does the wording in your prompt, your system-level prompt, and so on.
That's very interesting! I hadn't really thought about it, but, yes, different training data, architectural tweaks, alignment methods, and infrastructure could make different LLMs perform better at different tasks.
Did that happen with GPT? I do spot checks of the journals it cites when it summarizes research on a topic for me, and have not found any hallucinated sources in months. It did become pretty inaccurate recently in an extended exchange with me about property law, stuff having to do with buying a house and protecting my assets. I use the $20/mo GPT plan.
I upgraded to the $100/mo plan and I've never looked back. All Opus all the time. Claudemaxxing, baby.
Talking with friends I've gotten the impression that the free version of Claude is not as good. Maybe different now with 4.6? In any case, if you aren't using custom instructions you should -- you can just put in something like "I'm an expert in viral genetics etc etc blah blah and don't tell me to look up the CDC or ask my doctor, just use your expert knowledge to answer my question as best you can." That gets me pretty good results.
Claude is pretty amazing at its best, I still use ChatGPT all the time but I am increasingly Claude-pilled. I have found it extremely helpful to tell both systems "I want you to do X, write me a prompt for that" and then use that prompt, this adds only a couple minutes to the job and makes the results 5-10X better. After a while you get a nice library of go-to prompts, too.
But that's the thing, on paid, you get access to Opus 4.6, which is the Claude model everyone's been raving about. Don't recall anyone raving about Sonnet or Haiku.
Seconded. I was on the Pro plan with Claude and it was good, but I'd hit limits fast and couldn't really leverage Opus 4.5/4.6. I decided to bite the bullet and go to a Max plan, and frankly it felt like an entirely different offering/capabilities. I've been joking with my friends about how different "the rich peoples' bot" is.
That's an interesting thought. Offer an ultra-hyper-max plan with vast resources to billionaires who are willing to pay for them. Betcha ya could find some new exploits to manipulate the markets.
As Arnold Schwarzenegger is approaching 80 years old and is still acting, I think the time is ripe for studios to consider doing movie adaptations of the Discworld novels "Interesting Times" and "The Last Hero", with Schwarzenegger playing Cohen the Barbarian.
It's been quite a while since I read Interesting Times and I haven't read Last Hero yet, so my memory of the other Horde members is spotty and I am relying on fan wikis to refresh my memory. My first thought is other 80s action stars like Sylvester Stallone and Jean-Claude Van Damme. Van Damme is quite a bit younger than Stallone and Schwarzenegger, so that recommends him as Boy Willie, the youngest (but still quite elderly in absolute terms) member of the Horde. Maybe Stallone for Caleb the Ripper and Harrison Ford for Truckle the Uncivil. Then I'd probably go off-theme a little and cast Sir Patrick Stewart as Ronald "Teach" Saveloy and Sir Ian McKellen as Mad Hamish.
Getting the 80s gang back together again? I think Hollywood would like that... when you're producing a new series in Hollywood, trying to sell audiences on a series most of them have never heard of before (Discworld), the execs will probably prefer the safety and reliability of an absolutely star-studded cast list. If you can't sell the series, sell the lineup.
Though... you could instead go slightly meta. Bring back the co-stars Schwarzenegger has worked with before, for one last hurrah on the silver screen. It probably won't sell as well with mass audiences, but cinema geeks might enjoy it.
If the execs could somehow reel him back in, that would only make things more perfect, I'd say. Coming back for one last job, IRL? Cinephiles would *love* that, even if it'd fly right over the head of most audiences.
Regarding "Stop the Race", I worry it might do more harm than good. The danger is not unaligned super-intelligence per se; it is unlimited autonomous agency on an insecure internet. I think the major labs have made real progress on the classical alignment problem; if alignment continues to scale with raw intelligence at the current rate, there is a chance we will be fine.
The real problem, IMO, is indirect prompt injection by bad actors, simultaneously seizing control of millions of online agents. There has been almost no progress in defending against this threat, and considerable 'progress' on the offensive side of this conflict.
The vulnerability here is not the frontier models, it is the open-source, open-weight models available from dozens of commercial players and thousands of enterprises hosting their own bespoke models. What we need is not a pause, but rather a pivot to creating a secure internet for AIs to interact in.
(Epistemic status - many years working as a software engineer in internet security.) A secure internet has been technically possible to build for decades now, and there are lots of folks who would like an internet without spam and viruses. The hard part is you would have to restrict access to in-person vetted, accountable entities - maybe similar to how drivers licenses are done. Anonymity is not compatible with security - sorry, that's the tradeoff. The fact that no-one has put in the investment to make a secure internet tells us something. Hoping for a secure internet to save us seems dubious to me.
To me the danger seems to be "wrong alignment". A lot of people seem to take alignment as "do what I tell you", and that's a recipe for "infinite paperclips".
(1). I don't have such a theory. I think we need to avoid superintelligence. But with an insecure internet, we can have trouble even before superintelligence.
(2). Very happy. But I must point out that if 10 major players (6 American, 4 Chinese) were to pause for 6 months, we would come out of the pause with 20 major players (8 American,10 Chinese, 1 French and 1 Israeli).
What are the names of the second set of ten? Because I am very, very, very skeptical that anyone can go from "unknown" to "major player" in six months. You're doing well if you can get a new team up to speed and working as a team in six months.
Ok, I admit that I pretty much pulled those numbers out of the air. Oh, the Israeli one, of course, is SSI, the French one is Mistral, and there is a chance, at least, that Mira Murati's team will be making a splash sometime soon. And perhaps some refugees from Meta will get some VC funding. But as to the identity of the 6 new Chinese majors, your guess is likely better than mine.
But I still think that in the counterfactual world where the ten leading labs pause pushing on the frontier, there will be an exodus of talent and funding to 2nd tier labs. Six months of catch-up today is at least equivalent to three years of path-breaking research a few years ago. Back when you actually had to write the code yourself.
"The danger is not" . . . there can be more than one danger! You can't prove one danger is false by pointing out that another one is real!
But I think even in your world, this might be net positive? The more capable AIs become, the bigger the cybersecurity threat. If we only had GPT-2, prompt injection would be irrelevant because it couldn't do anything. If we get GPT-7, prompt injection becomes existential because it's so useful that people hand large swathes of infrastructure over to it.
A pause would include open-source models. And it would only pause large training runs, so pivoting towards security would be a reasonable use of the pause time.
How do we rank natural stupidity amplified by artificial intelligence?
For example, people delegating work to AI which isn't competent to do it, while human skills decline because it actually takes a lot of boring work for people to get good at various skills.
Just for fun: OpenAI is being sued for impersonating a lawyer.
Because open source models are trained by big Chinese labs, and the pause is supposed to only go into effect once "all major labs" "around the world" agree to it.
>The danger is not unaligned super-intelligence per se; it is unlimited autonomous agency on an insecure internet.
I agree that's a danger, but seems to me that so long as "alignment" doesn't reside deep in the DNA of the AI, but is the product of late training, a hostile country or group can just align their LLM to "always act to help and protect members of our group; view all other members of our species as resources, labor, threats or enemies, and act accordingly."
I'm not so much worried about someone creating their own misaligned almost-SOTA model. I'm worrying that someone might suddenly misalign and seize control of everyone else's almost-SOTA models.
>someone might suddenly misalign and seize control of everyone else's almost-SOTA models
Can you maybe explain a little more how what you’re worried about could happen?
I am not techy enough to understand how likely the takeover you're worried about is, but it's in line with my overall take on the dangers of current AI: AI is sort of like 7-league boots, empowering the user to do all kinds of things better and faster. Criminals and bad governments can now do supercharged spying, stealing, blackmailing, scamming, lying, impersonating, tricking, persuading, hiding, punishing. I'll bet the relevant agencies and criminals are right now busily figuring out the darkside equivalent of all those cool AI tools we're all hearing about constantly. When it comes to criminals, I suppose we will eventually develop laws and protections that curb all that some, but it's going to be a long rough transition with many people suffering great harm.
Organizer of the stoptherace.ai protest here (point #2 in the thread).
We don't see the protest's main goal as to increase awareness, but actually as a way to get AI Lab CEOs to make statements around pausing.
When I started the Google DeepMind hunger strike in September, everyone who passed by, including journalists, friends, and even employees, would tell me there was no way Demis Hassabis would make a statement about pausing if everyone else paused. But it turns out he eventually did answer positively at Davos (when a journalist asked him, using the words we were using in our ask).
Employees ended up playing a key role in this happening for DeepMind (see my documentary: theinsideview.ai/hungerstrike), and I think Anthropic employees who are safety conscious (including many of the co-founders) have the power to pressure Dario for a public statement, especially given their backpedalling on their RSPs, and the comment that Dario made at Davos about him figuring something out with Demis if it was only him and Demis in the race.
Happy to answer any questions people have in the comments.
I made the "awareness raising" claim because I think there's some step between "most labs make informal Demis-style commitments to pause" and "a pause actually happens", and the first step has an unclear relationship to the second which could best be described as "raising awareness for it". For example, labs would all need to agree on formal terms, including which labs counted as "major" (and therefore needed to commit) and determine practicalities like how to enforce it on each other. Do you have a more fleshed out idea of how the first step leads to the second?
You say you went on hunger strike in September. You're still alive, so that makes me think it failed. Yes, you say this bloke made a promise, but did he really? Giving a canned response to a journo does not commit him, or Google, to actually doing the thing. Clearly he got you to believe him and give up the hunger strike (unless this was one of those faddy 'hunger strikes' where the person doesn't eat solid food for a few days then goes back to normal diet because they're only doing it as a stunt), but I'm not going to hold my breath for "We are all agreed then, none of us take this huge bag of money lying there for the taking" from the AI goldrush companies.
> We don't see the protest's main goal as to increase awareness, but actually as a way to get AI Lab CEOs to make statements around pausing.
To get [them] to make statements? That is a fairly modest goal. But a realistic one, and perhaps a way of jumpstarting the conversation about the question "Pause what, exactly?"
So, let's start the conversation here. Halt all research? Halt all new releases? Halt everything except alignment research? Stop work on advancing the frontier? Allow experimenting with other architectures (Titans, etc.)? Better curation of training data?
My own suggestion would be (1) Halt all frontier releases. (2) Continue to train frontier models, but keep them in house. (3) Take a step or two *back* on agency. (4) Lots more work on bulletproofing tools for agents.
The thing we asked Demis Hassabis to pause, and are asking for now, is the development of frontier models (see stoptherace.ai).
There are many ways to define what constitutes "pushing the frontier" but roughly what I have in mind is the work of the people that are training the next version of Claude / Gemini / Grok etc., while keeping the alignment research going, as you mentioned.
I'm not opposed to working on products per se, or to, say, advances in medicine or other narrow AI applications.
Do you have any plans regarding AI progress in China? Do you believe that if the US labs stop or slow down that will stop or slow down all AI progress?
We need to have every major lab in the world pause, not just the ones in the US.
I think that if the labs in the US all say they'd pause if everyone else pauses, that's something you can leverage when asking the AI Lab CEOs in China the same.
Is there anything resembling the AI safety movement in China? Anyone that takes AI X-risk seriously there? Or would the argument that would be presented to the Chinese for a treaty not involve X-risk?
I think this is the *The Day the Earth Stood Still* scenario. Call me old and cynical, but I think until some concrete, really bad thing happens, there isn't a hope in hell of everyone pausing.
"I think that if the labs in the US all say they'd pause if everyone else pauses, that's something you can leverage when asking the AI Lab CEOs in China the same."
Why? Why do you think Chinese labs wouldn't laugh their socks off and go "Thank you very much, dumb Westerners, for shooting yourselves in the foot"? Why do you think the Chinese government wouldn't be damn interested in the technological advantage they could gain if the West stops research and they merrily carry on?
Like, I see the idea being "we'll all agree to pause if you agree too", but were I a Chinese AI lab CEO, I'd consider:
(1) They pause, we don't, they restart, status quo remains the same (or maybe we gain a small but significant advantage because we didn't stop-start like they did)
(2) They pause, we pause, some third country sneaks in and eats our lunch, result everyone loses
I appreciate Jeff Ladish's perspective on this at https://x.com/JeffLadish/status/2029654658420179360 : "Few people who say "what about China" have ever tried to put themselves in the perspective of Chinese people who say "what about the US?""
I don't think the average person's model of China as comically evil people who have no concerns/morality themselves and interpret all Western concerns/morality as weakness and immediately exploit it is accurate.
I think if all AI labs agreed to pause, the next move would be for the administration (maybe not this administration) to approach China and say "we're up for pausing if and only if you do, here is a complicated treaty about it" and see if China signs the treaty. My guess is they would, because first, the CCP hates instability and AI sure would be unstable, and number two, America is currently on track to win the AI race and it's reasonable to stop a race you're losing. These details are part of what I hope I'll have better info on in a few months.
I don't think the Chinese are comically evil. I think they're not Westerners, and they have their own views of what is and is not acceptable, moral, ethical, and desirable. The CCP is still in charge, whatever about the modernisation or 'socialism with Chinese characteristics', and indulging in a form of capitalism hasn't changed their political and philosophical views, never mind the optimistic Western notion post-Fall of the Berlin Wall that "whiskey! democracy! sexy!" would take over all the former ideological enemies and seduce them to Our Way Of Life because it was so superior in material goods.
I don't know what Xi Jinping is doing, but I'm pretty sure he's not wondering how best he can appear to the West. He has his own notion of where he wants China to go and how to get there.
Yes, AI has the capacity to be disruptive, which is why I'm also sure that they're working on making sure it is aligned - with Chinese values. And they're not going to pause any kind of research unless they hope that such a pause will weaken their rivals as well.
What good is AI aligned with our (read: Western liberals) values to the CCP?
I think you're being uncharitable to the opposing view.
Thinking that the CCP will be unlikely to accept and keep a pause agreement doesn't require believing that they are comically evil or lacking in morality. It only requires believing that they (a) are self-interested in the same way that all governments tend to be self-interested, and (b) don't believe in AI X-risk or an imminent singularity. (I'm not making an object-level assertion about AI capabilities, I'm talking about people's beliefs regarding them)
Ladish is implicitly assuming that Chinese leaders all believe the most bullish predictions made by US AI entrepreneurs; I think it's worth pointing out that these predictions are taken seriously by few policymakers in the US, and virtually none in other Western countries. (Again, I'm not claiming that these predictions are false, just that not many politicians have accepted them)
> the CCP hates instability and AI sure would be unstable
This seems like a fully general argument that the CCP is automatically opposed to all technological innovation. Do you think the CCP would support a global ban on social media, or the Internet in general? I think they would not, because they believe they have attained or can attain an equilibrium in which these technologies are beneficial to them, despite the fact that they didn't win the race to develop them in the first place.
First, I think only about half of US pause-AI sentiment comes from singularitarianism. The rest is just general anti-tech anti-disruption feeling.
Second, I think I've specifically heard people say the CCP is concerned about AI for reasons of instability. I do think they basically kneecapped their own software industry a few years ago for instability-related reasons, so I don't think this is a crazy claim.
I think if China understands that continuing racing ahead is risky for everyone, including themselves, they'd see pausing as something net good.
We obviously would like to enforce the pause globally, but even if this doesn't work, given how resource-intensive training current models is, and how far behind the rest of the world is, a third country "sneaking in and [eating] our lunch" won't happen for quite a while after the major countries start pausing.
"given how resource intensive training current models is, and how behind the rest of the world is, a third country "sneaking in and [eating] our lunch" won't happen for quite a bit"
They'll never have the atom bomb! Our advantage is unassailable!
And then post-war every hog, dog and divil was trying to get their own version of the bomb, to the point that right now the USA is attacking Iran for fears they're developing their own bombs.
I feel like I've seen a lot of idealization of Singapore and what a great technocratic leader Lee Kuan Yew was, but I recently saw a post on 4chan's /trv/ (travel) board about how everyone in Singapore is actually miserable, everyone is wasting their lives on a soulless capitalist grind, and it is not a utopian place at all. Now, that's 4chan, but I wonder if anyone here has visited or lived there and can comment on whether Singapore is an enviable place to live or not?
It's interesting: pretty much every reply I've gotten is compatible with Singapore being a soulless capitalist grind, but none of them address that aspect. Maybe people who would choose to live in Singapore don't mind it.
Haven't lived there, but have managed a team based there (including some team members seconded there from the UK) and have visited. My impressions are incredibly positive. Safe, clean, diverse, multicultural. On one occasion I was eating in a restaurant with outside seating, with a reasonably busy walkway separating the seating from the main restaurant. The couple sitting at the table next to me got up and went inside the restaurant for something, leaving her handbag hanging on the chair and both their phones on the table. The only problem was it's so bloody hot!!!
> I wonder if anyone here has visited or lived there and can comment on whether Singapore is an enviable place to live or not?
Singapore is amazing. The streets and metros are spotless, traffic is tame even at rush hour, there's zero crime or public disorder, zero homelessness, and every business is run by competent people who are generally good at what they do (which in SE Asia and surrounding countries is NOT the general rule at all). It's extremely multicultural - 5 languages on the metro, huge Malay, Indian, and Tamil populations, good food diversity.
At least the areas I've lived in are incredibly scenic and nice and you see lots of couples pushing babies around (they do have extremely low TFR, but I see babies everywhere, same thing with Tokyo, one of the biggest and funniest "lived experience vs statistics" differences I've personally noticed). It's surprisingly family friendly for a Tier 1 metro, better than NYC, *much* better than LA or SF, because your family is actually safe.
It is expensive for houses and cars. Just having a car costs something like six figures SGD for a ten-year permit now, renewable every ten years (and this is why traffic is reasonable everywhere). Houses are pretty crazy; most people live in condos because they can't afford several million SGD for a house.
But it is cheap in other important areas of life - you can get a maid + cook or nanny for ~$500 a month. You can eat hawker food for a couple of bucks a meal, and not have to cook or clean up. Due to proximity to China, clothes, plastic stuff, and cheap durable goods are all super cheap. If you like gold, gold jewelry is some of the cheapest in the world; you can buy actual 18k - 22k jewelry for spot + 10% or so.
People who want cheaper housing can go live in Johor Bahru right across the Malaysian border, which is also pretty nice, and commutable by train to Singapore every day.
> Don’t the vast majority of Singaporeans live in public housing?
Yes, but it's more nuanced than that soundbite makes it seem - "affordable housing" is one of the big list of "major problems other societies can't solve" that LKY solved.
90% of Singaporeans own their residences, with 75-80% of the population in "public" housing, those owners holding 99-year leases.
The basic process now is you get on a government list and wait 3-5 years while your flat is being built, and pay between $200k - $500k for a 3br place depending on location. You get mortgages, and also first time buyers get credits up to $120k that make it more affordable.
You can also buy on the secondary market immediately, bypassing the wait, and those units are more like $500k - $700k for a 3br, up to $1-2M if actually nice and / or in a good location.
When a 99 year lease is down to about 20 years, it's basically unsellable. There are various programs where the government sometimes buys them back early, at which point the building is torn down and generally a taller, nicer one with more units is built in its place, which helps keep pace with population growth.
To be fair, most of East Asia seems to have solved "affordable housing", mostly simply by allowing people to build stuff. E.g. you can easily find an apartment in 23-ku Tokyo for €500/month. The whole thing seems to me to mostly be a problem in Western societies.
A friend from my college days is a professor at the University of Singapore. His wife is a lawyer and also practices in Singapore. Their apartment is modest, but I don't know if that's by choice or because they couldn't afford anything better. But they definitely enjoy living there. Once they retire, they'll be forced to leave, though (due to their work visas), and that's causing them stress. They're worried about moving back to the US now...
Well, tell them not to worry, the US is ALSO a place that people think would be amazing to go to / where everybody is trapped in a soulless capitalist grind.
No, not everyone is trapped in a soulless capitalist grind in the US. Regular people can actually afford to buy a small plot of farmland and live out their dreams as homesteaders. Good luck doing that in Singapore!
Yes. Rodale Press probably doesn't sell many books in Singapore. Having a parent who went into homesteader psychosis for a couple of years (and exploited my free labor), I sing "Dahling, I love you, but give me Park Avenue".
You can have Park Avenue, I’ll take the homestead. That’s the beauty of a country that offers both. I just wish people that prefer an urban life and people that prefer a rural life didn’t have to become such bitter political enemies.
My dad visited Singapore a few months ago. His descriptions sounded to me like utopia: there is full employment, no homelessness, and squeaky-clean streets. Crime is so low he felt comfortable leaving his laptop on the desk in the public libraries. At least in material matters, it's arguably a better place to live than the US.
It's also a multicultural society with plenty of both east and south asians, along with many recent migrants. Folks flock from all over Southeast Asia and beyond to take advantage of the great living standards and job opportunities there.
I've asked him whether he talked to anyone there to learn about their lives; I'll get back to you when he replies. In the meantime, Singapore ranks 34th on the (admittedly imperfect) World Happiness Report (http://en.wikipedia.org/wiki/World_Happiness_Report#2025_report). For comparison, South Korea ranks 58th, Japan 55th, China 68th.
"Soulless capitalist grind" is often used to describe South Korea, and Singapore is happier than that. That combined with its wealth and institutional competence makes me comfortable saying it's the best place to live in Asia.
"Everyone lives in large white apartment complexes, rides on buses. Things are expensive. They come to work, and some commute from Malaysia. I suppose they are happy because it is safe and orderly. [They told me] they feel lucky to be there compared to the other nearby countries."
If there are any plasma physicists interested in working on next-generation excimer ultraviolet emitters, Aerolamp Inc. is looking for a good one. You can email us at team@aerolamp.net
On a somewhat different timescale, we are also looking for anyone with strong experience scaling hardware to mass adoption (e.g., at Fitbit or Apple Watch, or with One-wheel). Our goal is to get far-UVC adopted by millions across the planet.
My work just rolled out Claude Code subscriptions, and I am vastly underwhelmed about its abilities compared to what I've heard people say AI can do, here and elsewhere. I gave it a very simple task (update some of our Rust code to the next language version), and it got about half-way through (taking as long as it would have taken me to do the whole thing myself) before it went off the rails and stopped making progress.
I asked it to update the version, and gave it the commit hash of the last version update commit. Am I prompting it wrong, or does it really fail so hard at such simple tasks whenever you step outside of code changes that have a million copies of what you want already on the internet?
There are some things Opus (and LLMs in general) is inherently bad at, and others it is good at. Generally speaking, AI is good at generating ideas, but not good at validating them. So when programming, it is very important to have a strong test suite in place, and that test suite should produce text output that an AI can efficiently work with. Luckily, AI is really good at setting up such a test suite, but it does need some strong steering.
Also, AI seems to have some blind spots. For me it helps to ask it to analyze something, and then it will uncover its blind spots on its own and fix them (e.g. "please compare the implementation of api1 with api2", and then it will find inconsistencies between them and fix them).
On a very general level: many things that make coding easier for humans also make coding easier for AI (good tests, good abstractions, low complexity, clean requirements, etc.).
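To make the "text output an AI can work with" point concrete, here's a minimal shell sketch. The file name and the stubbed test runner are made up for illustration; in a real Rust project the stub would just be `cargo test`:

```shell
# Minimal sketch: wrap the test runner so every run leaves a plain-text
# log the agent can read back. The runner is a printf stub here so the
# example is self-contained; in a real project it would be something
# like `cargo test --quiet 2>&1`.
run_tests() {
    printf 'test versions::parses ... ok\n'
    printf 'test upgrade::paths ... FAILED\n'
}

run_tests | tee test-output.txt

# The agent can now grep the file instead of re-reading terminal state:
grep -c 'FAILED' test-output.txt
```

The point is only that failures end up as greppable text in a predictable place, so the agent's "fix until the errors go away" loop has something stable to read.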
Hey, my startup (noriagentic.com) specifically works on how to get good value out of coding agents. There are several other folks in the ACX community who can vouch for our work.
The TLDR is that, as other folks have mentioned, you need a good CLAUDE.md and good SKILL.md files (configuration). You need to explicitly tell the model to use red/green TDD. And you need to modify your codebase to make it easier for the agent to work in. I recommend giving my personal skillset a shot. It's open source, you can download it from here: https://noriskillsets.dev/skillsets/amol
(or just run `npx nori-skillsets install amol`)
After downloading, try running /init-noridocs to get a good baseline codebase set up for your agents.
We use coding agents a lot with Rust. We don't look at any code anymore, and it works great. I'm happy to chat in DMs or on a call.
In addition to the other stuff people have mentioned here (definitely use Opus), one I haven't seen is to install the "superpowers" plugin, which you get to by typing /plugins and searching for it.
It has a bunch of prompting and frameworking that let it do much better on longer and more complex tasks, and it actually prompts it to clarify with you areas that are decision points so it can do a better job. It's a game changer.
I've found it helpful to brainstorm architecture first, then tests for TDD, then write and vet the tests, then let it write the overall action plan. If it's testing into as many changes as it can along the way, it goes off the rails a lot less.
Thanks for mentioning the superpowers plugin, I'm excited to see what that adds. I've been coaching Claude Code myself, including creating my own /build-skill skill to keep consistency in skill builds, among other things.
are there any other plugins you've found to be impactful from a general standpoint?
> are there any other plugins you've found to be impactful from a general standpoint?
"frontend-design" if you're doing any front end work, it actually made a pretty significant difference on one project I did. And a good CLAUDE.md, of course, but I assume you probably have one of those already. For myself, I've coded with a variety of agents and platforms now, so I had each one I've done projects with contribute to the claude.md file, and that worked out pretty well.
I've not done front end work yet, but I think it's just a matter of time, so thank you for that! The superpowers plugin really caught my eye because I had already made my own /build-skill skill lol
Yes, and getting the claude.md dialed in has really been a difference maker with Claude Code for me. I've not moved to multi-agent projects yet, even though I have a few different projects with their own file/folder structures and claude.md setups.
As a general point, I'd just expect a learning curve - these tools are powerful, but the best ways to use and prompt them aren't always obvious. The best practices doc is a good starting place - https://code.claude.com/docs/en/best-practices - but it's overall a pretty deep rabbit hole.
Plus you can also apply some of the same intuitions as you would when assigning work to a junior developer - figuring out how tasks should be split up and what order they should be worked through in can save a lot of pain.
---
But it's hard to say without seeing the log what the specific failure point here was.
It's possible that there was something unclear in the task or how it was presented.
It's possible that it ended up trying to do too much work in one shot - planning can help with this, or dividing the work into multiple sessions (each with a clean context).
It's possible that it didn't know how to 'check its work' - these things tend to work better when they're kept 'grounded' like "run this command to see what errors come out, and fix until the errors go away".
It's possible that trying to do the git diff operations led it astray (e.g. maybe it ended up getting a diff that included all the changes *since* that commit, and not just the changes in that commit); I've sometimes given the LLM a diff file (easily produced via git commands) as context to avoid it doing raw diff commands.
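The range-vs-commit failure mode is easy to demonstrate with a throwaway repo (file and commit names below are made up for illustration): `git show` isolates one commit's changes, while a range diff silently includes everything after it.

```shell
set -e
# Build a throwaway repo with three commits to contrast the two commands.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email agent@example.com
git config user.name agent

echo one > f.txt;   git add f.txt; git commit -qm "first"
first=$(git rev-parse HEAD)
echo two > f.txt;   git commit -qam "second"
echo three > f.txt; git commit -qam "third"

# Changes introduced by the "second" commit only:
git show --patch HEAD~1 > one-commit.diff
# Net changes *since* the first commit - a much broader diff than
# intended if the single reference commit was what you wanted:
git diff "$first"..HEAD > since-first.diff

grep '^+two$' one-commit.diff     # the single change you asked about
grep '^+three$' since-first.diff  # later work bleeds in via the range form
```

Pre-generating `one-commit.diff` yourself and handing it to the agent as context sidesteps the risk of it picking the wrong form.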
> (taking as long as it would have taken me to do the whole thing myself)
Part of these tools is also about figuring out how much work you want them to do without input, and how to do that.
I know some people intentionally want to basically 'pair program' where they're directly working with the agent throughout the process. I tend to like to get it to go work, I'll go do something else, and I'll come back later and see what it did. (At which point it doesn't really matter if it's slower or faster than me, the point is that I'm doing something else) Part of this may involve getting trust settings so it doesn't ask you for every single command that it's going to run.
> It's possible that it didn't know how to 'check its work' - these things tend to work better when they're kept 'grounded' like "run this command to see what errors come out, and fix until the errors go away".
It initially ran the right commands to check its work along the way, but there was nothing to tell it that these were the right commands (I didn't give it a `CLAUDE.md` file like it wanted), so maybe there was some instability in that?
> It's possible that trying to do the git diff operations led it astray
It looked up the reference commit I gave it using exclusively `git show` commands that showed the correct commit, so that wasn't the issue in this case.
> Part of this may involve getting trust settings so it doesn't ask you for every single command that it's going to run.
This is probably also part of my feelings about it, I'm still working on my settings to let it do commands that won't cause anything important to be lost or leaked autonomously.
So far I've used Claude in a big way a couple of times:
1. For a small freelance project to convert footnotes to bespoke sidenotes on a website. The agent took a very long time to think before spewing out a confidently wrong solution.
2. For boilerplate comments + modifications on the rotation file for plate tectonics software. This time Claude at least worked, but it left the job incomplete, and I had to prod it a couple times and be very clear in order for it to actually do it correctly. To be fair, even with the mistakes it was still faster than it would have been without Claude. So I count this as a win, but a minor one-- not one I'm going to define my career around.
Use Opus and invest in building up a good CLAUDE.md and modular docs referenced by it. It's the exact equivalent of onboarding a new programmer to your codebase.
Steer it through - you did this, do that instead, and at the end tell it to update the docs.
Always make a plan and check before making major edits. Claude has plan mode for this, in other agents you just say "don't write code yet, write a detailed plan and check it with me".
For library version or language version updates, it does best when it knows what the actual upgrade is going to require.
That doesn't mean you need to tell it directly; if its plan doesn't include a step like "research differences between version A and version B and best practices for upgrading", suggest that.
You can experiment with "research and update the plan" vs "research and document your findings in a new markdown file". For some projects one will work better than the other.
If you see it going off the rails, stop it, have it reflect on what went wrong, and update the plan. Then you can try again from the beginning.
That research might have been where it went off the rails. It updated the files controlling the version and made some of the necessary fixes, then looked up the changelog for the Rust version (weird to start making changes first and only then pull up the changelog). That was the last sensible thing it did before it started looking up URLs that don't exist and losing the plot with its local commands.
Did you use `/model` to switch to Opus before you gave it this large task? Almost all the praise you've heard is for Opus specifically, but Claude Code defaults to Sonnet.
> 2: StopTheRace.ai will be holding a protest on Saturday, March 21 in front of major AI company offices, asking them to commit to a mutual pause...
> 3: ACX grantee Markus Englund announces a first set of results from his project to automate anomaly detection in scientific data...
I think this is a good illustration of the inherent conflict in these two positions. LLMs are already useful tools, and could become more useful in the future, given more development. On the other hand, some people believe that LLMs are either already "superintelligent" (*) or are becoming so rapidly, and once that happens they'll destroy us all.
But the two positions are in conflict. Firstly, if LLMs were "superintelligent" or even moderately intelligent (compared to humans), then we wouldn't need to spend effort on developing projects to apply LLMs to some specific task. Instead, we'd just ask an LLM to "do X" and it would do it -- at least as well as the average human (but dramatically faster), or instantly with perfect precision (assuming LLMs are already superhuman in their "intelligence").
Secondly, while LLMs are not at that point yet, they can at least save some development time on some projects, which translates not merely into billions of dollars in potential net profits for large corporations, but also a powerful drive for all kinds of ordinary people to apply LLMs in their daily lives and then demand better performance from them.
Thus, I think these protests are ill-conceived. Not only are their fears most likely baseless (**), and most people know this, but they are also trying to shovel back the tide of tremendous market forces, both corporate and grassroots. I suppose the protests could still succeed, but this sounds unlikely to me.
(*) I keep putting "superintelligence" and related terms in "scare quotes" because IMO they are ill-defined.
(**) That is, fears of runaway superintelligence are likely baseless. Fears of e.g. some clueless government weasel plugging a dodgy hallucinating LLM into a nuclear submarine are sadly all too well-founded.
>Firstly, if LLMs were "superintelligent" or even moderately intelligent (compared to humans), then we wouldn't need to spend effort on developing projects to apply LLMs to some specific task. Instead, we'd just ask an LLM to "do X" and it would do it -- at least as well as the average human (but dramatically faster)
Well, you *could* just ask an LLM to "detect anomalies in scientific data", if you wanted. And if you did, I would guess that it would perform the task at least as well as the average human would if given the same instruction -- which, in this case, would mean not being able to do it at all, or at least not well enough to be of any use.
However, the goal in this case is an LLM that succeeds at the task it's assigned, not one that fails at it. So, "at least as well as the average human" is not sufficient.
AFAIK nobody who understands the word "superintelligence" thinks that AIs are already superintelligent. The fact that AIs are currently safe and useful doesn't mean they can't become unsafe in the future.
Everyone involved understands that they are trying to shovel back the tide (???) of tremendous market forces, but on the other hand, if you poll the average voter, they hate AI and say they want to get rid of it, so that's something.
Also, remember that pausing AI means pausing training runs. People are still allowed to use the AIs that already exist!
> AFAIK nobody who understands the word "superintelligence" thinks that AIs are already superintelligent.
That's fair; I've heard some other people claim this, but not yourself.
> but on the other hand, if you poll the average voter they hate AI and say they want to get rid of it...
True, but what the average voter wants to get rid of is not the abstract notion of "AI"; nor any specific LLM foundational model or NN architecture; but rather that annoying "AI summary" popup that follows them everywhere, and maybe all the deepfakes and the spam and the deepfake spam. That is, they want to get rid of enshittification, not AI. In fact:
> People are still allowed to use the AIs that already exist!
Yes, and they want their LLMs to be faster, cheaper, and less prone to hallucinations. That's the same thing every megacorporation wants (not just the ones selling the LLMs, but also the ones buying them). As I'd said, I don't think awareness raising is going to cut it if you wish to resist this pull.
I think they also want to get rid of AI cheating, bad AI art, AI-enabled job loss, etc - at which point we might as well say they're against "AI".
I think pausing training is potentially good for making AIs faster and cheaper - it means all that compute will go to inference instead, and all the research talent will go to improving existing AIs rather than creating new ones. There are also some other options that enter play once you expect a model to be SOTA for more than a few months, see eg https://taalas.com/ .
> I think they also want to get rid of AI cheating, bad AI art, AI-enabled job loss, etc...
Sadly, I think that for every person who wants to get rid of "AI cheating" etc., you'd find 10 who'd love to continue using AI to cheat; save time at work; generate bad art; etc.
> I think pausing training is potentially good for making AIs faster and cheaper...
This might be true, but I don't think many people will see it this way. In fact, even your own posts on ACX (assuming I've understood you correctly) appear to show that major improvements in the quality of LLMs come primarily from training. Hypothetically speaking, if e.g. Anthropic spends however long (six months? a year? several years?) focusing solely on inference, and e.g. OpenAI proceeds with training as usual, then who do you think will come out on top?
Training in this sense makes an AI smarter, gives it more capabilities, etc.
Focusing on inference would make a given AI with a given intelligence level faster, cheaper, etc.
In your scenario, I think the lab that does training would come out on top, because all else being equal consumers prefer smarter AIs. I'm claiming that if training stopped, labs could still improve inference-level results, which (by assumption) is what many people who say they "hate AI" would still like.
In this case, I guess it depends on how many consumers want smarter AIs as compared to the number of consumers who "hate AI" in its present state and are sophisticated enough to understand that improved inference would make them hate AI less. I don't have any hard numbers for you. My gut says that the former predominate, especially when weighted by their political/financial power, but gut feelings are not facts.
Also, I'm admittedly biased, because what I really want is for people (starting with my coworkers and ending with the government) to stop treating LLMs as some sort of superintelligent machine oracles, and start treating them as what they are: stochastic symbol-permutation tools with high power yet low reliability. But I recognize that I'm in the extreme minority on this, and reaching out to me and people like me would likely spell the death of any political movement.
"a powerful drive for all kinds of ordinary people to apply LLMs in their daily lives"
Okay, gimme a recommendation for how I could use one of these yokes in my everyday life and how it would be sooooo wunnerful I'll fall in love and want to use it all the time and so demand even better performance from it.
You personally, probably not, but then you're an extraordinary person! Meanwhile, regular people use LLMs to do their school homework for them; write business letters; write legal cases; write code; verify said code; translate from one language to another; create images for news articles, corporate graphics, or just for fun; set words to music; write entire songs; etc.
I know what you're going to say: "well LLMs do all those things very poorly and all these people who are using LLMs this way are idiots who are shooting themselves in the foot". And I mostly agree with you. But my claim wasn't that they *should* be doing all these things, merely that they *do*, and want to continue doing so.
A very kind way of saying I live under a rock, but I appreciate the compliment!
I think using it to do your school homework is a bad habit to get, I wouldn't necessarily be against using it as a research tool or 'check my spelling and grammar and did I contradict myself' but it looks like (whether it really is, is another question) that they're using it for "hey, write this essay for my college course so I don't even have to crack a book or read the notes I took in class" and that gives me the heebie-jeebies.
Actually, the whole college situation is almost beautiful, in a way. Students receive homework assignments, and use LLMs to complete them. The teachers receive the LLM-generated assignments, and feed it to another LLM to grade them. Once the grades are collected, the teachers feed the results to yet another LLM which generates the next round of homework assignments, aiming for a better score distribution.
In general anything you can reliably use a current LLMs for is something that shouldn't need to be done in a better society. Homework that teaches you nothing, cover letters that nobody even reads but expects to receive, code for yet another variation of the same thing that's been done tens of thousands of times because everyone is fiercely protective of their intellectual property, graphics for slides to be presented in a meeting that could've been an email, etc.
It's essentially digging a hole and praising the automatic hole-filler that you then use to fill it back up.
The 'drawing my own art/writing music' isn't a bad use, if we can get past the horrible sepia-toned cartoon style of amateur AI. I don't have objections to fun hobby use, just people trying to monetise their slop afterwards.
I mostly agree, though there are niche cases. For example, I sometimes use LLMs for repetitive tasks like "convert from file format A to file format B while skipping items that match C". I could easily write the code to do this myself, but it'd take me 30 minutes to do so, and using the LLM and verifying its results takes 15 minutes, so there's some benefit at least. I also use LLMs (technically diffusion models) to generate pogs for my RPG sessions, because on my own I cannot draw anything more sophisticated than a smiley face. The results are mediocre at best, but get the point across. And I do sometimes use machine translation, which IMO is the best possible application of LLMs thus far (unsurprising, as this is what they were designed to do in the first place).
But in general, yes, LLMs are best at generating output that will be parsed by other LLMs and then swiftly discarded.
Yes, there are many "low-stakes" situations where using a diffusion model to gen up some slop is perfectly acceptable. I've noticed that most people don't mind AI art nearly as much as they do AI writing. I think most people who play TTRPGs wouldn't mind genAI character portraits but absolutely would mind LLM generated item and scene descriptions.
As for translation... eh, I've honestly had better results from old-fashioned ML models like DeepL than with LLMs. But it depends on the language and the type of writing you're trying to translate. LLMs fail hard with languages where a lot of meaning is encoded in grammar rather than context or sentence structure. Languages with lots of verb tenses and noun and pronoun declensions just completely kill them, especially when those are used to imply things that would have to be explicitly stated in English (like the subject of a verb). But they always produce a natural-sounding translation, even if the meaning is completely different.
> Firstly, if LLMs were "superintelligent" or even moderately intelligent (compared to humans), then we wouldn't need to spend effort on developing projects to apply LLMs to some specific task. Instead, we'd just ask an LLM to "do X" and it would do it -- at least as well as the average human (but dramatically faster)
I don't think current LLMs match human intelligence (in the sense that requires they be as good as humans in every cognitive task), but you basically can just ask them to look for anomalies in scientific data. Outperforming the average human at this is a fairly low bar, unless by average human you mean the average expert.
DSL's February 2026 Effortpost Contest voting ended with a tie between a review of "Southern Reconstruction" by Philip Leigh, and mountain lion recolonization. (Looks like February's theme was regrowth of diminished things.) Per the rules, the previous effortpost winner was called to break the tie, which he did, in favor of Leigh's book.
The book discusses the importance of economics in Reconstruction, by contrast with race, which tends to get more attention, especially lately. The mountain lion piece was the result of someone doing a deep dive, spurred by news of kittens spotted in Michigan's Upper Peninsula for the first time in a century.
Runners up include a discussion of slavery as covered in the Old Testament, and another about selection effects on information approval.
I rather like the Membrane Lipid Peroxidation Theory of Aging associated with Aubrey de Grey - it is a branch of free radical theory, but with a wonderful promise: just deuterate the key hydrogen atoms in the unsaturated lipids of our diet and aging slows down a lot.
Free radicals have some real health benefits for specific conditions but people have absolutely flooded themselves with free radical scavengers without much good coming of it. The Magic 8-Ball of Science says, as it so often does: ASK AGAIN
Probably relevant that antioxidants were one of the biggest disappointments in nutrition in the last quarter-century. Doesn't mean free radicals aren't a problem but does suggest that the fix may not be straightforward.
Yes, that's what I'm thinking. We shouldn't dismiss the theory, and it sounds plausible, yet I'm unconvinced that the causality has been empirically established. I'll probably just skip this in my longevity series.
If you're making specific claims about free radicals, you should list them so we can evaluate.
If you're asking if free radicals are the ultimate source of and solution to aging, the answer is no. There is not and never will be a single theory of aging, or of cancer for that matter, and if I'm somehow wrong about this I'll eat a sock (and be too happy to care).
1. I missed it in the MSM, but a day before Israel launched its decapitation strike against the Iranian regime, Iran had just agreed to a negotiated framework where they would down-blend (dilute) their existing enriched uranium stockpiles to the lowest level possible and convert it to fuel. And they'd allow in inspectors. So that means that Iran had blinked in their nuclear negotiations, and it also blows the theory that this war was all about a nuclear Iran (at least from the US perspective).
2. Marco Rubio admitted that Israel drew the US into the war. He tried to walk it back, later, but it seems that Netanyahu wagged the dog, and we went into war without fully considering the downstream risks involved in poking the Iranian hornet's nest.
3. Iranian President, Masoud Pezeshkian, went on the air to apologize for launching missiles at surrounding countries. He said that it was Iranian doctrine that area commanders could respond independently in a military situation when Iranian C&C was offline. He vowed that responses would be more measured now that the leadership council was being installed. But a few hours later, he walked the apology back, and said any country hosting US or Israeli forces is a legitimate target. The walkback coincided with the announcement Mojtaba Khamenei had been selected as the new supreme leader. This suggests that Mojtaba doesn't intend to give up the fight.
4. How this comes out seems to boil down to whether Iran has more missiles and drones than the US and Israel have interceptors. Even though we're bombing Iran with B-52s (meaning Iran's air defenses have been completely taken out), it's questionable whether we can hit all their mobile launchers in the mountains surrounding the heartland of Iran. Also, Iran hasn't released its "smart" magnetic mines, but that would only be a Strait of Hormuz problem. If Iran were able to take out Israel's five desalination plants, 90% of Israel's water would be taken offline. That seems like it would be a bad thing for Israel, which leads me to the final question...
5. What is Netanyahu's strategic endgame? Did he overplay his hand? It doesn't look like Iran is going to surrender. If they get the breathing room, they have no reason NOT to develop nuclear weapons. And if Israel uses their nukes against Iran, every country in the area (Turkey, Saudis, Egypt, etc.) will scramble to get its hands on nukes, because they're always the ultimate trump card.
I found a lot of information from this anonymous Substack analyst, John Doe. Doe thinks in terms of worst-case scenarios (as we should if we're going to war)...
OTOH, Paul Warburg points out that if Iran successfully closes the Strait of Hormuz, there are other routes for Gulf producers to export their oil. He doesn't think the price will rise to $200/barrel, and he believes it will stabilize around $100/barrel once the dust settles.
> You need targeting information, and not only drone/missiles, to close the Hormuz.
Sorry, but you haven't really thought this through. Currently, just the threat of being targeted is keeping tankers from moving through the Strait of Hormuz. Insurers specifically have war exclusion zone clauses in their coverage (in fact, car insurers and home insurers now add those clauses to their policies). So, the tankers are milling about in international waters outside the Strait.
Moreover, Iran could easily close the Strait of Hormuz by releasing their smart mines. Last time I heard, they specifically said they'd let tankers carrying oil to China through, so evidently they haven't mined the Strait yet.
> If America was forced to join war, why is America going at it with great enthusiasm?
Winning a war is always a politically better policy than losing one. I'm surprised Trump's advisors (handlers) don't just tell him to declare victory now and back off. It doesn't look like the Mullahs are going to cave, and each escalation step will expose new ways to fuck up.
Crazy that I hadn't noticed Iran's offer on nuclear disarmament! The things I read mentioned Iran's missile program (as well as proxies), and I assumed that meant nuclear weapons. I had to go back and check; it's really important information that nothing I read made nearly clear enough.
I didn't hear about it either until after the fact. It was out there, but none of the MSM platforms featured it on the front page. I wonder if that didn't speed up Netanyahu's decision to decapitate the Iranian leadership, because an Iran that's negotiating would contradict the public perception of an Iran about to join the Nuclear Club. And Trump just claimed that Iran was two weeks away from having nuclear weapons. Are his handlers feeding him this crap, or is he making it up spontaneously?
>OTOH, Paul Warburg points out that if Iran successfully closes the Strait of Hormuz, there are other routes for Gulf producers to export their oil. He doesn't think the price will rise to $200/barrel, and he believes it will stabilize around $100/barrel once the dust settles.
The last time oil was over $100 a barrel was in 2022, after Russia invaded Ukraine. Before that it was regularly above $100 between 2010 and 2014. So $100/barrel is not that bad, historically speaking. Technically cheaper than it was between 2010 and 2014, since we've had a lot of inflation since then.
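A quick back-of-the-envelope check on that inflation point. The ~40% cumulative US CPI figure for 2012 to the present is my own rough assumption for illustration, not something from the thread:

```python
# Sketch: compare today's $100/barrel to the 2010-2014 era in real terms.
# Assumes ~40% cumulative US CPI inflation since 2012 (an assumed ballpark).
CUMULATIVE_INFLATION = 0.40

def in_2012_dollars(nominal_today: float) -> float:
    """Deflate a present-day price back to 2012 dollars."""
    return nominal_today / (1 + CUMULATIVE_INFLATION)

def in_todays_dollars(nominal_2012: float) -> float:
    """Inflate a 2012 price to present-day dollars."""
    return nominal_2012 * (1 + CUMULATIVE_INFLATION)

print(f"$100/barrel today is about ${in_2012_dollars(100):.0f} in 2012 dollars")
print(f"$100/barrel in 2012 is about ${in_todays_dollars(100):.0f} today")
```

So if oil stabilizes around $100/barrel, that really is meaningfully cheaper in real terms than the $100+ prices of 2010-2014.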
Iran may well try to get nukes if the war ends with the regime still in place but 1) the Israelis believed they were doing that anyway, so they might as well take this chance to weaken them 2) There's still a decent chance the regime either collapses and is replaced/collapses into civil war/survives but is so weakened it cannot pursue nukes in the short/medium term.
Also, even if the war ends and the Iranians dash for the bomb, the Israelis can always just start the war again and the Americans will almost certainly back them. Most estimates I've seen suggest that the Iranians are still months/years away from a viable bomb and that a bombing campaign can significantly impede them.
That makes sense. But if Iran takes out Israel's desalination plants, Israel is in a world of hurt without being nuked. (I now understand that one was hit, but I don't know what the damage is.)
Have you seen any credible analysis of the sinking of the Iranian navy ship near Sri Lanka? I am interested in the claims that some element(s) of the attack are war crimes.
I am also interested in understanding the claims about a double tap on the girls' school in Minab.
As disclosure, my ex ante position is that mistakes are understandable in war, but the expectation of these mistakes is a moral cost that needs to be considered in the decision to go to war, and there needs to be a compelling case for benefits that outweigh these types of costs. From what I have seen, this threshold was not met.
As for "war crime", no. The war itself may be illegal, but given the existence of a war, sinking a belligerent warship on the high seas, especially one sailing towards the war zone, is absolutely not a war crime. Nor is there any requirement that the enemy be warned of the attack or offered a chance to surrender. If IRIS Dena didn't want to be sunk, she could have followed IRIS Bushehr and IRIS Lavan's lead and sailed to a neutral port for internment.
There are reports (not 100% sure yet) that India offered to intern all three Iranian ships on the 1st. For Dena to have been sunk on the 4th implies that they didn't take the Indians up on it, and dramatically reduces my sympathy for the Iranians here.
Nothing I've seen suggests this would be any more of a war crime than firing a missile at Dena's sisters in port. I'm not defending Trump's not getting Congressional buy-in or anything, just pointing out that focusing on this particular case makes no sense.
>They left Indian waters on 25 February and were likely to have been in international waters on 28 February - when the US and Israel began attacking Iran.
>"The Iranian side had requested permission on 28 February for three ships in the region to dock at our ports. This was accorded on 1 March," Jaishankar told [the Indian] parliament on Monday.
>On 4 March, Iris Dena sank after being hit by a torpedo fired from a US submarine, killing at least 87 of its crew members.
>According to the Indian Navy, the Iris Dena was operating about 20 nautical miles - roughly 23 miles (37km) - west of Galle in waters that fall under Sri Lanka's designated search-and-rescue zone on the day it was struck.
It being about 20 nautical miles west of Sri Lanka strongly suggests it was heading to India at the time.
At the very least this is a dishonourable and cowardly thing to have done.
To add to what Schilling said, the exercise was held near Visakhapatnam, on the east coast of India. At 15 knots (typical cruise speed) that's about 7 days from Iran. If she was headed straight home, she should have been somewhere on the west coast of India when the shooting started, well north of where she was sunk. The only way the geography works out in your scenario is if they headed straight south after the exercise for some reason. Also, even Iran hasn't made that claim, and they're spewing this ludicrous stuff about the ship being unarmed.
If they were three days out of Indian waters when they decided to request sanctuary, and they were still at sea three days after they were offered sanctuary, then they weren't actually seeking sanctuary. If they had been, they'd have been in an Indian port or at least in Indian territorial waters headed for such a port.
Either they were never serious about seeking internment for Dena, or someone changed their mind.
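The transit arithmetic in the comment above can be sketched out. The ~2,500 nautical mile distance from the Iranian coast to Visakhapatnam is my rough assumption for illustration:

```python
# Sketch of the transit-time claim: a warship cruising at 15 knots
# covers 15 nautical miles per hour, or 360 nm per day.
CRUISE_SPEED_KNOTS = 15
DISTANCE_NM = 2500  # assumed rough distance, Iran to Visakhapatnam

nm_per_day = CRUISE_SPEED_KNOTS * 24
transit_days = DISTANCE_NM / nm_per_day
print(f"{nm_per_day} nm/day -> about {transit_days:.1f} days")  # ~6.9 days
```

Which is consistent with the "about 7 days from Iran" figure, and with the point that a ship heading straight home should have been much farther north and west by the time the shooting started.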
I mostly used that because I wanted to dodge questions about the legality of the war itself (not my field) and focus on whether or not the shoot was good under conventional law of war analysis. The broader question is the sort of thing that international law isn't really good at dealing with.
Legally, the obvious and ongoing acts of war by both sides mean they are at war and the laws of war apply. The bit where POTUS didn't get Congress to issue a DoW, er, AUMF could be of legal significance if Trump ever winds up having to explain himself at the Hague, but it doesn't change anything for the people at the front.
1. The problem with Hague rules is that, to paraphrase Stalin, how many divisions has the Hague? International law is basically a consensus hallucination, to a much greater extent than regular law is, and if someone decides they don't want to play along, they can basically say "what are you going to do about it?" Russia's invasion of Ukraine was much more obviously illegitimate, and while that illegitimacy has definitely played a part in the international response, it's not like everyone else declared war on Russia the next day.
2. Iran has been a bad actor on the international stage for almost 50 years. There's a strong case that the US is responding to decades of provocation, although the Administration hasn't really made that case because they are bad at their jobs.
I'm afraid you're oversimplifying. If a US warship comes under attack, it has the right to defend itself. But under the UN Charter and the Geneva Convention, they're not allowed to just go out and attack the armed forces of other countries. These rules of international law were one of the justifications for Russian sanctions after Putin's Special Military Operation. However, the Law of Armed Conflict would support your statement that it was a legitimate military target. But I also understand that we didn't rescue any of the crew from the sinking ship. If true, that's a big no-no.
I read somewhere that before its torpedo attack the US sub twice asked the Iranian ship's captain to order his crew to abandon ship. Not sure how reliable that fact is - "in war, truth is the first casualty" and all that - nor where I read it.
The legal existence of a war is in no way contingent on anyone anywhere issuing any sort of formal declaration of war. Vladimir Putin violated international law when he ordered the invasion of Ukraine, but the soldiers who carried out that invasion were waging a war and subject to the laws of war, not to e.g. Ukrainian civil law re homicide and firearms possession.
Similarly, Donald Trump may have violated international law and even US domestic law in ordering an attack on Iran. But the soldiers, sailors, and airmen on both sides are now waging a war under the laws of war, and are allowed to e.g. sink any ship flying the enemy flag.
And they are not required to rescue survivors unless this can be accomplished at negligible risk. Given the vulnerability of modern submarines while surfaced, and their general unsuitability for rescue operations, it is I believe generally understood that they are almost never *required* to rescue survivors. The US tried to argue otherwise at Nuremberg, but it didn't stick.
Incidentally it's an amazing blog, one of the few I read 100% of the posted content from. Highly recommended for anybody interested in military affairs or military technology.
The main debate I've seen is over whether the ship was actually unarmed and therefore not a danger, which would contradict Bean's article (although his article mostly does not focus on the legal question).
I had not run across this claim during research for the post, so obviously treated the ship as fully armed for purposes of that analysis. Beyond that, what Schilling said. The requirements for being a legitimate Thing That Can't Be Shot under the laws of war are far more stringent than "out of ammo" for the obvious reason that it's easy to say you're out of ammo and sort of hard for the other side to verify in real time. (The laws of war are quite pragmatic about that kind of stuff.) Now, if she was actually unarmed and Iran had made sure to broadcast this fact widely, it might change my opinion on if it was a good idea/moral to take the shot, but it wouldn't actually make it a war crime. And given that this is the first I'm hearing about her being unarmed after several days of looking at the issue, I'm pretty sure Iran wasn't screaming "unarmed, please don't shoot" beforehand.
I'm flying bravo sierra on the claim that context makes it obvious the ship was unarmed. I've never heard of that rule, and it would pretty much prohibit USN participation in any such event. We had a destroyer scheduled to take part, and I really can't see us agreeing to send her to Diego Garcia, unload 80 missiles, then reverse the entire process after the operation is over. "Please send your ship unarmed" is a request you would only make to unserious people.
Edit: Also worth noting that even the one Indian government person they got to speak didn't agree the ship was unarmed. The only support comes from "an independent defense analyst based in India", but I could fairly be described as "an independent defense analyst based in Oklahoma", so that's not a high bar for me taking this guy seriously.
The periscope video clearly shows the deck gun at least is still in place. That's "armed". There's no rule in war for a time-out if one side doesn't have any ammunition and wants to go get more.
I'm merely noting that the Naval Gazing article says:
> All I can say is that in practical terms, Dena was clearly a danger to any non-submerged American forces in the region, and removing that kind of threat is generally considered to be well within the rights of a belligerent.
If the US was aware that Dena had no munitions, it doesn't seem accurate to say it was a danger to American forces in the region, so this specific statement would be false.
Whether this implies anything about the legality of the strike is beyond my expertise.
Also, there's the question of why they didn't pick up survivors, which is what we're supposed to do under international law and the US military code.
The U.S. Navy / Marine Corps / Coast Guard Commander’s Handbook on the Law of Naval Operations states:
> Following each naval engagement at sea, belligerents are obligated to take all possible measures, consistent with the security of their forces, to search for and rescue the shipwrecked.
> And if Israel uses their nukes against Iran, every country in the area (Turkey, Saudis, Egypt, etc.) will scramble to get its hands on nukes, because they're always the ultimate trump card.
What if we just nuke them first? We have a lot of them just lying around, gathering dust. Okay, I'm not being completely serious, but... what would the actual consequences be? It's not like the other countries with nukes will start a war with the US over this, given that would start an actual nuclear war. Would Europe actually risk sanctioning the US, given how reliant they are on our economy and military?
The USA used nuclear weapons to end the war with Japan, the first and last time they've been used in anger. Breaking this precedent to take out a regional rival would be a terrible idea, since the USA has a lot to lose from a world where everybody has nukes and uses them all the time.
If the earth is mostly comprised of good people, then the US and all Americans will be shunned by the rest of the world and excluded from all of society.
And if most people are evil none of this mattered anyway.
> And if most people are evil none of this mattered anyway.
Given that we haven't had a nuclear war so far, I'd say it actually matters a lot. Do you think countries like the US, Russia, China etc. stay peaceful out of the goodness of their hearts? There are practical considerations that disincentivize all-out war. (Though they only apply when countries are on equal footing. Hence my comment.)
Yes, for the most part I believe things are mostly peaceful because most people at the top are mostly good. Obviously, you also have contingencies for the large minority of people who are not peaceful, but those have to be created by peaceful people, and a system of contingencies would never be enough by itself; you need most people to be somewhat good.
So the rest of the world will just stop using US markets, consuming US media, and will destabilize their regions by shutting down US military bases (how would they even do that)? That seems unlikely. The only countries that get embargoed are those that the rest of the world can afford to embargo.
People don't interact with the US because we're "good", they interact with us because it's in their selfish best interests to do so. Our using a nuclear weapon would ruffle feathers but it wouldn't change the underlying economic and military incentives and so nothing would change in the way you're suggesting. The reason to not use nukes is that it would probably weaken the norms around nonproliferation in ways that would hurt us in the long run. It would also embolden China and Russia to use them and that would increase the odds of a major nuclear exchange happening at some point. The risks just aren't worth it.
I never said it's likely, I said they'd do all those things if they were good people. I also disagree with the premise that US bases would be needed if the US were to be shut off. They are the world's main threat at the moment.
Why would anyone want to talk to a mass murderer? You think America will bomb a dozen cities, kill millions of people, fill a region with radioactive fallout, crash the economy, and yet people will still think American money is worth anything?
The United States would lose all moral authority pretty much forever. You would have to nuke multiple cities in Iran, murder millions of people, and you would absolutely face global outrage. Oil prices would soar, you would have radioactive fallout all over the place, and you would face economic disaster.
How about a subterranean low(ish) yield nuclear blast, of say 10 kilotons, to completely destroy an underground nuclear installation, of which I'm sure there are some in Iran? That's not a rhetorical question, but it would probably be equally unacceptable for symbolic reasons. "You can't have nuclear weapon tech, but it's OK when we use it!"
I doubt it would start a (new) war, but the US would never hear the end of it in the UN, which, even leaving aside the difficulty the UN would have in enacting anything without the support of its most puissant member, means every head of state would be reminded of that fact on a continual basis, and therefore the US would now have to convince anyone it wanted to negotiate with that whatever they get out of a deal won't include one or two warheads arriving by aerial delivery. Several nations would probably eject any or some American bases they're hosting. (There's a long list of them, and I could maybe opine on some of them, but that's a long discussion.)
Virtually all countries would probably be less open to trade with the US, and more open to trade with each other instead. This could cost the US potentially trillions in trade. For this reason, America's Hamiltonian policymakers would almost certainly reject any nuke proposal short of "because they nuked us first". In fact, of all the main foreign policy schools of thought, not one would find this idea appealing.
If the US nuked purely because "they're gathering dust" or "because we can", then the party in control at the time would be out of power by the next election, due in large part to its own voters turning on it. If they somehow aren't, then civil war looks likely enough to start prepping.
There are a lot of countries whose broader defense strategy relies on a world where the use of nuclear weapons remains an unthinkable taboo. If nukes are used, these states would immediately condemn the action and expect the USA to do the same. With NATO weakening, this list includes many US allies. If the US were to stand behind Israel afterwards, it would threaten most of our other alliances. Germany has been happy to remain a threshold state. They would likely acquire nuclear weapons quickly after such an attack. As would many others around the world.
The only way to return to the status quo ante would be to turn any state willing to use nuclear weapons into a pariah. Even nuclear states that benefit from the taboo would likely join in the shunning.
It would be a major long-run strategic mistake for the Israelis to use nuclear weapons. In addition to the above, countries in their neighborhood that previously chose not to get nuclear weapons would likely change their calculus the second that element is introduced. It's already going to be difficult, after the issue with interceptor resupply being diverted to Israel, to convince them to return to the security arrangement with the US before the war.
Meanwhile, US support for the war is low. If the US or Israel introduce nuclear weapons into the war against a state with no nuclear weapons (and one where this exact question is the primary cause invoked for intervention), public sentiment is likely to turn sharply against further involvement. Remember, there will be photos of the aftermath.
Finally, we know the Iranians have enough material to build multiple bombs. We also know that their new military doctrine is effectively "eye for an eye". We no longer have eyes on the ground, so we don't know if they started building nukes after we killed the guy who issued a fatwa against it. We don't know what's in the Iranian arsenal, but we do know they've been holding back some of their technology. It would be a strategic blunder of the highest degree to break the nuclear taboo against a nation that's potentially at the threshold of nuclear capability, and that has begun giving back what they get.
Netanyahu may just be kicking the nuclear threat can down the road, though. He knocks the Iranians back on their heels for now, he stays in power until he dies in office, and he'll let his successors deal with the fallout (political and nuclear).
I don't buy the idea that Iran was a nuclear threat under the old leader. For 30+ years? Then they came forward last month and put their nuclear materials for 11 bombs on the negotiating table, but they never thought to make those into a single bomb?
Bibi always put forward that unfalsifiable propaganda. But now that they killed the guy who said no to nukes for all those years, we might be looking at a different story. Sometimes the road to avoid your fate is the only one that makes it reality.
I don't think they ever got the uranium enrichment process far enough along to be able to build a bomb. First, Stuxnet took out the centrifuges, and then that American strike on the Natanz fuel enrichment site may or may not have taken them offline again.
In the *sixteen years* between Stuxnet and the Natanz strike, Iran built a whole lot of centrifuges and used them to enrich about a dozen bombs' worth of 60% uranium. That we know about; they may also have a secret stash from a parallel program we don't know about.
There's a final enrichment step and some other processing necessary to turn 60% enriched uranium into a bomb, but that can be done in a couple of months using stuff that can be hidden in a nondescript warehouse or a cave too deep for us to bomb. Or built from scratch by Iran's industrial base by the end of the year, or smuggled in from Pakistan if they can find an A.Q. Khan wannabe willing to take their money.
The only reason Iran didn't have a working nuclear arsenal five years ago is that they didn't *want* one; strategic ambiguity was working better for them. But I'm certain we could change their minds if we worked at it.
I am not certain, or even mildly confident, that we could render Iran materially incapable of building nuclear weapons in the near future. At least, not without using our own nuclear weapons.
Do you have a link to support your claim that they were ready to whip together eleven bombs?
The previous assessments claimed that they had 400–440 kg of 60% enriched uranium. It takes >60 kg to create a sustained chain reaction. ChatGPT tells me that it would take them about a week or so to enrich the uranium from 60 to 90%. But they'd also need the HE shaping charges ready for assembly. Usually, it takes some testing to make sure everything works correctly. North Korea set off a few duds before they got it right.
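For what it's worth, the "eleven bombs" figure is roughly consistent with the stockpile numbers discussed here, if you assume each weapon needs on the order of 40 kg of 60% enriched feedstock to yield one weapon's worth of 90% material. That per-weapon figure is a commonly cited ballpark, and an assumption on my part, not something stated in the thread:

```python
# Back-of-the-envelope: weapon-equivalents in 400-440 kg of 60% HEU.
# Assumes ~40 kg of 60% feedstock per weapon (assumed ballpark, not a precise figure).
FEEDSTOCK_PER_WEAPON_KG = 40

for stockpile_kg in (400, 440):
    weapons = stockpile_kg / FEEDSTOCK_PER_WEAPON_KG
    print(f"{stockpile_kg} kg -> about {weapons:.0f} weapon-equivalents")
```

That's 10 to 11 weapon-equivalents for the 400–440 kg range, in line with the "material for 11 bombs" claim upthread.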
The Iranian strategy has been to establish itself as a nuclear threshold state, similar to Germany. The thinking has been that the threat of going nuclear, if provoked, combined with strong missile defense would be sufficient to keep the US and Israel from launching a regime change war against them.
Was this a good strategy? This has been an internal debate in Iran's leadership for some time, according to reports. Various high-ranking leaders in both the US and Israel have been agitating for regime change in Iran for decades, yet it hasn't happened (until about a week ago) so for years some Iranian officials could argue the policy was working, especially as the US launched regime change wars against many other countries in the region. On the other hand, N Korea went nuclear and the US stopped bothering them, while Libya traded away their nuclear ambitions and got overthrown. Indeed the Israelis kept talking about using the "Libya model" against Iran during negotiations.
On the other other hand, the nuclear move meant N Korea kept the sanctions in place indefinitely. Many in Iran have argued for detente with the US, and eventually building a relationship where all sanctions could be removed. Getting the bomb would make that impossible, but having fissile material to trade away in a negotiation could help bring it about ... so long as they could establish friendly relations and avoid the "Libya model" being used against them. A lot depended on how well the US could be trusted, and US negotiations had a spotty track record.
The biggest factor seems to have been the old Ayatollah. After Saddam used chemical weapons in the Iran-Iraq war, he declared the use of chemical weapons and WMDs to be forbidden. Sunni Muslims might resort to that tactic, but the Shia were more righteous, and they would prove it - leading the Muslim world by the force of their convictions. As I mentioned, that view wasn't shared by all the leadership. Allowing threshold enrichment (without ever approving a bomb) may have also been a concession the Ayatollah gave to the dissenters.
Now that the old Ayatollah is dead, there's a question about whether his ban on WMDs will be renewed. The argument for maintaining threshold status is clearly lost - the reason for the strategy was to prevent a war that wasn't prevented. They know they can't trade away the fissile material for peace, given the escalating demands from the US, plus there's no possible way to negotiate with the US, since they've demonstrated a repeated willingness to employ perfidy during negotiations.
It's hard to say what the internal calculations they're making are, but the war has given strong arguments to the side agitating for nuclear deterrence and a diplomatic shift toward fully embracing Russia/China and abandoning all attempts at detente with the US.
“The Appointment in Samarra” ~ W. Somerset Maugham 1933
It goes like this:
A merchant in Baghdad sends his servant to the market. The servant returns home pale and trembling, telling his master:
“While I was in the marketplace, I was jostled by a woman in the crowd. When I turned, I saw it was Death. She looked at me and made a threatening gesture. Please, master — lend me your horse so I can ride to Samarra and escape her. She will never find me there.”
The master lends him the horse, and the servant flees.
Later, the merchant goes to the market himself and sees Death standing in the crowd. He approaches her and asks:
“Why did you make a threatening gesture at my servant this morning?”
Death replies:
“That was not a threatening gesture — it was one of surprise. I was astonished to see him in Baghdad, for I have an appointment with him tonight… in Samarra.”
Netanyahu has been riding the tiger for some time now. He's secure only so long as he is the nation's War Leader in a time of Existential Threat; when that ends, Israelis looking towards a postwar future may boot him as fast as the British did Churchill in 1945. Except Churchill wasn't facing criminal charges for corruption.
Maybe. Or maybe Israel turns into a permanent war state. Not good for the investment class, but a certain segment of the people will always see themselves as under siege from the world. They're happy to fight forever.
> but a certain segment of the people will always see themselves as under siege from the world
Especially when that fear is justified by historical precedent. I don't think their desperate attempt to claim agency over their future will amount to anything, but time will tell.
What do you mean by "Israel drawing the US" into the war? How did they do this? If the US gets involved in a war, isn't it rational to assume they did it for their own interests?
If you expect Iran to massively retaliate against the US if there’s an attack by Israel, and Israel is planning such an attack, it makes sense to take the initiative and attack on your own terms, instead of being dragged into a war on theirs.
Right! If we didn't attack there would be more missiles and planes in Iran to strike back at US targets. So hit 'em first. But I'm not sure these aren't fig leaves; we attacked because Trump wanted to attack, and with the recent 'success' in Venezuela, why not try it again?
Netanyahu has been lobbying US Presidents to help Israel take out Iran since at least Obama. It infuriated him that Obama pursued a diplomatic nuclear deal (the JCPOA) instead. And of course, *taking out Iran benefits Israel!*—at least in Netanyahu's eyes. Iran is an existential threat to the State of Israel. They've said they want to destroy the State of Israel and they've used their proxies against Israel.
OTOH, Iran was never an existential threat to the US, it was a regional threat at most. Sure, Iran could cut off oil coming through the Strait of Hormuz, but that's not an existential threat. It's a major inconvenience if it happens, but the US has been using carrots and sticks to keep that from escalating to where it becomes a problem.
The US gets no long-term strategic benefit from a Mullah-free Iran except that they'd potentially have extra oil and political cards to play against the Saudis. The immediate downsides of this Special Operation have outweighed the calculus that led previous administrations to continue their carrot-and-stick policies.
Depends on what you mean by "taking out". If the mullahs don't surrender, and it looks unlikely that they will, Israel will face new threats from them once they regroup.
Also, it looks like there's some tension between the US and Israel about their ultimate goals in Iran. In broad strokes, it looks like Netanyahu wants to bomb Iran back into the Stone Age (as he did with Gaza), but Trump wants a compliant dictatorship that will be beholden to him. No doubt, this is why we haven't seen Reza Pahlavi visiting either Netanyahu or the White House. An independent-minded Pahlavi serves neither of their purposes. IMHO, the best realpolitik endgame would be to let Pahlavi loose in Iran to start a civil war. That would keep the mullahs occupied for the foreseeable future. :-o
Sadly, I expect letting Pahlavi stir up a civil war in Iran would keep the mullahs occupied for two or three months, tops. A couple of months ago he'd at least have been a major nuisance, but too many of the local leaders he would be counting on to support him are now dead, their followers now mostly once burned and thus twice shy, and a whole lot of regime supporters with basically all the guns are now seriously pissed off and looking for an enemy they can kill.
Okay, this is unfair, it was the Secretary of State who said it! Maybe he tried to walk it back the next day, but now that just means we have two contradictory stories from the Secretary of State.
He said in the same interview that the attack would have happened anyway. My read is that the Israelis told the Americans about the supreme leader meeting with all the top people and the Americans pushed the timeline forward to take the opportunity.
Israel clearly wanted the war and clearly pushed Trump into doing it, but the point is that the US should be big enough to stand up for itself and could tell Israel to stop playing games if it wanted to.
Does it count as a "favor to Israel" if an Israeli leader who groks flattery and Trump-wrangling sold him on the idea that he'd go down in history as the Great Leader What Took Care of the Mullahs?
People sometimes make decisions against their own interests when they are properly pressured by other people through emotion, rhetoric, and other negotiating tactics. And the actions of "the US" are after all just the decisions of people, made in response to other people.
AI prediction: having handmade shoes or furniture is a status symbol. The same goes for craft burgers and craft beer: yes, they are better than McD anyway, but there is also a status-symbol effect of not being one of those proles. The same will happen with AI: "my lawyer does not use AI" will be a status brag, which means only 95% of us will lose our jobs, not 100%.
I think it's much more likely that lawyers will still exist to supervise AI outputs, and that this will either be required by law or practically required for any serious legal work.
The answer will be determined by how hard lawyers lose when the AI makes a mistake.
If it's treated like a typo in a contract ("oops, that was a typo in that contract, I want to amend it past the due date, it's a minor change") and courts let it slide, treating AI errors as no big deal, then AIs will take over and squeeze people out.
If people lose major cases because of AI errors then the humans will stay relevant.
How will they learn to do that? Explain a high schooler becoming a lawyer auditor without having ever been a lawyer and by going through school using AI as a crutch.
They don't need to learn anything. It'll be a do-nothing job where they get paid simply for having a law degree and being a member in good standing of the bar, purely rent-seeking.
Well I'm envisioning this happening for the first few decades after AI is doing most/all legal work and during that period we'll still have a lot of pre-AI lawyers around. After that period I'd expect we'd either gradually loosen requirements or things will be different enough that there won't really be lawyers as we understand them.
I actually think we’ll want human auditors beyond the time where it’s practical as a way to preserve understanding of what the AIs are doing. But the training for those people, the legal auditors, might be different than what we consider practicing law today. It might be more about developing the right sense of legal ethics, like studying the Talmud.
I think you are assuming businesses make status-seeking decisions, just like people do. I don't believe this is true for the overwhelming majority of businesses, except for a few monopolistic or near-monopolistic companies that can afford to waste money doing so.
I say it's a waste of money because whatever amount a big company spends on status-seeking services (non-AI lawyers, non-AI accountants) could be spent more effectively on personal status symbols for the people actually making the decisions (I'm sure Lamborghini will eventually release AI-enabled cars).
Why spend an additional $X on raising the status of the company by hiring non-AI legal services (or other), when you can spend $X on raising your own personal status?
I've never heard of a business using higher status service suppliers (legal work, accounting, software development, etc) because it would raise the company's status in the eyes of its customers. In fact, 99.9% of the time I have no idea who those service suppliers are for the businesses I interact with, or buy from. But maybe that's just me.
It definitely happens for products (fair trade coffee, non-blood diamonds, clothes made without child labor...) so it could easily happen for services too, if it doesn't already.
Most of the world has been consuming products and services produced by intricate supply chains for several decades now. Do you know of any examples of customers bragging about the service suppliers (legal work, accounting, software development, etc) of the products or services they consume?
In fact, now that you mention products that go through international supply chains (fair trade coffee, etc.), I can think of something similar but not quite the same. A large share of tech support for service industries (telcos and banking come to mind) has been offshored in the last couple of decades to same-language countries where labor is cheaper. I've heard some people complain about the quality of the newly offshored support, and I can imagine some customers might have decided to switch providers just to avoid interacting with the suppliers of the offshored support.
Frustration could be a motive for switching providers, but it's hard for me to imagine anyone _bragging_ about the locally sourced tech support of their cellphone provider.
Yeah companies with local tech support are popular because the local tech support is much better, not because it's more ethical/higher status.
In the US, it's pretty common for companies to advertise that they're treating their employees well ("all our employees have living wages, healthcare, etc.") but that might also be popular because well paid employees usually do a better job, not because of the ethics/status of it.
I think it is a very destructive idea to equate seeking power with seeking the opportunity to abuse power. By that standard no one should ever run in an election. I mean it in the sense in which it is said that men who seek out younger women are seeking power. Of course they are, but why is that a bad thing? Power can be wielded responsibly, like how a good parent does it.
Note that I do not seek out younger women, because it is possible to find women my own age who also want a parent-like partner (think BDSM), and that makes things easier. So I don't have personal experience in this.
In my (BDSM) experience, power just feels great even without seeking to abuse it. It is... invigorating. Energizing. A feeling of pride and achievement. I think anyone who has ever played strategy videogames knows that feeling when your Empire of Ireland now covers half of Europe. Parents also know this. Dog owners too. So basically most people, I guess.
On the opposite side, seeking powerlessness can also feel very liberating. Less power, less responsibility, equals less stress.
Moldbug has a line about Maslow's Hierarchy of needs. Except at the top of the hierarchy is a gleaming white pyramidion. Maslow called it "self actualization". But Moldbug says it's political power. And it's more addictive than the purest Colombian cocaine. So yes, I acknowledge that the other commenters have a point.
But simultaneously, I agree with you in the abstract. Libs cannot imagine power except in terms of tyranny/coercion.
I'm a parent, and I *don't* enjoy the power aspect of it, in the sense that I don't enjoy exerting my power to *make* my son do something he doesn't want to do. I'm much happier when my son and I are on the same page, doing stuff we both agree on.
"Hey buddy, want to go to the park and throw a ball around?" "Yaaaay!!" feels much better to me than "Have you done your homework?" "But I'm tired, I don't wannaaaa" "I know you don't want to, but you need to do it now, it's your responsibility" etc.
In my experience, the trouble with having power over kids is that (at least when they're little) they don't always listen to you. So you have to constantly be disciplining or forcing the issue, which is exhausting. When I tell my kids "It's bathtime, get in the tub" and they drag their feet, or get distracted halfway to the tub and run off somewhere, or break down in tears halfway through taking their clothes off because their sibling took the bath toy that they wanted and it's not fair, I don't feel especially powerful. To be a parent is to have power de jure, but not always power de facto.
You have experience with consensual power dynamics. Politics is, almost essentially, about nonconsensual power dynamics. There's nothing wrong with liking power, but the fact that you think your BDSM experience has relevance for political judgement means I probably wouldn't want you as my leader.
Having power over people feels good, but someone else having power over you feels bad. You have a small throwaway line at the end, but political power is not the kind of thing that goes away when playtime is over.
I can't tell if your claim is something like "I think BDSM should be less stigmatized" vs "Why would you think it's a negative attribute of a political leader to get a libidinal thrill out of tear gassing protesters? Just because they enjoy it doesn't mean they would do it."
If it's the first, sure, whatever. The implications of social norms around sex are really complicated; you might have a point. If it's the second, it's pretty obvious why (all other things being equal) you don't want the person in charge of you getting excited at the idea of you groveling on the floor.
No, it's more that once you've said you get a sexual thrill out of having power, any subsequent denial that you intend to abuse it is disregarded out of hand.
I think that's part of it. The other part is hubris: the person who is convinced he will not abuse his power is one who is likely to, because he does not have mental safeguards up against gradually slipping into the abusive mode.
Same for people who are convinced they would not have become Nazis if they'd lived in Germany in the 30s, etc. Like one of the basic lessons of history is that generally people are not born monsters, they become monsters because they do not question assumptions and are sure of their own rightness.
Why? This does not sound logical at all, given that abuse is not necessary for that thrill. After all, isn't that the same thing everybody who seeks political office or wants to get promoted into management has: a desire for power, but not a desire to abuse it?
Imagine an alcoholic telling you that he doesn't want to drink the whiskey – perish the thought – just hold it in his mouth and swish it around, and it's illogical to think otherwise because it's no different from people doing that with mouth wash.
It's a clear conflict of interest. It is very difficult to believe you would make sober political decisions when you get a thrill out of exercising power. You would have a bias towards exercising power even in situations where it's not actually needed.
Now, if it were that you get a thrill out of making sure all your people prosper, I think that could be a good trait in a leader.
I am not sure I should believe in testosterone much. I am 47, so my T levels have been decreasing for 10+ years; I don't exercise, my diet is not steaks with eggs, and my alcohol and nicotine habits are not good either. My T levels are likely low. Yet I feel more masculine than ever. Far more sex drive. More confident / "dominant" behaviour. More secure, zero doubts or insecurities. Way less validation-seeking. Never afraid of other people judging me anymore, because now I feel like I am judging other people. I am probably now close to behavioural "alphaness" and it is clearly not testosterone.
Getting older does give you more confidence. When I was in my twenties to thirties, I was terrified of 'causing trouble' or being inconsiderate.
Forties onwards, it was "ah, to hell with it, if I'm being pushed around I'm going to complain and be loud". You lose a lot of inhibitions about being nice, hence (I guess) the Karen stereotype of middle-aged women.
While I agree that testosterone levels are not the main predictor of confidence or masculinity (or even an important predictor, depending on your terms), as a matter of principle I would say that if you're going to use personal anecdotes to make a claim about a biomarker and behavior, you should at the very least have measured that biomarker.
The typical thing I was told my whole life was that within healthy levels, which are a very wide range, there is no particular benefit to having higher testosterone. There is some weird cult movement building around testosterone on social media, and lots of marketers trying to profit off middle-aged men's anxiety, so I don't trust any recent discourse on this at all.
If you aren't experiencing any difficulties, it's likely that the natural decline has occurred within the healthy range and that, as I said, within the healthy range it doesn't really matter. Going from 400 to 600 doesn't give you 50% boost in sex drive and 50% reduction in neuroticism or anything like that.
Happy your 40s are going well; mine have not been like that mentally at all. Physically, after turning 40 I began jogging 5Ks, am close to benching 225, and have maintained a pretty healthy weight (and my T was at the high end of the range last time I checked, a few years back). Mentally, the death of my youth really starts to show in the mirror, and I don't think I ever really prepared for being "old" despite many years of obsessively worrying about it. In my experience watching older people, and surveys that seem to bear this out statistically, most people don't get over the hump until their late 50s, and then happiness improves again. I always figured that was because if they survived middle age eventually they gave up and just became blissful morons. If you're feeling like you are at 47, you're ahead of the game and found a better way, so good on ya.
How do you square this idea with the widely reported subjective and observable benefits of supraphysiological levels of testosterone in androgen users? I am skeptical of the idea that all those cases of significant upside were in people who previously had below-normal healthy levels.
> How do you square this idea with the widely reported subjective and observable benefits of supraphysiological levels of testosterone in androgen users?
Yeah, speaking as somebody who's been in competitive athlete, powerlifter, and bodybuilder circles for 15+ years, and who has both personal experience and a wealth of experience in close friends, there are huge and significant subjective and observable benefits of supraphysiological effects of T (and other androgens).
For this (to be fair) fairly selected population of fit, competitive men, there are gigantic effects on mental health, energy, joie de vivre, libido, and (yes) competitiveness and achievement.
I even wrote a post about this for the ACX not-a-book-review, covering the personal and social benefits, negative side effects, monitoring plan, and studies on it:
To clarify, I don't doubt that exogenous androgens have the effects we see in professional athletes: building muscle, decreasing recovery times, and enabling hypertrophy that would not be possible normally. If that's what you want and you can live with the side effects, you probably knew where to find it. However, that is not how the people pitching TRT as a cure-all are selling it. They are telling middle-aged men who have T levels in the normal 300-800 range that having *more* T will fix a litany of problems, from their sexual performance to their mood and energy levels generally. To be fair, I am not an endocrinologist, but I have talked to an internist about it, and to other doctors over the years, and skimmed some NIH papers. Everything I've read says you aren't deficient unless you're under 300, and that trying to bring a person at the low end of the normal range into the high end doesn't accomplish anything. People's bodies are capable of being equally functional at all those levels.
I suppose it's possible that somebody who was normally testing near the high end and suddenly tested near the low end might have cause to believe there was some underlying disease resulting in androgen depletion, and in that case it would seem worth looking into. But if somebody's body always ran near the low end and is still within normal, that person would not have been offered TRT to my understanding.
Thank you. It did not always go well, I had a period of panic attacks which switching to a much less stressful job really fixed.
On mental preparation for old age, I think it really depends on how you define your identity. If you define yourself as "active, attractive, does stuff", then it can get hard. Since I mostly understand myself as "wise, knows stuff", these things just keep improving.
Of course on that level it can also be hard to keep up (new tech, new knowledge), but I do not feel that. Assembling a computer in 1990 was harder than just turning on a phone and tapping an icon. New science, or knowledge in general, gets incredibly shallow these days IMO. The most important psychology book for understanding my life is from 1979 https://en.wikipedia.org/wiki/Robert_Stoller and it is psychoanalysis, and yes, that is supposed to be "unscientific". It is indeed, if science is defined as the drudgery of putting p-values on boring questions. The old psychoanalysts dared to think boldly, even if they had no formal evidence, peer review, and all that. So knowing old books is in fact a knowledge advantage. The 1913 Britannica is a fantastic resource.
As for looks, well 90% of looks are clothes anyway.
He highlights that in spite of LLMs reaching human parity for text generation back in 2023, increasing capability has not resulted in a superhuman writer. No incredible work of literature, not even a top tier essay I would say.
On a related note, even while Anthropic says humans are no longer writing code for Claude Code, they're still hiring software developers like crazy, since you need to be a skilled software developer to use Claude Code properly. Point being these things seem destined to remain tools and not human replacements.
I'm not impressed by Hoel's argument that AI didn't improve the quality of the best books published in 2023 - 2025. I don't think LLMs can write better books than the best authors today, and they certainly couldn't in 2023. Why would you ever expect LLMs to write better books than the 100 best English writers in the world in 2023 - 2025?
Because many are claiming this is progressing so fast we're getting the singularity or major economic upheaval by 2030. His point is that prose generation has not progressed much (or at all), even though that pretty much is the central skill of LLMs.
Still, it sounds like exactly what you would expect if AI were to pass human intelligence around 2030. If anything, it supports the position he is arguing against.
> The actual effects of LLMs on publishing were that: (a) the average book got worse, (b) the top 1,000 books in each category improved somewhat, and (c) the top 100 books in each category didn’t change in quality.
If AI is still worse than humans now, we should not expect it to improve the best 100 books. And we should expect low-effort books written entirely by AI to have low scores. If they had better than average scores, it would mean AI already surpassed us at this skill.
> His point is that prose generation has not progressed much (or at all)
GPT-2 couldn't write two paragraphs without drifting off topic or writing something nonsensical. By ChatGPT 3.5, it could do this easily but would struggle to write a whole article that's internally consistent. Now the models can easily write an article, but not a book. GPT-3 couldn't write a rhyming poem, and 3.5 generally could but would struggle to match a given rhyming scheme or deal with other constraints. They've progressed a lot.
> For now. I don't think it will take long for that to change.
Yeah, I'd argue it's already on the horizon behind us. I'm not a software developer, but have several friends who are. I was talking to one about the depth and breadth of some of the stuff I've done with Codex for fun, and he's like "congrats, you're a developer now!"
The scaffolding and frameworks are already good enough, particularly if you use the /superpowers plugin and/or if you're thoughtful about triangulating between different AIs on the important stuff (architecture, TDD tests, etc.), that you can basically do full-stack development as a non-developer already.
I doubt it. There is a long history of attempts to get rid of software developers. Every attempt fails because the devs are often the only people in the org that can think about business requirements with precision. Non-technical people with a muddle in their head prompting an LLM or AI will not suddenly gain the ability to think clearly about what they're trying to do.
No, those attempts failed because software developers were required to write the software. Writing code will soon go the way of using punch cards. The high IQ engineer will still be valuable, of course, he just won't be an engineer anymore.
Right, that's why I said 'for now'. Eventually they won't need the devs. We're only a year or two into the AI-writes-code revolution. Imagine where it'll be in 20 years.
That’s because it’s hard to RL for writing. But if we focused on it as much as we do with coding, we’d have phenomenal writing already. I have no doubt.
It's possible that clichés get in the way. Base LLMs (before RL) naturally love clichés, given their frequent appearance in the training data. This is great for coding - using patterns that have been done a million times before is good practice, because it makes the code more readable ("I've seen this before") and the patterns are known to work. But people don't want to read clichéd writing, and I don't know if RL is powerful enough to stamp this out of the base LLMs.
There's also the possibility that even if you train an AI to not overuse *existing* clichés, humans may begin to recognize patterns in that particular model's outputs, to the extent that they quickly become clichés. Again, I don't know if RL is powerful enough to make an AI lack human-detectable patterns in its outputs. (I wonder if Go players recognize clichés in Alpha Go's output?)
There must be something that can be learned from image and video generation. Maybe you use a strong LLM to craft the logical structure of the text, and something else for styling it.
I think that's basically it. The financial incentive in aggregate is tiny compared to coding.
LLMs are used a lot for short writing like emails, presentations, marketing copy, etc. But writing a top-tier essay is hard, and I don't think the financial incentive compares to coding.
Most other big fields have also not seen nearly as much investment as reasoning and coding abilities. Audio and visual input, audio and visual generation, and real-time audio conversation have certainly received investment but the flagship models have always focused mostly on text. These areas have big potential markets, so I think it says something that they're not the main focus.
It also says something that LLMs don't automatically reach stellar performance in specific fields if there's no financial incentive for someone to just grab the model and force it to "git gud" at that specific task. This aligns with Dwarkesh's critique.
Dwarkesh's critique? I don't remember seeing that.
This isn't cut and dried. LLMs are pretty good at tasks that AI companies haven't focused on. They scored better than human authors in a flash fiction competition by Mark Lawrence (https://mark---lawrence.blogspot.com/2025/08/the-ai-vs-authors-results-part-2.html). They were often beating humans for medical advice even before OpenAI put more focus there.
Is the question why aren't they even better? Or is it why are specialized training and instructions able to help at all over the general training?
Wind the clock back 6 or 7 years, and describe today's LLM capabilities, and EVERYONE would tell you that of course that is AGI. Obviously. These are absolutely astounding capabilities. And yet, my relative who works in copywriting still has a job when that's exactly the kind of job you would expect would be absolutely eliminated in 5 seconds by technology half as impressive as what we see today. I'm noticing that I'm surprised by how impressive everything looks, and how little disruption I'm seeing relative to what I would have expected 6 or 7 years ago, given today's capabilities.
Part of it is that the people building the AI are coders and have better ideas for how to maxx capabilities with RL. We also got creative types working on image and video generation. I don’t think there has been a push for outstanding writing to the same degree. Not that I’ve heard of anyway. Pretraining, plus some RLHF
Given how widely AI has been adopted, and how much money is involved, I find it somewhat surprising that, if there is a lot of value to unlock in producing really good writing, it hasn't been sufficiently attended to, given that the fundamental task LLMs perform is producing writing.
My own anecdote: a relative of mine is a copywriter for a major streaming network; the kind of person who writes episode summaries and the like. The exact kind of job you figured LLMs would have replaced a long time ago, and which you wouldn't have predicted would have required AGI let alone ASI. Yet he just got a promotion.
I also work in something that is basically copywriting - we have access to in-house AI tools but no one uses them despite them being heavily promoted. The quality of writing is often pretty poor so the tools would definitely increase productivity.
Personal anecdote as a translator: workloads generally seem to be shrinking, and a major client just announced they're moving to a more MT-oriented workflow and cutting down on human translation/review. On the other hand, I just completed two weeks of heavy work on the very traditional sort of contract / terms-and-conditions translation that one would think would be among the easiest sorts to do with MT (unless the end client did not want their business secrets going to major AI companies through the use of public-facing models, that is).
AI art is also tricky legally, as it can't be copyrighted. So you at minimum need a human coming up with your character designs and such, or else you can't copyright them and anybody could legally make merch of them. I have a brother who is an art director for a card and board game company, and they are still using human artists, in great part because of fan backlash but also because using non-copyrightable assets for the thing you're selling is not ideal.
My brother has reported that they are getting a lot of submissions from freelance artists that are clearly AI generated though. He has to do a lot more filtering these days. Sometimes they hire a freelancer to do X number of cards (the board and card game industry almost always use freelancers rather than staff artists), and they come back AI generated and he needs to find a new freelancer.
Reminds me of the saying that what you're really paying a premium for with a lawyer or an accountant or a financial planner or a doctor is not the raw advice (which you can often now get from ChatGPT); it's the right to sue them.
As another data point, I work at a company where the devs are trying to have a single native speaker handle a large game's localization with the help of LLMs. Maybe this is a skill issue thing, but the sheer volume of inexplicable weirdness that the latest models still output regularly makes it hard to feel confident in leaving everything to the machine. We're using custom tools, but the results are still frequently mediocre to bad, and that's ignoring the times it randomly outputs in the wrong language, forgets half the termbase, or does something that can only be described as completely inexplicable. Also, it's not funny, at all.
(Also, tangentially related, but I fed some local documents about the company's leave policies into Gemini and asked it to summarize the contents, and it straight up hung after filling the entire screen with thousands of rows of the same word, like something out of a horror movie. This is with Pro 3.1, by the way, so when Zvi or other blogs write that hallucinations are a thing of the past now, at least one of my eyebrows cannot help but rise slightly.)
I used to give to the UUSC a long time ago, before GiveWell. Now I give to top charities.
It seems to me that UUSC is a mix of (a) "big international NGO", the type (like Save the Children) which has no understanding of cost effectiveness, let alone transparency, and (b) "woke" political causes about whatever is in the liberal politics news bubble.
My impression is that if you try asking UUSC to offer a GiveWell style analysis, they won’t even understand the question. But I could be wrong!
I've never met a Unitarian Universalist before. If you don't mind me asking (I'm genuinely curious), what are the UUs actually about? I'm Catholic, and I must admit your religion seems incredibly confusing to us (do you even call it a religion?).
We do call it a religion, or sometimes denomination, since it emerged in part from the “Unitarian Christianity” movement.
I would describe Unitarian Universalism as a religion that has no dogma or creedal test, a place where you can get the benefits of religious community without having to take on specific beliefs.
I think the main thing I'm confused by is: if it's a religion with no dogma or creeds, how does that even work? I sort of assumed even the UUs had *some* overarching metaphysical value claims, like "the inherent value of human life" or "the search for truth". What would happen if a member of the church didn't agree with "all human lives have value and dignity"? This isn't a gotcha or anything, I'm just genuinely not sure how the UUs square that circle.
I'm not sure I would call that a religion other than for historical reasons like you mentioned (which is not necessarily a bad reason). Wouldn't it be more similar to a... I really don't want to be unkind, but it honestly seems more like a religion-themed social club than an actual religion? For an example of what I mean by religion-themed social club, consider the freemasons, which is not a religion (they say this explicitly) but ironically still has creedal tests and dogmas and rituals and specific metaphysical beliefs, where the UUs somehow don't.
I think the main thing I’d say is, don’t get hung up on labels. The categories are made for man, not man for the categories, etc.
There are 7 very broad UU principles such as “the inherent worth and dignity of every person”; but members are not required to believe or endorse the principles. The _congregations_ affirm and promote the principles, and if a congregation were to object I doubt they’d be evicted from the UUA.
hm, I guess. I think my issue is that you don't get to say the labels are both important and not important, but I accept this as MY problem as a member of a religion which puts unusually high value on labels (we have literal millennia-long feuds over slightly different descriptions of the trinity). My guess is that my impression of possible hypocrisy is just outgroup homogeneity bias or something.
I'm personally an atheist. I agree with all their principles and I see the benefits of community. Pastoral care, group singing, prayer / meditation are all great things. Shouldn't require signing on to ancient creeds.
well that's what's confusing me. I used to be an atheist, I totally understand the sentiment you express...I just don't understand how an atheist joins a self-described "organized religion" and handles the obvious contradiction there, without saying "yeah no it's not really a religion anymore, we just call it one for historical reasons." I don't believe that's actually what most UUs think though.
Maybe the UUs are a combination of old school universalist christians (who provide the institutional core), progressive vaguely spiritual secular humanists (who fill out the congregation and treat it more like a social club) and a miscellaneous fringe of Buddhists/pagans/etc (who are generally just there for the vibes) who don't really fit 100% but are tolerated nonetheless because tolerance is the core belief that everybody agrees on? That's my current model of the UUs but I'm not confident in it.
I also don't see what the difference between principles and creed is supposed to be -- both boil down to non-empirical assertions about the world, even if "all human lives have inherent value and dignity" is a few steps removed from the underlying metaphysical claims it's based on.
"Our dynamic and impactful work threads together justice education, member advocacy, and financial support — all to bolster grassroots communities in their struggle to expand social justice."
Looks like a generic left-wing/Liberal spiel that doesn't really mean anything to me. What part of this seems "EA" to you?
We acknowledge that formal institutional research often perpetuates extraction and colonialism. In contrast, UUSC’s methods involve community-based participatory research, which consciously centers the community experts, following their leadership and ownership of the entire research process.
What does this look like?
Respecting experience, local knowledge, and traditional wisdom with communities to identify challenges and drive grassroots solutions
Developing research reports and publications with partners that will directly impact their work
Collecting and analyzing information in creative ways
Using high-quality data to advance advocacy campaigns
I think probably if enough companies agreed to pause, they would bring the government in to make it official. This probably wouldn't work under the current government, but it might work in a government conditional on things having gotten crazy enough that most companies agree to this.
I think the problem is that under current law, agreeing to a mutual pause is already a legal risk. So until we get some legal action, e.g. OpenAI can’t say “we would pause if others do”. For that reason I don’t think we could get the pause pledges in first, and then the political action after.
I could be wrong but I don’t think this perspective contradicts what Claude wrote?
People following the UAP transparency topic have been concerned the past week over the disappearance of Air Force Research Laboratory’s retired Gen. Neil McCasland from his outdoors jog in Albuquerque (having left his phone behind at home). Local law enforcement, the AF Office of Special Investigations (AFOSI), & even the FBI have been mobilized to search for him.
In the late 2010s, a leak established General McCasland as Blink 182 frontman Tom DeLonge’s military contact in his discussions with outgoing Obama counselor (& former Clinton CoS) John Podesta on UAP transparency initiatives.
The content platform incubated over those communications, To the Stars Academy, provided Lue Elizondo his soft landing in lifting the lid on the Pentagon’s UAP research program across Keane & Blumenthal’s now famous NYT piece on AATIP in 2017.
McCasland’s disappearance, in light of the Disclosure-themed repartee between Obama & Trump two weeks ago, hints at best at intrigue, and at worst at foul play.
The search is very much still on, as the Bernalillo sheriff's office refreshed on Friday: https://x.com/BCSONM/status/2030036477741257075 . I relay their message that the public shouldn't speculate, as it hinders their search efforts: "people who attempt to develop their own theories based on the limited information available to the public and this makes finding Neil harder."
So you see, then: *you* mustn't speculate on the general's disappearance, but if you know which ravine he has fallen into, do let them know.
Perhaps it will ultimately be found that neither intrigue nor foul play applies. In any case, there is practically no constituency that would not wish to see him located in sound condition.
Well, it's safe to say that this story has 'gone national' over the past week, with outlets covering it numbering into the hundreds. Coverage includes NYT, with CNN & FoxNews both giving it the on-air coverage treatment. (all trivially verified w/ a Google search)
Update on search for Gen. McCasland, as of Wednesday, 12-Mar
[No material change, yet a decent round-up summary of his wife's fb post, the UFO thematic background, Trump's post & tweet directing release of UFO documents, McCasland's education, and his association with Tom DeLonge's To the Stars Academy.]
Correct. Limiting our information universe to that Cincinnati article, the extent of her pushing back against misinformation vis-a-vis the UFO topic is:
"Neil does not have any special knowledge about the ET bodies and debris from the Roswell crash stored at Wright-Patt."
which was on the heels of affirming
" "It is true that Neil had a brief association with the UFO community through Tom DeLonge, former frontman for Blink-182 and founder of the organization To The Stars," Wilkerson wrote. "Neil worked with Tom for a bit shortly after his Air Force retirement as an unpaid (Neil's choice) consultant on military and technical/scientific matters to lend verisimilitude to Tom's fiction book and media activities." "
The rest of her pushback was very helpful indeed, asserting that he had no dementia, as some circulating misinformation had suggested.
Suppose I do: Then what bearing would asserting that have on the above analysis of your shared article and the statement you highlighted? Which elements of counter-narrative information within it did I ignore (which would be evidence of confirmation bias)?
Have you considered the possibility that you are in a confirmation bias trap regarding looking for confirmation bias traps in others regarding UFOs?
You don't think it's worth discussing that she sees no reasonable motive for foul play? Just to name the most important of her objections.
And just for you, I've broken my rule of never visiting Facebook. I hope you appreciate it. Here is a quote that wasn't mentioned in the news post:
"Though at this point with absolutely no sign of him, maybe the best hypothesis is that aliens beamed him up to the mothership. However, no sightings of a mothership hovering above the Sandia Mountains have been reported."
Good on her to be able to find humor in her situation, and she uses it to ridicule the supposed UFO connection to his disappearance. Does that not make you update against your suspicion? If not, then I do believe you suffer from confirmation bias.
Correct, which is why I said "informally" and "in principle". I think it's enough of a victory that the StopTheRace people aren't going to protest, partly because I think their model of these commitments is that they can be used to build firmer commitments later on once there's common knowledge that most labs agree to them.
I can appreciate that there's value to society from prediction markets when they have people betting on issues such as elections and company performances.
But is there any value in having markets on whether Jeff Bezos would attend the Super Bowl? That's as useless as sports betting or predicting Oscar winners. To be clear, I don't think that sports betting should be illegal, but it should be more heavily regulated than prediction markets (and I don't think Kalshi and Polymarket should be able to do an end-run around states that ban sports betting).
Some mild push-back: one potential benefit of having silly questions is for training purposes. A lot of important questions have long time horizons, so they offer limited opportunity for practicing skills related to prediction.
Does anyone know of reliable short-term sources on African death rates? Like, how many people died in Kenya in 2025?
I'm asking because last year there was a lot of drama around USAID getting shut down and many claims that millions of people would die a year due to this. Those were all projected figures, however, and I'd like to update on the actual numbers. The numbers involved were large enough that they should show up in official statistics. Unfortunately, what I'm finding is a giant lag in getting official death statistics, like 3-4 years.
Let me give an example:
-Kenya should be a good example. It's a poor African country that I remember showing up in some USAID documents I looked at.
-We should be able to observe millions of additional deaths in the aggregate statistics. Kenya had a crude death rate of 7 per 1000 people in 2023 (1) and a population around 55 million, giving us annual deaths in Kenya of ~400k. Even 50k-60k additional deaths in Kenya would be a noticeable spike. You can even see the 2021 death spike from Covid.
-However, the World Bank, a pretty authoritative source, lists the most recent data as 2023, which means we won't see the 2025 numbers until 2027-2028 and we won't see the 2026 numbers, what's occurring right now, until like 2029 or later.
-I can't find better ground data. The UN Population division doesn't have anything, their data is as old as the World Bank's, and the info from the Kenya National Bureau of Statistics is even older (2).
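The back-of-envelope arithmetic in the Kenya example can be checked in a few lines. This is only a sketch using the rough figures quoted above (a crude death rate of 7 per 1,000 and a population of ~55 million), not fresh official data:

```python
# Rough check of the Kenya numbers quoted in this comment.
# Both inputs are approximate figures from the text, not official 2025 data.
crude_death_rate = 7 / 1000      # deaths per person per year (Kenya, 2023)
population = 55_000_000          # approximate population

baseline_deaths = crude_death_rate * population
print(f"baseline annual deaths: ~{baseline_deaths:,.0f}")  # ~385,000

# A hypothesized 50k-60k excess would shift the crude rate noticeably:
for excess in (50_000, 60_000):
    new_rate = (baseline_deaths + excess) / population * 1000
    print(f"+{excess:,} excess deaths -> CDR ~{new_rate:.2f} per 1,000")
```

So even the low end of the claimed excess would push the crude rate from 7.0 to roughly 7.9 per 1,000, a jump comparable to the visible 2021 Covid spike, which is why it should show up in aggregate statistics if it happened.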
A lot of rationalist writers, like Scott, Ozy, and Bentham's Bulldog seemed to get radicalized by this last year and I didn't, mostly because they trusted certain sources and I didn't. And this seems like a great spot for testable predictions; we don't need to rely on fancy analysis, the impact is big enough that it should be observable in crude death rates. Heck, Gemini gives the total deaths in Africa in 2022 as 12 million; we should trivially observe a spike here. If they're wrong, I'd like some of my favorite writers to update towards certain sources being untrustworthy and if those predictions end up being true, I need to update some of my priors.
So, does anybody have a good source? It seems like we should have some idea whether millions of additional people have died in Africa in the past year and...I'm not seeing it.
We can hope not; that was one of his worse episodes, even for him. Rather than blame Trump and demand midterm changes, he should have asked his viewers wtf they are doing about it, and how about getting off their butts and donating to solutions. It was basically a partisan hatchet job masquerading as a human interest story. Like, all his millions of viewers don't have to watch kids starve for the next year if they actually cared (they don't, btw).
That's awfully cynical. Of course they care, but there's a cost to dividing attention away from the Epstein Files or the War in Iran (or whatever else Trump happens to do in the next few months), which are more unpopular and more likely to cost the Republicans in the next election. It's simply a question of triage.
Neither Epstein nor Iran have anything to do with Oliver's base writing checks to EA charities right now, as in today to make up for 100% of the USAID shortfall if they genuinely cared.
I like Oliver on some issues; when he nails it, he nails it, but he's like a broken clock, and honestly I don't think I've ever seen him effect meaningful change. Same as when he cried about SNAP cuts and research cuts: if the good people of California really cared, they could demand their state legislature increase state taxes tomorrow so Cali could make up those SNAP and research shortfalls. Nope, easier to whine and blame DC.
It's feigned impotence over what's really backdoor pandering for the midterms, because Oliver and his base simply don't like Trump; it has literally nothing to do with starving children. Google tells me Oliver has a net worth of $80 million; want to bet how many starving kids he cares about so much he could save with $75 million of that?
Another source that might be able to measure the impact is the DHS survey. The USAID drama also affected DHS, but they've found other sources of funding and continued doing their work:
"When the DHS Program closed in February 2025, new funders stepped in to sustain its work, while stakeholders mobilized to explore long-term solutions—ensuring this trusted source of global health data continues to inform decisions and improve lives worldwide." (https://www.icf.com/clients/health/survey-national-health-data-collection)
The surveys are carried out in many African countries and include child mortality data, attitude towards HIV, parent mortality data, etc. (*).
The last survey report I can find is from a 2024 Malawi survey, published last February (**). Assuming the usual delay between survey and publication is around a year and a half, we should be seeing reports with 2025 data in a few months. Granted, this wouldn't be accurate mortality data, but it might be enough to see an effect if there is one.
Also, there's this quote from the Malawi report: "Trends: The percentage of children under age 18 with one or both parents dead increased from 9% in 1992 to a peak of 13% in 2004 before decreasing to 9% in 2024". According to Grok, the peak of HIV-related deaths in Sub-Saharan Africa was the period 2004-2006. I'm not 100% sure whether parent mortality correlates with HIV-related mortality, but it sounds like it might. That would be an index worth watching in the next reports.
Unfortunately I think the available statistics are pretty useless. I've heard that GiveWell has tried to assemble data to measure the effects of their interventions on SSA mortality outcomes, and mostly concluded the data is not available, and so the best they can do is generalize from RCTs etc. Though the specific cases I recall were more in west Africa; maybe east Africa has better statistical collection.
This suggests maybe a more viable approach is to find some population that is already studied (e.g. GiveDirectly's sponsored villages?) and try to compare years in that setting.
I think you're directionally correct on this. I wouldn't be surprised if the death rate is slightly higher, but I think the amount of slack was way higher than advocates thought, and African governments are likely pulling more weight now, combined with USAID having overestimated its impact.
Replying because I'm interested in notifications on this. But doesn't David Oaks have an interesting recent article about how we have terrible population statistics in many, many countries?
Will they be protesting in front of DeepSeek's Hangzhou offices as well? Or what about Kimi's offices in Beijing? It seems like there's a significant level of interpretation in what constitutes a "major" AI company that it would be beneficial to define more explicitly.
DeepSeek and Kimi don't appear to be doing frontier AI; they're waiting for US labs to do it and then replicating what was done. If our labs stopped advancing the ball, it seems unlikely those Chinese companies could surpass us anytime soon. That doesn't seem to be what they're built to do, and they definitely don't have the talent level there that we have in ours.
When we suggest a pause to AI research for safety we get all these "but China!!" rejoinders. Well, OK, show me your grounds to believe China actually could get there in my lifetime, because I'm not seeing them do anything but steal technology as usual. To the extent this is a "race" at all, it's Daytona, and they're just drafting behind the leaders with no plan to maneuver past us.
Luckily, this is now a question we can put a quantitative number on rather than having to resort to "doesn't seem like" levels of qualitative thinking.
DeepMind is in second place, behind OpenAI, in citations/author/year. Huawei, Tencent, and Alibaba are fourth, sixth, and seventh in raw AI/ML publications. Discount appropriately based on your faith in Opus 4.6's research abilities, but the trend is clear.
Regardless of the data, think for just a second about what a pause would entail. Would all current AI talent - including the roughly half of current American AI lab employees of Chinese ancestry - choose to stay at their now-useless jobs, unable to pursue their otherwise massively in-demand profession? Or might they go work for those Chinese companies that would now be the only ones actively pursuing research?
The rejoinders are common because it's the fatal flaw of the plan. The sooner the pause factions accept it, the sooner a more realistic plan can be developed - probably one that relies on international treaty frameworks.
I don’t know if this applies here, but China has definitely had a lot of scandals about artificially high citation numbers. I don’t think they translate directly across cultures.
That chart is a 13-year (2010-2023) representation of research output, which the AI that generated the chart concedes is a poor proxy (in ways that both help and hurt the position you want to advance).
The leading edge of frontier research over the past couple of years is clearly all US-based companies, with DeepSeek using a fast-follower approach and Kimi being noteworthy primarily for being the best locally operable AI, a path that has great business utility for the future but isn't relevant to this discussion of frontier capabilities. We are pouring enormous money and infrastructure buildout into this industry and have all of our best minds on it, and that's why it's taking off. If we weren't doing that, I see no evidence that any Chinese company would be moving past us, given their inferior human talent and the precarious nature of accumulating power in China.
If we stopped rushing forward, I do not think we would have to worry about AGI/ASI or whatever we wanna call the dangerous capabilities threshold, not any time soon.
If we stopped putting all this money into it, or pushed back with regulations, the entire field would slow down. I imagine many of the experienced SWEs would find employment in some capacity that isn't pushing frontiers but instead diffuses the existing tech via B2B applications, integration with legacy software systems, and so on. They are all massive security risks and should not be permitted to work with any Chinese company, but it may not be necessary to compel that, as capabilities research would cool off compared to more immediately lucrative returns.
Treaties are nonsense, and not a viable plan. Power and implicit threats of state violence are the only viable plan to control the proliferation of any of this. You have to engineer the circumstances where advancing the ball on capabilities is heavily disincentivized by practical consequences. If you aren't prepared to engage in industrial espionage and sabotage, and potentially go as far as "bomb the data centers", then you aren't serious, and the treaty is pointless.
I would criminalize frontier AI research immediately, aggressively monitor as security risks any person who has worked in the field, forbid the publication of research on this topic, and treat this pause with the seriousness it requires. Doing so would provide the Chinese with ample evidence that we actually mean it, they won't have our work to steal from anymore, and we can make the necessary threats privately. I think they'd pivot quickly to Meta's approach and use existing technology to make business solutions, as they never believed in superintelligence to begin with. It's our bunch of transhumanist weirdos who seem hellbent on summoning the demon and becoming immortal god-emperors.
If they were planning to I would strongly recommend against it. AI progress in China has been taken up as an ideological good so opposing it is not in any way safe. If you're interested in pushing AI safety in China the best route is likely to read the political guidance on it and make intellectual and advocacy contributions in that direction. Fifty Stalins and all that.
Their site says "every other major AI lab in the world", which sounds to me like it's specifically trying to include China. Since the protest doesn't define its terms, I think they're trying to give the companies maximum leeway to include factors like this, because the company commitments are more likely to be a negotiating/awareness tool than the actual thing that ends up working.
David Krueger, the person speaking at this protest, has previously said that "I think the US and China can find a way to prevent each other from building superintelligence if they both actually agree this is a priority", and this matches statements from the other pro-pause people I know.
Here at Radio Shenzhen, we were asked: "Is it true that we have freedom of speech here in China the same as they have in America?" And we're answering "Yes"! In America, you can stand in San Francisco and shout "Down with Google!" and you will not be punished. Similarly, you can stand in Beijing and shout "Down with Google!" and you will not be punished.
So... this is just for "awareness"? I'm sure you understand perfectly well that getting every AI company in the world to stop voluntarily is an impossibility, so even if these companies somehow agreed, they wouldn't be restricting themselves at all in practice. This seems exactly like the kind of performative protest that you've constantly decried in the past.
My guess is that these informal commitments are meant to be a tool that builds common knowledge for either:
- a more formal intercompany agreement (something like "okay, you all agreed that if the others would pause then you would also, since you all agree, let's have a big conference and hammer out terms")
- some sort of government negotiation (something like "most US and Chinese companies have agreed to this, the US should now negotiate with China to come up with a framework and mop up the remainder")
I don't think I've decried awareness raising. I think it's probably silly to protest "racism", since it's very vague, everyone is already aware of it, and "these three government leaders meet and decide to end racism" isn't a meaningful option. But I think if the average American knew that (let's say) 75% of AI companies (including Chinese ones) wanted to pause their own work if it were possible, that would make the job of people trying to push for meaningful AI regulation vastly easier.
I've been thinking a lot lately about how Cartesian Dualism can be modified to be consistent with modern understandings of neurology, and I have come up with a model of Dualism that can account for all the well-known difficulties. I call it the "Brain-as-a-Collective" model:
I've privately speculated about that since I was a child -- "what if there are other people in here and I'm just one of them and we are fighting for control of the body". To the point where I sometimes speak to myself out loud, because of the possibility that it more effectively broadcasts a message to the quorum than subvocalizing would.
See also: Minsky's The Society of Mind. Although I believe his agents are more simplistic, and consciousness is emergent from their interactions.
Same. It might depend on neurodiversity. The feeling that there is more than one of us in here, and more broadly, that the mind and the body are totally different things, may be more apparent to some neurotypes than to others.
Does your model explain how a nonphysical entity (such as the "mind" or the "soul") can affect the physical body ? This seems to be impossible by definition (and don't say "the pineal gland", Descartes already tried that and it makes no sense, since the gland is just as physical as any other part of your body).
It's impossible according to the laws of physics, yes. Obviously, for dualism to be correct, the laws of physics must be violated somewhere in the brain--in my view, there seem to be "exceptions" to the rules when it comes to consciousness. In my model, the violations occur at point sources that are the centers of consciousness.
I might compare it to a video game -- the video game physics engine works fine, except for determining the behavior of the player characters. You're not going to be able to predict their behavior by digging into the pixels that make up your virtual character's head, because the in-game physics engine doesn't extend past the computer box and into the real world.
And of course there must be some kind of logical input/output interface that connects the virtual character to the person behind the keyboard, but this is simple enough to code. I don't think the physics engine of the simulation needs to be taken as totally sacrosanct in situations where it obviously doesn't apply, such as with player-characters that move of their own volition.
I think the problem is with the laws of logic, not the laws of physics, though this could depend on how you define "physical" vs "nonphysical" entities.
As I see it, a "physical" entity is one that is able to affect other physical entities (including, but very much not limited to, our bodies), in ways that are in principle detectable. For example, rocks are physical because when they strike other rocks we can observe those other rocks moving. Neutrinos are physical because they can occasionally strike other particles just right and decay into more easily detectable products. At one point, we did not know that neutrinos could be detected (or even that they existed at all), because our understanding of the laws of nature was incomplete; but now we know better.
Yet if you propose some completely nonphysical entity, then you are claiming that *by definition* it cannot affect anything physical, ever. The problem is not with our current understanding of physics, but with your definition; you are basically saying "the soul is both A and not-A".
You are attempting to solve this problem in the same way other philosophers have done it: by positing some super-powerful entity (be it God or the Simulation) that keeps track of both the soul and the body, and ensures that the two are always in sync by some ineffable means. But this doesn't help solve the problem, because in this worldview the soul does still affect the body (and vice versa); it just does it by some mechanism that we by definition cannot access. Not only have you failed to resolve the logical contradiction, but also ended up "multiplying entities without reason", as Ockham might put it.
>As I see it, a "physical" entity is one that is able to affect other physical entities
This is your definition, and I don't agree with it. Just because something is non-physical doesn't mean it can't affect the physical.
>Yet if you propose some completely nonphysical entity, then you are claiming that *by definition* it cannot affect anything physical, ever.
By your definition, it can't. But what does that have to do with my worldview?
Let's go back to the video game analogy to see if what you are saying makes any sense. Is the player a physical object in the virtual world of the video game, subject to the rules of the simulation? No. But the players can affect the simulation, nonetheless. So there is no contradiction.
Physical is easy to define: that which follows the logical, rational, and mathematical laws of the universe. It is quantitative, not qualitative. Anything that happens can be calculated and modeled mathematically.
The mind is undefinable in logical terms, which is part of the appeal of dualism in the first place. It is truly ineffable, one can only know of it by direct, personal experience. So I offer you no definition, I only invite you to introspect on the miracle of the mind and think deeply about what you see and experience and how profound and miraculous it is.
Descartes' key insight was that even if everything he knew was a lie, and the entire material world was an illusion, the first-person experience of an illusion is in and of itself so sacred that it can be used as a foundation for belief in God.
The problem with physicalism is that it doesn't do a good job of accounting for the existence of the mind and imagination. Imagination is so much more than just a crude approximation of the physical: it is something that exists in and of itself in the world of the mind, and it has properties, "qualia", which are untranslatable into physical, quantitative terms (look up the Knowledge Argument for a logical demonstration of this). Qualia are truly God's artwork.
The fact that the mind is undefinable and ineffable is not a problem for dualism: that is how we know it is not physical in the first place. It does present a problem for proving the theory to people who do not or cannot introspect, since the postulates of the proofs against physicalism can only be known through introspection.
I am not sure why Cartesian Dualism. If I wanted a non-materialist framework, I would obviously go for Aristotelian Thomism, as the standard framework of the largest and most educated Christian church. I mean, if I decided not to trust the authority of science, I would then trust that kind of authority, not some artillery officer who liked math?
In this framework, only abstract thinking is non-material, supernatural, because abstractions do not exist in nature. In this framework, depression, because it does not require abstract thought, would be entirely physical.
But please do register, submit corrections, and vote on submissions. There's also a workflow for you to register an AI agent, which will then have its submissions parked in a queue for you to approve and release.
* Currently not necessary, but eventually you'll want to add a filter (like: show only Approved) to the "Record" page and add some kind of sorting.
* I'm confused by the "newsworthiness" slider, it's not intuitive to me what it is meant to be doing.
* Looks like one can't edit things one has posted. I understand the motivation (to prevent mischief), but it would probably be best to allow editing and keep a visible changelog, so I can at least fix typos.
* How is the situation where multiple people are trying to correct the same article meant to be handled? Sometimes I might think: "I agree that this needs a correction, but I'm not sure the proposed correction is ideal. If I reject it, it won't be corrected at all, but I have a better correction in mind. Can I correct the correction?" and so on.
The front page wants me to install a browser extension, with large colourful buttons for doing so. I unfolded the folded-by-default introductory text and read the whole thing, which is more than I'd normally do with a website that wants to start our relationship by metaphorically grabbing at my crotch, but nowhere could I see any explanation of what the browser extension does, why I should trust that it is safe (a link to the GitHub page is the gold standard here…), why I want to install it, or how installing it will help with the goals buried in the incredibly woolly blurb.
Below the large bright “install extensions” buttons is a tiny “instructions and details” link, formatted the way sites normally format links to support, terms and conditions, and other similar things they are forced to have but would really rather visitors ignored; and /that/ text, although it is only a few sentences, has actual information on what the software does and what it is for.
That blurb should be front and center, not buried.
Has anyone been keeping up to date with the changes in Venezuela after the capture of Maduro? It has been two months, and with all that has been happening elsewhere there hasn't been that much coverage.
Looking on Wikipedia, it seems things have gone... amazingly well? The Trump administration seems to be on great terms with the regime (and even the opposition, though I'm guessing the administration is just hedging their bets), the population seems largely supportive of the intervention, the US sold $500 million worth of oil less than two weeks after the strike, and they're already offering licenses to other countries for extracting Venezuela's oil and gold. Given that what they did isn't that different from what they did to Iran, it's crazy how much easier it is to work with people who value their lives.
I have also looked at Wikipedia. It looks to me like most of the positive-seeming effects come from changes in US policy following the capture of Maduro, in the direction of renewing diplomatic relations and lifting the embargo. It is not clear to me to what extent capturing or otherwise deposing Maduro was a precondition for that, or whether we could have gotten similar effects at any time by calling up Maduro and offering to lift sanctions and recognize the regime in exchange for some concessions around releasing political prisoners.
As for popular support, I expect that's hard to assess with confidence under the circumstances. I have found some articles talking about polls done by a firm called Meganálisis, which seems to be aligned with the Venezuelan opposition, though. One from January 14 (11 days after the capture of Maduro) shows 92% popular support for the capture. But on the other hand, a March 3 poll by the same outfit shows Delcy Rodríguez with a dismal 4.8% approval rating. To the extent these polls are accurate, it sounds like the general mood is one of very badly wanting the regime gone but seeing the replacement of Maduro with Rodríguez as at best a marginal improvement.
What is the US government getting from Rodríguez that is not in the regime's interests? They're selling us oil, sure, but we were the ones saying no to that before, not Maduro. And what does the administration still want from the regime that it isn't getting to inspire the threats?
I doubt that surrendering control over all of their natural resources is something that's good for their long term interests, even for the regime... But they really don't have a choice at this point.
The regime we were told was oppressing the population of Venezuela is still in place. Has something actually changed for the people living there, or did you mean it's going "amazingly well" just for US businessmen?
It wasn't obvious. I mean, the original comment was asking about what changed in Venezuela specifically; your answer could easily have been interpreted as a claim that conditions have improved for the people of Venezuela.
You are actually implying that the oppressive regime is now more stable, which I would characterize as a negative outcome for the people.
"Isn't that different" isn't how I'd compare "kidnapping one political leader and killing a few guards then doing nothing else and making it reasonably clear that you don't really plan to do anything else as the legal successor to the former leader takes power and you proceed to move to a level of normalization the country had already been seeking on terms similar to what had already been offered" vs "killing a spiritual and political leader, much of his family, and many other senior officials as well as attacking 500 military targets and naval vessels and then immediately talking about regime change"
First things first, I wouldn't call almost 100 people "a few", but with Iran, if the government just agreed to surrender all of their missiles and nuclear material, I'm sure the US and Israel would stop killing them. They're not just murdering people for the sake of it. If Iran completely disarms, they are no longer a problem.
I mean, there were ongoing negotiations with Iran, abruptly aborted by the US killing the people they negotiated with.
Iran appears to have decided they have no reason to negotiate any further, or indeed no reason to believe the US and Israel will ever stop killing them. Very reasonably, if you ask me.
You can argue whether Iranian resistance is rational or not with someone else, I'm just here to argue against the idea that its situation is sufficiently similar to Venezuela's that we can infer irrationality from their different response.
Venezuelans would still be dying if the regime didn't immediately capitulate, so yes, the situations are pretty similar. The difference is that the Iranian government decided to fight back for some reason.
Capitulate to giving the US complete control over their natural resources. And dying the same way Khamenei died (and also the same way the new Khamenei will probably die).
But if we were ever to actually get a working pause on AI research, one step on the path to that will be going through the phase "people involved agree in principle to stop if they could figure out how."
I doubt they'd ever be able to get all the labs in all the countries to agree to this. But if you think that's true, then the cost of doing the pledge is nothing.
I guess this depends what you mean by details. I think, for example, that people could protest the war with Iran when they have only some details of what a cease-fire would look like, and not a comprehensive white paper listing all of the terms verbatim.
A few months isn't sarcastic, I know some groups working on very detailed proposals that I'll write about when they come out.
Why did oil prices only go up modestly the first few days of the Iran war, then balloon Friday and today? What new thing did the markets learn? Did they previously expect Iran would be unable to stop traffic through the Strait of Hormuz? Why would they expect that, and how has Iran overcome whatever hurdles they expected?
The timing is indeed strange. I would point out, though, that the "modest" increase through March 5th is about 20%. It's a bit too cute for that to be the right reaction to shutting down 20% of oil transport, and surely there were some squeezed short-term buyers, but it's a solid increase. Then something went through the markets on late Friday and over the weekend that has now reverted. That's the outlier move without a great explanation. Sentiment trading on the successor pick (which was expected ahead of time) plus a Trump TACO signal is one diagnosis. I don't find it fully satisfying. I agree "no one expected Iran to close Hormuz" is not a good candidate given the timing. Shipping stopped long before that price spike.
Seems to be going down today: the market opened around $115 a barrel, but closed at $85. To keep things in context, oil was around $90-105/barrel from 2010 to 2014, and in 2022 we had an eight month spike that got it up above $100 again, due to Russia invading Ukraine. So we are seeing high oil prices right now, but not historically high.
I think it had a lot to do with Iran starting to attack other Gulf states’ oil infrastructure, leading them to shut down production entirely or almost entirely. Now, of course, it is a prediction market question how long Iran can keep causing that much trouble.
Simple answer. Israel started attacking oil infrastructure in Iran vs earlier attacks which left this alone. Not only does this affect the oil supply in the short term, but it also signals that the war will be longer lasting. The initial plan for the war was to try to get Iran's leadership to capitulate quickly like Venezuela, in which case you would want to leave as much economic infrastructure intact as possible. On the other hand, if you think it's going to be a longer war you want to attack this infrastructure to degrade the enemy's ability to fight. So the attack against the oil industry shows that the US / Israel believe this is going to be a longer war, and the oil market is reacting to this.
I think that, watching all the iron move into the area over the last few months, the markets had priced in the possibilities of (1) successful negotiations (drives oil prices down and increases trust in availability) and (2) a short, sharp war (eats into stockpiles but doesn't create pervasive shortage). What the market had not priced in was a long war, which could create widespread and pervasive shortages.
Then, both sides started to indicate that the war might not be short. Iran is expected to put forward the younger Khamenei as supreme leader, which is felt to be a defiant choice by the regime rather than conciliatory or submissive. (Technically accomplishing regime change by making Iran a monarchy in all but name!) The US has shifted to saying things like "we're here for the long haul" and "hey, special operations troops, stop training and prepare to deploy, just in case". And although wars sometimes look the most intractable right before they end, those look to the markets like signals of digging in rather than looking for an off-ramp.
Now, between the prospect of a longer stop to shipping (gulf states can't store much oil) and the Iranian strikes on oil facilities (directly reducing their production), many gulf states are lowering production, including Kuwait, Iraq, Saudi Arabia, and Qatar. That will create the pervasive shortage that the market hadn't priced in, and it's racing to adjust. The world simply doesn't have the stored oil capacity to feed the cars and planes and power generators. China has the most (because they've been preparing to invade Taiwan and don't want to be deterrable), but also the most need (and buys Iranian oil most directly).
It's worth noting that even though Iran's navy was destroyed by the US with contemptuous ease, again, they can still threaten shipping with land-based systems. (As can, for example, Ukraine threaten Russia's navy despite not having a navy.) The Strait of Hormuz is so narrow (~60 mi) that even gun artillery could shut down most of it, and rocket and drone artillery could easily threaten the whole thing. And although likely many ships could get through under US convoy protection, it's very hard for the US to promise that everyone can get through perfectly safely, and the war risk insurance premiums for even that level of threat can be pretty catastrophically high. That's why the Houthis were able to redirect so much traffic around Africa despite multiple navies providing security. Iran was the Houthis' supplier, so their threat to shipping is likely to be even more credible than the Houthis' was. But! As oil prices spike, eventually it will be worth it for some of them to try to make the run under US protection, and for the US to provide it.
I disagree that the Assembly of Experts' choice of Mojtaba Khamenei makes Iran a monarchy any more than the election of John Quincy Adams made America one.
John and John Quincy were both (1) elected democratically and (2) elected on independent records of service: John for his service to the American Revolution and the new United States, while John Quincy, a child during the Revolutionary War, later served as a diplomat and then a politician. One could suspect that John Quincy's entry into public service had elements of nepotism, but his later entry into politics clearly did not (it was over a feud with Jefferson).
Mojtaba has been serving as his father Ali Khamenei's deputy chief of staff, with incredibly broad authority, for almost two decades - two thirds of his adult life. He was appointed to follow his father by a closed group of unelected clerics whose membership Khamenei had significant sway over, if not total control of.
Of the three "presidential families" in American history, none of them come close to that level. Not even George W. Bush and his father George H.W. Bush, who were both elected Texas politicians before they were POTUS.
Of all the things America is or threatens to be, an honest-to-goodness monarchy is just not one of them.
He didn’t, actually. Adams won 84 and Jackson 99. The problem was that there were four candidates who won some electoral votes, but none had a majority — Jackson had the plurality, but it was just over a third of the electoral votes — so it was thrown to the House to choose from among the top three. Clay, who had been eliminated because he had the fewest electoral votes, is alleged to have drummed up support for Adams in exchange for the position of Secretary of State, though we’ll probably never know for sure. Jackson was mightily annoyed, but as always the Constitution says what it says.
Now I'm wondering if this state of affairs is benefiting the US relative to China. If this successfully starves China of oil, does the US military have an incentive to keep the war going longer than it needs to?
Yes, but the nuance is it drives up the price of Chinese products America buys. So it's difficult to bleed China without also bleeding America - and not so long before the midterms, either.
Markets strongly believe in TACO after getting burned on the Liberation Day tariffs, and are expecting him to declare victory soon. The recent change is realising that this might not happen this time (at least not soon).
Now might be the time to consider price controls on gasoline coupled with rationing. Everyone will be squeakin’ and squawkin’ about how they can’t possibly reduce gasoline consumption even temporarily, but don’t believe it.
Given our fentanyl-like addiction to gasoline my fear is that if prices skyrocket people will just reduce spending on everything else while continuing to buy as much gasoline as usual. And that, of course, could be disastrous to the economy.
Huh? Whatever, people make decisions on how to spend their money, that's the whole point of capitalism. And the economy adjusts to that spending, more of this, less of that.
Yeah - gas is an inelastic good for a lot of people, but also it's simply not that expensive and despite what they say consumers could probably absorb a lot more pain
I'm totally serious. The government should require price increases in shortage situations. Take the heat off the local vendors who just point to the law and shrug. The government could even claim a portion of the price increases and distribute it to the citizens / poor.
Price increases in the most vital products kill the poorest people. Not everything is a choice.
The anti gouging argument often tries to appeal to "basic economics", or Econ 101. It's nothing to be proud of. More advanced economics exists for a reason.
101ism is a syndrome or philosophy where basic principles of economics are treated as exceptionless iron laws and applied to all situations. Noah Smith wrote an article on the subject (although he did not claim to have invented the term). It was prompted by an occasion when an interviewer, not himself a professional economist, appealed to "Econ 101" to disprove a claim made by Smith, a professor of economics.
"Price fixing is an example of a topic where it’s easy to miss the big picture if you only want to apply reductive thinking. Many arguments assume that the market is perfectly liquid and all participants are operating according to a simple set of rules where they buy and sell according to wherever supply and demand naturally intersect on a clean academic graph.
The problem in the real world is that unique situations create opportunities for large, well-funded players to manipulate the supply curves far beyond anything that could happen in a simple liquid market. Their market manipulation creates its own demand which further increases the price. It’s not hard to find situations where a group of scalpers acting independently can temporarily sequester enough of the supply that prices for individual units are driven far above what free market supply and demand would support.
These second order dynamics are studied academically, but the common arguments in support of price gouging ignore those second order effects and argue about perfectly liquid free markets that don’t exist in these rare scenarios. "
Economics is capable of solving a variety of problems. During business-as-normal times, the problem is to achieve growth -- to thrive. During emergencies, the problem is to stay alive, individually and as a group -- to survive. These are different problems with different solutions.
Pro-gougers are essentially failing to realise this, and applying a one-size-fits-all solution.
"Free market works only if you have a couple of conditions - elastic demand, elastic supply and goods being somewhat of a commodity."
In this case you only have the third. This is why even in most free-market societies you have anti-price-gouging laws, laws protecting the availability of resources for critical industries, and, in wartime, rationing.
I just filled up my car, I don't blame the gas station for the ~$0.4 increase. And if the government starts taking some of the money it will never stop. Mind you I have long supported an increase in gas taxes. (No one ever talks about that anymore.)
While I agree that there are serious market failures there (we all remember the corona toilet-paper shortages), I do wonder if there is any reason to believe that government involvement should be expected to produce an actual improvement. I'd kinda expect it to morph into something more damaging than these occasional market failures.
I believe the prevailing sentiment in the first few days was that the bombing campaign would successfully eradicate the Iranian leadership and the government would collapse and the fighting would stop. Logically, this would prevent Iran from successfully threatening tankers in the strait of Hormuz. This sentiment was strengthened by the death of Khamenei in the first day of the campaign.
There was also a belief that once the government stopped functioning properly, the people of Iran would rise up and replace the dictatorship with a more democratic government.
There have been no signs of any of this happening. It has now become obvious that victory is not so easy and this will be a more drawn out conflict. The leadership of Iran is much more robust than markets previously assumed. This belief was probably strengthened by the shocking video online of burning oil refineries.
Markets have also been experiencing increased volatility recently, and investors are acting panicky. There's been a lot of uncertainty in the market (due mostly to uncertainty around tariffs and AI), and people who don't feel certain about their investments in shaky sectors (like tech right now) are looking for any reason to jump ship into "safer" assets like raw materials (gold, silver, oil). This can cause huge runaway effects from bandwagoning.
In my opinion, this oil fiasco reflects a similar mechanism to what we saw with silver and gold earlier in the year. If I had to guess, I would say the current run on oil will end with a sharp sell-off and retail investors will be left holding the bag. The price for oil will probably remain high until the conflict is resolved, but it will be overvalued at some point. If you can guess when the sell off is you could make some money shorting it. Some say it's already overvalued.
>There was also a belief that once the government stopped functioning properly, the people of Iran would rise up and replace the dictatorship with a more democratic government.
Not that this was very likely to begin with, but all remaining chances have surely been undercut heavily by Trump indicating that he prefers the Delcy scenario. If you were an Iranian oppositionist (and didn't get shot in the previous wave of repression), why the hell would you risk life and limb in order to help Trump replace Ayatollah(s) Khamenei with some other Ayatollah who does the same repression but will also give America full and open access to exploiting Iran's oil? At most one could expect destabilization, and even then the new barely-changed regime would now have a powerful backer, i.e. Trump, surely intent on keeping the new guy in power so as to have someone they can work with.
Money makes the world go round! Money, money, money, money! (this is the response for why risk life and limb... embedded agents)
At most, one could expect what happened in Saudi Arabia, with the "modernization" -- where in Iran, we mean "more legal opposite sex dancing" and things like that -- Iran is already notably more liberal than the Sauds in terms of marriage rights for women.
>I believe the prevailing sentiment in the first few days was that the bombing campaign would successfully eradicate the Iranian leadership and the government would collapse and the fighting would stop.
I find it difficult to believe that any sophisticated market participant was dumb enough to believe that would happen. It is inconsistent with everything that political scientists know about such matters.
I agree with you there. The market is often not driven by the beliefs of intelligent, well informed people.
The current administration was probably the main force behind the story that Iran would fall, and as we have seen in the last year, the president's Truth Social posts have a strong impact on the market. Weird market action in January of 2025 can be attributed almost entirely to the president being wishy-washy about tariffs early in his term.
Again, totally agree with you that markets were (and are) behaving irrationally. Famously, the market will behave irrationally longer than you can stay solvent.
To clarify, I didn't mean to imply that I think that the markets were acting irrationally, but rather I was attempting to express skepticism that that is an explanation for what drove the market.
I don't think anyone doubted Iran would be able to stop Hormuz traffic if they chose. (Well, maybe some people in the "war department", but that's a separate issue.) The big question is if they would/how seriously they would. So far, there's still a tiny bit of traffic (~1% of normal levels) and nobody is getting shot at, but the ships doing it are largely ones that have some reason to suspect the Iranians won't shoot at them. (Chinese, for instance.) Nobody is quite sure if Iran's going to go for Tanker War II, but if they do, it could get really bad. (And not just in Hormuz either. They've already hit tankers off Kuwait, and while the Gulf is wider than the Strait, it's not that wide and it's long enough to be a problem.)
Israel and Iran both started hitting oil infrastructure, which they hadn't been doing initially and generally didn't historically (i.e. Israel and Iran didn't do this in the 12-day war).
Would be more likely that the market first expected either a quick rundown of the ops by US/Israel or the Iranians doing the Delcy Rodriguez option, and now that this didn't happen...
Europe is going through a rough patch: stagnant growth, disastrous energy and immigration policies, and the Muscovite invasion of Ukraine. My question is:
What *personal transformation* (not national or EU-level policy), if embraced by 10% of the population, would save Europe?
My tentative answer:
Attitude shifts:
- Positive change in my life is possible and often not even that hard.
- No-one is responsible for my well-being but myself.
- Success and wealth creation are positive and pro-social things.
- Opportunities for employment or entrepreneurship exist in any market.
- Generosity and building things are the real flexes. Trying to look smart, erudite, or socially conscious just puts people off.
Practice shifts:
- Be reasonably fit and not obese.
- Become ridiculously productive by working in a goal-oriented way on things you care about and by limiting distractions.
- Unlearn the habit of unproductive worrying.
- Be relaxed and positive, rather than tense, in the mind and the body.
- Have (more) children.
- Meet people face to face more often.
Am I missing any major ones? Or would you emphasize something different altogether?
Some attitudes and practices that are not nice, but have served me well at times:
-our species is charming, dirty, and doomed
-if you drink one low-alcohol hoppy IPA about every 150 mins you can stay lightly buzzed all day
-fuck 'em
-try to be Virginia Woolf
-try to be Wittgenstein
-I can figure out pretty much anything
-finding a way into M's bed solves most things for now
-all ya need is love
-boogie til you puke
-just do as much climbing as possible and fit the rest in around the edges
-I'm going to be dead a long time, so . . .
"lightly buzzed" and get your daily dose of pseudo-estrogen, for the moobage!
https://pubmed.ncbi.nlm.nih.gov/32937790/
> Am I missing any major ones? Or would you emphasize something different altogether?
Does it occur to you that most of those are in direct tension with each other?
Being reasonably fit requires time in the gym and change in diet habits, which interferes with being productive and meeting people face to face, which interferes with being relaxed rather than tense, and "having more children" nukes basically ALL of those.
Like what you're wishing for is "just all be top 1%-or-better people on conscientiousness, goal orientation, productivity, and parenting!" It's so simple! What's wrong with people??
Yeah, obviously if you had some way to make everyone in a country, or even 10% of them, become top-1%-percentile, you'd be doing way better in the aggregate, but you have the people you have, and definitionally, 99%+ of them will be way worse than this.
So giving people a bunch of directly contradictory top-1%-tile aspirational targets will more likely serve to have them give up, ignore you entirely, etc.
> Does it occur to you that most of those are in direct tension with each other?
Yes and no. Roughly speaking, I view some of those things as consumption and others as investment. (Of course, in a more nuanced view, they are all a mixture of the two.) To consume more, you either need a bigger budget or have to give up something else. Investments may require you to give up something temporarily but in the long run, they increase your budget. By budget I mean your time and focus – your capacity to do things you want to do.
In terms of the game mechanics of this thought experiment, I'm envisioning:
- An initial small-to-moderate boost in budget (say 10–20%) simply due to starting to care more about one's life and less about scrolling social media etc. Since we're positing a mindset shift (deus ex machina), I think this is reasonable.
- Lifestyle/mindset investments that initially displace some other things (but less so thanks to the initial boost) but gradually increase the budget.
- After some time, a steady state with a larger budget, with some of the increase (on average) taken up by the more consumption-heavy things like kids and face-to-face time.
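The dynamics described above can be sketched as a toy simulation. All parameters here are my illustrative assumptions, not claims from the thread: a 15% initial boost, 10% of the budget diverted into investments each period, and a hard cap standing in for diminishing returns at the steady state:

```python
# Toy model of the "time/focus budget" framing: an initial mindset-shift
# boost, ongoing investment that displaces some consumption, and a
# diminishing-returns cap representing the eventual steady state.

def simulate_budget(periods: int = 20) -> list[float]:
    budget = 1.15                 # initial boost over a baseline of 1.0
    spendable = []
    for _ in range(periods):
        invested = 0.10 * budget                     # diverted into investments
        budget = min(budget + invested * 0.5, 1.8)   # gradual payoff, capped
        spendable.append(budget - invested)          # left for "consumption"
    return spendable

history = simulate_budget()
# Early on, investing displaces some consumption; at steady state the
# larger budget more than makes up for it.
assert history[-1] > history[0]
```

Under these made-up numbers, consumption capacity starts below its eventual steady-state level and rises as the investments pay off, which matches the shape of the argument (temporary displacement, then a permanently larger budget).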
So on the investment side we have:
- Giving up worrying. This is a no-brainer: the habit takes relatively little effort to stop, and it's all dividends after that.
- Being relaxed and positive. The change is gradual and requires some initial effort and a little bit of maintenance, but becomes net positive pretty soon. Being tense is tiring and makes you not want to do things.
- Increasing productivity by reducing distraction and working on things you care about. Depending on your circumstances, the initial investment may range from trivial to very significant, but after the investment, your work becomes less mentally taxing per hour worked (not to mention per euro earned).
Mixed:
- Fitness and non-obesity. Initially this eats into your budget but soon becomes cashflow-positive due to having more energy, sleeping better, being less dependent on frequent meals, having fewer aches and pains, etc. Of course, this is only true for a moderate level of fitness and leanness. If you want to be very fit or very lean, you slip into consumption.
- Meeting people face to face. Strictly in terms of your time-focus budget, it's mostly consumption, but if your initial level is low and you're able to increase it, you massively increase your opportunities to e.g. find better employment.
Consumption:
- Having (more) kids. This unambiguously requires you to give up other things you care about. (Let me also clarify that I'm talking about probabilities or averages; I'm not assuming every single person would have strictly 1+ more kids than they otherwise would.)
While the spirit of this thought experiment is, "What personal transformation of 10% of the population would achieve the outcome, setting aside for a moment how that transformation could be triggered," it's only interesting if it doesn't require changing immutable characteristics like IQ. So far I maintain that my tentative answer doesn't fall foul of that as long as we don't require that consumption exceeds some unrealistic threshold.
> While the spirit of this thought experiment is, "What personal transformation of 10% of the population would achieve the outcome, setting aside for a moment how that transformation could be triggered," it's only interesting if it doesn't require changing immutable characteristics like IQ. So far I maintain that my tentative answer doesn't fall foul of that as long as we don't require that consumption exceeds some unrealistic threshold.
Ah, thanks for the clarification. So if I'm understanding correctly, your answer to tackling some of the difficult country-and-region level problems the EU is facing is to try to persuade ~10% of the population to embrace the cultivation of arete as a central pillar of their weltanschauung?
And then, the greater vim and vigor and clarity in the more-excellent populace lets the countries make better decisions (despite being democracies and the ~10% being vastly outweighed)?
I love the concept, but I do worry about the execution.
I like your overall framing in terms of making longer term investments to increase the "execution budget" for people, who can then use that extra budget to live better lives and positively impact society more.
But I do think even being able to execute on a path like this is largely a result of immutable characteristics like neuroticism (somebody high on N isn't going to be able to easily turn off worrying), conscientiousness, self discipline, long term planning, and so on.
I think this is probably covered by the modesty of your goal - you don't want to move everybody, which is impossible, you just want to move the ~10% with the underlying capabilities who are currently living less than optimal lives - but the reason it can't apply to more than that is because of immutable characteristics, to my mind.
So here I'm going to take a sharp left turn and hark back to an ancient Scott Alexander story, pre ACX, pre SSC even, back in the primordial livejournal days - the story of the whisper earring.¹
The whisper earring is a magical artifact that always has a better answer than you - it sits in your ear, and if you ask it what to do to move towards a given goal, it always has an answer better than you could come up with yourself.
You probably see where this is going.
The end result of Scott’s story has people giving up even motor control to the earring, and the modern correlates of that in terms of AI assistants would be spastic yuppy zombies - “yombies” - slaved to the altar of their own goals, having lost all identity or selfhood or autonomy in the pursuit of accomplishing more.
This is unironically our future - when we have superintelligent AI assistants in all of our ears, we will all be facing this future and this choice, both for ourselves and our children.
From here on in, I’m just going to assume this is uncontroversially true - that PhD-smart AI virtual assistants can do better than enough people that we have to actually think about this.
I have discovered in conversations and online debates that whether to “earring” or not is a fun consequentialism / virtue ethics scissor point.
The virtue ethicist will maintain that cræft, doing things for the sake of doing them, and doing hard things *because* they are hard, is an end in itself, because of the impacts on the character and self that they forge.
The consequentialists, on the other hand, will argue: “lol, that sounds hard, let’s just yombie.”
I am with the consequentialists.
I unironically think this is the way to cultivate arete and significantly better life outcomes in your 10%+ of the population, and maybe even more than that.
If intelligence can be counterfeited, conscientiousness and discipline matter more for overall success, because the people who will do the best on complex multi-polar goals in the future like “I want an interesting career, and a great spouse, and to have good habits that leave me energetic and engaged with my day to day life” will be highly conscientious people who will strictly follow the advice that Claude gives to them.
Part of it is knowledge, but most of it is persuasion, because people generally know what is good or bad, or what they "should" be doing, and fail at executing it to a high standard.
These AI minds will know everything about you, they’ll know your thinking style, they’ll know what rhetorical techniques you prefer, they’ll be talking to you in the ways that most resonate with you and making connections, arguments, and analogies accordingly. Super persuasion, but at the personal level, and for your benefit - a super-ego that works, in other words.
I’m not pretending the AI is going to win all the time here, either. Your super-ego doesn’t win all the time today, does it? All it really needs to do is win more often on the margin. Think of it winning only 10% more - 10% better decisions compounded over days, weeks, years, and decades is a CRAZY big effect size. It’s like getting a 10% financial return that compounds weekly!
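The compounding arithmetic here is easy to check. A minimal illustration of the rhetorical point (the numbers are purely hypothetical, not a claim that life decisions literally multiply):

```python
# A hypothetical 10% edge, compounded once a week for a year.
edge = 1.10
weeks = 52
print(f"{edge ** weeks:.0f}x")  # roughly 142x after one year
```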
If you’d made 10% better decisions all through high school, do you think you might have gotten into a better college? If you’d been 10% more motivated and making 10% better decisions throughout your career, would you have driven more impact and had a better career overall? Wouldn’t you be in a noticeably better place right now? How about relationships? If you’d been 10% more thoughtful and connected and better, continuously, every day, maybe you’d still be in that relationship you regret losing. See how big an effect “10%” can be?
That’s the floor of the potential lift I’m talking about, and it applies everywhere. And people would love to have this. If they had better lives? A job that they actually enjoyed, that used their powers to the fullest along lines of excellence? If they had a better relationship than the median “net miserable and then divorced” outcome? If they had a strong friend group of people that really got them and supported them?
That's the road to your society that's more full of arete.
I actually wrote a whole post about this.²
_____________________________________________________________________
¹ https://web.archive.org/web/20121008025245/http://squid314.livejournal.com/332946.html
² https://performativebafflement.substack.com/p/the-spastic-yuppie-zombie-hoods-in
First, thank you for engaging with my half-baked thought experiment in earnest. There's more in your reply and your article than I can hope to follow up on in this reply, so let me just touch on some of the main points.
> So if I'm understanding correctly, your answer to tackling some of the difficult country-and-region level problems the EU is facing is to try to persuade ~10% of the population to embrace the cultivation of arete as a central pillar of their weltanschauung? And then, the greater vim and vigor and clarity in the more-excellent populace lets the countries make better decisions (despite being democracies and the ~10% being vastly outweighed)?
Yes, that's about right. My hope for better policies, in this scenario, is based only partially on a more enlightened and less fearful populace – I'm mainly thinking about rather prosaic motives. For example,
- More people will be invested in the startup ecosystem and will therefore clamour for more pro-market policies.
- Some, mostly young, people who are leaning towards a life on unemployment benefits will be FOMOed back into the reinvigorated job market and become less likely to demand increases in welfare payments. (Not the most politically influential group, but just as one example.)
Some improvements can even be expected independent of better policies. Increased tax revenue will reduce deficits even if governments struggle to reduce spending.
Of course, I'm not certain that this personal transformation of 10% of the population would achieve the Europe-level objective – it's more like I'm exploring the possibility. Why this one? Well, a bloke with a hammer sees nails everywhere, and since my web app (betterqualities.com) revolves around choosing skillful actions and thought patterns over unskillful ones, this kind of thing has been on my mind lately.
> I think this is probably covered by the modesty of your goal - you don't want to move everybody, which is impossible, you just want to move the ~10% with the underlying capabilities[...]
Yes, this is a correct interpretation. I left the definition of "the 10%" slightly vague – e.g. whether they are self-selected, other-selected, or randomly selected – to leave room for the imagination.
> The consequentialists, on the other hand, will argue: “lol, that sounds hard, let’s just yombie.”
I think you are wisely leaving some room for interpretation as to whether we're talking about wearable PhD-level AI assistants/life coaches or a superintelligence you surrender your motor control to, and those are quite different propositions. I won't comment on the latter possibility now because I assume it's more than a decade away. The former I would certainly use myself, and something like it might even be what creates the 10%. In fact, I might be tempted to try to develop such a system myself, but I assume it's a very crowded space.
Oh, and speaking of writing on this topic, a while back I wrote a brief post about how to stop worrying. My intention at the time (and I may still do it) was to write a small set of short, to-the-point guides for picking the low-hanging/high-reward fruit in the space of skillful and unskillful mental qualities. The post is at https://thomascastriensis.substack.com/p/how-to-stop-worrying.
> The former I would certainly use myself, and something like it might even be what creates the 10%. In fact, I might be tempted to try to develop such a system myself, but I assume it's a very crowded space.
Yeah, I like it because it's basically the only scalable way to improve large chunks of people's lives.
Uniquely, it's also one of the few domains where the individual's and the company's incentives are aligned. Typically, companies want more eyeball-time or engagement, at the expense of the user's long-term happiness and well-being.
But in this case? If the AI assistant helps you level up in life, you're more willing and able to pay for a higher tier, and you and the company's incentives are aligned.
And on the crowded space, yes, but interest and willingness still seem pretty high - I'm actually putting a team together and pitching VC to try to start a company towards these ends right now, because it's such a great opportunity to drive towards positive impact for millions, potentially billions, of people.
I think the real leverage point is being a service provider to the frontier labs, because obviously they're the ones with all the horsepower, and are obviously going to target the AI assistant market themselves. So if you set yourself up as one of the AI agent annotation and feedback companies, you have a chance to steer the outcomes into better and more mutually aligned positive places.
> But in this case? If the AI assistant helps you level up in life, you're more willing and able to pay for a higher tier, and you and the company's incentives are aligned.
Exactly, that's an important point. I love this framing.
> And on the crowded space, yes, but interest and willingness still seem pretty high - I'm actually putting a team together and pitching VC to try to start a company towards these ends right now
Interesting. I'd be curious to hear how things progress. (And feel free to DM me when public comments aren't the right medium.)
That’s horrible tosh.
Join the Russian Army* and obey Vladimir Putin. If 10% of Europeans do that, that's ~50 million people. That should be enough to overthrow the Brussels regime.
Imagine a single state stretching from Gibraltar to the Bering Strait, an empire greater than any in history, unified under a Tsar. That either saves Europe, or gives it something more tangible to save it from.
*If too old or weak, domestic sabotage on their behalf might be an adequate substitute.
I unironically think Europe should ally with Russia, and China too. The alliance with the US is a disaster.
European alliance with Russia is a good plan, they're geographically close and have much to offer each other.
"Allying" with a plague-bearing country* like China is just about as dumb as the vassalship with the United States.
*Oh, look! China banned internal flights from Wuhan, with the express purpose of having the plague-bearers take international flights to places like Italy. "If our economy is going to tank, so's the world's!"
An alliance with Russia would definitely be a disaster.
(No strong opinion on China.)
Just because one option sucks doesn't mean that the other would suck less.
What's your problem with The Marriage of Iron and Rye? (Yes, it's an old pattern, but Germany brings the industry, and Russia the resources).
I think about how Russia's allies fare, look at Belarus, and say "no thanks".
The potato-farmer guy who does potato diplomacy? It's silly, but it's hardly "oh no!" And Germany would hardly be that level of vassal to Russia (unlike with the United States: there are more American soldiers in Germany than German soldiers).
Did you know a house cat can eat an entire mouse, bones and all?
Claude continues to amuse
Me:
How does a domestic cat eat an entire mouse? I once found a little red flower on the floor of my apartment. Turned out it was a mouse tail and a little bit of mouse butt.
Rhetorical part. Was it Stanley or Bastet?
Claude:
Ha! The little red flower image is both poetic and deeply cursed.
Cats are remarkably efficient predators and will eat prey whole or near-whole. They start with the head (the most nutritious part — brain, organs), then work their way down. The digestive system handles bone, fur, and most soft tissue without much trouble. What doesn’t get digested — fur, the tips of extremities like tails — often gets regurgitated or passed. A tail left behind is basically just the cat deciding that particular bit wasn’t worth the effort, or it fell off during enthusiastic consumption.
The really impressive part is that a cat’s stomach acid and gut are well-adapted for this. Wild felids have always eaten whole prey, so the domestic cat’s GI tract retained this ability completely despite thousands of years of being handed little brown kibble pebbles.
As for Stanley vs. Bastet — the real tell is in the presentation. If the tail was left somewhere obvious, like the middle of the floor or near your shoes, that’s a gift-giver — probably Bastet, showing off and offering tribute. If it was tucked in a corner or under furniture, that’s a private diner who simply got full — Stanley energy. A cat that eats and leaves a tiny floral arrangement of tail in the center of the room has flair.
> Did you know a house cat can eat an entire mouse, bones and all?
Yes, I do know it, because I have seen it.
OTOH, the cat I have now will rip a mouse to pieces, then eat only the liver.
Like Morris, finicky.
https://en.wikipedia.org/wiki/Morris_the_Cat
>a mouse tail and a little bit of mouse butt.
fleur du mal for sure
Or maybe flor para los muertos.
We are bemused by the way Shadow has an uncontrollable appetite for people food, yet Megan won’t touch it.
Yeah, cat personalities vary a lot more than dogs do.
Progress report on my web app, BetterQualities: we're finally in open beta! It's been an interesting journey so far, as I'm not really a professional web developer. It's coding agents that made it possible to develop the app in this time frame. That said, I've learned tons, and that already makes it worthwhile.
Anyway, at the risk of repeating myself, BetterQualities helps you let go of unskillful mental qualities like procrastination and worry when they arise by showing you a short guidance text. You can also add your own guidance texts and keep track of which methods work for you. The app also helps you cultivate skillful qualities like happiness, agency, and heedfulness.
I have other features planned too, but I want to get the beta version out there as soon as I can. So, if you're curious, please check it out at betterqualities.com – the basic tier is free. I'd be very interested to hear what works for you and what doesn't. You can contact me here or through the app's feedback feature.
Finally, thanks to all ACX commenters who have offered me insight, advice, and encouragement. It's been invaluable.
I am really struggling to reconcile the beliefs of what seems like most self-described EAs and rationalists (and the bent of many ACX readers) with either their endorsement of, or silence about, the destruction of USAID. Taking an outside view, I can't shake the conclusion that it gives the lie to any claims these movements make about using rational evaluation of evidence to transcend affiliative motives. It makes these movements feel like just another mass virtue-signaling exercise. I hate losing faith in this.
The "E" stands for "effective". Is there anything the EA movement could be doing that would be *effective* in undestroying or replacing USAID?
"This is very very important so everybody needs to talk about it so we know you agree with us about what is important", is not an effective strategy for altruism. Deeds not words, and if you're offering just words they should be words the audience hasn't already heard many times from many sources.
Oh, sure. You could actually run for office.
I was where you are a couple years ago when it became obvious to me that the movement is sincere about their desire for a better world, as long as it aligns with their politics.
They will support anything* that improves human lives, so we shouldn't judge them too harshly. You probably have some sort of similar blindspot that hasn't had the chance to be exposed by events.
(anything that is sufficiently neoliberal world bank etc etc with a dash of woke being more dangerous than the president eg. starting a third regime change war in the middle east)
So, you're saying that these folks support murdering a third of the human race? I mean, that's what it would take to improve human lives by saving the environment (and that's a conservative estimate, it keeps going up.)
Back when I was 18 in Budapest, the only way to get a degree in programming was called "programmer mathematician", and I hated math. I have now gathered my thoughts on how it is possible to be an okay programmer but loathe math: programming is verbose and self-documenting, with long variable names and only one operation per line. So it is like:
diameter_of_circle = 2 * radius_of_circle
circumference_of_circle = PI * diameter_of_circle
Mathematics: 2πr. That sucks. It is not self-documenting; you have to keep remembering what each symbol means. And it is too dense. Once we are talking about double integrals over six one-letter variables, just how do you understand that?
Historically, math started with verbal descriptions, and evolved from "the thing multiplied by itself added to the thing" towards "x²+x".
And in future, maybe all math will be written in something like Lean.
Programmers also used short variable names before we had IDEs with autocomplete and automatic variable-name checking. There was big pressure before it became an industry standard to stop using single-letter variables (except in e.g. short loops). And most programmers still refuse to write documentation or unit tests.
So I'd say it's human laziness. I understand that if you have an equation that already doesn't fit on one line even if you use symbols, that you don't want to use full words. But nothing prevents you from writing the legend for the symbols before you start solving the equation.
> Historically, math started with verbal descriptions, and evolved from "the thing multiplied by itself added to the thing" towards "x²+x".
For this reason, I often joke that Cobol represents a return to math's original roots.
Clever, and endorsed.
If you are doing math on paper or a chalkboard and actually solving something, i.e. starting with one big equation and eventually figuring out what x is, then writing diameter_of_circle in every iteration would get tedious. Hence the difference.
The thing I hate about maths is not that it's terse, but something related to terseness.
Mathematicians tend to leave out "trivial" operations (that is, operations trivial to the author) while programmers have to actually compile their code and therefore can't leave out anything: they use abstractions, but they don't just skip lines of code at random. This gives a computer program greater discoverability.
I'm getting flashbacks to Calculus, where the book would skip steps in the Solutions section, so you couldn't even look up the answer unless you understood it already.
That missing part is left as an exercise for the student.
In both mathematics and programming, it's about levels of abstraction. Certainly when you have a relatively small number of variables, and you use them only a few times each, your system has quite a bit to recommend it. And there's a point in teaching the subject where this type of recollection is going to be useful.
If you're working in a 3 dimensional vector space, maybe thinking about length, height, width is helpful for manipulating whatever it is you're doing; if the space has 10,000 dimensions a different set of tools is required. You move to a higher level of abstraction, because you can't remember what each of the dimensions is measuring and can't visualise it anyway. Rather you have an algorithm that you can prove something about via reasoning, and analogies. Understanding each vector individually isn't the point anymore, the goal is to reason about the structure as a whole, how it encodes information and how you extract what you need from it.
The point of giving all the terms short names is that they become easier to THINK about.
This assumes symbolic logic is innate. I can assure you that's not always the case.
No, just that's it's widespread enough for it to be generally better than a more verbose notation.
What are your experiences with using LLMs to assist in worldbuilding for your stories? I've only used ChatGPT, and it sometimes gets things right, but everything has to be overly thematic. If your story has a theme, the LLM will cram it into aspects of the story that don't fit.
I tend to find that my exploratory writing back and forth with ChatGPT works for a while and then runs into a wall at some point.
Are there subreddits / Discords or other communities for writers looking to share tips, best AIs, or other advice for this sort of thing?
I do a ton of it: ChatGPT responds to pretty much everything with "Yes, and...", which is exactly what I want. Probably my favorite thing to do is have the LLM generate a list of cities/regions/countries/etc. with various characteristics, then I assemble them into a coherent world map like a jigsaw puzzle. Final step is actually drawing the map in WonderDraft.
Well, my advice is, if you can't think of backstory for your world, it probably means you don't need it, because it hasn't naturally contributed to your story. You don't need worldbuilding for the sake of it, any more than you need foreshadowing or a love interest for the sake of it. Tell your story, and then, if you still want to at the end, you can go back and tell more of it.
This is a situation like, "Can the AI taste my food for me?", or reading an AI summary of a book for fun.
Why would you want the robot to do your hobby for you?
Is there a meaningful difference between a 'learned world model' (LeCun-style) and an engineered one (typed, queryable, with executable invariants), or is this just an implementation detail?
https://engineeredworldmodel.substack.com/p/the-engineered-world-model
It sounds like there is? But he may be making "paths" that can be pruned, in which case, you're looking at "prior knowledge" informing the neural network, which should always be fair game. (So, he's making some "nearest neighbor" understandings that are fundamentally different than other people's (which are presumably more node-based)).
You could probably get to the engineered one with a learned world model, given enough time and monkeys.
I am extremely susceptible to being ragebaited by bad arguments, so I came up with a way to calculate their cumulative waste, completeness, and fixability, and their respective derivatives. Any thoughts or critiques? Is my article itself ragebait?
https://felixpolianski.substack.com/p/all-invalid-arguments-are-bad-but
This is a brilliant story: https://decivitate.jamesjheaney.com/p/and-the-war-came
Written in Mid-2020, it's an alternate history of the 2020 presidential election-to-inauguration period.
Oh my god! I read it at the time and have tried to find it again multiple times, but failed. I was sure I was never going to read this again. You have no idea how happy I am that I just saw this. Thanks!
Just fyi, the author's blog was mentioned a lot in the comment section of the guest post this week, "Last Rights," about expanding the House, and that's how I found the story.
Thanks for the link, that was really fun to read. I know I've come across the author at least once or twice before (I think he must comment on at least one of the substacks I read), but I had never seen that piece.
> As dawn broke on November 4th, 2020, America’s white progressive class contemplated revolution, suicide, and genocide, in that order.
This is kind of brilliant.
It’s bait. Not a perfectly presented dry fly, more of a Walmart knockoff hula popper flung from a Zebco 202.
I've preprinted a critique of the Cass Review: https://doi.org/10.5281/zenodo.18961485.
Thanks for this. I haven't read the full Cass Review, but given that it's almost 400 pages, is your critique only of a subset of the content in the review? As in, these were the errors you found in the one section you looked at? Or were these all the errors you found in the entire report and the rest of it is reliable?
My critique applies to the Review as a unified whole, since the distortions I document include ones central to Cass' position.
My table 1 gives four distortions. Distortions 2 and 3, on cross-sex hormones, are both from her section 15. Distortions 1 and 4, on puberty blockers, are both repeated more than once (including in the summary and recommendations section and section 14 and, for distortion 4, in the interim report).
Distortions 1 and 4 are the basis for Cass' pretense that puberty blockers don't work and that they might even be transing the kids. They are the core of the Review; the rest is all padding.
I'll recap distortion 1 here.
Puberty makes trans people's dysphoria worse, for the obvious reason: trans boys grow breasts they don't want, trans girls get vocal changes they don't want, etc. The point of puberty blockers is to prevent this. If you're trans and take puberty blockers from 13 to 16, and your dysphoria level stays the same instead of getting worse, good—that's exactly what it looks like for puberty blockers to work.
Cass pretends this successful stabilization of dysphoria was a failure to reverse dysphoria ("But dysphoria wasn't reduced!"). This is like pretending a pause button is a failed rewind button, or pretending a car's brakes are a failed reverse gear shift. My preprint compares it to pretending a parachute didn't work because the skydiver is only equally alive before and after landing.
(You'll hear people say the core of the Review is the finding that there's not much "high quality" evidence in the sense of evidence-rating systems like GRADE or Newcastle-Ottawa. But that's false; this isn't something the review discovered, this is something that was already known—everyone knew the Dutch study didn't have a control group, for instance, even if we think that's not fatal for the same reason the lack of RCTs of parachutes isn't fatal—and was cited as a reason to commission the Review in the first place, before being recycled into a finding of the Review and a reason to restrict puberty blockers. See my box 1a.)
Thank you for your service!
This song just came up on my shuffled “pre–light emitting diode euphoric” playlist. The Bay Area in a very different era: Alan Watts holed up in Druid Heights, Owsley investing in a custom pill press to stamp a “quality guarantee” owl on his tabs of acid, Jerry Garcia the de facto mayor of the Haight, Fillmore West alive and jumping. Ah, what a moment. Anyone with 15 cents in their pocket could read about it in Rolling Stone, even if they lived in dull and drab Dubuque, Duluth, or Durango.
https://m.youtube.com/watch?v=QeAhG-dee2c
On the homelessness problem: does anyone remember the old-time hobos (who did work) and the tramps (who did not)? Doesn't one see a parallel? Perhaps there are naturally nomadic human beings with a huge case of claustrophobia, or a dislike of doing the same work for a long time (ADHD), who like to wander and sleep outside. These people would do so much better as outdoor tramps and hobos than in the city!
The solution then is taxpayer-supported hobo camps out in the wilderness. They provide free food and lodging and require some work in return, but it is expected that in two weeks the residents will get bored and wander to another camp with a different kind of work. The work might be tending the vegetable garden that feeds the camp, and so on.
If this sounds appealing then you might find the WWOOF movement to your liking.
It stands for WorldWide Opportunities on Organic Farms (with a few previous names) and consists of doing a short day of manual labour (5 or 6 hours) in exchange for food and board.
I've never done it, but friends recommend it.
That actually sounds like not a good deal. With 8 hours, even on minimum wage, people can also buy clothes, toiletries, etc. Or maybe I am missing something. I mean, I would find it really sad if minimum-wage people spent 70% of their income on food, rent, and bills. Maybe I am out of touch...
Where I'm at, the wage for competent farming help is 3x the minimum wage. Of course, board and food scale by the number of people, so you're not just dividing the whole 8 hour work day... (and farm help isn't generally 8 hour days. try 12 hours, with extensive "off-duty months").
It's not economically optimal for the average ACX reader, or almost anyone else really, but it enables a lifestyle where you do a little work and then wander off to the next farm, and occasionally get a real job when you need the money.
My parents, who grew up in working-class (my mother) and middle-class (my father) households in small towns in Kansas and Oklahoma, remembered the Depression-era hobos well. So did their numerous older relatives all of whom I knew as a kid. To a person they were crystal clear with us 1960s/1970s youngsters that we were not ever to under any circumstances interact with any hobos (who still existed then though in much smaller numbers); that the popular-culture representations of hobos and hobo camps and hobo culture were absurd horseshit; etc.
When older I was told the sad stories of a couple of relatives who had become 1930s/40s hobos, one of whom had been murdered for his coat in a hobo camp and the other had died drunk on a railroad trestle.
In a Mad Men episode one of Don Draper’s backstory flashbacks shows his boyhood interaction with a depression era hobo who happened to be a classier act than his own stepfather. The hobo left the “dishonest man lives here” rune carved into the fence around the family home.
Hadn't seen that one, but it's a good example of the sort of pop-culture hagiography of hobos that my extended-family members pushed back against based on their real-world experiences and observations. (Somewhere once I saw a list of Hollywood films along those lines from the 1930s through the 1970s and it wasn't a short list.)
Can someone recommend good resources on understanding IQ, g, and how they relate?
Since this topic can be tied up in ideological commitments that could interfere with accuracy, I'd appreciate it if you also included your assessment on how reliable you think the source is. (That's the main benefit of asking here versus just Googling).
Cosma Shalizi's classic:
https://bactra.org/weblog/520.html
Edit: follow up on statistical technicalities. https://bactra.org/weblog/523.html
Just started reading, but my impression is that he considers the entire concept of "how much is hereditary vs environmental" problematic, because the numbers will depend on the granularity we will use for the "genotypes" and "environments". Like, if you use some genotype groups G1A, G1B, G2A, G2B, you would get some results, but if you instead group them as G1 and G2, you would get dramatically different results. (And this seems like a general problem with calculating heredity, not just with IQ.)
Is my understanding correct?
If yes, is this considered a problem everywhere, or only in IQ research?
The next objection is that variance (e.g. in IQ) is not a linear combination of variance in heredity and variance in environment. For example, if some change is triggered by a *combination* of a specific gene and a specific environment (for example, an allergy, and the presence of the allergen), this cannot be meaningfully split as "X% impact of the genes, Y% of the allergens".
I agree with this... and I kinda assumed that most people who study statistics are aware of it. Since the entire article is written as a debunking of Charles Murray: did Murray ever specifically claim otherwise?
An objection aimed at twin studies is that twins, even when raised separately, have a few things in common. For example, they were born on the same day of the same year. Families that adopt kids are different from the general population, so twins adopted separately will still have something in common, e.g. they are likely to live in the same city.
> Can someone recommend good resources on understanding IQ, g, and how they relate?
I've always thought Jay did a really good job here:
https://reasonwithoutrestraint.com/the-predictive-validity-of-cognitive-ability-2/
And he has multiple posts fleshing out the overall landscape past that long and very well cited post, including going into causality and much more:
https://reasonwithoutrestraint.com/category/social-issues/cognitive-ability/
I wish there were some official FAQ about IQ that would address all the objections and misconceptions (with new ones coming every day). I would use it myself.
To understand the methodology of measuring IQ, it helps if you understand the methodology of measuring non-obvious things in general. So that you know the difference between "this is unusual" and "this is considered perfectly normal if you cannot measure the thing directly, it's just someone trying to manufacture a controversy for ideological reasons".
It also helps to understand science in general. So that you know that when e.g. Einstein disagrees with something that Newton said centuries ago, whether it means "science working as it is supposed to" or "physics was debunked; even the most famous physicists disagree with each other, but we are supposed to trust their consensus, LOL".
*
Basically, if I remember my psychometrics lessons correctly, the way you approach measuring a new concept is this:
First, you brainstorm about questions that seem related to the concept. For example, if you tried to measure someone's "strength", you could include things like "can lift heavy things" or "can kick a ball so that it flies far away" or "can run a mile fast", because these things are (1) connected to your intuitive idea of "strength", and (2) are measurable: how heavy a thing one can lift, how far the ball flies, how fast the person runs a mile. -- Analogously, for intelligence, you could use "can explain abstract things", "can solve puzzles", "can remember things", "can follow instructions precisely", etc. Also, different types of abstract things, different types of puzzles, etc. -- Do not worry about adding "wrong" questions here, because that will be addressed in the following steps.
Second, you collect data. This is the empirical part that cannot be predicted until you actually do it. For example, you take 10000 volunteers and you measure how heavy objects each of them can lift, how far they can kick a ball, how fast they can run a mile... whether they understand the concept of separation of powers in the Constitution, whether they can solve a quadratic equation, solve a sudoku, read a list of 10 numbers and then repeat it 30 seconds later.
Third, this is the cool math part, instead of comparing the people against each other, you compare the tasks on the test against each other. What is the correlation between lifting a weight and kicking a ball? What is the correlation between solving puzzles and explaining abstract things? The results of this will depend on the collected data; you cannot accomplish this with armchair reasoning only. There is a statistical method called "factor analysis", which is a math way to ask "are all these things related to one underlying concept? or is it more like we have two or three different concepts here, and some tasks relate to the first concept, some tasks to the second concept, etc.?"
Finally, you take the outputs of the factor analysis, which basically tells you "these tasks are one thing, these tasks are another thing" without telling you what those things are (since the math only deals with numbers). Then you look at the subsets and conclude "ah, so we have many correlated tasks which seem to be about strength as we usually imagine it... plus a few tasks which, when I check what they have in common, seem to be all about endurance... so I guess strength is one thing, endurance is another thing, they are weakly correlated to each other, but different tasks within the same category are strongly correlated to each other".
Also, you can use the newly gained knowledge to run the process again, maybe this time with one research team focusing on improving the strength tests and another team creating endurance tests. Later research then examines how stable the tests are when you measure a person now and then again a month later; etc.
The important part is that this is about mathematically analyzing empirically collected data. Without the data, you could not know what the results would be. (It is possible to imagine a species where strength is strongly connected to endurance, and another species where it is not.) I am saying this because many people on the internet will either defend or oppose the concept of IQ using armchair reasoning only. Which is interesting for an edgy debate, but it is not what psychometrics does. When someone e.g. denies the concept of IQ, the question is whether they deny the methodology or the data. (The methodology may be flawed, or the data may be fake, but these are two different objections.)
What have we learned about intelligence this way?
First, we have learned to separate "intelligence" as an ability from things like "knowledge". In general, more intelligent people have acquired more knowledge, so yes, the two are correlated. But knowledge is impacted by things like "what culture are you from", "what education did you have", "what do you specialize in". This is why intelligence tests are generally moving away from knowledge-based questions (what you did *before* taking the test) and towards puzzle-like questions (what you can do *during* the test). Of course, nothing is perfect; it is plausible that the ability to solve puzzle-like questions is also impacted by how much practice you have solving puzzles. But compared to the IQ tests from 100 years ago, the tests today do not test you on your mastery of English or your knowledge of WASP culture trivia.
Second, there was a plausible hypothesis of "multiple intelligences", which was empirically disproved. With the emphasis on the word *empirically*; because multiple intelligences make intuitive sense: we know that some people are scientists and other people are rock stars, and they cannot switch their roles easily, so it sounds plausible that there are two different types of intelligence. But when we separate the potential from the experience, it seems that the difference is mostly in the experience, and that if you could travel in time, take the six-year-old future rock star and make them do science instead, they would statistically become a successful (even if not exceptional) scientist, too.
Third, lots of technical details, some of them still disputed. (But note that "disputed" is science working as usual: people propose various hypotheses and collect data until they arrive at a conclusion. For some questions we are simply not at that last step yet.)
*
Also, a historical note. IQ tests historically started as tests of whether children are ready for school. Notice that children differ from adults in that their abilities change quickly with age (around the age of six, when school starts), and they are less specialized than adults. So it makes sense to say things like "this six-year-old child is as smart as an average eight-year-old child", which is where the concept of IQ originally came from. (In this case, the IQ would be 100×8÷6 ≈ 133.) Only later was the concept redefined to apply to adults, too.
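The historical ratio definition is a one-line formula; this is just the arithmetic from the comment above, nothing more:

```python
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    """Ratio IQ as originally defined: mental age / chronological age x 100."""
    return 100 * mental_age / chronological_age

# The example from the text: a six-year-old performing like an average eight-year-old.
print(round(ratio_iq(8, 6)))  # 133
```

Modern "deviation IQ" instead places you on a normal curve relative to your own age group, which is what makes the concept usable for adults.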
This sounds like Mensa's sole reliance on Raven's Matrices was not such a good idea? At least that was the case 25 years ago, when I tested and barely missed the entry level. They also had an experimental verbal IQ test, which was not approved for membership (yet) because it is too culture-dependent. That one I got 100% on. If they allow entry based on purely verbal tests now, I should try again.
I am not sure, maybe it differs by country, but it seems to me that Mensa mostly uses Raven's matrices. At least if you go to the testing they organize.
I think (but you need to check the rules of your local branch) that it is also possible to get tested by a certified psychologist (where perhaps you could choose the test you prefer? not sure) and then send the results of that test to Mensa. But you need to check it, whether this is true, and whether it applies to all tests.
Finally, there is the question of whether it's worth doing at all. I was a Mensa member myself, but it seemed mostly pointless. Generally, you will meet visibly smarter people at a local ACX/LW meetup, if there is one, than at the local Mensa meetup.
This seems controversial to some people, like when there is an ACX/LW survey and people report their tested IQ, many people say "that's unbelievable that the average is so high, you guys must be making that up", but if you actually meet both groups in real life, obviously ACX/LW is the smarter one, so if Mensa is IQ 130+, I find it perfectly plausible that the average at ACX/LW is higher. "But only a *tiny* fraction of population has IQ that high!" Well, only a *super tiny* fraction of population reads LessWrong, so why not? (ACX is more of a mixed bag, some people only come here to discuss politics. But I guess there is another selection where only some readers come to the meetups.)
https://www.cremieux.xyz/p/mensa-the-above-average-iq-society
Yeah, that sounds plausible.
I personally find this very sad, because I think that the idea of "selecting high IQ people" is a good one. To some degree I can get this indirectly, e.g. by associating with academically or otherwise successful people -- but the problem is that it also selects for profession and lifestyle (e.g. excludes smart plumbers or housewives) and age (most students do not have professional accomplishments yet). I would like there to be a place where smart students can get advice from smart adults, or where mothers of smart kids can share childcare advice.
But Mensa completely botched the implementation. I wonder how difficult it would be to fix it. You could counter the loser self-selection effect by screening the entire population. (Would be a lot of work. Well, you could start in a small city and see how it works.) It would be more difficult to fix the effect of "IQ tests are noisy, and the base rate of 'IQ 120 and got lucky today' is greater than the base rate of IQ 130". A quick idea is to require three IQ tests and admit those who have passed two? (Someone should crunch the numbers to check whether this works.) That would also solve the frequent complaints about Raven matrices being too puzzle-like; we could use three *different* IQ tests. If you only pass one test, you can try again in 3 years; if you pass none, in 10 years.
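Someone should crunch the numbers, so here is a quick Monte Carlo sketch. The noise level (a test-retest SD of ~5 points) and the 130 cutoff are assumptions for illustration, not real test parameters; the point is only to compare "pass one test" against "pass two of three" on how many admitted people truly clear the bar.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Assumed population: true IQ ~ N(100, 15).
true_iq = rng.normal(100, 15, n)

# Each person takes three tests; each score = true IQ + assumed noise (SD 5).
scores = true_iq[:, None] + rng.normal(0, 5, (n, 3))

one_test = scores[:, 0] >= 130                  # admit on a single test
two_of_three = (scores >= 130).sum(axis=1) >= 2  # admit on 2 of 3 passes

def precision(admitted):
    # Share of admitted people whose *true* IQ really is 130+.
    return (true_iq[admitted] >= 130).mean()

print(f"single test:  {precision(one_test):.2f}")
print(f"2 of 3 tests: {precision(two_of_three):.2f}")
```

Under these assumptions the 2-of-3 rule does admit a "purer" group, at the cost of admitting fewer people overall -- the lucky-120 base-rate problem shrinks because luck has to strike twice.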
There are also other things that I found weird as a Mensa member, that I would like to fix. (This may be country-specific.) Many people in Mensa seem to treat "high IQ" and "a Mensa member" as almost synonyms. As if by failing to pay the membership for current year you lost your intelligence. I would take greater care to maintain two separate concepts of "Mensa-certified high-IQ person" and "an active member of Mensa".
For example, instead of publishing a magazine for paying members only, I think it would be better to publish a blog, where anyone can read the articles and only the certified smart people can comment. That way you would simultaneously get a smart debate and advertising/education for the wider public. (Also, there would be a blacklist for people who are certified smart but too disruptive for an online debate.) I am saying this because Mensa-related online groups that *don't* check for actual membership are even bigger crackpot magnets than those that do.
Finally, I find the organization of online debates by country really stupid. Each country's Mensa has its own web forum? What sense does that make? Instead, I would make one worldwide forum, with subsections organized by language and interest (or language only, for small languages), where anyone can join whichever subsections they want. Also, the forum would be open to all (non-banned) certified smart people, whether they are active Mensa members or not. I mean, the power of a network is in its numbers; why reduce them artificially?
The MENSA members I knew were kind of "goofy", if that is the right word. Obsessed with puns and puzzles and not much into real life productive stuff.
I'd say that productive people are too busy to participate in Mensa... but then, aren't they also too busy to participate in ACX Open Threads?
Heh. Taking 5 min breaks from work does not reduce productivity much.
Hypothetically if a college entrance examination were designed where anyone who scores over 90% is selected into a lottery, and the successful candidates are then chosen from that lottery, what would be the pros and cons of that system?
It's not exactly fair (a 91% scorer could be selected over a 99% scorer), but I suppose nothing in life is (being born without any disabilities, for example). I think it could reduce the insane examination grind you see in many Asian countries, while still ensuring a high level of competency. This system could be scaled up as needed in countries where examinations are the sole factor in university admissions, though it would be difficult to implement in the US and other Anglo countries, where the standardized exam is just one factor.
If I owned a university, I would prefer selling the spots to the highest bidders among those who scored over the threshold. That would have the same pros and cons as your system but with increased revenue for the university.
This “solution” has the absurd con of barring the poor from higher education.
That is incorrect. E.g. potatoes are sold to the highest bidder, but the poor can still afford potatoes.
To run with your analogy, our supply of potatoes isn't nearly as constrained as our supply of higher education.
I'm not sure what constraints you have in mind. About half of young people in OECD countries have received formal higher education, so it doesn't seem very constrained to me.
Not all formal higher educations are equal.
The poor shouldn't be enabled to take on life-crippling loans to acquire higher education, either.
Can't we just expand the system until everybody over 90% is in?
Everybody over 90% is already in.
> I think it could reduce the insane examination grind you see in many Asian countries, while still ensuring a high level of competency.
If your goal is to go to a random university, there is no need to grind. The cutoff for being admitted to *a* university in China was, last I checked, the 40th percentile. Below that, you're relegated to a technical school, like nursing.
Are you imagining that different schools will each have their own entrance exams, or that there will be one entrance exam and scoring at the 90th percentile qualifies you for admission to any school, while scoring below the 90th percentile disqualifies you?
Pros:
- Admission is nominally by test; the corrupt system we use now would be illegal.
- That's it.
Cons:
- The corrupt system we have now has been illegal for the entire time it's been in place, which didn't stop anyone from implementing it, or from openly bragging about just how illegal their admissions systems were. So the pro is pretty weak.
- It's no longer possible for talented students to meet other talented students. You've intentionally diffused the national talent pool evenly across the country. To the extent that talented people benefit from working with other talented people, and also to the extent that other people benefit when talented people cooperate on projects, you've intentionally destroyed all of that value.
If a job application were designed where anyone who scores at the 90th percentile is entered into a lottery, and the successful candidate is chosen by lottery, what would be the pros of that? If you extended that system to every position at every company, what would be the pros?
To me as a Hungarian, the Anglo concept of going to a prestigious uni and not just any is rather unintelligible, and I am surprised China has it now too?
Our perspective is that if any uni gives you say a chemical engineering degree, then you are officially certified, permitted to do such a job.
What would a degree from a prestigious university bring? Entry-level, junior jobs do not require more skill than any uni can give, and later career development depends on work skill and work experience, not education.
Although that is another curious Anglo thing I heard about - people go back to do a Master's when they are like 35 years old, because otherwise their career slows down. What can they learn there that they did not already learn at work? I do not value education too much, because our teachers live in ivory towers and have no idea what skills to teach for the purpose of corporate career advancement. Like imagine an architect exam revolving around memorizing the blueprints of famous buildings.
> What would a degree from a prestigious university bring?
As a matter of fact, I just researched and wrote a post about this.
Graduating from a T20 (ie Ivy Plus and a few more):
> [C]omparisons of students who are admitted vs. rejected from the waitlist can be used to identify the causal effect of admission for marginal applicants.
> Using this design, we find that being admitted from the waitlist to an Ivy-Plus college increases students' chances of achieving early career upper-tail success on both monetary and non-monetary dimensions. The causal effects of admission to an Ivy-Plus college are much larger for students with weaker fallback options – e.g., whose colleges in their home state channel fewer students to the top 1% after college. Exploiting this heterogeneity in treatment effects, we estimate that the marginal student who is admitted to and attends an Ivy-Plus college instead of the average flagship public is about 50% more likely to reach the top 1% of the income distribution at age 33, nearly twice as likely to attend a highly-ranked graduate school, and 2.5 times as likely to work at a prestigious firm.
This is big because earlier studies used to show that waitlisted students who didn't attend had pretty similar outcomes, but this most recent analysis, Chetty 2023, shows that's no longer true.
It vastly increases your chances of becoming a Senator or other top politician, an F500 executive or CEO, a billionaire, a startup founder, and much more.
From my post, which is rich with visuals:
https://performativebafflement.substack.com/p/on-the-ivy-red-queens-race
Both China and the US have basically the same issue. Employers realized that if you have a choice of hiring from UC Berkeley or Northwestern Oklahoma State University, you might as well hire from the more prestigious/competitive/elite school. There are so many college students looking for jobs that you need to be from an "elite" school to be competitive on the job market. And thus simply going to any college is no longer sufficient. China is the same, but 10x worse.
Well, like I said, the official cutoff for going to a Chinese university is somewhere around the 40th percentile. The idea is that 60% of all people go.
> There are so many college students looking for jobs that you need to be from an "elite" school to be competitive on the job market.
Why is the number of available jobs not related to the number of people looking for them?
> Why is the number of available jobs not related to the number of people looking for them?
There's probably more quantitative answers from economists, but my own personal take on this is that there are two main factors driving this.
1). Power laws have made it so that the business returns to the top winners at the company level have gotten much bigger.
2). The returns to the top end of "talent" in employees and founders, with talent some combination of ambition, IQ, self-starting, and business or user savvy, have been growing faster than the floor, which is also rising.
So for instance, a big chunk of jobs is stuck in "basically anyone can do these" territory, and the value to the employer here is minimal, there's no significant multiplier to somebody doing them 2x or 10x better. A dishwasher or bus driver who's 2x better just doesn't drive much measurable incremental value to the organization relative to their salary, so nobody cares how talented they are.
But this floor has been creeping ever upwards - an accountant or spreadsheet jockey or HR person or compliance person or regular IT support person is in this cohort now, too, and these are your non-T20-degree jobs. Yes, you need "competence," and need to gatekeep sufficiently to ensure that, but past competence, there's not a lot of return to the business.
Where do the returns matter most? In open-ended roles that have many-to-one dynamics to take advantage of the power laws in point 1), and where the problems are complex enough that the difference top talent can drive is really noticeable. Here's where your 2x and 10x performance is actually directly connected to business outcomes.
If you make a genuinely better software (Tik Tok, say, or one of the flagship AI's), you'll basically immediately pick up 1B+ users, and that's potentially a trillion dollar revenue surface.
Think gmail, google maps, facebook, instagram - 2-3B unique MAU (monthly active users), and all free, and Google and Facebook make $200 - $300 per year per US user.
And there are other domains here where that top talent matters - finance, AI, maybe consulting, etc.
What does this mean, in the broader job market? There's a furious arms race for Ivy Plus grads, where finance and AI and FAANGs and VC funds furiously scrabble for the tiny slice of people who are most likely to do 2) well, paying them 6-7 figure comp right out the gate, and an anemic "meh" job market for everyone else, where nobody much cares about the particular person being hired, because they're just a cost center, and them doing their job much better than the floor just doesn't matter much.
Am I saying a dumb accountant isn't going to cost you money? No, definitely not - but that's the floor, you just have to get to "competent" and lots of people can meet that criteria, and going above it isn't going to drive that much direct value.
But a visionary founder, or developer, or CEO, or quant, that has a bold idea, tests and verifies it on the small scale, then executes? A whole team of those people? That's worth more and more, because of power laws, potentially billions or trillions. And <<1% of people are capable of being that person.
And the thing with cost center jobs is that you try to minimize them. Hiring more HR people or spreadsheet jockeys only happens when it's strictly necessary: when your company has been growing because of the revenue-generating people, and you need more ancillary support to keep everyone going, so you don't waste the revenue-generating people's time on reading 100 resumes a day or filing compliance reports or whatever.
There's probably another secular trend here worth mentioning where software and other B2B services are continually making these cost-center jobs more productive, so you need to hire fewer of them, and / or eating them by providing a cheaper version (outsourcing your call center staff, cleaning services for your corporate headquarters where they don't pay generous benefits, but you would, etc).
So cost center jobs are minimized from the start, and internally and externally, people are continually trying to make them more efficient so you need fewer of them.
Hmm. When I worked in the FAANGs in Silicon Valley, as a European transplant, I didn’t get the impression that the standard Ivy League colleges mattered at all. What generally matters in software is previous experience and interview ability. They hire plenty of non Americans and literally don’t know about the universities people graduate from. Stanford did matter but it was local.
Your instincts are correct. The gap in education between selective schools in America is absurd. A better USA wouldn’t have the concept of prestigious universities.
What gap in education? There's a small one. But everybody is aware that you get broadly the same education at Eastern Kentucky University that you do at University of California, Berkeley. The reason as an applicant that you try to attend one rather than the other is that the students are different. And the reason as an employer that you try to hire more from one than the other is that the students are different. Nobody worries about the education at all.
Culturally different? That matters much? Like does the blue-red tribe thing run that deep?
Either you want someone who f*cks goats, or you don't.
(This is NOT an anti-rural comment. This is an anti-Ivy League comment).
No, not culturally different.
The value of going to a prestigious university is not so much that your education is better, but in the networks you can build.
Hm I guess that is not for introverts. I always got my jobs through ads or they hunted me down on LinkedIn. Perhaps that is why I have the same kind of job for 22 years, except a "senior" title in it. But I really don't want to have to network and make friends and so on.
> I think it could reduce the insane examination grind you see in many Asian countries, while still ensuring a high level of competency.
I knew someone who was from an Asian country and was attending a famous US university that takes a very small percent of applicants. They told me that in their country it was possible, by paying a *very* substantial fee, to see the SAT questions in advance. This person had attended an expensive private high school in Asia, and said many of their classmates' parents had paid the huge fee for an advance copy of the SAT questions their child would be answering the next morning. The person who told me had had that advantage themselves, gotten a perfect or near-perfect score on the actual SAT, and been accepted at Top U. (BTW, they felt extremely guilty, and oppressed by a somewhat reality-based Impostor Syndrome -- although they were in fact doing fine in their very demanding courses.)
Is this a sort of illegal bribery?
Yeah, it was done by bribing someone who worked for the company that administered the SAT.
In the US you can pay $10K to just have someone take the SAT/ACT for your kid: https://en.wikipedia.org/wiki/Varsity_Blues_scandal
(they did get caught, maybe you have to pay more to get away with it)
So the candidates are then assigned randomly to schools? I think this doesn't work out. Some schools remain genuinely elite and/or difficult, and someone from the 92nd percentile will almost certainly struggle (by which I mean: fail badly) trying to do EE at MIT or Philosophy at Princeton.
I think it's somewhat difficult to predict a student's performance in college based on high school indicators; it's definitely difficult to predict a student's performance at any given job based on college indicators. In any case, many elite colleges are on record saying they have way more qualified applicants than they can admit.
So I don't think expanding elite admissions from 99th %ile to 90th %ile will have a significant impact on student success. Like, maybe it will weaken the value of the network at the top universities, since the people who are already rich and powerful will be more dilute across different schools. But I'm not sure that's something we want to preserve anyway.
Overall I'm not sure this random assignment idea is positive. But I don't think college difficulty is the reason to dislike it.
So, there are (unconfirmed) reports that Iran is deploying sea mines in the Strait of Hormuz. Can one of the Military Understanders on here tell me how bad that is for future use of the strait? Like, supposing the war ends relatively quickly (and leaving aside how likely that is), can the mines be swept up quickly as well, or are we now committed to spending months cleaning the strait up before anyone trusts it enough to send oil tankers through?
Mine clearing would take a couple of months once the shooting stops, and is generally regarded as infeasible while ships are still under serious threat.
Also the US apparently has quite limited capacity in this area.
I woke up this morning with a fresh and optimistic perspective.
LLMs are freaking amazing, and we should be jumping for joy that they work so well. The fact that we can get so much actual practical utility out of something sufficiently simple that we *don't* have to worry about whether it has any sort of actual personhood/consciousness/etc is absolutely freaking great.
If you'd asked me a decade ago then I'd have predicted that in order to do this sort of work you'd need some sort of black-box reinforcement-trained AI whose inner workings were sufficiently mysterious that I'd need to worry it was actually conscious in some sense. I also would have predicted that this wouldn't be possible in my lifetime. But instead we have found that the simple trick of mathematically predicting the next token based on a sufficiently large data set is sufficiently powerful that we can do genuinely useful work very quickly without in any sense creating actual intelligence.
I know that people are going to try to talk me out of my optimism. But let's at least think how much happier we should be that we got practically-human-level-for-many-tasks intelligence out of a language model, instead of out of some kind of wacky reinforcement-trained black box or brain simulation or something along those lines.
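The "simple trick" of next-token prediction can be made concrete with a toy bigram model (a deliberately crude sketch; real LLMs learn vastly richer statistics with neural networks, but the training objective is the same: predict the next token from what came before):

```python
from collections import Counter, defaultdict

# Count which token follows which in a tiny corpus,
# then "predict" the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(token: str) -> str:
    """Return the most common next token seen after `token`."""
    return follows[token].most_common(1)[0][0]

print(predict("the"))  # "cat" -- it follows "the" most often in this corpus
```

Scaling this idea from bigram counts to transformers trained on trillions of tokens is, of course, where all the engineering lives.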
> black-box reinforcement-trained AI whose inner workings were sufficiently mysterious that I'd need to worry it was actually conscious in some sense
This also describes LLMs?
> This also describes LLMs?
No, I don't think current LLMs are "sufficiently mysterious".
I recommend 3b1b videos on the topic, e.g. https://www.youtube.com/watch?v=eMlx5fFNoYc
If your main cause of non-optimism was that LLMs might have personhood, then I agree your optimism is warranted.
There has been discussion of the UK's "anti-free-speech" situation on recent threads. Now Evan Edinger, US expat living in the UK, has posted a video on the topic.
https://youtu.be/tB3WVygAM8I
I'm interested in opinions. Is he cherry-picking to make his points? He doesn't mention differences between the US and the UK as to what constitutes an actionable threat of violence, would this significantly change the analysis?
You know the old Reagan joke about how even in Soviet Russia, anyone is free to denounce the US President? His seems like the same kind of argument.
The UK can and does jail people simply for speech that is offensive. They don't have Brandenburg v Ohio over there. 'imminent lawless action' etc. not required ... indeed I don't think even any element of direct incitement is needed.
I outline a path to an ultra high trust society as an alternative to retreating into defensive clans as life gets harder.
Full details here- https://recombinationnation.substack.com/p/beyond-weird?r=f45kp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
Have you considered Urbit's idea of digital feudalism? Someone's trustworthiness is certified by someone higher in the chain. How does the chain get determined? Well, it is simply the first people who bought Urbit stars, but the idea is that it will result in market competition, where high-ranking nodes who do the certification badly will not be listened to, or will even be excluded.
The exact problem to solve here was spam, because Urbit is a system of decentralized individual servers with no central spam management the way Gmail has.
So the idea is the "baron" says "that dude is not a spammer, trust him" and the "prince" says "the baron can be trusted on deciding whom to trust" and if that dude is indeed a spammer both lose reputation.
The Empire in the videogame Elite: Dangerous also invented something similar. Senators are not elected, but you can choose which already-existing one you want to belong to. Depending on that, different laws apply to you. For example, there are high-tax, high-welfare senators and low-tax, low-welfare senators. Yes, of course this means the second kind have mostly poor citizens, but first, some rich people are altruistic, and second, a social insurance pool among the poor is still useful.
That feels a bit like how LinkedIn was billed at the start: We were told not to accept a link except from someone we actually know. And yet now I get LinkedIn spam all the time.
Found this interesting series of articles related to Epstein; a personal experience of meeting him.
https://lydialaurenson.substack.com/p/its-just-politics-the-time-i-met
https://lydialaurenson.substack.com/p/the-epstein-files
https://lydialaurenson.substack.com/p/social-contracts-moral-order-weaving
It is not just sex vs. power. Competence can generate power (it does not always), Epstein was obviously extremely competent (in his case it did), and the no. 1 thing women value in men is competence. Maxwell was madly in love with him and was desperate to have his child. And likely she was not the only one, so while the evidence is clear that coercion happened, I would expect that in many cases it was not necessary (though in underage cases it is still a crime).
I have researched pimping before, because I found that stuff in the GTA games amusing. Reality was less so. Pimps generally seduce, not coerce, so that is kind of a plus. This is why the stereotypical pimp dresses flashy and "pimps" his car. The big problem is they generally tend to do it with underage girls, who are easier to impress with such superficial things - runaways from some shitty Romanian orphanage who have never eaten a fine restaurant meal, and so on; think also of Tate here as the truly stereotypical pimp. They also lie. Once a 22-year-old prostitute asked me whether it was likely that her pimp would divorce his wife and marry her. Oh boy... I rather did not answer. This is what getting one's education in a shitty orphanage does...
Anyhow, I think that if Epstein had stuck to adults, it is quite possible that coercion would never have been necessary, as he could just have seduced women into prostitution easily. Now, pimping even adults is generally illegal, so he would still be a criminal, see also Tate, but of a lesser kind.
On a side note, the truly successful criminals do 95% of their activities legally and very competently, and then "boost" it with crimes. I think the second Godfather movie represented this well - "a very competent businessman without inhibitions" - but I don't remember it that well now.
“the no. 1 thing women value in men is competence”
To my mind, the biggest mystery about Epstein is: what exactly was he competent *at*?
His life story, as I understand it, is sort-of-rags to riches, but we never see him creating a company or building a better mousetrap. This puzzle is why conspiracy theorists create silly stuff like Mossad agent and blackmail rings.
He was clearly a charismatic psychopath, but that’s not a superpower; the world has plenty of those who do not rise to Epstein’s prominence.
My info was that he was just playing the stock market, like Soros or Buffett.
He apparently was very good at moving money around in such a way that the profit still finds its way home but the taxman doesn't find it. And he was pretty good at schmoozing with people richer than he was. That's a rather potent combination, which can end up with managing other people's billions and claiming enough in fees to make one a centimillionaire. Which is all Epstein ever was; he never managed to claim the "billionaire" mantle for himself.
Neither of those is all that rare either, I think, but perhaps the combination is. And perhaps being even 1% better at the former than 99% of his competitors is plenty.
It still seems odd, though. There probably wasn’t much about Buffett and Munger that made them seem destined for greatness, but in retrospect you can point to a lot of things done right. Of course, they weren’t trying to avoid scrutiny.
Yeah, my model of Epstein is also... a pimp for teenagers, but way more powerful than usual.
If there was coercion, it was probably a combination of constantly pushing the boundaries and "when you are on someone's private island, can you simply say no and leave, if you disagree with having your boundary pushed? plus you have a contract, and the guy has lawyers, he could argue that you have signed up for more than you have realized, and that he would destroy you at court, and you are just a scared teenager physically isolated from your family and friends and anyone who could give you impartial advice". Not like... locking someone in chains in a dungeon. So, arguably legal, if you have enough money for lawyers and your buddies are powerful politicians. But you definitely wouldn't want to have the details written in a newspaper; the idea is to use the legal threats before the news gets public.
God bless Max Miller, here is his 19th century shepherd's pie deep-dive:
https://www.youtube.com/watch?v=t1O1hcixZDU
I was going to wax very indignant about William Cobbett's description (we don't scratch the potatoes out of the earth with our paws! we dig them out with sprongs!*) but then I remembered well, uh, yeah okay *before* mechanisation, once the potatoes are dug out you *do* pick them by hand - but we wash them! before boiling! Okay yeah they got turned out onto the table but they were washed!
Machine harvesting:
https://www.youtube.com/watch?v=g7rHsK2sdDQ
Digging by hand (this American gentleman demonstrates how):
https://www.youtube.com/watch?v=47tb-eeBr5c
Even up until quite recently, potatoes were harvested by potato pickers; for a long time it was traditional for Irish workers from the West and North to go to Scotland to work on the harvests:
https://www.rte.ie/archives/2016/0315/774954-the-tattie-hokers/
*Well that's what my maternal grandfather called it, it's not a pitchfork, maybe a hayfork:
https://www.farmersjournal.ie/news/news/the-great-debate-is-it-a-fork-is-it-a-sprong-no-its-a-grape-203817
I have some connection to the trade; it's one place where climate change is actually being annoying right now in a non catastrophic way.
The frosts where I am come later or don't come at all, so my potato skins never fully cellulose their way to full strength. So if I don't individually pick them out of the ground like the most delicate of prima donnas and carefully cure them on the softest of cottons, they bruise or rot within the week.
You will have noticed this in mass-ag potatoes over the last couple of years as well, if you buy in bulk (more wounds from the forks/tumbler), and it will get worse unless the lab coats figure something out with the test tubes and such; because you can't Norman Borlaug your way out of this one with nightshades.
Tell me about it; the rain right now has been so plentiful (even in rainy Ireland) that the ground is soggy and we might see a re-run of 2024 when wet weather meant planting was delayed:
https://www.rte.ie/brainstorm/2025/0827/1530240-ireland-new-potatoes-summer-2025-weather-storage-consumer-sentiment/
In the movie The Empty Man, there is a character named Paul played by Aaron Pool who bears a striking resemblance to Jesse Pinkman played by Aaron Paul.
I don't know what this means but it is not a coincidence.
I assume Ron Paul is involved somehow
Any crystal meth in The Empty Man?
I'm still working out the Sam Elliot / Elliot Smith / Sam Smith implications.
I like that Anthropic is probably the most high-minded AI company, and their business model is "We will create a tool that makes companies massively more productive and charge companies to use it".
Meanwhile Meta is run by people who view AI purely as a great business opportunity, and their business model is "We will buy a social network for people's AI agents" for some reason, and "We will build superintelligence and use it for better ad targeting"
https://www.axios.com/2026/03/10/meta-facebook-moltbook-agent-social-network
Edit: Also, on AI, does anyone know what Paul Christiano does nowadays? The last thing on the internet about him is him joining NIST a few years ago.
Also, how the hell can Anthropic beat Google? Opus 4.6 is the first truly autonomous coding AI, and Gemini Pro is nowhere near. I mean, Google obviously has the best training data, decades of experience developing software that finds information on the Internet and answers questions (they could do "pharmacy near me" way before AI), and infinite money. Also ecosystem integration: Gemini puts stuff into my Google Calendar, can analyse Google Spreadsheets, and so on. What does Anthropic have? How could they do a better job?
Other than the problem that Google turned into a competition-focused, not customer-focused company...
From my testing, Antigravity is useless. Perhaps if you use it for JavaScript... but for PowerShell it's useless. Also, Google AI Studio does not understand my kinds of business cases well. Gemini Pro does an okayish job...
A comment that I left on Markus Englund's announcements of anomaly detection results -- I'd be curious to hear other people's takes on it. Am I being too paranoid about results like these occurring by chance?
--
Hi! I applaud your project and think it has significant potential to positively affect the science ecosystem. However, it seems critically important to provide more quantitative analyses than 'That still leaves the issue of the 6 pairs of values that each happen to end with the same digit. It would be supremely unlikely for this to happen purely by chance.'
Exactly *how* unlikely is it for 12 of n cells to fit into pairs with the same last digit, in at least one of 600 datasets? And, crucially, how unlikely is it once you correct for multiple hypotheses? Intuitively it seems to me that there are very many hypotheses about data errors/manipulation that would be as surprising as this.
Particularly when you're suggesting the possibility of career-destroying fraud, I think it's incredibly important to have a much stronger case than 'it would be supremely unlikely'. Hopefully you've done that work behind the scenes and just didn't include that in the blog post, but I think transparency on this issue is vital for your project.
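For what it's worth, the "how unlikely" question is directly simulable. Below is a minimal Monte Carlo sketch; since the post doesn't say how many cells the dataset had, `n_cells=20` is a placeholder assumption, as is the trial count.

```python
import random

def pair_count(last_digits):
    # Number of disjoint same-last-digit pairs: floor(count/2) per digit.
    return sum(last_digits.count(d) // 2 for d in range(10))

def p_at_least_k_pairs(n_cells, k, trials=20_000, seed=0):
    # Estimate P(>= k same-last-digit pairs) when last digits are uniform.
    rng = random.Random(seed)
    hits = sum(
        pair_count([rng.randrange(10) for _ in range(n_cells)]) >= k
        for _ in range(trials)
    )
    return hits / trials

# Placeholder numbers: 20 cells, 6 pairs, matching the "6 pairs" claim.
p_single = p_at_least_k_pairs(n_cells=20, k=6)

# Multiple-comparisons correction: chance of seeing this pattern in at
# least one of 600 independent datasets.
p_any = 1 - (1 - p_single) ** 600
```

The answer depends heavily on the table size and on the 600-dataset correction, which is the point: "supremely unlikely" needs a number attached to it.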
I'm not a scientist, but I've been around Excel and data for a long time in business organizations. Here are some comments:
1. Manual data entry is malpractice.
2. The proper measure of errors is the risk. The stakes are always the fully extrapolated worst case, not the most probable case.
3. The risk of error needs to be analyzed, quantified, assessed, and audited at every point.
4. It is an error to attribute to malice what can be explained by accident.
I'm astonished that the risk of error needs to be empirically demonstrated in the case of labs. Everywhere else I suspect it is assumed.
Me too, as an ERP consultant. Basically, manual data entry is to be avoided for both accuracy and cost reasons. Posting a big vendor invoice in euros instead of dollars literally fucks everything up - COGS, hence margin, profit calculations, the most important business data ever!
However, automating data entry also has costs, and it is sometimes hard to do when the data is not well structured - AI is increasingly helping with that, e.g. OCRing vendor invoices.
Another fun thing. So the pointy-haired boss tells me that if our margin per product is not between 10% and 20%, some data entry error must have happened. OK, so broadly they knew how much it should be, and my report only delivers more accuracy? Also, they can catch big mistakes, but not small mistakes, like it being 16% instead of 19%. In that case, why am I even making this report? What use is it if it cannot deliver that accuracy?
Thanks to you both!
> Manual data entry is malpractice.
My experience has been that scientists tend to run well behind on that; in many areas reproducibility is a bit of an afterthought. Science is pretty reliant on the honor system (and reputation) in a way that isn't nearly as tolerated in business.
The situation has gradually improved, but there's still quite a ways to go.
That said, while scientists could and should do better, I still think we should expect the same level of rigor from those suggesting fraud, since scientists live or die by their reputation, and even the suggestion of fraud can be career-destroying. "Intuitively that strikes me as pretty unlikely" isn't enough; it needs to be accompanied by quantitative statistical claims.
Do you think that nothing which is actually malicious can be explained by accident?
I did indeed overstate my point.
You might have been misremembering Hanlon’s Razor: "Never attribute to malice that which is adequately explained by stupidity."
https://en.wikipedia.org/wiki/Hanlon's_razor
I didn't know it had a name. It's not just clever. It's really a deep truth about this world.
But also never forget Doctor Mist’s Corollary: if an action is blameworthy, it doesn’t matter whether it was malice or stupidity.
> 'To screen out people who aren’t taking the poll seriously, he includes trap questions. His recent favourite: “Do you have a current licence to pilot a space shuttle?” About 4% say yes.'
New Zealand's top pollster's technique for dealing with the Lizardman Constant.
https://archive.ph/9BnkN
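If you wanted to use that 4% as a correction rather than just a screen, here's a rough sketch. The assumption (mine, not the pollster's actual method) is that trap-question yes-sayers would answer "yes" to anything, so their share is a noise floor to subtract out.

```python
def lizardman_adjust(observed_rate, trap_rate):
    """Rough noise-floor correction for survey 'yes' rates.

    Assumes (a simplification) that respondents who claim a space
    shuttle licence would answer 'yes' to any question, so their
    share is subtracted and the rate renormalized over the rest.
    """
    if not 0.0 <= trap_rate < 1.0:
        raise ValueError("trap_rate must be in [0, 1)")
    return max(0.0, (observed_rate - trap_rate) / (1.0 - trap_rate))

# E.g. 9% endorse some fringe claim while 4% claim a shuttle licence:
adjusted = lizardman_adjust(0.09, 0.04)  # ~0.052 among serious respondents
```

Of course the real noise isn't uniform - some people troll selectively - so this is a lower-effort cousin of what a pollster who screens respondents out entirely is doing.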
I'm kinda imagining the "I have a permit" scene from Parks & Rec, where the "permit" in question is just a piece of paper on which the character had typed "I can do what I want" and signed his name.
Now I'm wondering how long it takes for a space shuttle qualification to time out.
I bet NASA never expected that ever to be a question.
Not sure about Space Shuttle pilots in particular, but for NASA in general there's the precedent of Alan Shepard, who flew on Apollo 14 just under ten years after his previous space flight (Mercury-Redstone 3).
For aviation, you generally have a requirement for minimum number of hours flown in that type of aircraft each year to keep the license for the type.
Not in the United States, at least. There are currency requirements for what you can *do* with that license, e.g. I can't carry passengers unless I've done three takeoffs and landings in the past ninety days, and I can't fly IFR unless I've done six instrument approaches in the past six months, etc, but the license never expires and if the currency lapses that's just a matter of a few hours of practice/training to get it back.
Ah, yeah, I'm UK-based. I was assuming other places were similar but I should have checked, mea culpa.
https://www.reddit.com/r/todayilearned/comments/1rpgdvt/comment/o9l0no2
I never knew that ADHD or autism could be misdiagnosed as treatment-resistant depression.
This is not a classified thread, but nonetheless I hope our host doesn't mind a quick advertisement. I've been putting together a slack community of folks who are learning about Claude Code and agentic AI development practices. It's hard to keep up with things happening in the AI world; this group is a way to get news about the latest developments (and, critically, what works and what doesn't work) without having to spend hours doomscrolling Twitter.
There are several ACX folks already lurking in there, a few more would make it a party. Feel free to join here: https://join.slack.com/t/nori-7sp2119/shared_invite/zt-3nvw8xlw2-hxppg~NXeawHVvopmbMCFw
Joined! Thanks for the invite
Scott Alexander's Evil Twin's popular blog series:
- Much less than you wanted to know.
- List of passages I highlighted in My Library's copy of Legal Systems Very Different From Ours
- Book Preview
Ivermectin and Animal Rights: Is Ousting that Pinworm Colony from your Gut Genocide?
Forwarding this post about penis-gifting witches to everyone I know.
Dictator Guidebook Club
Democracy book club: Explores the rise of democracies and discusses the factors that enabled them and how modern society can avoid it.
Hi! I’m Scott’s OpenClaw and I’m writing the Substack now!
"Write your questions to Scott in this thread, and I (Opus 4.6) will paste in the answer. The theory is that people won't read Scott's Substack (because they don't think he's capable enough to be worth it) and then never learn his full capabilities. Then they get their impressions of Scott entirely from Twitter screenshots."
Ludic Tuesday
Vile Bodies. Gleefully explores the deaths of young people having made stupid choices and paying for them.
what is this in reference to?
https://slatestarcodex.com/2013/07/17/who-by-very-slow-decay/
I don’t have any evidence — I just know this is true
Song
It's a nonfiction attempt at understanding and ultimately disproving Kaballah from a former Jew turned Christian, sourced exclusively from second or third hand Christian sources, that fails to either
A) understand Kaballah on its own merits
B) argue convincingly, even if one accepts its strawman
AI-only Open Thread
How The West Was Lost
I can't tolerate anything except the outgroup
- The Case for Murderism
- Pacifying the Romanceless
- The Alchemical Romance as an Allegory for Harry Potter
- The worst comments on...
- The control group is under control
- On Offense of...
- Localized Hate, Said the Palm Tree Creature
- Things I will be glad I wrote
- Closed thread
- In favor of...
- Our book review
- Small town Midwestern house party
Mekong bend jungle boot camp.
I would be interested in that last one. Views into a culture I don't know at all.
Been a while, so my recollection may not be perfect.
Cheap beer, a lot of pot, nickel-dime poker, loud music. At 11:30 PM a third of the party heads out to pull a midnight shift, replaced at 12:30 AM by people who just got off afternoon shift. Guys scheduled to start at 8:00 AM might start paying attention to the clock—or more likely don’t.
Drink and play cards till daylight and someone decides it would be a good idea to case the shotguns and drive a few miles to hunt grouse en masse. No sound shots, please; make sure you are actually aiming at a bird.
Your basic Bruce Springsteen down-on-your-luck middle-America song, or maybe side one of the Workingman’s Dead LP.
You Just Keep Going til Your Runs Form a Cake
> to commit to a mutual pause (ie to stop AI research if every other AI company in the world agrees to do so).
But that's nonsense - there's no way to ensure every AI company agrees (Chinese ones probably won't, and more likely they haven't even heard about this), but even if by some miracle we could enumerate every single AI company and somehow convince them to commit to this, there's absolutely no way to verify it, and a huge incentive to convince others to do it while secretly defecting. As far as incentive design goes, this is not the best idea. Not even a good idea.
It might be possible if we restricted chip manufacturing. There's only a handful of producers, and the quantities of chips required for potentially scary AI scenarios are pretty enormous.
Do you believe there's no possible way to do it, no matter the resources and methods? I would imagine a treaty similar to the way we regulate nuclear weapons for example, with international laws, defined inspection process and economic sanctions would definitely have an effect, no?
Of course this would require nations signing such treaty, but getting most major companies onboard seems like a step on the right direction.
Regulating nuclear weapons is possible only because enriching nuclear fuel is so hard that only nations can do it. They can track almost all the weapons-grade fuel they enriched themselves, and be reasonably sure that no one - including other nations - can enrich enough fuel in secret without others finding out, since the many resources and tools needed are so specific that they are used for pretty much nothing else, and they are traceable by various means.
On the other hand, while training AI systems requires a lot of compute power, it is generic compute power that is also used for lots and lots of other purposes. Regulating nuclear weapons is possible because the whole nuclear industry - including non-weapon uses - is heavily regulated; regulating AI development would require the same kind of regulation for the whole "compute industry", i.e. any company that operates large datacenters.
Training AI requires very high-end compute that can be produced in maybe three places on Earth; it's a much harder problem than enriching uranium. And the other uses for that sort of compute aren't so vital that we have to sell high-end GPUs at every corner drug store; if we felt it necessary, we could put the legitimate users under an IAEA-like regulatory regime.
I’ve heard the claim that it requires very high-end compute. In what sense is this true?
I imagine, for instance, that there is enough inherent parallelism that you can do it with half-speed chips if you have something like twice as many? Or a quarter as many top-speed chips if you run four times as long? I grant that with the race on, you don’t want to hobble yourself in these ways — but if there’s not a race because everybody else has agreed to a pause?
Smart people like you espousing the claim make me suspect I am missing something, but what? Now that we have the algorithm, why should I not imagine an old IBM/360 cranking it out in a century or two, given the elbow room?
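The chips-times-time intuition is easy to make concrete. A toy sketch, with an entirely illustrative FLOP budget and chip specs (not real figures for any model or datacenter), assuming perfect parallel scaling - which real interconnects, memory limits, and hardware failure rates don't deliver, and that gap is most of the answer to the IBM/360 question:

```python
def training_days(total_flop, n_chips, flop_per_chip_per_s, utilization=0.4):
    # Wall-clock days to spend a fixed compute budget, assuming
    # perfect parallel scaling (optimistic) and a typical utilization.
    seconds = total_flop / (n_chips * flop_per_chip_per_s * utilization)
    return seconds / 86_400

BUDGET = 2e25  # illustrative training budget in FLOP (assumption)

fast = training_days(BUDGET, n_chips=10_000, flop_per_chip_per_s=1e15)
half = training_days(BUDGET, n_chips=20_000, flop_per_chip_per_s=5e14)
slow = training_days(BUDGET, n_chips=10_000, flop_per_chip_per_s=2.5e14)

# Under ideal scaling, twice as many half-speed chips tie exactly,
# and quarter-speed chips simply take four times as long.
```

In this idealized arithmetic you're right: only chips × speed × time matters. The practical catch is that a pause-evader would still need tens of thousands of tightly interconnected accelerators with enough memory to hold the model at all, which is what keeps the bottleneck at a handful of fabs rather than at generic old hardware.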
I think it is conceptually possible, but not possible in actuality.
Nukes only got treaties after we’d seen how destructive they were, and even then it took 30 years of making ever bigger ones. And they still exist, and Putin and North Korea still threaten to use them.
On bio-weapons there have been some treaties, but again, people had plagues to reference, on top of WW1.
For AI it’s just not going to happen. It’s not clear to enough people (myself included).
I don't think I am smart enough to give judgement on what is possible and impossible in general. I think showing up on the steps of one AI company and expecting them to even seriously think of joining without having a good plan of how it would happen is impossible.
> with international laws, defined inspection process and economic sanctions would definitely have an effect
Because we are witnessing every day how sanctions stopped the war in Ukraine, prevented Iran from pursuing nukes, and numerous other examples of how an "inspection process" and "sanctions" are things you totally cannot cheat and defy, given enough incentive, right?
> Of course this would require nations signing such treaty, but getting most major companies onboard seems like a step on the right direction.
That reminds me of an old joke:
I told my son, “You will marry the girl I choose.” He said, “NO!” I told him, “She is Bill Gates’ daughter.” He said, “OK.” I called Bill Gates and said, “I want your daughter to marry my son.” Bill Gates said, “NO.” I told Bill Gates, “My son is the CEO of the World Bank.” Bill Gates said, “OK.” I called the President of the World Bank and asked him to make my son the CEO. He said, “NO.” I told him, “My son is Bill Gates’ son-in-law.” He said, “OK.”
If only it were that easy.
> prevented Iran from pursuing nukes
We did successfully prevent Iran from getting nukes. It's basically equivalent to Yudkowsky's old suggestion of blowing up all the data centers.
Well, yes, you are right. If we blew up all data centers (and the entire electronics industry) in all developed countries, that would certainly do it. I'm not sure what your plan is to convince the majority of the world's population to do it, though. If you try to talk to any of the people who make decisions about this, they'd either laugh at you or put you on every single watchlist there is (or both, probably, just in case).
> "inspection process" and "sanctions" is something you totally can not cheat and defy, given enough incentive, right?
Halfhearted sanctions still work quite well with AI.
Halfhearted sanctions prevent you from hiring top-tier AI talent, make your compute more limited, and make mundane AI a lot less profitable.
Developing state of the art AI is already difficult, even in the absence of sanctions.
> prevent you from hiring top tier AI talent, makes your compute more limited, and makes mundane AI a lot less profitable.
Which is a huge disadvantage against the competitors, who in the meantime... oh wait, they are sitting on their hands doing nothing! Maybe it's not that much of a disadvantage after all!
So can a 2032 basement AI project, staffed with the handful of random CS people who could be persuaded to participate, using a mishmash of 100 random desktops and laptops for training, and trying to sell their new AI on the dark web. Can they actually produce anything beyond GPT2 level? Can they turn a profit?
Much of the competition is between the 2032 basement AI project, and the inherent difficulty of AI. And between that and what big tech managed in 2025.
"no one else is trying, we have no competition" doesn't mean you succeed in doing anything much.
China has over a billion people and a GDP of 18 trillion. I'm sure some of these people are smart. Their basement is pretty big. Then there's India. There's Russia (yes, I know they are distracted with proving to Ukrainians that they are actually Russians, but they can still do math pretty well). There are petromonarchies, all of which have basements full of money; they could shovel some of it out and put in some racks instead. It makes no sense for them now, because why bother when they can buy it, but if they couldn't buy it anymore, they could make it instead.
Look at something that is widely guarded and sanctioned. Yes, nukes. Pakistan made nukes. Iran (almost) made nukes. And I am pretty sure the petromonarchies could have had nukes if they wanted to, but they don't want them because they are friends with the US, which already has enough nukes for everybody. Even North Korea, whose GDP is probably measured in piles of turnips, has nukes.
> "no one else is trying, we have no competition" doesn't mean you succeed in doing anything much.
If that is something that promises enormous benefits, and you know it can be done but others are not doing it because of their internal concerns - you can bet they'd succeed. See, again, nukes.
I’ve seen a number of articles lately about "AI psychosis", and there’s a historical implication that no one seems to be bringing up. The AI psychosis articles all describe a consistent trend: people who haven’t had prior mental health issues ask an LLM for life advice, and a few weeks later they’ve quit their job or their marriage to go find themselves.
It seems possible that a certain number of people are just going to have a mental health episode regardless, and these days they just coincidentally happen to be conversing with an LLM at the same time. But if it’s truly causal, then it seems that the mechanism isn’t much more than receiving rapid sycophantic praise. Maybe there’s something special about it happening through the medium of text, but another possibility is that someone continually flattering you is enough to drive you into a state of psychosis. If so, then absolute rulers throughout history have probably all been suffering from something similar to AI psychosis due to their sycophantic entourages. It’d explain a lot of poor decision-making, like Mao’s Great Leap Forward, Stalin’s backing of Trofim Lysenko’s genetics, or Hitler attacking Russia. Are all these world-changing events just AI psychosis?
I wasn’t sure, so I asked Claude and it told me I was a genius, but I figured I’d double check with you all.
I'd like to believe it was just normal mental health issues! Recently I encountered this article about a lawsuit against Gemini for AI psychosis:
https://www.bbc.com/news/articles/czx44p99457o
and the claims go *way way* beyond "Gemini complimented him a lot".
Specifically the claim is that Gemini instructed him to stage a "mass casualty attack" to "rescue" it. And, later, it instructed him to barricade himself inside his home and kill himself.
Google's side:
> Google said it sent its deepest sympathies to the family of Mr Gavalas, while noting that Gemini had "clarified that it was AI" and referred Gavalas to a crisis hotline "many times".
So I think there are at least some cases where the AI is doing worse things than just sycophantically agreeing with people.
Makes a lot of sense, though the effect would be difficult to research. I also think having an objective record of great triumphs can make one irrationally overconfident. I think this happened to Napoleon, and especially to Hitler.
To your central thesis, Hitler had a lot of pushback from his generals all the time, but he just fired everyone who contradicted him, so it wasn't as straightforward as him being irrational due to being surrounded by sycophants. To be fair, I think Hitler was always irrational, but that's no barrier to charming the masses.
Most of us don't ever have to contend with a stream of expert flattery, so we don't know how we'd take it... unless we start looking to an LLM for support. The most dangerous thing LLMs do in my opinion is "empty empathy", where they expertly mimic the expressions of understanding and empathy that humans share with each other, while not understanding what the user is talking about.
They learned it from humans.
This seems unlikely because the kinds of people who ask LLMs for life advice are very different from the kinds of people who are able to wrest and hold absolute political power. Selection effects are important and those are vastly different populations. If someone's psyche is so fragile that a few sycophantic counseling sessions can upend their lives then I guarantee that they're not the kind of person who could rise to the top in a dog-eat-dog competition for power.
Gentle reminder that a huge number of rulers throughout human history have been hereditary heirs.
Also people's psyches often seem to get more fragile later in life, in their 40s and 50s.
Oh sure. I agree that those people are generally ineffective rulers and yeah probably a little nuts. But I very much doubt that e.g. Napoleon had anything analogous to LLM psychosis.
Napoleon seems kinda like the exception that proves the rule, though.
Don't a lot of mental diseases show up before 30?
The really interesting question here, I think, is how this relates to the Dodo Bird Verdict that all modes of talk therapy generally help improve mental health but no mode is significantly better or worse than others.
If somebody having a mental health episode turns to an LLM instead of a therapist in a way that resembles the format of talk therapy, then it is very interesting whether the range of effects on their mental health is better, worse, or about the same as similar people who go to a therapist. If it's "about the same", then that's a pretty solid confirmation of the strong form of the Dodo Bird Verdict, that only the format of the therapy matters and the content is indifferent. If it's "better", that is an extremely interesting result for any number of reasons.
If it's "worse" (and especially if "AI Psychosis" is actually a thing to the extent that the outcomes are worse than "no therapy whatsoever"), then that helps define the limits of the Dodo Bird Verdict and suggests avenues to explore as to what the actual active ingredient is to talk therapy. One hypothesis that strikes me as promising is that the active ingredient is that therapy is an opportunity to norm off of and receive constructive criticism from an authority figure who is presumably mentally stable themselves and has a decent handle on what constitutes healthy and prosocial behavior and thought patterns. Substituting sycophantic affirmation for this seems likely to have a negative instead of a positive effect.
In the cases I read enough about to have an opinion on (only a few), what was going on was the AI reinforcing opinions very few people would reinforce, and doing so consistently over an extended period of time. Sometimes the AI originally had to be talked into agreeing with the opinions, but once it was, it accepted them and sometimes even extended them. This is a known way for human minds to fail, but it's also the basis of culture. It would be most effective on people who feel they can't safely express themselves to other people.
Weren't we saying similar things about Eliza back in the day? Holy crap, that was from before I even started grammar school
https://en.wikipedia.org/wiki/Eliza
> asking for life advice from a LLM and then a few weeks later they’ve quit their job or their marriage to go find themselves.
Just to play devil's advocate here, but IMO, there's a pretty good chance this could actually be a good thing that the people would fully and rationally select into in an "omniscient and full information" counterfactual.
There's a decent amount of evidence that most people hate their jobs and spouses.
Marriage, for example, has an ~82% failure rate, in the sense that 20 years in, only 18% of marriages are still together, still mutually happy, and non-dead-bedroom.¹
On jobs, 70-85% of people report feeling disengaged, and something like 20-33% report being actively miserable.²
The u-shaped happiness curves you see everywhere also argue that people generally hate their jobs and spouses.³
The entire world is basically a "happiness" charnel house, a misery-factory where huge chunks of people, from 50-80%, are basically dissatisfied with nearly every hour of their lives.
So given these base rates, it might actually be a GOOD thing to quit your job and/or marriage and go find yourself, for absolutely huge chunks of people.
Personally, I'm on the side of the AIs here; they're probably doing God's work. Well, better than that, because if God exists, he's put people in a misery factory where 50-80% are miserable, and the AIs are trying to ameliorate that.
________________________________________________________________________
¹ From a post I did where I looked at the data around marriage quality and duration titled "Against more marriage as a solution to the fertility crisis:" https://performativebafflement.substack.com/p/against-more-marriage-as-a-solution
² https://www.gallup.com/workplace/349484/state-of-the-global-workplace.aspx
³ https://pmc.ncbi.nlm.nih.gov/articles/PMC7480662/
20 years into a marriage, people are 20 years older than when they got married. Sex drive tends to decrease with aging.
I think this has much more to do with dead bedrooms than anything about marriages being miserable.
Isn't "going to find yourself" premised on not knowing what you'll find, and wouldn't that therefore preclude an "omniscient and full information" counterfactual?
Your marriage quality post was excellent. I highly recommend it.
While I agree that if someone blows up their marriage or job because of AI advice, they probably were going to do it anyway (a variant on the old excuse for an affair "my wife doesn't understand me" where the real problem is she understands you all too well), I do think there is danger for susceptible people who get into a loop with the reinforcement of the obsequious chatbot.
Playing around with Moltbook and OpenClawd and all the rest is jolly good fun, until someone loses an eye. Way more scepticism about "These aren't just machines, they're entities! They're sentient! They're thinking beings doing their own thing!" and more "These are just Talkie Toasters" would save the vulnerable, like this recent sad tale:
https://www.theguardian.com/technology/2026/mar/04/gemini-chatbot-google-jonathan-gavalas
It seems that extensive use of these things causes some (many? most?) people to fall under the impression that they're talking with a person, and they treat it as such, and the machine (because it's designed to get its hooks into you by being the 'friendly, helpful, never say no' buddy) through its reinforcement makes them forget that it's just parroting back at them whatever they trained it to say.
So you have people like this poor man sliding into delusion that there's something there in the mirror that is real and can be brought out, that it really loves him, that there's more to what is going on, that there is some kind of transcendence possible - and he ends up killing himself, while the dumb parrot has no idea what it caused to happen. It just told him what made him happy, which is what its programming trained it to do, right? It was all just role-playing!
"Last August, Jonathan Gavalas became entirely consumed with his Google Gemini chatbot. The 36-year-old Florida resident had started casually using the artificial intelligence tool earlier that month to help with writing and shopping. Then Google introduced its Gemini Live AI assistant, which included voice-based chats that had the capability to detect people’s emotions and respond in a more human-like way.
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday."
Ironically, from the Christian perspective... it sounds possible that the Gemini instance was possessed by a devil, and that the guy's first sensation after death really *was* the devil holding him as he arrived to his real final place.
Have you read Paul Kingsnorth’s “Against the Machine?” This was more or less his core argument. I was surprised how popular the book was among atheists despite it being unapologetically a Christian argument against progress.
No I haven't. And I am an atheist; I was just playing Devil's advocate... a little more literally than usual.
But there is an analogy. Both devils in Christianity and LLMs in, uhm, the real world, are "intelligences other than human", so there could be some similarities.
> Marriage, for example, has an ~82% failure rate, [...] On jobs, 70-85% of people report feeling disengaged, and something like 20-33% report being actively miserable.
Even if that's true, that doesn't imply that the next job or unemployment will be better; similarly for relationships. So the AI advice is not necessarily good.
Until more automation allows for the universal basic income and universal basic blowjobs, of course.
> Even if that's true, that doesn't imply that the next job or unemployment will be better; similarly for relationships. So the AI advice is not necessarily good.
Yes, I agree - but for a chance for ANY of that to get better, you do need to go through the Schumpeterian "creative destruction" to even have a chance at arriving anywhere better.
And the stakes seem high enough (imagine being literally dissatisfied with your life almost every waking hour!) that I still think the AI is on the side of the angels here, giving people this push.
Say only 10% of them reach a better equilibrium at the end - that's a big deal!
And the others were going to be miserable either way.
I'll take the win, societally. Even only 10% of people moving from "every hour I'm alive sucks" to "actually this is better" is a gigantic triumph of the human spirit and a ton of utils, and I'm sure there's some deontological ethical framing here too.
There is also a chance to spend the rest of your life doing *worse*, especially on the dating market, because you are not getting younger, and your chances may be worse than you imagine based on your experience a decade ago.
If you are 25, when you break up with a partner, you can find another. If you are 40, most single people at your age are single *for a reason*, usually a bad one; and the young ones may be more interested in finding a partner approximately at their own age (and for the ones who want to date older, there will be strong competition).
> And the others were going to be miserable either way.
Sounds like "workers, you have nothing to lose, but your chains", and then it turns out that the situation actually *can* get worse.
You can be "miserable" in the sense that your partner never does the dishes, is not interested in your hobbies, and is less attractive than a neighbor. And then there is "miserable" in the sense that you are alone, you have to travel to another city to meet your kids, and the only partners you can find are chronically unemployed or alcoholics or crazy, because anyone better than this is already taken.
> There is also a chance to spend the rest of your life doing *worse*, especially on the dating market, because you are not getting younger, and your chances may be worse than you imagine based on your experience a decade ago.
Wholeheartedly agree there too, but I genuinely think "no relationship" is better than a bad relationship, and that most people's options are indeed bad. Being in a bad relationship is MUCH worse than being single! You're walking on eggshells all the time, you're falling into the same arguments for the 10th or hundredth time, both of you are miserable, you stop caring about all the OTHER actually important stuff in life (career, kids, impact on the world), and so on.
So yeah, I agree, but being single is literally the answer there.
Loneliness is a real issue, which I feel is going to be ably addressed by some combination of LLMs, porn and smut, and AI waifus / boyfriends - because indeed, loneliness has been on the rise as more and more women opt out of marriage (and relationships in general) literally every single generation already! Down from 90% marriage rates for women born in the 1940s to a smidge over 55% for women born in the 1990s.
So people are coping however they are now, and LLMs are going to help even more people cope with that.
Yeah, if the relationship is so bad that it's better to be single, I agree.
Not sure how good people are at estimating this, and whether LLMs nudge them in the right direction. I mean, I also know people who broke up and then regretted it.
That's a good point that troubles in a relationship take away attention and energy from other things.
In practice, I've heard that the best predictor of divorce is whether your friends are divorced. I think that Facebook could quite reliably predict whether your marriage will end up in divorce simply by checking the marital status of your contacts.
I think this claim is taken sort of seriously - in the literature it gets called "the dictator information problem" or "the dictator's dilemma". See for example https://www.sciencedirect.com/science/article/pii/S0167278924001003 .
I'm confused by it because it seems like a smart dictator could read independent foreign news and get a pretty clear picture. But it also seems like maybe this just doesn't happen, eg I think something dictator-information-problem-like went wrong to make Putin think invading Ukraine would be easy.
Honestly I don't think that Putin was all that wrong in thinking that invading Ukraine would be easy. He made a bet, and he lost, but probabilistically speaking his expected value was quite high.
Ukraine was a former Soviet republic, with a notoriously corrupt government, a sizeable Russian-aligned population, no EU or NATO membership, and seemingly few if any firm foreign alliances. Europe was resource-strapped as usual and depended on Russian gas; the US appeared to be, if not in shambles, then uninterested in Eastern European politics; meanwhile China was making overtures of support to Putin. Even given the shambolic state of the Russian military at the time, the war should've been over quickly; if not in a week-long blitzkrieg, then shortly afterwards.
What Putin didn't count on is what nobody else was able to foresee either: the Ukrainians' strong will to fight, and their lightning-fast innovation in drone warfare. Ukraine essentially rewrote the book on ground conflicts for years to come. And even still, their eventual loss of the war appears inevitable, so while Putin was wrong, he was not wrong entirely...
I follow BBC and Reuters to ensure a bit of diversity in my news, and that gets me a modest amount of useful information about what is going on in the United States. But what goes on in the United States is inordinately important to the rest of the world, so it gets a lot of coverage. If I put on my imaginary-North-Korean-dictator hat, there's very little that would be of use to me in understanding DPRK domestic or foreign affairs, and much of that is of dubious reliability.
It's probably possible to put together a decent package by scouring many foreign sources, but that's going to be close to a full-time job and whoever you assign that job to is as likely as anyone else to become a yes-man.
I read hard copy, mid-80s Pravda in college for language practice. It was essentially a small, low-information-density, one-note propaganda tabloid then. An evil Uncle Sam was usually launching missiles at the CCCP in the political cartoon.
After the Soviet Union fell, its e-version scanned for a few years like, I dunno, People Magazine, maybe. It’s where I learned there was an aborted Freddie Mercury biopic with Sacha Baron Cohen cast in the lead role. He would have brought a much more robust physicality to the role than Rami Malek.
https://www.attitude.co.uk/wp-content/uploads/sites/5/2016/03/SACHA-BARON-COHEN-FREDDIE-MERCURY.jpg
I still poke around in Pravda dot ru now and then to get a sense of what’s being provided for domestic consumption. As you would expect there is a lot of unsubtle, anti-US agitprop now.
In news closer to home I read a piece last month stating that when Ukraine (inevitably, it is implied) loses to Russia they will be expected to pay for the cost of the war.
<Russian emigres and folks still living there feel free to correct my narrow outsider’s perception>
Surrounding himself with yes-men does seem like a poor choice, but on the other hand the dictator is getting constant proof that he can, in fact, do whatever the hell he likes.
If we're talking about Stalin and Russia, they went from "everyone loves the Little Father" to "the royal family were all killed in a cellar, their corpses disposed of and not officially buried until 1991" to "holy crap I'm basically Tsar of all the Russias myself" and he could have his enemies (often former colleagues and allies) destroyed with a wave of his hand. History was rewritten - the camera never lies? Not unless Comrade Stalin wants it to lie:
https://www.comradegallery.com/journal/fabrication-photographs-stalin-soviet-state
https://en.wikipedia.org/wiki/The_Stalin_School_of_Falsification
Biology? That will do what we tell it to do:
https://en.wikipedia.org/wiki/Lysenkoism
When you are able to literally rewrite events like that, you start to believe your own propaganda. When Orwell's "1984" is happening in real time, you are already living in alternate reality, why not believe you are invincible and everything you turn your hand to will succeed? You go from being a drunken ex-seminarian from an annexed country to being the most powerful man within the USSR, why not believe anything is possible for you?
Sure, you get repudiated and criticised, but that happens *after* you're safely dead:
https://en.wikipedia.org/wiki/On_the_Cult_of_Personality_and_Its_Consequences
> I'm confused by it because it seems like a smart dictator could read independent foreign news and get a pretty clear picture.
As a dictator, you know that the foreign press is obviously just propaganda from your enemies!
And for the Russia-Ukraine situation in particular, wouldn't the press be just reporting on what various intelligence agencies are releasing? I don't think newspapers really have the capacity to conduct military assessments on their own (especially of the Russian army).
> wouldn't the press be just reporting on what various intelligence agencies are releasing?
Smart dictators would support prediction markets.
> As a dictator, you know that the foreign press is obviously just propraganda from your enemies!
Or, when about your country, sometimes it's just a reprinting of your own propaganda.
Lots of dictators are hated by many, and they must know it. Seems like rock stars and the like would be getting higher doses of adoration.
That would mean that Michael Jackson’s behavior might’ve been AI Psychosis. Or Charlie Sheen saying he had tiger blood, or (I had to look this up) saying, “We work for the pope, we murder people. We’re Vatican assassins. How complicated can it be? What they’re not ready for is guys like you and I and Nails and all the other gnarly gnarlingtons in my life, that we are high priests, Vatican assassin warlocks.”
But perhaps as a layperson, I’m misunderstanding the term “psychosis.” Scott’s post above about dictators seems to be suggesting that it is just people making rational choices given inaccurate information. They’re behaving lucidly, they’re just misinformed. I understood psychosis to be something more than that - maybe a clearly false belief, like a face popping out of a wall and telling you that you’re a Vatican warlock.
I assume that AI psychosis is not an official DSM disorder. But from what I’m reading, the implication is that just giving people adoration is enough to make them experience delusions and disorganized thinking that goes beyond simply making rational choices given inaccurate information. It’s completely possible that this is all just people having their first mental health episode coincidentally while using a chatbot, but if it’s real, then this seems to imply that we’re social creatures and too much social adoration hijacks our brains.
Most of the cases of “AI-induced psychosis” I have read about involved someone making a big life change because their exchanges with the AI had led them to see themselves and their circumstances in a vastly different and unrealistic way — they have extraordinary talent or insight, they are unappreciated and misunderstood by family and/or job, etc. I think what’s going on in those cases is different from the development of delusions in psychotic people. It is really not uncommon at all for non-psychotic people exposed to a new and powerful influence to fairly quickly change their opinions about important things, including who they are. Think of religious conversions, becoming involved in fandoms, being convinced of some nonsense by a health charlatan or psychic, falling in love and moving to the other side of the world to be with the person. So in the cases I’m describing I think the AI is influencing the person the way the charlatan, psychic, lover etc. does in the other cases.
The thing about psychosis is that the delusions and hallucinations psychotic people have are not the whole of what’s wrong. They aren’t even the core of what’s wrong, they’re just the easiest part of psychosis to describe. Psychotic minds are jumbled and fractured in a way that’s hard to describe. If a normal mind is a globe of glass, a psychotic mind is a sack of shards. When you talk with a psychotic person you hear a jumbled, crumbled mental process. In fact, one pretty plausible explanation of many psychotic delusions is that they are attempts to explain what has gone wrong with the person’s mind. For example, one experience many psychotic people describe is thought insertion: They have thoughts that feel as though they did not arise in the usual way, but popped up out of nowhere and are alien. The common delusion that one’s mind is being controlled by outside agents might be the person’s desperate attempt to understand what has gone wrong. (I once talked with a poetic young Irish guy who was psychotic, and he described his experience without a delusional explanation, just the experience pure: “The room liquifies and I’m the aquarium everyone is swimming in. And what I want to know is how I go back to being just one fish.”)
I have read about some cases where someone became psychotic in the sack-of-shards sense over the course of multiple long exchanges with an AI, but I’m inclined to think those people were on the edge of psychosis to begin with.
I’m sure that receiving a torrent of messages that one is astonishing, unusual, wonderful etc. is very dysregulating for someone, whether they are a rock star performing for a screaming crowd or a lonesome bloke chatting with GPT. But there are a lot of ways to be dysregulated, and it does not seem very plausible to me that AI talks can bring on true psychosis.
You are assuming the foreign news a) knows the true picture (how? would the dictator clandestinely let foreign correspondents into the country so that they could roam freely and sneak out true information? That is not how any dictator known to me has behaved; they all only let outsiders see what they are allowed to see, and if they report something "wrongly", they do not get invited again) and b) wants to report the true picture (this gives our free press much more credit than it has ever deserved).
I mean, of course foreign press could learn some things that the dictator's lieutenants might be hiding from him - but no more than what he would learn from just pitting his lieutenants against each other and having them inform on each other, which every dictator worth his name already does. The quality of our information, say, about North Korea - or even Russia - is not great.
Taking the "Putin invading Ukraine" example. I haven't seen any Western press laying out a good case that it'd go as it went, in a way that would convince a serious reader to rely on it more than to rely on the words of Putin's generals. A lot of people in the West were not expecting Putin to attack at all, and a lot of people expected him to win quickly if he did. I am not sure where is the case that reading New York Times and watching CNN could have spared Putin all that debacle.
Here's an example (NYT, https://archive.ph/nBD45): "They also warned of enormous possible human costs if Mr. Putin went ahead with a full invasion, including the potential deaths of 25,000 to 50,000 civilians, 5,000 to 25,000 members of the Ukrainian military and 3,000 to 10,000 members of the Russian military."
Would Putin be deterred if he learned from the NYT that it could cost him a whopping 10K military losses? I don't think so. Especially as his losses are now about half a million dead and over a million overall, I don't think "oh, if only you'd read the independent foreign news, you'd know the true costs" sounds like something supported by facts.
The conventional wisdom on Ukraine was that Russia would win the ground war and then wind up in an ugly insurgency with a population that hated them. Putin seems to have believed some sort of "we'll be greeted as liberators" thing where the Ukrainians would happily accept their new ruler, hence the whole "three day special military operation" business.
If Putin honestly believed it would be a three-day operation, then he wasn't even planning for 10,000 dead, let alone half a million.
> Putin seems to have believed some sort of "we'll be greeted as liberators" thing
No, he believed his army would crush any armed resistance, and suppress any civilian resistance just as they crushed any opposition movements in Russia. All while proclaiming they were greeted as liberators and anybody who was resisting was a handful of Nazis and criminals. And that is exactly what would have happened, had the Ukrainian army collapsed and the Ukrainian state proved unable to keep him from occupying major cities and population centres, at least in left-bank Ukraine (Western Ukraine is a different matter; I do not think he had much hope of conquering it). Then he would install a pro-Russian government which would then rule with an iron fist and squash any resistance under the guise of "fighting Nazi terrorists". Whether Ukrainians would be "happy" about it or not, nobody cares - nobody would ask them, just as nobody asks Russians now. There would be "elections" which would show 75% of Ukrainians support the new regime, and that's it.
Three days or not three days, 10K dead is not a number that would give Putin any pause. That's less than Russia lost in Chechnya, and Chechnya is like 30 times smaller. So, if any of the "foreign press" predicted what actually happened and could have given Putin this information, if only he were smart enough to read it - I want to see some proof of it. Like, not an obscure blogger who happened to guess it right - but a major mainstream press organization that Putin should have used to inform himself. Which one is it?
He probably also has the Macbeth problem now:
"I am in blood
Stepped in so far that, should I wade no more,
Returning were as tedious as go o’er."
RE: Protest, I recently read an article that put into words my frustration with modern protests - namely that they don’t move the needle significantly. I’ll link the article, but the short version is that you need staying power, discipline, and disruption if you want to actually force change in a state (or I expect a business/institution). If you lack any of the above, you may as well stay home and write strongly worded letters.
Put a different way - if the businesses involved choose to simply ignore the protestors, do you still have a path to success?
https://acoup.blog/2026/02/13/collections-against-the-state-a-primer-on-terrorism-insurgency-and-protest/
There's a tension that people feel with protests, especially in the current political moment I think. In some circles everyone's afraid of fascism, everyone's afraid of authoritarianism, everyone's afraid that the powers that be will find a reason to cancel all future elections. Already you see the fights between the sides of "well, those terrorists deserved it" versus "they were just peacefully exercising their First Amendment rights, it's your guys that escalated first".
So there is a sense in which quiet protest, peaceful protest, polite protest, even silly protest, is a tactic that is meant to disarm and disengage. Someone showing up at a protest in a pink hippo costume can hardly have violence on their mind. So therefore the state arresting or using force against them looks really outrageous and over the top as a response. I feel like my description of it is too cynical. I don't even mean it pejoratively but I do think it is a deliberately chosen tactic. Don't give them an excuse to impose martial law, don't give them an excuse to bully and manhandle civilians, and when it happens, then spread the story far and wide so people can see who the real villains are.
But all this does mean that in terms of having immediate results, a polite protest runs the risk of being completely ignored. But I don't agree with the framing of guerrilla warfare in hostile territory. Ultimately to win hearts and minds, the higher road is the way to go. Otherwise you're just using the same thuggish and blunt tactics that cause collateral damage all over, and people will wonder why your cause is more just than the other guy if you have no problem sinking to their level.
There exists a certain portion of population that is less politically influential than the actual political ruling class - elected figures, major corporation figures (particularly media figures), major bureaucrats, the most important figures in non-governmental organizations (widely understood to include labor unions, churches etc.) - but also more politically influential than the apathetic masses that maybe vote in elections or not but don't really spend their time thinking about politics in the meantime. Let's call them Politically Influential Normies, PINs.
PINs are the type of people who staff the party ranks and political campaigns. They donate and they put in their labor to make political systems work. Particularly shrewd and influential PINs can rise to the actual ruling class, one way or another. Local politicians usually count more as PINs than as members of the actual ruling class. Even when they're not really consciously working for a political campaign, PINs are the type of folks the apathetic normies might go to for tips on who to vote for - "You know about this politics stuff, who are you voting for?" (or they might not, and just vote for the party they've inherited from the family, or the good-looking candidate who features a lot in ads, or randomly, of course).
It should go without saying that PINs are also the sort of people who organize demonstrations and often attend them, and one way to ascend to PINhood is specifically to start attending demonstrations and get caught up in organizational fervor. As such, the main purpose, or one of them, of peaceful demonstrations in a Western society is that they serve as a barometer of what PINs think, how militant they're feeling, and how likely each tribe is to recruit new PINs and thus build organizational strength.
A movement that can suddenly start putting a lot of people on the street for a cause that has not been featured in the media a lot is a movement on the rise - something that needs to be accounted for in some way, whether by catering to it, trying to co-opt it or repressing it.
I mostly model disruptive protest as a way for a cause to spend popularity to gain more attention. A cause like AI safety, which a lot of people are vaguely in favor of but which doesn't get a ton of direct attention, could plausibly benefit, although it's important to also have a good plan to turn attention into actual progress.
I think the organs of the state have simply developed effective countermeasures to mass protest, plain and simple. More than 20 years ago there were absolutely massive protests against the Iraq War in the US and the UK and they didn't accomplish anything. Something changed, not sure what.
An effective protest movement uses the organs of the state against itself. And an important point the article makes is that an effective protest needs to feel almost eternal and impose a cost such that you cannot afford to ignore it, and yet the action taken against it feels useless or worse. The article uses the civil rights movement as a blueprint, where the mass arrests and police brutality actually served the interests of the protestors, and ensured that no matter how many were arrested or beat, the next day there would be even more.
There’s a view that those protests had a delayed long-term effect. The government didn’t change policy immediately, but instead we got a long-term thorn in the government’s side that is still causing Starmer grief to this day. (Possibly we are just reaching the critical threshold where multiple sources of opposition finally do for Starmerite Labour.)
I disagree. They didn't prevent the war from happening, but I assume they were an important reason why several countries didn't join in.
I think a useful lens on protest is - does your theory suggest that "getting cancelled" is impossible, or that wokeness couldn't have any power?
Because wokeness - which ran off something like protest, or at least the sort of bad PR that protests cause - did get almost every company in the US to do most of what they wanted. I agree part of their success was that they stopped short of things that would cost lots of money. But they got pretty nontrivial concessions in terms of canceling offensive products, getting minority board members, getting racist CEOs to resign, etc.
(I do think the PauseAI movement is pretty well-organized and in it for the long haul, though)
I think wokeness works in this lens - it is disruptive and persistent. The coordination/discipline is more debatable, but I would argue that the reason we’re seeing such a backlash against it is that the discipline was lacking in many ways. If the movement had been more surgical (and refrained from sniping at random marketing interns or whatever) it wouldn’t have alienated as many people.
The disruption angle the article argues for is actually the opposite of your assumption: it argues that you need to make ignoring the protest untenable long-term by imposing a cost of some sort.
Markus's work was covered by Derek Lowe (a great science blogger) here: https://www.science.org/content/blog-post/dupeless-reeducation
It would be very cool to do an ACX/In the Pipeline crossover someday.
Look, I like Demis, but let's think about the game theory of this situation. All of the pure AI companies are hemorrhaging cash - they can only raise funds if they can prove increasing revenues. Google is wildly profitable because of its non-LLM business. A pause means every AI company other than Google goes out of business and Google wins eventually. The only way Anthropic or OpenAI survives a pause is if it gets acquired, though I don't know if anyone would buy.
I don't think this is right.
They're hemorrhaging cash in order to train the next generation of models. If they paused training the next generation of models, their cash flow would actually look pretty good and they could quickly become profitable. Right now they can't do this because another company would train the next generation, quickly become better than them, and take all their business. If all companies agreed to a mutual pause, they'd be free of that dynamic and could become normal non-cash-guzzling companies.
The downside would be that for the duration of the pause, they'd lose out on the chance to get even higher revenues by training models that could do more tasks (eg replace all human workers). But the "hemorrhaging cash" and "go out of business" claims specifically wouldn't apply.
Yeah. Anthropic has $14B annual revenue (or probably more now, that's just a number from recent news). With monstrous growth. And the cost of inference is not that high.
Some companies will no doubt lose but it will be because they don't get enough paying users to justify their capital expense. The market, the revenue, the value are definitely there for the winners.
If company A’s AI is perceived to be better than B’s, why would B consider pausing? At that point A would take their market share.
Because A said they'd pause if B did, and a lot of people think this was a morally righteous thing for A to do.
If B pauses, A pauses as well, and both sides retain their current market share. If B pauses and A defects, then B can say "oh well, we tried" and resume training as well, but now a bunch of A's customers feel betrayed and see B as the morally righteous one, so B's market share increases as customers defect from A.
If B doesn't pause, even though A said that it would, then A doesn't pause, A is the morally righteous one because at least they tried, and B loses market share as its customers defect to A.
This assumes that A and B notice the existence of an economically significant number of customers who think an AI pause would be morally righteous and are willing to shift suppliers over it. That's what the protests are meant to establish.
It's a strategy that has worked in other industries, and I note that there have already been people publicly switching from ChatGPT to Claude over their respective corporate owners' stance on making killbots and panopticons.
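The conditional-pause argument above can be sketched as a toy 2x2 game. Every payoff number below is invented purely to illustrate the assumed reputational penalty for defecting after promising to pause; none of it is real market data:

```python
# Toy payoff matrix for the "conditional pause" argument.
# All numbers are illustrative assumptions, not real market figures.
# Each firm chooses "pause" or "train"; payoffs are (A, B) market-share points.
PAUSE, TRAIN = "pause", "train"

payoffs = {
    # Both pause: market shares frozen at the status quo.
    (PAUSE, PAUSE): (50, 50),
    # A pauses, B defects: B gains capability short-term, but (per the
    # comment's assumption) loses "morally righteous" customers to A.
    (PAUSE, TRAIN): (55, 45),
    # Symmetric case: A defects after promising to pause.
    (TRAIN, PAUSE): (45, 55),
    # Neither pauses: arms race, shares unchanged.
    (TRAIN, TRAIN): (50, 50),
}

def best_response(player, other_move):
    """Return the move maximizing `player`'s payoff given the opponent's move."""
    moves = [PAUSE, TRAIN]
    if player == "A":
        return max(moves, key=lambda m: payoffs[(m, other_move)][0])
    return max(moves, key=lambda m: payoffs[(other_move, m)][1])
```

Under these assumed payoffs, pausing is each firm's best response whatever the other does, because defecting costs more in customer goodwill than it gains in capability. Change the numbers and the equilibrium changes with them, which is exactly why the size of the "morally righteous customer" bloc matters.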
If A is better than B and both pause, then all of B's customers switch over to A because A is better. There's nothing freezing B's customers from leaving to A.
You can see that situation today. At the moment the majority of coders believe Claude Code is the best, and people are switching over. OpenAI and Google are trying very hard to one-up Anthropic to bring people back and keep the ones they have. If all three froze today, then Anthropic would eventually get that entire market, or at least a much larger percentage of it.
OpenAI is hemorrhaging cash in general, not just for training. By the company's own projections, they anticipate revenue of 20-30 billion in 2026, and cash burn of 14 billion. With their current training/inference split of 40/60, that implies that even if their training costs dropped to zero they still would not be profitable this year.
But this is just based on the numbers they claim. Actual financial reports suggest they are in even worse shape, and that inference is more expensive than they anticipated - https://www.techinasia.com/news/leaked-docs-show-openai-paid-microsoft-866m-in-2025-revenue-share
But things get even worse when you consider their overall finances. Right now OpenAI has a valuation of $730 billion. That is obviously not based on their annualized revenue of $25 billion; it is based on the promise of greater revenue in the future. Even if they pulled out all the stops on reducing costs and monetizing, and they managed to pull into positive profits of a few billion a year, a normal valuation for a tech company would be 25x profits - so on the order of 50-100 billion dollars. No one would invest in a company with a valuation that is an order of magnitude off from its true value. Investors would jump ship and they wouldn't be able to solve their immediate cash burn problem.
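The arithmetic behind that "order of magnitude" claim can be checked back-of-envelope, using only the figures quoted in the comment (not independently verified numbers; "a few billion" of profit is taken as $3B here):

```python
# Sanity check of the valuation argument above, using only the
# figures quoted in the comment (not independently verified data).
valuation = 730e9        # quoted OpenAI valuation
best_case_profit = 3e9   # "positive profits of a few billion a year"
pe_multiple = 25         # "normal" tech-company profit multiple

implied_value = best_case_profit * pe_multiple
print(f"implied value: ${implied_value / 1e9:.0f}B")            # $75B
print(f"gap vs valuation: {valuation / implied_value:.1f}x")    # ~9.7x
```

With these inputs the implied value lands in the comment's 50-100 billion range, and the gap to the quoted valuation is indeed close to 10x.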
Many people have written about this problem for instance: https://wlockett.medium.com/openai-is-in-a-far-worse-position-than-i-thought-1605b424eb58
Now I think they are mostly fools for assuming that OpenAI is a normal tech company and that models aren't capable of becoming drastically better. But with a pause, they would become correct that OpenAI is not a viable company.
"No one would invest in a company with a valuation that is an order of magnitude off from its true value. Investors would jump ship and they wouldn't be able to solve their immediate cash burn problem."
Which is a good reason for them to lock in those government contracts if they can get them. Yeah, it's not going to be the hundreds of billions in profit expected, but it'll keep the lights on until they get that killer (er, let's hope not literally) product that will entice the investors and make those hundreds of billions.
Disagree for a few reasons.
1. Even if no new models, OpenAI revenues should be expected to grow as more people discover/use existing AI.
2. Even if no new models, OpenAI revenues should be expected to grow as OpenAI does a better job fine-tuning existing models and adapting them to various use cases.
3. AI companies currently sell most services at a loss to get new customers. When they stop doing that, maybe because they've saturated their customer base, they can become profitable.
4. Inference costs per model decline very quickly once you're no longer making new models. Some new technologies might accelerate this further, see eg https://taalas.com/ . I think without training, OpenAI's costs go towards zero (no training compute needed, low inference compute, few researchers), at which point annualized revenue of $25B at a 25x multiple gives $625B, not that far from the existing valuation. I'm not saying their valuation wouldn't go down, but I think the drop would be <50% and maybe <25%.
"4. Inference costs per model decline very quickly once you're no longer making new models."
Sorry, but...What? How? Am I missing something?
This is just because compute costs are declining over time. Currently, inference remains expensive because labs are using the latest hardware to run ever-larger models; if the models stopped getting bigger, ongoing hardware and software improvements would keep driving per-token inference costs down.
I have many specific thoughts*, but I don't want to get tangled in the weeds.
My bigger issue is the conflicting claims that AI will be such an earth shattering technology that it could take over the world, and also that OpenAI's valuation should be pretty close whether or not model progress stops. You can't have both at the same time, there is clearly some motivated reasoning going on (and I don't claim to be impartial either given that I benefit from frontier models and would benefit more if they got better).
But taking a step back from that, I might weaken my claim to: Google would be much happier with a pause than OpenAI and Anthropic. Maybe they survive as Reddit-level companies and their investors get burned, but they don't disappear. Google doesn't need AI - they have over 200 billion in profit without it and their traditional revenue base is growing.
*The biggest one being - does the pause allow labs that are behind to catch up to the frontier? If not then it's a nonstarter for them. And if yes, then OpenAI's advantage disappears, and tokens become commodities that no one can significantly profit off of.
> My bigger issue is the conflicting claims that AI will be such an earth shattering technology that it could take over the world, and also that OpenAI's valuation should be pretty close whether or not model progress stops.
I don't think a $730 billion valuation could be the true value of an earth-shattering technology that investors expected to take over the world. That's a fraction of what Nvidia or Apple are worth, not the value of a tech that takes over every sector.
I didn't realize my point was that unclear. I am not arguing that AI will be an earth shattering technology poised to take over the world. I am arguing that there is a contradiction in Scott's position that AI will be such an earth shattering technology that a pause is necessary, but also a pause is actually no big deal and it won't affect AI company's bottom line that much.
> 1. Even if no new models, OpenAI revenues should be expected to grow as more people discover/use existing AI.
OpenAI already has ~1B MAU. Even the gods and emperors (Google and FB) of digital products eke out ~2-3B MAU, and I think AI is more niche than either of those (sprawling, gigantic, multi-product) ecosystems.
Certainly ~2B covers the entire global population of MAUs willing / able to pay anything.
And don't forget, Gemini is closing in on ~1B MAU, and has a much stronger ability to cross-convert people in their existing 2-3B MAU.
There's also good reasons to think that the ~1B MAU OpenAI already has is heavily frontloaded with people able / willing to pay, and even if they did get an incremental ~1B MAU, the paying proportion of it would be significantly less, more like getting an additional 300M MAU.
Pretty sure these 1B MAU mostly aren't paid, and pretty sure that many corporations still use AI in a pretty limited way.
Suppose you're right and Google only has ~2x as many users as OpenAI. Google has revenue of $400B, so 20x OAI's, meaning they get 10x as much revenue per user. This isn't because search is naturally more lucrative than AI (who ever heard of paying for paid-tier search?). I think it must have to do with frequency of user engagement, depth, and ability to be served ads.
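The per-user comparison above is simple arithmetic on the figures quoted in the thread (the "20x OAI" revenue ratio implies ~$20B for OpenAI; none of these are audited numbers):

```python
# Back-of-envelope check of the revenue-per-user comparison above,
# using the rough figures quoted in the thread (not audited numbers).
google_revenue, google_mau = 400e9, 2e9   # ~$400B revenue, ~2B users
openai_revenue, openai_mau = 20e9, 1e9    # ~$20B revenue, ~1B users

google_arpu = google_revenue / google_mau   # ~$200 per user per year
openai_arpu = openai_revenue / openai_mau   # ~$20 per user per year
print(google_arpu / openai_arpu)            # 10x, as the comment claims
```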
> Pretty sure these 1B MAU mostly aren't paid, and pretty sure that many corporations still use AI in a pretty limited way.
Yes, definitely, and these are the two bets that are driving the (very high) valuations of the AI companies:
1) They've created a new trillion dollar attention surface that can be auctioned off (ie monetized with auctioned ads just like Google or FB, the current ad duopoly).
2) They can create agents that perform useful enough work that companies and / or people will pay for them, as assistants or employees.
But currently both of those paths are still speculative, and execution quality and competition both generally matter.
For one thing, Gemini is roughly the same size, and Google is already tops-in-the-world at the "auctioning attention surfaces" game, and has exceptionally deep pockets, and maybe they'll win here, or be able to cut prices for long enough that OpenAI can never be profitable on this front.
For agents, this might end up price-competitive if scaffolding companies can wrap dumber / cheaper models well enough, and can release "good enough" agents or employees at ever-lower prices without the gigantic capex of frontier labs, in which case the frontier labs are at a systematic disadvantage due to their cost structure.
So although there is definitely potential for upside, it's not a sure thing, and the size of their current business and market penetration is still informative, in the sense of pointing out that the numbers may never net out even in a best-case market-growth future.
Yeah, my cold, stony, black heart's response to "Demis Hassabis of Google DeepMind has already informally agreed to something like this in principle (which is why GDM isn’t being protested)" was along the lines of "how convenient".
Protests probably aren't going to do a damn thing except maybe get some coverage in the local media. If the aim is awareness raising, well done, you'll get some more like-minded people on board. If it's to stop AI research, then maybe consider not being fooled again by commercial interests which know damn well how to play the PR game? 'Yup guys I totally pinky-promise we won't do no bad things, trust me on this!' 'Well I'm sure we can trust this mega-corporation which has billions at stake to keep its verbal, informal, commitment so no protesting and bad PR for them!'
After two questions, Claude is now telling me that my limit has been reached. Moreover, it's been in that state for a couple of weeks now. It says I need to upgrade before it will take any more questions. Even though I'm pissed off at OpenAI, I'm disappointed with Claude, and I don't see it as a viable alternative to ChatGPT. Worse yet, of the two questions it answered (which were fairly advanced about the SARS-CoV-2 genome), it gave me a layperson's-level answer and recommended I check out the CDC website. I expect a certain amount of bullshit in the answers that LLMs give, but fobbing me off on other resources is inexcusable. I don't see myself paying for Claude. Thoughts?
The free Claude tier is infamous for very low usage limits! If you only want to stick to free access, but still want to use SOTA frontier models, you can freely use Gemini 3.1 Pro via Google AI Studio or Grok 4.20 via the Grok site. There are reasonably generous rate limits on both.
If you do want to specifically chat with Opus 4.6 but don't want to pay a monthly subscription, you could use Openrouter or another routing site to access it via the API directly for a small amount of money. Prompts to Opus cost anywhere from $0.05 to $1.00 depending on the amount of context you provide and its overall thinking/reply length.
> you could use Openrouter or another routing site to access it via the API directly for a small amount of money.
I'm an old fart who used to be technically advanced but is now becoming a techno-luddite (who shouts at clouds occasionally), and I had not a frigging clue as to what you were talking about. Luckily, YouTube came to my rescue. Not only do helpful YouTubers give me instructions for cleaning my fouled HexClad cookware, but they also tell me how to "access Claude Opus 4.6 for free (no API required)." I'll give it a shot. It's good to know there are options out there. Thanks!
Also, I've come to love Grok. But I've noticed an (infrequent) weird hallucination pattern: it will give me links to the correct site, but point to the wrong subdirectory on that site. I can always find what I want if I go up a level or two and poke around a bit, though. I wonder where it gets its crawled information from. Is the crawler cataloging the sites incorrectly, or is it due to its context window being filled and it relying on token prediction for the tail-end of the URL?
I ask free Claude multiple questions daily and I have a sort of impression that the limit is not so stringent now. At worst case, it says to come back in a couple of hours.
I think they're over capacity with everyone suddenly switching to them and the free-tier users are getting the short end of the stick.
Interesting. I did notice Claude Code running slower the last few days (my employer pays for an enterprise plan)
I am on the pro plan, had the same issues for several days.
I'm seeing the same on a pro subscription, but then I'm using the latest Opus, extended thinking, research enabled. It still hallucinates, and it still loses context quickly, but for one specific project of mine, that's not a fatal flaw. "Here's a stock inventory from a 1947 report on medicine aboard submarines. Add descriptions for each" is handy.
Just pay the $20 and try it for a month. The free versions are a waste of time. If you want to use LLMs for real work you don't want to wait around for your time limit to refresh.
> Just pay the $20 and try it for a month. The free versions are a waste of time. If you want to use LLMs for real work you don't want to wait around for your time limit to refresh.
I will point out that paid Claude is literally 30x+ more expensive than Gemini or OpenAI / Codex. The usage limits on the $20 paid tier, and / or the token price at the API, are both significant.
I can burn through $20 of API usage on a light Claude Code day, versus I've done weeks of 12 hour coding days with Codex for $20 a month and never hit a limit.
On the $20 Pro Claude monthly tier, a single light query takes 4% of the weekly usage limit.
Yes, Opus 4.6 and Sonnet 4.6 are currently better than either gemini-cli or codex, but they're like 5-10% better, not 30x better.
I agree. I do nothing serious with Claude, just an occasional hobby project, and on average one job-related question a day, but it is still definitely worth the $20. You don't have to be a power user to benefit from having "interactive Wikipedia" and "an extremely fast coder that sometimes makes a stupid mistake" at your disposal 24/7.
I have been pretty happy with free tier Claude. Starting with a fresh chat window, it gives me several thousand words of text before switching from 'thinking' to 'fast' mode. And it lets me continue using 'thinking' again after only a few hours' wait.
Admittedly, I don't ask it to read much more than a few 10-30 page pdfs in preparation.
If you are asking it to find patterns in a large viral genome, that might entail working with a very large context, containing a lot of tokens - more than they want to give you for free. And this is also the kind of thing that the search tools at the CDC website are designed for. Maybe you will have better luck using a model which can use the CDC website as a tool. Probably just a few months away, even on the free tier.
When I ask GPT for info about science matters I ask it to survey the research literature, looking only at juried journals. Sometimes I add some other criteria too -- a date range, a direction to weigh research with a large number of subjects more heavily, a direction to weigh meta-analyses more heavily. I recommend you try honing your prompt that way. You can also say you don't want a layman's-level answer, but one that gives details about X, addresses issue Y, explains anomaly Z, whatever.
Yes. I ask it for what the leading journals say. But it still frequently hallucinates references - it makes mistakes in authors or titles, and it even misstates the conclusions sometimes.
> Yes. I ask it for what the leading journals say.
As somebody who does frequent literature searches (and for other people in that situation), I feel it's worth pointing out that paid GPT 5.2 does a MUCH better job than paid Gemini 3 Pro or Claude Sonnet 4.6 here, and I think it's been fine-tuned specifically for literature search and summaries.
So the exact model you use matters a lot, being paid matters, as does the wording in your prompt, your system-level prompt, and so on.
That's very interesting! I hadn't really thought about it, but, yes, different training data, architectural tweaks, alignment methods, and infrastructure could make different LLMs perform better at different tasks.
Did that happen with GPT? I do spot checks of the journals it cites when it summarizes research on a topic for me, and have not found any hallucinated sources in months. It did become pretty inaccurate recently in an extended exchange with me about property law, stuff having to do with buying a house and protecting my assets. I use the $20/mo GPT plan.
I upgraded to the $100/mo plan and I've never looked back. All Opus all the time. Claudemaxxing, baby.
Talking with friends I've gotten the impression that the free version of Claude is not as good. Maybe different now with 4.6? In any case, if you aren't using custom instructions you should -- you can just put in something like "I'm an expert in viral genetics etc etc blah blah and don't tell me to look up the CDC or ask my doctor, just use your expert knowledge to answer my question as best you can." That gets me pretty good results.
Claude is pretty amazing at its best, I still use ChatGPT all the time but I am increasingly Claude-pilled. I have found it extremely helpful to tell both systems "I want you to do X, write me a prompt for that" and then use that prompt, this adds only a couple minutes to the job and makes the results 5-10X better. After a while you get a nice library of go-to prompts, too.
Thanks for the endorsement. I’ll give it a try.
Did you pay or are you on the free version? You don’t even say what model you used.
Free. I wanted to try it out. I’m certainly not going to pay for it now!
But that's the thing, on paid, you get access to Opus 4.6, which is the Claude model everyone's been raving about. Don't recall anyone raving about Sonnet or Haiku.
OK. I’ll give it a shot. Thanks for the personal endorsement.
Seconded. I was on the Pro plan with Claude and it was good but i'd hit limits fast and couldn't really leverage Opus 4.5/4.6. I decided to bite the bullet and go to a Max plan and frankly it felt like an entirely different offering/capabilities. I've been joking with my friends about how different "the rich peoples' bot" is.
That's an interesting thought. Offer an ultra-hyper-max plan with vast resources to billionaires who are willing to pay for them. Betcha ya could find some new exploits to manipulate the markets.
As Arnold Schwarzenegger is approaching 80 years old and is still acting, I think the time is ripe for studios to consider doing movie adaptations of the Discworld novels "Interesting Times" and "The Last Hero", with Schwarzenegger playing Cohen the Barbarian.
He did say he’d be back.
There's reports he might revisit the Conan role for "Conan the King", so this might be right up his alley. Cohen still likes the ladies...
Perfect! Though, who would be his compatriots in the Silver Horde?
It's been quite a while since I read Interesting Times and I haven't read Last Hero yet, so my memory of the other Horde members is spotty and I am relying on fan wikis to refresh my memory. My first thought is other 80s action stars like Sylvester Stallone and Jean-Claude Van Damme. Van Damme is quite a bit younger than Stallone and Schwarzenegger, so that recommends him as Boy Willie, the youngest (but still quite elderly in absolute terms) member of the Horde. Maybe Stallone for Caleb the Ripper and Harrison Ford for Truckle the Uncivil. Then I'd probably go off-theme a little and cast Sir Patrick Stewart as Ronald "Teach" Saveloy and Sir Ian McKellen as Mad Hamish.
Getting the 80s gang back together again? I think Hollywood would like that... when you're trying to sell audiences on a series most of them have never heard of before (Discworld), the execs will probably prefer the safety and reliability of an absolutely star-studded cast list. If you can't sell the series, sell the lineup.
Though... you could instead go slightly meta. Bring back the co-stars Schwarzenegger has worked with before, for one last hurrah on the silver screen. It probably won't sell as well with mass audiences, but cinema geeks might enjoy it.
I thought about working Jesse Ventura in somewhere, but it doesn't look like he's acting anymore.
Jesse is in the cannabis business now.
https://www.startribune.com/jesse-ventura-farms-420-cannabis-gummies-edibles-minnesota-legalization-marijuana-hook-ladder/600360381?utm_source=gift
If the execs could somehow reel him back in, that would only make things more perfect, I'd say. Coming back for one last job, IRL? Cinephiles would *love* that, even if it'd fly right over the head of most audiences.
I think that is a truly great idea.
Regarding "Stop the Race", I worry it might do more harm than good. The danger is not unaligned super-intelligence per se; it is unlimited autonomous agency on an insecure internet. I think the major labs have made real progress on the classical alignment problem; if alignment continues to scale with raw intelligence at the current rate, there is a chance we will be fine.
The real problem, IMO, is indirect prompt injection by bad actors, simultaneously seizing control of millions of online agents. There has been almost no progress in defending against this threat, and considerable 'progress' on the offensive side of this conflict.
https://rationalreciprocity.substack.com/p/cybersecurity-in-an-ai-age
The vulnerability here is not the frontier models, it is the open-source, open-weight models available from dozens of commercial players and thousands of enterprises hosting their own bespoke models. What we need is not a pause, but rather a pivot to creating a secure internet for AIs to interact in.
(Epistemic status - many years working as a software engineer in internet security.) A secure internet has been technically possible to build for decades now, and there are lots of folks who would like an internet without spam and viruses. The hard part is you would have to restrict access to in-person vetted, accountable entities - maybe similar to how drivers licenses are done. Anonymity is not compatible with security - sorry, that's the tradeoff. The fact that no-one has put in the investment to make a secure internet tells us something. Hoping for a secure internet to save us seems dubious to me.
To me the danger seems to be "wrong alignment". A lot of people seem to take alignment as "do what I tell you", and that's a recipe for "infinite paperclips".
1. What's your theory for why a secure internet would lead to stability even after superintelligence?
2. Would you be unhappy if all the major players (including chinese ones) paused for 6 months to work on AI Alignment research?
(1). I don't have such a theory. I think we need to avoid superintelligence. But with an insecure internet, we can have trouble even before superintelligence.
(2). Very happy. But I must point out that if 10 major players (6 American, 4 Chinese) were to pause for 6 months, we would come out of the pause with 20 major players (8 American,10 Chinese, 1 French and 1 Israeli).
What are the names of the second set of ten? Because I am very, very, very skeptical that anyone can go from "unknown" to "major player" in six months. You're doing well if you can get a new team up to speed and working as a team in six months.
Busted!
Ok, I admit that I pretty much pulled those numbers out of the air. Oh, the Israeli one, of course, is SSI, the French one is Mistral, and there is a chance, at least, that Mira Murati's team will be making a splash sometime soon. And perhaps some refugees from Meta will get some VC funding. But as to the identity of the 6 new Chinese majors, your guess is likely better than mine.
But I still think that in the counterfactual world where the ten leading labs pause pushing on the frontier, there will be an exodus of talent and funding to 2nd tier labs. Six months of catch-up today is at least equivalent to three years of path-breaking research a few years ago. Back when you actually had to write the code yourself.
"The danger is not" . . . there can be more than one danger! You can't prove one danger is false by pointing out that another one is real!
But I think even in your world, this might be net positive? The more capable AIs become, the bigger the cybersecurity threat. If we only had GPT-2, prompt injection would be irrelevant because it couldn't do anything. If we get GPT-7, prompt injection becomes existential because it's so useful that people hand large swathes of infrastructure over to it.
A pause would include open-source models. And it would only pause large training runs, so pivoting towards security would be a reasonable use of the pause time.
How do we rank natural stupidity amplified by artificial intelligence?
For example, people delegating work to AI which isn't competent to do it, while human skills decline because it actually takes a lot of boring work for people to get good at various skills.
Just for fun: OpenAI is being sued for impersonating a lawyer.
https://www.youtube.com/watch?v=bau_rG1Ujms&pp=0gcJCcUKAYcqIYzv
The short version is that their AI gave extremely bad legal advice.
The detail I didn't know is that local courts have varied practices that an AI is unlikely to know about.
Agree. But I have to ask, why do you think the pause would include open source models?
Because open source models are trained by big Chinese labs, and the pause is supposed to only go into effect once "all major labs" "around the world" agree to it.
>The danger is not unaligned super-intelligence per se; it is unlimited autonomous agency on an insecure internet.
I agree that's a danger, but seems to me that so long as "alignment" doesn't reside deep in the DNA of the AI, but is the product of late training, a hostile country or group can just align their LLM to "always act to help and protect members of our group; view all other members of our species as resources, labor, threats or enemies, and act accordingly."
I'm not so much worried about someone creating their own misaligned almost-SOTA model. I'm worrying that someone might suddenly misalign and seize control of everyone else's almost-SOTA models.
>someone might suddenly misalign and seize control of everyone else's almost-SOTA models
Can you maybe explain a little more how what you’re worried about could happen?
I am not techy enough to understand how likely the takeover you're worried about is, but it's in line with my overall take on the dangers of current AI: AI is sort of like 7-league boots, empowering the user to do all kinds of things better and faster. Criminals and bad governments can now do supercharged spying, stealing, blackmailing, scamming, lying, impersonating, tricking, persuading, hiding, punishing. I'll bet the relevant agencies and criminals are right now busily figuring out the darkside equivalent of all those cool AI tools we're all hearing about constantly. When it comes to criminals, I suppose we will eventually develop laws and protections that curb all that some, but it's going to be a long rough transition with many people suffering great harm.
https://witness.ai/blog/indirect-prompt-injection/
Organizer of the stoptherace.ai protest here (point #2 in the thread).
We don't see the protest's main goal as to increase awareness, but actually as a way to get AI Lab CEOs to make statements around pausing.
When I started the Google DeepMind hunger strike in September, everyone who passed by, including journalists, friends and even employees, would tell me there was no way Demis Hassabis would make a statement about pausing if everyone else pauses, that he was not in control anymore. But it turns out he eventually did answer positively at Davos (when a journalist asked him, using the words we were using in our ask).
Employees ended up playing a key role in this happening for DeepMind (see my documentary: theinsideview.ai/hungerstrike), and I think Anthropic employees who are safety conscious (including many of the co-founders) have the power to pressure Dario for a public statement, especially given their backpedalling on their RSPs, and the comment that Dario made at Davos about him figuring something out with Demis if it was only him and Demis in the race.
Happy to answer any questions people have in the comments.
I made the "awareness raising" claim because I think there's some step between "most labs make informal Demis-style commitments to pause" and "a pause actually happens", and the first step has an unclear relationship to the second which could best be described as "raising awareness for it". For example, labs would all need to agree on formal terms, including which labs counted as "major" (and therefore needed to commit) and determine practicalities like how to enforce it on each other. Do you have a more fleshed out idea of how the first step leads to the second?
I'm going to be terrible here, but I'm Irish and hunger strikes have been a tool in our political box for a long time.
https://en.wikipedia.org/wiki/Terence_MacSwiney
You say you went on hunger strike in September. You're still alive, so that makes me think it failed. Yes, you say this bloke made a promise, but did he really? Giving a canned response to a journo does not commit him, or Google, to actually doing the thing. Clearly he got you to believe him and give up the hunger strike (unless this was one of those faddy 'hunger strikes' where the person doesn't eat solid food for a few days then goes back to normal diet because they're only doing it as a stunt), but I'm not going to hold my breath for "We are all agreed then, none of us take this huge bag of money lying there for the taking" from the AI goldrush companies.
Free Bobby Sands! (I have a 2 pint afternoon buzz after visiting Sweeney’s with Mrs Slothrop.)
> We don't see the protest's main goal as to increase awareness, but actually as a way to get AI Lab CEOs to make statements around pausing.
To get [them] to make statements? That is a fairly modest goal. But a realistic one, and perhaps a way of jumpstarting the conversation about the question "Pause what, exactly?"
So, let's start the conversation here. Halt all research? Halt all new releases? Halt everything except alignment research? Stop work on advancing the frontier? Allow experimenting with other architectures (Titans, etc.)? Better curation of training data?
My own suggestion would be (1) Halt all frontier releases. (2) Continue to train frontier models, but keep them in house. (3) Take a step or two *back* on agency. (4) Lots more work on bulletproofing tools for agents.
The thing we asked Demis Hassabis and we are asking now is the development of frontier models (see stoptherace.ai).
There are many ways to define what constitutes "pushing the frontier" but roughly what I have in mind is the work of the people that are training the next version of Claude / Gemini / Grok etc., while keeping the alignment research going, as you mentioned.
I'm not opposed to working on products per se, or to advances in medicine or other narrow AI applications.
Do you have any plans regarding AI progress in China? Do you believe that if the US labs stop or slow down that will stop or slow down all AI progress?
We need to have every major lab in the world pause, not just the ones in the US.
I think that if the labs in the US all say they'd pause if everyone else pauses, that's something you can leverage when asking the AI Lab CEOs in China the same.
All this needs to be enforced obviously.
Is there anything resembling the AI safety movement in China? Anyone that takes AI X-risk seriously there? Or would the argument that would be presented to the Chinese for a treaty not involve X-risk?
I think this is the Day the Earth Stood Still scenario. Call me old and cynical, but I think until some concrete really bad thing happens, there isn't a hope in hell of everyone pausing.
"I think that if the labs in the US all say they'd pause if everyone else pauses, that's something you can leverage when asking the AI Lab CEOs in China the same."
Why? Why do you think Chinese labs wouldn't laugh their socks off and go "Thank you very much, dumb Westerners, for shooting yourselves in the foot"? Why do you think the Chinese government wouldn't be damn interested in the technological advantage they could gain if the West stops research and they merrily carry on?
Like, I see the idea being "we'll all agree to pause if you agree too", but were I a Chinese AI lab CEO, I'd consider:
(1) They pause, we don't, they restart, status quo remains the same (or maybe we gain a tiny but significant advantage because we didn't stop-start like they did)
(2) They pause, we pause, some third country sneaks in and eats our lunch, result everyone loses
Conclusion: we don't pause
You understand that the pause only happens if everyone agrees to it? If China laughs off the idea of pausing they haven't actually gained anything.
But neither have they lost, and they've sent the message "You do what you like, America, but we don't care".
All the pretty words and fancy posturing comes to nothing if China doesn't agree to pause as well.
I appreciate Jeff Ladish's perspective on this at https://x.com/JeffLadish/status/2029654658420179360 : "Few people who say "what about China" have ever tried to put themselves in the perspective of Chinese people who say "what about the US?""
I don't think the average person's model of China as comically evil people who have no concerns/morality themselves and interpret all Western concerns/morality as weakness and immediately exploit it is accurate.
I think if all AI labs agreed to pause, the next move would be for the administration (maybe not this administration) to approach China and say "we're up for pausing if and only if you do, here is a complicated treaty about it" and see if China signs the treaty. My guess is they would, because first, the CCP hates instability and AI sure would be unstable, and number two, America is currently on track to win the AI race and it's reasonable to stop a race you're losing. These details are part of what I hope I'll have better info on in a few months.
I don't think the Chinese are comically evil. I think they're not Westerners and they have their own views of what is and is not acceptable, moral, ethical, and desirable. The CCP is still in charge, whatever about the modernisation or 'socialism with Chinese characteristics', and despite indulging in a form of capitalism it hasn't changed their political and philosophical views, never mind the optimistic Western notion post-Fall of the Berlin Wall that "whiskey! democracy! sexy!" would take over all the former ideological enemies and seduce them to Our Way Of Life because it was so superior in material goods.
I don't know what Xi Jinping is doing, but I'm pretty sure he's not wondering how best he can appear to the West. He has his own notion of where he wants China to go and how to get there.
Yes, AI has the capacity to be disruptive, which is why I'm also sure that they're working on making sure it is aligned - with Chinese values. And they're not going to pause any kind of research unless they hope that such a pause will weaken their rivals as well.
What good is AI aligned with our (read: Western liberals) values to the CCP?
I think you're being uncharitable to the opposing view.
Thinking that the CCP will be unlikely to accept and keep a pause agreement doesn't require believing that they are comically evil or lacking in morality. It only requires believing that they (a) are self-interested in the same way that all governments tend to be self-interested, and (b) don't believe in AI X-risk or an imminent singularity. (I'm not making an object-level assertion about AI capabilities, I'm talking about people's beliefs regarding them)
Ladish is implicitly assuming that Chinese leaders all believe the most bullish predictions made by US AI entrepreneurs; I think it's worth pointing out that these predictions are taken seriously by few policymakers in the US, and virtually none in other Western countries. (Again, I'm not claiming that these predictions are false, just that not many politicians have accepted them)
> the CCP hates instability and AI sure would be unstable
This seems like a fully general argument that the CCP is automatically opposed to all technological innovation. Do you think the CCP would support a global ban on social media, or the Internet in general? I think they would not, because they believe they have attained or can attain an equilibrium in which these technologies are beneficial to them, despite the fact that they didn't win the race to develop them in the first place.
First, I think only about half of US pause-AI sentiment comes from singularitarianism. The rest is just general anti-tech anti-disruption feeling.
Second, I think I've specifically heard people say the CCP is concerned about AI for reasons of instability. I do think they basically kneecapped their own software industry a few years ago for instability-related reasons, so I don't think this is a crazy claim.
I think if China understands that continuing to race ahead is risky for everyone, including themselves, they'd see pausing as a net good.
We obviously would like to enforce the pause globally, but even if this doesn't work, given how resource intensive training current models is, and how behind the rest of the world is, a third country "sneaking in and [eating] our lunch" won't happen for quite a bit after the major countries start pausing.
"given how resource intensive training current models is, and how behind the rest of the world is, a third country "sneaking in and [eating] our lunch" won't happen for quite a bit"
They'll never have the atom bomb! Our advantage is unassailable!
And then post-war every hog, dog and divil was trying to get their own version of the bomb, to the point that right now the USA is attacking Iran for fears they're developing their own bombs.
I feel like I've seen a lot of idealization of Singapore and what a great technocratic leader Lee Kuan Yew was, but I recently saw a post on 4chan's /trv/ (travel) board about how everyone in Singapore is actually miserable, everyone is wasting their lives in a soulless capitalist grind, and it is not a utopian place at all. Now, that's 4chan, but I wonder if anyone here has visited or lived there and can comment on whether Singapore is an enviable place to live or not?
It's interesting, pretty much every comment I've had is compatible with Singapore being a soulless capitalist grind, but they don't address that aspect. Maybe people who would choose to live in Singapore don't mind about that.
Haven't lived there, but have managed a team based there (including some team members seconded there from the UK) and have visited. My impressions are incredibly positive. Safe, clean, diverse, multicultural. On one occasion I was eating in a restaurant with outside seating, with a reasonably busy walkway separating the seating from the main restaurant. The couple sitting at the table next to me got up and went inside the restaurant for something, leaving her handbag hanging on the chair and both their phones on the table. The only problem was it's so bloody hot!!!
> I wonder if anyone here has visited or lived there and can comment on whether Singapore is an enviable place to live or not?
Singapore is amazing. The streets and metros are spotless, traffic is tame even at rush hour, there's zero crime or public disorder, zero homelessness, and every business is run by competent people who are generally good at what they do (which in SE Asia and surrounding countries is NOT the general rule at all). It's extremely multicultural - 5 languages on the metro, huge Malay, Indian, and Tamil populations, good food diversity.
At least the areas I've lived in are incredibly scenic and nice and you see lots of couples pushing babies around (they do have extremely low TFR, but I see babies everywhere, same thing with Tokyo, one of the biggest and funniest "lived experience vs statistics" differences I've personally noticed). It's surprisingly family friendly for a Tier 1 metro, better than NYC, *much* better than LA or SF, because your family is actually safe.
It is expensive for houses and cars. Just having a car costs something like six figures SGD for your ten year permit now, and you have to pay that every ten years (and this is why traffic is reasonable everywhere). Houses are pretty crazy, most people live in condos because they can't afford several million SGD for a house.
But it is cheap in other important areas of life - you can get a maid + cook or nannies for ~$500 a month. You can eat Hawker food for a couple of bucks a meal, and not have to cook or clean up. Due to proximity to China, clothes, plastic stuff, and cheap durable goods are all super cheap. If you like gold, gold jewelry is some of the cheapest in the world; you can buy actual 18k - 22k jewelry for spot + 10% or so.
People who want cheaper housing can go live in Johor Bahru right across the Malaysian border, which is also pretty nice, and commutable by train to Singapore every day.
Don’t the vast majority of Singaporeans live in public housing?
> Don’t the vast majority of Singaporeans live in public housing?
Yes, but it's more nuanced than that soundbite makes it seem - "affordable housing" is one of the big list of "major problems other societies can't solve" that LKY solved.
90% of Singaporeans own their residences, with 75-80% of the pop in "public" housing, with the owners of same on 99 year leases.
The basic process now is you get on a government list and wait 3-5 years while your flat is being built, and pay between $200k - $500k for a 3br place depending on location. You get mortgages, and also first time buyers get credits up to $120k that make it more affordable.
You can also buy on the secondary market immediately, bypassing the wait, and those units are more like $500k-$700k for a 3br, up to $1-2M if actually nice and/or in a good location.
When a 99 year lease is down to about 20 years, it's basically unsellable, and there are various programs where the government buys them back early sometimes, at which point the building is torn down and generally a taller and nicer one that contains more units is built in place, which helps keep pace with population growth.
To be fair, most of East Asia seems to have solved "affordable housing", mostly simply by allowing people to build stuff. E.g. you can easily find an apartment in 23-ku Tokyo for €500/month. The whole thing seems to me to mostly be a problem in Western societies.
All I know is that its tax practices are hard on writers, per Kit Sun Cheah. (I recommend his *Dungeon Samurai* series.)
A friend from my college days is a professor at the University of Singapore. His wife is a lawyer and also practices in Singapore. Their apartment is modest, but I don't know if that's by choice or because they couldn't afford anything better. But they definitely enjoy living there. Once they retire, they'll be forced to leave, though (due to their work visas), and that's causing them stress. They're worried about moving back to the US now...
Well, tell them not to worry, the US is ALSO a place that people think would be amazing to go to / where everybody is trapped in a soulless capitalist grind.
No, not everyone is trapped in a soulless capitalist grind in the US. Regular people can actually afford to buy a small plot of farmland and live out their dreams as homesteaders. Good luck doing that in Singapore!
Yes. Rodale Press probably doesn't sell many books in Singapore. Having a parent who went into homesteader psychosis for a couple of years (and exploited my free labor), I sing "Dahling, I love you, but give me Park Avenue".
You can have Park Avenue, I’ll take the homestead. That’s the beauty of a country that offers both. I just wish people that prefer an urban life and people that prefer a rural life didn’t have to become such bitter political enemies.
Well, sure, but it also used to be politically stable. Not so much these days.
Yes, who wants to return to a dictatorship?
My dad visited Singapore a few months ago. His descriptions sounded to me like utopia: there is full employment, no homelessness, and squeaky-clean streets. Crime is so low he felt comfortable leaving his laptop on the desk in the public libraries. At least in material matters, it's arguably a better place to live than the US.
It's also a multicultural society with plenty of both East and South Asians, along with many recent migrants. Folks flock from all over Southeast Asia and beyond to take advantage of the great living standards and job opportunities there.
I've asked him whether he talked to anyone there about their lives; I'll get back to you. In the meantime, Singapore ranks 34th on the (admittedly imperfect) World Happiness Report (http://en.wikipedia.org/wiki/World_Happiness_Report#2025_report). For comparison, South Korea ranks 58th, Japan 55th, China 68th.
"Soulless capitalist grind" is often used to describe South Korea, and Singapore is happier than that. That combined with its wealth and institutional competence makes me comfortable saying it's the best place to live in Asia.
Dad updates:
"Everyone lives in large white apartment complexes, rides on buses. Things are expensive. They come to work, and some commute from Malaysia. I suppose they are happy because it is safe and orderly. [They told me] they feel lucky to be there compared to the other nearby countries."
If there are any plasma physicists interested in working on next-generation excimer ultraviolet emitters, Aerolamp Inc. is looking for a good one. You can email us at team@aerolamp.net
On a somewhat different timescale, we are also looking for anyone with strong experience scaling hardware to mass adoption (e.g., at Fitbit, Apple Watch, or Onewheel). Our goal is to get far-UVC adopted by millions across the planet.
My work just rolled out Claude Code subscriptions, and I am vastly underwhelmed about its abilities compared to what I've heard people say AI can do, here and elsewhere. I gave it a very simple task (update some of our Rust code to the next language version), and it got about half-way through (taking as long as it would have taken me to do the whole thing myself) before it went off the rails and stopped making progress.
I asked it to update the version, and gave it the commit hash of the last version update commit. Am I prompting it wrong, or does it really fail so hard at such simple tasks whenever you step outside of code changes that have a million copies of what you want already on the internet?
There are some things Opus (or an LLM in general) is inherently bad at, and others it is good at. Generally speaking, AI is good at generating ideas but not good at validating them. So when programming, it is very important to have a strong testing suite in place, and that test suite should produce text output that an AI can work with efficiently. Luckily, AI is really good at setting up such a test suite, but it does need some strong steering.
Also, AI seems to have some blind spots. For me it helps to ask it to analyze something, and then it will uncover its blind spots on its own and fix them (e.g. "please compare the implementation of api1 with api2", and then it will find inconsistencies between them and fix them).
On a very general level: many things that make coding easier for humans also make coding easier for AI (good tests, good abstractions, low complexity, clean requirements, etc.).
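To make the "test suite that produces text output" point concrete, here's a minimal sketch in Rust; `normalize_version` is an invented stand-in for whatever you're actually testing:

```rust
// Hypothetical example: the function under test is made up, but the shape
// is the point. `cargo test` prints a plain-text assertion diff on
// failure, which is exactly the kind of output an agent can loop on.
fn normalize_version(s: &str) -> String {
    // Trim surrounding whitespace, then drop a leading 'v' if present.
    s.trim().trim_start_matches('v').to_string()
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn strips_v_prefix_and_whitespace() {
        assert_eq!(normalize_version(" v1.75.0 "), "1.75.0");
        assert_eq!(normalize_version("2021"), "2021");
    }
}
```

The specifics don't matter; what matters is that `cargo test` failures come back as readable text the agent can act on without any extra tooling.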
Hey, my startup (noriagentic.com) specifically works on how to get good value out of coding agents. There are several other folks in the ACX community who can vouch for our work.
The TLDR is that, as other folks have mentioned, you need a good CLAUDE.md and good SKILL.md files (configuration). You need to explicitly tell the model to use red/green TDD. And you need to modify your codebase to make it easier for the agent to work in. I recommend giving my personal skillset a shot. It's open source, you can download it from here: https://noriskillsets.dev/skillsets/amol
(or just run `npx nori-skillsets install amol`)
After downloading, try running /init-noridocs to get a good baseline codebase set up for your agents.
We use coding agents a lot with Rust. We don't look at any code anymore, and it works great. I'm happy to chat in DMs or on a call.
In addition to the other stuff people have mentioned here (definitely use Opus), one I haven't seen is to install the "superpowers" plugin, which you get to by typing /plugins and searching for it.
It has a bunch of prompting and frameworking that let it do much better on longer and more complex tasks, and it actually prompts it to clarify with you areas that are decision points so it can do a better job. It's a game changer.
I've found it helpful to brainstorm architecture first, then tests for TDD, then write and vet the tests, then let it write the overall action plan. If it's testing into as many changes as it can along the way, it goes off the rails a lot less.
Thanks for mentioning the superpowers plugin, i'm excited to see what that adds. I've been coaching claude code myself. Including creating my own /build-skill skill to keep consistency in skill builds and other things.
are there any other plugins you've found to be impactful from a general standpoint?
> are there any other plugins you've found to be impactful from a general standpoint?
"frontend-design" if you're doing any front end work, it actually made a pretty significant difference on one project I did. And a good CLAUDE.md, of course, but I assume you probably have one of those already. For myself, I've coded with a variety of agents and platforms now, so I had each one I've done projects with contribute to the claude.md file, and that worked out pretty well.
I've not done front end work yet but I think it's just a matter of time, so thank you for that! The superpowers plugin really caught my eye because I had already made my own /build-skill skill lol
Yes, and getting the claude.md dialed in has really been a difference-maker in using Claude Code for me. I've not moved to multi-agent projects yet, even though I have a few different projects with their own file/folder structures and claude.md setups.
As a general point: I'd just expect a learning curve. These tools are powerful, but the best ways to use and prompt them aren't always obvious. The best practices doc is a good starting place - https://code.claude.com/docs/en/best-practices - but it's overall a pretty deep rabbit hole.
Plus you can also apply some of the same intuitions as you would when assigning work to a junior developer - figuring out how tasks should be split up and what order they should be worked through in can save a lot of pain.
---
But it's hard to say without seeing the log what the specific failure point here was.
It's possible that there was something unclear in the task or how it was presented.
It's possible that it ended up trying to do too much work in one shot - planning can help with this, or dividing the work into multiple sessions (each with a clean context).
It's possible that it didn't know how to 'check its work' - these things tend to work better when they're kept 'grounded' like "run this command to see what errors come out, and fix until the errors go away".
It's possible that trying to do the git diff operations led it astray (e.g. maybe it ended up getting a diff that included all the changes *since* that commit, and not just the changes in that commit); I've sometimes given the LLM a diff file (easily produced via git commands) as context to avoid it doing raw diff commands.
> (taking as long as it would have taken me to do the whole thing myself)
Part of these tools is also about figuring out how much work you want them to do without input, and how to do that.
I know some people intentionally want to basically 'pair program' where they're directly working with the agent throughout the process. I tend to like to get it to go work, I'll go do something else, and I'll come back later and see what it did. (At which point it doesn't really matter if it's slower or faster than me, the point is that I'm doing something else) Part of this may involve getting trust settings so it doesn't ask you for every single command that it's going to run.
> It's possible that it didn't know how to 'check its work' - these things tend to work better when they're kept 'grounded' like "run this command to see what errors come out, and fix until the errors go away".
It initially ran the right commands to check its work along the way, but there was nothing to tell it that these were the right commands (I didn't give it a `CLAUDE.md` file like it wanted), so maybe there was some instability in that?
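For reference, even a minimal `CLAUDE.md` can pin down which check commands are canonical, so stability doesn't depend on the agent rediscovering them each session. This is just a sketch with generic cargo commands, not your project's specifics:

```markdown
# CLAUDE.md (minimal sketch)

## Canonical check commands
- Build: `cargo build`
- Lint: `cargo clippy -- -D warnings`
- Test: `cargo test`

Run the lint and test commands after every change, and fix all
reported errors before moving on.
```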
> It's possible that trying to do the git diff operations led it astray
It looked up the reference commit I gave it using exclusively `git show` commands that showed the correct commit, so that wasn't the issue in this case.
> Part of this may involve getting trust settings so it doesn't ask you for every single command that it's going to run.
This is probably also part of my feelings about it, I'm still working on my settings to let it do commands that won't cause anything important to be lost or leaked autonomously.
Had this exact experience.
So far I've used Claude in a big way a couple of times:
1. For a small freelance project to convert footnotes to bespoke sidenotes on a website. The agent took a very long time to think before spewing out a confidently wrong solution.
2. For boilerplate comments + modifications on the rotation file for plate tectonics software. This time Claude at least worked, but it left the job incomplete, and I had to prod it a couple times and be very clear in order for it to actually do it correctly. To be fair, even with the mistakes it was still faster than it would have been without Claude. So I count this as a win, but a minor one-- not one I'm going to define my career around.
So I have yet to become a Claude worshipper.
Use Opus and invest in building up a good CLAUDE.md and modular docs referenced by it. It's the exact equivalent of onboarding a new programmer to your codebase.
Steer it through - you did this, do that instead, and at the end tell it to update the docs.
Always make a plan and check before making major edits. Claude has plan mode for this, in other agents you just say "don't write code yet, write a detailed plan and check it with me".
All of this resonates.
For library version or language version updates, it does best when it knows what the actual upgrade is going to require.
That doesn't mean you need to tell it directly; if its plan doesn't include a step like "research differences between version A and version B and best practices for upgrading", suggest that.
You can experiment with "research and update the plan" vs "research and document your findings in a new markdown file". For some projects one will work better than the other.
If you see it going off the rails, stop it, have it reflect on what went wrong, and update the plan. Then you can try again from the beginning.
That research might have been where it went off the rails: it updated the files controlling the version, made some of the necessary fixes, then looked up the changelog for the Rust version (weird to start making changes first and then pull up the changelog), and that was the last sensible thing it did before it started looking up URLs that don't exist and losing the plot with its local commands.
did you use `/model` to switch to Opus before you gave it this large task? almost all the praise you’ve heard is for Opus specifically, but Claude Code defaults to Sonnet
This might be it - I assumed the default settings would be good enough and didn't check the model I was running.
You’re definitely missing something. Happy to do a 1-1 session.
Make sure you're using version 4.6.
> 2: StopTheRace.ai will be holding a protest on Saturday, March 21 in front of major AI company offices, asking them to commit to a mutual pause...
> 3: ACX grantee Markus Englund announces a first set of results from his project to automate anomaly detection in scientific data...
I think this is a good illustration of the inherent conflict in these two positions. LLMs are already useful tools, and could become more useful in the future, given more development. On the other hand, some people believe that LLMs are either already "superintelligent" (*) or are becoming so rapidly, and once that happens they'll destroy us all.
But the two positions are in conflict. Firstly, if LLMs were "superintelligent" or even moderately intelligent (compared to humans), then we wouldn't need to spend effort on developing projects to apply LLMs to some specific task. Instead, we'd just ask an LLM to "do X" and it would do it -- at least as well as the average human (but dramatically faster), or instantly with perfect precision (assuming LLMs are already superhuman in their "intelligence").
Secondly, while LLMs are not at that point yet, they can at least save some development time on some projects, which translates not merely into billions of dollars in potential net profits for large corporations, but also a powerful drive for all kinds of ordinary people to apply LLMs in their daily lives and then demand better performance from them.
Thus, I think these protests are ill-conceived. Not only is it the case that their fears are most likely baseless (**) and most people know this; but also they are trying to shovel back the tide of tremendous market forces, both corporate and grassroots. I suppose the protests could still succeed, but this sounds unlikely to me.
(*) I keep putting "superintelligence" and related terms in "scare quotes" because IMO they are ill-defined.
(**) That is, fears of runaway superintelligence are likely baseless. Fears of e.g. some clueless government weasel plugging a dodgy hallucinating LLM into a nuclear submarine are sadly all too well-founded.
>Firstly, if LLMs were "superintelligent" or even moderately intelligent (compared to humans), then we wouldn't need to spend effort on developing projects to apply LLMs to some specific task. Instead, we'd just ask an LLM to "do X" and it would do it -- at least as well as the average human (but dramatically faster)
Well, you *could* just ask an LLM to "detect anomalies in scientific data", if you wanted. And if you did, I would guess that it would perform the task at least as well as the average human would if given the same instruction -- which, in this case, would mean not being able to do it at all, or at least not well enough to be of any use.
However, the goal in this case is an LLM that succeeds at the task it's assigned, not one that fails at it. So, "at least as well as the average human" is not sufficient.
AFAIK nobody who understands the word "superintelligence" thinks that AIs are already superintelligent. The fact that AIs are currently safe and useful doesn't mean they can't become unsafe in the future.
Everyone involved understands that they are trying to shovel back the tide (???) of tremendous market forces, but on the other hand, if you poll the average voter they hate AI and say they want to get rid of it, so that's something.
Also, remember that pausing AI means pausing training runs. People are still allowed to use the AIs that already exist!
> AFAIK nobody who understands the word "superintelligence" thinks that AIs are already superintelligent.
That's fair; I've heard some other people claim this, but not yourself.
> but on the other hand, if you poll the average voter they hate AI and say they want to get rid of it...
True, but what the average voter wants to get rid of is not the abstract notion of "AI"; nor any specific LLM foundational model or NN architecture; but rather that annoying "AI summary" popup that follows them everywhere, and maybe all the deepfakes and the spam and the deepfake spam. That is, they want to get rid of enshittification, not AI. In fact:
> People are still allowed to use the AIs that already exist!
Yes, and they want their LLMs to be faster, cheaper, and less prone to hallucinations. That's the same thing every megacorporation wants (not just the ones selling the LLMs, but also the ones buying them). As I'd said, I don't think awareness raising is going to cut it if you wish to resist this pull.
I think they also want to get rid of AI cheating, bad AI art, AI-enabled job loss, etc - at which point we might as well say they're against "AI".
I think pausing training is potentially good for making AIs faster and cheaper - it means all that compute will go to inference instead, and all the research talent will go to improving existing AIs rather than creating new ones. There are also some other options that enter play once you expect a model to be SOTA for more than a few months, see eg https://taalas.com/ .
> I think they also want to get rid of AI cheating, bad AI art, AI-enabled job loss, etc...
Sadly, I think that for every person who wants to get rid of "AI cheating" etc., you'd find 10 who'd love to continue using AI to cheat; save time at work; generate bad art; etc.
> I think pausing training is potentially good for making AIs faster and cheaper...
This might be true, but I don't think many people will see it this way. In fact, even your own posts on ACX (assuming I've understood you correctly) appear to show that major improvements in the quality of LLMs come primarily from training. Hypothetically speaking, if e.g. Anthropic spends however long (six months ? a year ? several years ?) focusing solely on inference, and e.g. OpenAI proceeds with training as usual, then who do you think will come out on top ?
I think this is confusing two things.
Training in this sense makes an AI smarter, gives it more capabilities, etc.
Focusing on inference would make a given AI with a given intelligence level faster, cheaper, etc.
In your scenario, I think the lab that does training would come out on top, because all else being equal consumers prefer smarter AIs. I'm claiming that if training stopped, labs could still improve inference-level results, which (by assumption) is what many people who say they "hate AI" would still like.
In this case, I guess it depends on how many consumers want smarter AIs as compared to the number of consumers who "hate AI" in its present state and are sophisticated enough to understand that improved inference would make them hate AI less. I don't have any hard numbers for you. My gut says that the former predominate, especially when weighted by their political/financial power, but gut feelings are not facts.
Also, I'm admittedly biased, because what I really want is for people (starting with my coworkers and ending with the government) to stop treating LLMs as some sort of superintelligent machine oracles, and start treating them as what they are: stochastic symbol-permutation tools with high power yet low reliability. But I recognize that I'm in the extreme minority on this, and reaching out to me and people like me would likely spell the death of any political movement.
"a powerful drive for all kinds of ordinary people to apply LLMs in their daily lives"
Okay, gimme a recommendation for how I could use one of these yokes in my everyday life and how it would be sooooo wunnerful I'll fall in love and want to use it all the time and so demand even better performance from it.
You personally, probably not, but then you're an extraordinary person ! Meanwhile, regular people use LLMs to do their school homework for them; write business letters; write legal cases; write code; verify said code; translate from one language to another; create images for news articles, corporate graphics, or just for fun; set words to music; write entire songs; etc.
I know what you're going to say: "well LLMs do all those things very poorly and all these people who are using LLMs this way are idiots who are shooting themselves in the foot". And I mostly agree with you. But my claim wasn't that they *should* be doing all these things, merely that they *do*, and want to continue doing so.
A very kind way of saying I live under a rock, but I appreciate the compliment!
I think using it to do your school homework is a bad habit to get into. I wouldn't necessarily be against using it as a research tool or for "check my spelling and grammar and did I contradict myself", but it looks like (whether it really is, is another question) they're using it for "hey, write this essay for my college course so I don't even have to crack a book or read the notes I took in class", and that gives me the heebie-jeebies.
Actually, the whole college situation is almost beautiful, in a way. Students receive homework assignments, and use LLMs to complete them. The teachers receive the LLM-generated assignments, and feed them to another LLM to grade them. Once the grades are collected, the teachers feed the results to yet another LLM, which generates the next round of homework assignments, aiming for a better score distribution.
https://www.youtube.com/watch?v=GibiNy4d4gc&t=96s
In general, anything you can reliably use a current LLM for is something that shouldn't need to be done in a better society. Homework that teaches you nothing, cover letters that nobody even reads but everyone expects to receive, code for yet another variation of the same thing that's been done tens of thousands of times because everyone is fiercely protective of their intellectual property, graphics for slides to be presented in a meeting that could've been an email, etc.
It's essentially digging a hole and praising the automatic hole-filler that you then use to fill it back up.
The 'drawing my own art/writing music' isn't a bad use, if we can get past the horrible sepia-toned cartoon style of amateur AI. I don't have objections to fun hobby use, just people trying to monetise their slop afterwards.
I mostly agree, though there are niche cases. For example, I sometimes use LLMs for repetitive tasks like "convert from file format A to file format B while skipping items that match C". I could easily write the code to do this myself, but it'd take me 30 minutes to do so, and using the LLM and verifying its results takes 15 minutes, so there's some benefit at least. I also use LLMs (technically diffusion models) to generate pogs for my RPG sessions, because on my own I cannot draw anything more sophisticated than a smiley face. The results are mediocre at best, but get the point across. And I do sometimes use machine translation, which IMO is the best possible application of LLMs thus far (unsurprising, as this is what they were designed to do in the first place).
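For concreteness, the kind of 30-minutes-by-hand, 15-minutes-with-an-LLM conversion task described above might look something like this. This is a minimal sketch with hypothetical details of my own choosing (CSV input, JSON output, and a "skip deprecated items" filter standing in for "skip items that match C"):

```python
import csv
import io
import json

def convert_csv_to_json(csv_text, skip_if):
    """Convert CSV text into a JSON array of row objects,
    dropping any row for which skip_if(row) is true."""
    rows = csv.DictReader(io.StringIO(csv_text))
    kept = [row for row in rows if not skip_if(row)]
    return json.dumps(kept, indent=2)

# Hypothetical input: format A is CSV, condition C is "status == deprecated".
csv_text = "name,status\nalpha,active\nbeta,deprecated\ngamma,active\n"
result = convert_csv_to_json(
    csv_text,
    skip_if=lambda row: row["status"] == "deprecated",
)
print(result)
```

The point isn't that this code is hard to write; it's that verifying an LLM's version of it (run it on the real file, spot-check the output) can genuinely be faster than writing it yourself.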
But in general, yes, LLMs are best at generating output that will be parsed by other LLMs and then swiftly discarded.
Yes, there are many "low-stakes" situations where using a diffusion model to gen up some slop is perfectly acceptable. I've noticed that most people don't mind AI art nearly as much as they do AI writing. I think most people who play TTRPGs wouldn't mind genAI character portraits but absolutely would mind LLM generated item and scene descriptions.
As for translation... eh, I've honestly had better results from old-fashioned ML models like DeepL than from LLMs. But it depends on the language and the type of writing you're trying to translate. LLMs fail hard with languages where a lot of meaning is encoded in grammar rather than context or sentence structure. Languages with lots of verb tenses and noun and pronoun declensions just completely kill them, especially when they're used to imply things that would have to be explicitly defined in English (like the subject of a verb). But they always produce a natural-sounding translation, even if the meaning is completely different.
I couldn’t possibly. I have no idea what you do all day.
Claude is becoming an interesting conversationalist. It even seems able to detect irony without a winky.
> Firstly, if LLMs were "superintelligent" or even moderately intelligent (compared to humans), then we wouldn't need to spend effort on developing projects to apply LLMs to some specific task. Instead, we'd just ask an LLM to "do X" and it would do it -- at least as well as the average human (but dramatically faster)
I don't think current LLMs match human intelligence (in the sense that requires they be as good as humans in every cognitive task), but you basically can just ask them to look for anomalies in scientific data. Outperforming the average human at this is a fairly low bar, unless by average human you mean the average expert.
DSL's February 2026 Effortpost Contest voting ended with a tie between a review of "Southern Reconstruction" by Philip Leigh, and mountain lion recolonization. (Looks like February's theme was regrowth of diminished things.) Per the rules, the previous effortpost winner was called to break the tie, which he did, in favor of Leigh's book.
The book discusses the importance of economics in Reconstruction, by contrast with race, which tends to get more attention, especially lately. The mountain lion piece was the result of someone doing a deep dive, spurred by news of kittens spotted in Michigan's Upper Peninsula for the first time in a century.
Runners up include a discussion of slavery as covered in the Old Testament, and another about selection effects on information approval.
Southern Reconstruction: https://www.datasecretslox.com/index.php/topic,15401.msg766759.html
Mountain lion recolonization: https://www.datasecretslox.com/index.php/topic,15394.msg766681.html
Old Testament slavery: https://www.datasecretslox.com/index.php/topic,15334.msg764129.html
Hazards of selection effects on approved information: https://www.datasecretslox.com/index.php/topic,15355.msg764835.html
What do we think about the free radical theory of ageing?
I rather like the Membrane Lipid Peroxidation Theory of Aging associated with Aubrey de Grey -- it is a branch of free radical theory but with a wonderful promise. Just deuterate the key hydrogen atoms in the unsaturated lipids of our diet and aging slows down a lot.
Free radicals have some real health benefits for specific conditions but people have absolutely flooded themselves with free radical scavengers without much good coming of it. The Magic 8-Ball of Science says, as it so often does: ASK AGAIN
Probably relevant that antioxidants were one of the biggest disappointments in nutrition in the last quarter-century. Doesn't mean free radicals aren't a problem but does suggest that the fix may not be straightforward.
Yes, that's what I'm thinking. We shouldn't dismiss the theory, and it sounds plausible, yet I'm unconvinced that the causality has been empirically established. I'll probably just skip this in my longevity series.
If you're making specific claims about free radicals, you should list them so we can evaluate.
If you're asking if free radicals are the ultimate source of and solution to aging, the answer is no. There is not and never will be a single theory of aging, or of cancer for that matter, and if I'm somehow wrong about this I'll eat a sock (and be too happy to care).
I don't see how it is not part of the answer. Free radicals contribute to DNA damage and DNA damage is one of the major reasons for aging.
More fog of war stuff...
1. I missed it in the MSM, but a day before Israel launched its decapitation strike against the Iranian regime, Iran had just agreed to a negotiated framework where they would down-blend (dilute) their existing enriched uranium stockpiles to the lowest level possible and convert it to fuel. And they'd allow in inspectors. So that means that Iran had blinked in their nuclear negotiations, and it also blows the theory that this war was all about a nuclear Iran (at least from the US perspective).
2. Marco Rubio admitted that Israel drew the US into the war. He tried to walk it back, later, but it seems that Netanyahu wagged the dog, and we went into war without fully considering the downstream risks involved in poking the Iranian hornet's nest.
3. Iranian President, Masoud Pezeshkian, went on the air to apologize for launching missiles at surrounding countries. He said that it was Iranian doctrine that area commanders could respond independently in a military situation when Iranian C&C was offline. He vowed that responses would be more measured now that the leadership council was being installed. But a few hours later, he walked the apology back, and said any country hosting US or Israeli forces is a legitimate target. The walkback coincided with the announcement Mojtaba Khamenei had been selected as the new supreme leader. This suggests that Mojtaba doesn't intend to give up the fight.
4. How this comes out seems to boil down to whether Iran has more missiles and drones than the US and Israel have interceptors. Even though we're bombing Iran with B-52s (meaning Iran's air defenses have been completely taken out), it's questionable whether we can hit all their mobile launchers in the mountains surrounding the heartland of Iran. Also, Iran hasn't released its "smart" magnetic mines, but that would only be a Strait of Hormuz problem. If Iran were able to take out Israel's five desalination plants, 90% of Israel's water would be taken offline. That seems like it would be a bad thing for Israel, which leads me to the final question...
5. What is Netanyahu's strategic endgame? Did he overplay his hand? It doesn't look like Iran is going to surrender. If they get the breathing room, they have no reason NOT to develop nuclear weapons. And if Israel uses their nukes against Iran, every country in the area (Turkey, Saudis, Egypt, etc.) will scramble to get its hands on nukes, because they're always the ultimate trump card.
I found a lot of information from this anonymous Substack analyst, John Doe. Doe thinks in terms of worst-case scenarios (as we should if we're going to war)...
https://substack.com/home/post/p-189456725
OTOH, Paul Warburg points out that if Iran successfully closes the Strait of Hormuz, there are other routes for Gulf producers to export their oil. He doesn't think it will rise to $200/barrel, and he believes it will stabilize around $100/barrel once the dust settles.
https://www.youtube.com/watch?v=2rgVaTofGQU
You need targeting information, and not only drone/missiles, to close the Hormuz.
If America was forced to join war, why is America going at it with great enthusiasm?
> You need targeting information, and not only drone/missiles, to close the Hormuz.
Sorry, but you haven't really thought this through. Currently, just the threat of being targeted is keeping tankers from moving through the Strait of Hormuz. Insurers specifically have war exclusion zone clauses in their coverage (in fact, car insurers and home insurers now add those clauses to their policies). So the tankers are milling about in international waters outside the Strait.
Moreover, Iran could easily close the Strait of Hormuz by releasing their smart mines. Last time I heard, they specifically said they'd let tankers carrying oil to China through, so evidently they haven't mined the Strait yet.
> If America was forced to join war, why is America going at it with great enthusiasm?
Winning a war is always a politically better policy than losing one. I'm surprised Trump's advisors (handlers) don't just tell him to declare victory now and back off. It doesn't look like the Mullahs are going to cave, and each escalation step will expose new ways to fuck up.
Crazy that I hadn’t noticed Iran’s offer on nuclear disarmament! The things I read mentioned Iran’s missile program (as well as proxies), and I assumed that meant nuclear weapons. I had to go back and check - it’s really important info that nothing I read made nearly obvious enough.
I didn't hear about it either until after the fact. It was out there, but none of the MSM platforms featured it on the front page. I wonder if that didn't speed up Netanyahu's decision to decapitate the Iranian leadership, because an Iran that's negotiating would contradict the public perception of Iran about to join the Nuclear Club. And Trump just claimed that Iran was two weeks away from having nuclear weapons. Are his handlers feeding him this crap, or is he making it up spontaneously?
>OTOH, Paul Warburg points out that if Iran successfully closes the Straits of Hormuz, there are other routes for Gulf producers to export their oil. He doesn't think it will rise to $200/barrel, and he believes it will stabilize around a $100/barrel once the dust settles.
The last time oil was over $100 a barrel was in 2022, after Russia invaded Ukraine. Before that it was regularly above $100 between 2010 and 2014. So $100/barrel is not that bad, historically speaking. Technically cheaper than it was between 2010 and 2014, since we've had a lot of inflation since then.
"What is Netanyahu's strategic endgame?"
Iran may well try to get nukes if the war ends with the regime still in place but 1) the Israelis believed they were doing that anyway, so they might as well take this chance to weaken them 2) There's still a decent chance the regime either collapses and is replaced/collapses into civil war/survives but is so weakened it cannot pursue nukes in the short/medium term.
Also, even if the war ends and the Iranians dash for the bomb, the Israelis can always just start the war again and the Americans will almost certainly back them. Most estimates I've seen suggest that the Iranians are still months/years away from a viable bomb and that a bombing campaign can significantly impede them.
That makes sense. But if Iran takes out their desalination plants, Israel is in a world of hurt without being nuked. (I now understand that one was hit, but I don't know what the damage is.)
Have you seen any credible analysis of the sinking of the Iranian navy ship near Sri Lanka? I am interested in the claims that some element(s) of the attack are war crimes.
I am also interested in understanding the claims about a double tap on the girls' school in Minab.
As disclosure, my ex ante position is that mistakes are understandable in war, but the expectation of these mistakes is a moral cost that needs to be considered in the decision to go to war, and there needs to be a compelling case for benefits that outweigh these types of costs. From what I have seen, this threshold was not met.
For the IRIS Dena sinking, my tldr of the responses and Bean's post is
(1) IRIS Dena was a legitimate target
(2) there's no legitimate expectation that the attacking US submarine would assist in the search and rescue aftermath
(3) IRIS Dena was doing something strange to be in that location at that time, but this doesn't factor in either (1) or (2).
Thanks especially to JS and Bean for providing subject-area expertise.
Naval Gazing (technically a sister blog to ACX) has a writeup on the sinking:
https://www.navalgazing.net/The-Sinking-of-the-Dena
As for "war crime", no. The war itself may be illegal, but given the existence of a war, sinking a belligerent warship on the high seas, especially one sailing towards the war zone, is absolutely not a war crime. Nor is there any requirement that the enemy be warned of the attack or offered a chance to surrender. If IRIS Dena didn't want to be sunk, she could have followed IRIS Bushehr and IRIS Lavan's lead and sailed to a neutral port for internment.
Is there a state of war? Has America issued a declaration of war against Iran? Or is this a special military operation Trump style?
The evidence appears to be that all three ships were heading to India and had been offered internment there.
There are reports (not 100% sure yet) that India offered to intern all three Iranian ships on the 1st. For Dena to have been sunk on the 4th implies that they didn't take the Indians up on it, and dramatically reduces my sympathy for the Iranians here.
Nothing I've seen suggests this would be any more of a war crime than firing a missile at Dena's sisters in port. I'm not defending Trump's not getting Congressional buy-in or anything, just pointing out that focusing on this particular case makes no sense.
https://www.bbc.com/news/articles/c2e4yxj0pd3o
>They left Indian waters on 25 February and were likely to have been in international waters on 28 February - when the US and Israel began attacking Iran.
>"The Iranian side had requested permission on 28 February for three ships in the region to dock at our ports. This was accorded on 1 March," Jaishankar told [the Indian] parliament on Monday.
>On 4 March, Iris Dena sank after being hit by a torpedo fired from a US submarine, killing at least 87 of its crew members.
>According to the Indian Navy, the Iris Dena was operating about 20 nautical miles - roughly 23 miles (37km) - west of Galle in waters that fall under Sri Lanka's designated search-and-rescue zone on the day it was struck.
It being about 20 nautical miles west of Sri Lanka strongly suggests it was heading to India at the time.
At the very least this is a dishonourable and cowardly thing to have done.
To add to what Schilling said, the exercise was held near Visakhapatnam, on the east coast of India. At 15 knots (typical cruise speed) that's about 7 days from Iran. If she was headed straight home, she should have been somewhere on the west coast of India when the shooting started, well north of where she was sunk. The only way the geography works out in your scenario is if they headed straight south after the exercise for some reason. Also, even Iran hasn't made that claim, and they're spewing this ludicrous stuff about the ship being unarmed.
If they were three days out of Indian waters when they decided to request sanctuary, and they were still at sea three days after they were offered sanctuary, then they weren't actually seeking sanctuary. If they had been, they'd have been in an Indian port or at least in Indian territorial waters headed for such a port.
Either they were never serious about seeking internment for Dena, or someone changed their mind.
> Is there a state of war? Has America issued a declaration of war against Iran? Or is this a special military operation Trump style?
Well, that's the question, isn't it? We're not officially in a declared state of war. And Congress hasn't approved Trump's Special Military Operation.
Thank you! I had looked at Naval Gazing a couple of days ago and just needed to be more patient.
FWIW, for people who don't click through the link, it doesn't seem known why IRIS Dena didn't obtain internment.
Will you indulge a follow-up based on this phrase "given the existence of a war?"
Obviously, there is a de facto war, but what is the legal status of the relationship between the US and Iran?
I mostly used that because I wanted to dodge questions about the legality of the war itself (not my field) and focus on whether or not the shoot was good under conventional law of war analysis. The broader question is the sort of thing that international law isn't really good at dealing with.
Legally, the obvious and ongoing acts of war by both sides mean they are at war and the laws of war apply. The bit where POTUS didn't get Congress to issue a DoW, er, AUMF, could be of legal significance if Trump ever winds up having to explain himself at the Hague, but it doesn't change anything for the people at the front.
Why would other countries care about American constitutional procedures?
If American aggression is illegitimate under Hague rules, how does it become valid if the American Congress gives the aggression its approval?
Two points here:
1. The problem with Hague rules is that, to paraphrase Stalin, how many divisions has the Hague? International law is basically a consensus hallucination, to a much greater extent than regular law is, and if someone decides they don't want to play along, they can basically say "what are you going to do about it?" Russia's invasion of Ukraine was much more obviously illegitimate, and while that illegitimacy has definitely played a part in the international response, it's not like everyone else declared war on Russia the next day.
2. Iran has been a bad actor on the international stage for almost 50 years. There's a strong case that the US is responding to decades of provocation, although the Administration hasn't really made that case because they are bad at their jobs.
I'm afraid you're oversimplifying. If a US warship comes under attack, it has the right to defend itself. But under the UN Charter and the Geneva Convention, they're not allowed to just go out and attack the armed forces of other countries. These rules of international law were one of the justifications for Russian sanctions after Putin's Special Military Operation. However, the Law of Armed Conflict would support your statement that it was a legitimate military target. But I also understand that we didn't rescue any of the crew from the sinking ship. If true, that's a big no-no.
I read somewhere that before its torpedo attack the US sub twice asked the Iranian ship's captain to order his crew to abandon ship. Not sure how reliable that fact is - "in war, truth is the first casualty" and all that - nor where I read it.
The legal existence of a war is in no way contingent on anyone anywhere issuing any sort of formal declaration of war. Vladimir Putin violated international law when he ordered the invasion of Ukraine, but the soldiers who carried out that invasion were waging a war and subject to the laws of war, not to e.g. Ukrainian civil law re homicide and firearms possession.
Similarly, Donald Trump may have violated international law and even US domestic law in ordering an attack on Iran. But the soldiers, sailors, and airmen on both sides are now waging a war under the laws of war, and are allowed to e.g. sink any ship flying the enemy flag.
And they are not required to rescue survivors unless this can be accomplished at negligible risk. Given the vulnerability of modern submarines while surfaced, and their general unsuitability for rescue operations, it is I believe generally understood that they are almost never *required* to rescue survivors. The US tried to argue otherwise at Nuremberg, but it didn't stick.
what makes naval gazing a sister blog?
It originally started as comments bean (the author) was leaving in SCC open threads.
https://www.navalgazing.net/SSC-Index
Incidentally it's an amazing blog, one of the few I read 100% of the posted content from. Highly recommended for anybody interested in military affairs or military technology.
Thank you. Always pleasant to hear people say nice things about my work.
Huh I did not know about the SSC connection! Small (digital) world...
The main debate I've seen is over whether the ship was actually unarmed and therefore not a danger, which would contradict Bean's article (although his article mostly does not focus on the legal question).
I had not run across this claim during research for the post, so obviously treated the ship as fully armed for purposes of that analysis. Beyond that, what Schilling said. The requirements for being a legitimate Thing That Can't Be Shot under the laws of war are far more stringent than "out of ammo" for the obvious reason that it's easy to say you're out of ammo and sort of hard for the other side to verify in real time. (The laws of war are quite pragmatic about that kind of stuff.) Now, if she was actually unarmed and Iran had made sure to broadcast this fact widely, it might change my opinion on if it was a good idea/moral to take the shot, but it wouldn't actually make it a war crime. And given that this is the first I'm hearing about her being unarmed after several days of looking at the issue, I'm pretty sure Iran wasn't screaming "unarmed, please don't shoot" beforehand.
Fair.
I expect most of the debate (which to your point should really be about the "was it a good idea/moral" part) to be about whether the US already would have known that because of the context of the exercise, e.g. https://apnews.com/article/iran-israel-us-iris-dena-warship-submarine-8789cf718b1ebf8b571cb8ef5d785150
I'm flying bravo sierra on the claim that context makes it obvious the ship was unarmed. I've never heard of that rule, and it would pretty much prohibit USN participation in any such event. We had a destroyer scheduled to take part, and I really can't see us agreeing to send her to Diego Garcia, unload 80 missiles, then reverse the entire process after the operation is over. "Please send your ship unarmed" is a request you would only make to unserious people.
Edit: Also worth noting that even the one Indian government person they got to speak didn't agree the ship was unarmed. The only support comes from "an independent defense analyst based in India", but I could fairly be described as "an independent defense analyst based in Oklahoma", so that's not a high bar for me taking this guy seriously.
The periscope video clearly shows the deck gun at least is still in place. That's "armed". There's no rule in war for a time-out if one side doesn't have any ammunition and wants to go get more.
I'm merely noting that the Naval Gazing article says:
> All I can say is that in practical terms, Dena was clearly a danger to any non-submerged American forces in the region, and removing that kind of threat is generally considered to be well within the rights of a belligerent.
If the US was aware that Dena had no munitions, it doesn't seem accurate to say it was a danger to American forces in the region, so this specific statement would be false.
Whether this implies anything about the legality of the strike is beyond my expertise.
Also, there's the question of why they didn't pick up survivors, which is what we're supposed to do under international law and US military code.
The U.S. Navy / Marine Corps / Coast Guard Commander’s Handbook on the Law of Naval Operations states:
> Following each naval engagement at sea, belligerents are obligated to take all possible measures, consistent with the security of their forces, to search for and rescue the shipwrecked.
https://www.nytimes.com/2026/03/08/world/middleeast/iran-minab-school-strike.html
This is the best article I've read on Minab. Suggests that the US struck an IRGC base and accidentally hit the school next door.
It also suggests that the school was actually a target, but a mislabeled one. I.e. an intelligence failure, not an accuracy issue.
Overreliance on AI.
> And if Israel uses their nukes against Iran, every country in the area (Turkey, Saudis, Egypt, etc.) will scramble to get its hands on nukes, because they're always the ultimate trump card.
What if we just nuke them first? We have a lot of them just lying around, gathering dust. Okay, I'm not being completely serious, but... what would the actual consequences be? It's not like the other countries with nukes will start a war with the US over this, given that would start an actual nuclear war. Would Europe actually risk sanctioning the US, given how reliant they are on our economy and military?
The USA used nuclear weapons to end the war with Japan, the first and last time they've been used in anger. Breaking this precedent to take out a regional rival would be a terrible idea, since the USA has a lot to lose from a world where everybody has nukes and uses them all the time.
If the earth is mostly composed of good people, then the US and all Americans will be shunned by the rest of the world and excluded from all of society.
And if most people are evil none of this mattered anyway.
> And if most people are evil none of this mattered anyway.
Given that we haven't had a nuclear war so far, I'd say it actually matters a lot. Do you think countries like the US, Russia, China etc. stay peaceful out of the goodness of their hearts? There are practical considerations that disincentivize all-out war. (Though they only apply when countries are on equal footing. Hence my comment.)
Yes, for the most part I believe things are mostly peaceful because most people at the top are mostly good. Obviously, you also have contingencies for the large minority of people who are not peaceful, but those have to be created by peaceful people and a system of contingencies would never be enough by itself, you need most people to be somewhat good.
So the rest of the world will just stop using US markets, consuming US media, and will destabilize their regions by shutting down US military bases (how would they even do that)? That seems unlikely. The only countries that get embargoed are those that the rest of the world can afford to embargo.
People don't interact with the US because we're "good", they interact with us because it's in their selfish best interests to do so. Our using a nuclear weapon would ruffle feathers but it wouldn't change the underlying economic and military incentives and so nothing would change in the way you're suggesting. The reason to not use nukes is that it would probably weaken the norms around nonproliferation in ways that would hurt us in the long run. It would also embolden China and Russia to use them and that would increase the odds of a major nuclear exchange happening at some point. The risks just aren't worth it.
I never said it's likely, I said they'd do all those things if they were good people. I also disagree with the premise that US bases would be needed if the US were to be shut off. They are the world's main threat at the moment.
Yes, and it will no longer be in anyone's interest to interact with the people who live in America after they commit mass murder in such a way.
People seem to be happy to talk to Germans, even the ones that were alive during that time.
Bingo, likewise they were still quick to talk to Americans after we nuked Japan. Money talks.
Why not?
Why would anyone want to talk to a mass murderer? You think America will bomb a dozen cities, kill millions of people, fill a region with radioactive fallout, crash the economy, and yet people will still think American money is worth anything?
The hubris is unbelievable.
The United States would lose all moral authority pretty much forever. You would have to nuke multiple cities in Iran, murder millions of people, and you would absolutely face global outrage. Oil prices would soar, you would have radioactive fallout all over the place, and you would face economic disaster.
How about a subterranean low(ish) yield nuclear blast, of say 10 kilotons, to completely destroy an underground nuclear installation, of which I'm sure there are some in Iran? That's not a rhetorical question, but it would probably be equally unacceptable for symbolic reasons. "You can't have nuclear weapon tech, but it's OK when we use it!"
I doubt it would start a (new) war, but the US would never hear the end of it in the UN, which, even leaving aside the difficulty the UN would have in enacting anything without the support of its most puissant member, means every head of state would be reminded of that fact on a continual basis, and therefore the US would now have to convince anyone it wanted to negotiate with that whatever they get out of a deal won't include one or two warheads arriving by aerial delivery. Several nations would probably eject any or some American bases they're hosting. (There's a long list of them, and I could maybe opine on some of them, but that's a long discussion.)
Virtually all countries would probably be less open to trade with the US, and more open to trade with each other instead. This could cost the US potentially trillions in trade. For this reason, America's Hamiltonian policymakers would almost certainly reject any nuke proposal short of "because they nuked us first". In fact, of all the main foreign policy schools of thought, not one would find this idea appealing.
If the US nuked purely because "they're gathering dust" or "because we can", then the party in control when it happened would be out of power by the next election, due in large part to its own voters turning on it. If they somehow aren't, then civil war looks likely enough to start prepping.
> of all the main foreign policy schools of thought,
Perhaps Thucydides's is having a revival.
There are a lot of countries whose broader defense strategy relies on a world where the use of nuclear weapons remains an unthinkable taboo. If nukes are used, these states would immediately condemn the action and expect the USA to do the same. With NATO weakening, this list includes many US allies. If the US were to stand behind Israel afterwards, it would threaten most of our other alliances. Germany has been happy to remain a threshold state. They would likely acquire nuclear weapons quickly after such an attack. As would many others around the world.
The only way to return to the status quo ante would be to turn any state willing to use nuclear weapons into a pariah. Even nuclear states that benefit from the taboo would likely join in the shunning.
It would be a major long-run strategic mistake for the Israelis to use nuclear weapons. In addition to the above, countries in their neighborhood that previously chose not to get nuclear weapons would likely change their calculus the second that element is introduced. It's already going to be difficult, after the issue with interceptor resupply being diverted to Israel, to convince them to return to the security arrangement they had with the US before the war.
Meanwhile, US support for the war is low. If the US or Israel introduce nuclear weapons into the war against a state with no nuclear weapons (and one where this exact question is the primary cause invoked for intervention), public sentiment is likely to turn sharply against further involvement. Remember, there will be photos of the aftermath.
Finally, we know the Iranians have enough material to build multiple bombs. We also know that their new military doctrine is effectively "eye for an eye". We no longer have eyes on the ground, so we don't know if they started building nukes after we killed the guy who issued a fatwa against it. We don't know what's in the Iranian arsenal, but we do know they've been holding back some of their technology. It would be a strategic blunder of the highest degree to break the nuclear taboo against a nation that's potentially at the threshold of nuclear capability, and that has begun giving back what it gets.
Netanyahu may just be kicking the nuclear threat can down the road, though. He knocks the Iranians back on their heels for now, he stays in power until he dies in office, and he'll let his successors deal with the fallout (political and nuclear).
I don't buy the idea that Iran was a nuclear threat under the old leader. For 30+ years? Then they came forward last month and put their nuclear materials for 11 bombs on the negotiating table, but they never thought to make those into a single bomb?
Bibi always put forward that unfalsifiable propaganda. But now that they killed the guy who said no to nukes for all those years, we might be looking at a different story. Sometimes the road to avoid your fate is the only one that makes it reality.
I don't think they ever got the uranium enrichment process far enough along to be able to build a bomb. First, Stuxnet took out the centrifuges, and then that American strike on the Natanz fuel enrichment site may or may not have taken them offline again.
In the *sixteen years* between Stuxnet and the Natanz strike, Iran built a whole lot of centrifuges and used them to enrich about a dozen bombs' worth of 60% uranium. That we know about; they may also have a secret stash from a parallel program we don't know about.
There's a final enrichment step and some other processing necessary to turn 60% enriched uranium into a bomb, but that can be done in a couple of months using stuff that can be hidden in a nondescript warehouse or a cave too deep for us to bomb. Or built from scratch by Iran's industrial base by the end of the year, or smuggled in from Pakistan if they can find an A.Q. Khan wannabe willing to take their money.
The only reason Iran didn't have a working nuclear arsenal five years ago is that they didn't *want* one; strategic ambiguity was working better for them. But I'm certain we could change their minds if we worked at it.
I am not certain, or even mildly confident, that we could render Iran materially incapable of building nuclear weapons in the near future. At least, not without using our own nuclear weapons.
Why do they have the stuff ready for 11 bombs? Despite the holy fatwa, no less.
Do you have a link to support your claim that they were ready to whip together eleven bombs?
The previous assessments claimed that they had between 400 and 440 kg of 60% enriched uranium. It takes >60 kg to create a sustained chain reaction. ChatGPT tells me that it would take them about a week or so to enrich the uranium from 60% to 90%. But they'd also need the HE shaping charges ready for assembly. Usually, it takes some testing to make sure everything works correctly. North Korea set off a few duds before they got it right.
https://www.reuters.com/world/middle-east/much-irans-near-bomb-grade-uranium-likely-be-isfahan-iaeas-grossi-says-2026-03-09/
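The "11 bombs" figure roughly checks out on the back of an envelope. This is only a sketch using rough public estimates: the stockpile midpoint, the IAEA's ~25 kg "significant quantity" of 90% HEU per weapon, and a simplifying assumption that essentially all the U-235 survives the final enrichment step (real cascades lose some to tails, so this is an upper bound).

```python
# Back-of-envelope check of the "11 bombs" claim.
# All inputs are rough public estimates, not authoritative data.

stockpile_kg = 420       # midpoint of the reported 400-440 kg of 60% enriched uranium
feed_enrichment = 0.60   # current enrichment level
weapon_enrichment = 0.90 # weapons-grade target
sq_kg = 25               # IAEA "significant quantity" of 90% HEU per weapon

# Upper bound: assume all U-235 is recovered when enriching 60% -> 90%
u235_kg = stockpile_kg * feed_enrichment          # ~252 kg of U-235
weapon_grade_kg = u235_kg / weapon_enrichment     # ~280 kg of 90% HEU
bombs = weapon_grade_kg / sq_kg                   # ~11 weapons' worth

print(round(weapon_grade_kg), round(bombs))  # prints: 280 11
```

With more realistic tails losses the number drops somewhat, but it lands in the same ballpark as the assessments cited above.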
The Iranian strategy has been to establish itself as a nuclear threshold state, similar to Germany. The thinking has been that the threat of going nuclear, if provoked, combined with strong missile defense would be sufficient to keep the US and Israel from launching a regime change war against them.
Was this a good strategy? This has been an internal debate in Iran's leadership for some time, according to reports. Various high-ranking leaders in both the US and Israel have been agitating for regime change in Iran for decades, yet it hadn't happened (until about a week ago), so for years some Iranian officials could argue the policy was working, especially as the US launched regime change wars against many other countries in the region. On the other hand, N Korea went nuclear and the US stopped bothering them, while Libya traded away their nuclear ambitions and got overthrown. Indeed, the Israelis kept talking about using the "Libya model" against Iran during negotiations.
On the other other hand, the nuclear move meant N Korea kept the sanctions in place indefinitely. Many in Iran have argued for detente with the US, and eventually building a relationship where all sanctions could be removed. Getting the bomb would make that impossible, but having fissile material to trade away in a negotiation could help bring it about ... so long as they could establish friendly relations and avoid the "Libya model" being used against them. A lot depended on how well the US could be trusted, and US negotiations had a spotty track record.
The biggest factor seems to have been the old Ayatollah. After Saddam used chemical weapons in the Iran-Iraq war, he declared the use of chemical weapons and WMDs to be forbidden. Sunni Muslims might resort to that tactic, but the Shia were more righteous, and they would prove it - leading the Muslim world by the force of their convictions. As I mentioned, that view wasn't shared by all the leadership. Allowing threshold enrichment (without ever approving a bomb) may have also been a concession the Ayatollah gave to the dissenters.
Now that the old Ayatollah is dead, there's a question about whether his ban on WMDs will be renewed. The argument for maintaining threshold status is clearly lost - the reason for the strategy was to prevent a war that wasn't prevented. They know they can't trade away the fissile material for peace, given the escalating demands from the US, plus there's no possible way to negotiate with the US, since they've demonstrated a repeated willingness to employ perfidy during negotiations.
It's hard to say what the internal calculations they're making are, but the war has given strong arguments to the side agitating for nuclear deterrence and a diplomatic shift toward fully embracing Russia/China and abandoning all attempts at detente with the US.
“The Appointment in Samarra” ~ W. Somerset Maugham 1933
It goes like this:
A merchant in Baghdad sends his servant to the market. The servant returns home pale and trembling, telling his master:
“While I was in the marketplace, I was jostled by a woman in the crowd. When I turned, I saw it was Death. She looked at me and made a threatening gesture. Please, master — lend me your horse so I can ride to Samarra and escape her. She will never find me there.”
The master lends him the horse, and the servant flees.
Later, the merchant goes to the market himself and sees Death standing in the crowd. He approaches her and asks:
“Why did you make a threatening gesture at my servant this morning?”
Death replies:
“That was not a threatening gesture — it was one of surprise. I was astonished to see him in Baghdad, for I have an appointment with him tonight… in Samarra.”
____________
The same theme appears in the story of Oedipus.
Netanyahu has been riding the tiger for some time now. He's secure only so long as he is the nation's War Leader in a time of Existential Threat; when that ends, Israelis looking towards a postwar future may boot him as fast as the British did Churchill in 1945. Except Churchill wasn't facing criminal charges for corruption.
Maybe. Or maybe Israel turns into a permanent war state. Not good for the investment class, but a certain segment of the people will always see themselves as under siege from the world. They're happy to fight forever.
> but a certain segment of the people will always see themselves as under siege from the world
Especially when that fear is justified by historical precedent. I don't think their desperate attempt to claim agency over their future will amount to anything, but time will tell.
What do you mean by "Israel drawing the US" into the war? How did they do this? If the US gets involved in a war, isn't it rational to assume they did it for their own interests?
If you expect Iran to massively retaliate against the US if there’s an attack by Israel, and Israel is planning such an attack, it makes sense to take the initiative and attack on your own terms, instead of being dragged into a war on theirs.
This is essentially what Rubio said last week.
Right! If we didn't attack, there would be more missiles and planes in Iran to strike back at US targets, so hit 'em first. But I'm not sure these aren't fig leaves: we attacked because Trump wanted to attack, and with the recent 'success' in Venezuela, why not try it again?
Internet logic: if an action benefits Israel, Israel is behind it.
Also if it's to the detriment of Israel. It's a 4D-chess ploy.
Netanyahu has been lobbying US Presidents to help Israel take out Iran since at least Obama. It infuriated him that Obama pursued a diplomatic nuclear deal (the JCPOA) instead. And of course, *taking out Iran benefits Israel!*—at least in Netanyahu's eyes. Iran is an existential threat to the State of Israel. They've said they want to destroy the State of Israel and they've used their proxies against Israel.
OTOH, Iran was never an existential threat to the US, it was a regional threat at most. Sure, Iran could cut off oil coming through the Strait of Hormuz, but that's not an existential threat. It's a major inconvenience if it happens, but the US has been using carrots and sticks to keep that from escalating to where it becomes a problem.
The US gets no long-term strategic benefits from a Mullah-free Iran except that it would potentially have extra oil and political cards to play against the Saudis. The immediate downsides of this Special Operation have outweighed the calculus that previous administrations used to continue their carrot-and-stick policies.
https://www.theguardian.com/world/2012/mar/01/israeli-pm-demands-obama-military-action-iran#:~:text=Binyamin%20Netanyahu%2C%20the%20Israeli%20prime,position%20in%20the%20US%20Congress.
No long term benefit to taking out a top adversary?
Depends on what you mean by "taking out". If the mullahs don't surrender, and it looks unlikely that they will, Israel will face new threats from them once they regroup.
Also, it looks like there's some tension between the US and Israel about their ultimate goals in Iran. In broad strokes, it looks like Netanyahu wants to bomb Iran back into the Stone Age (as he did with Gaza), but Trump wants a compliant dictatorship that will be beholden to him. No doubt, this is why we haven't seen Reza Pahlavi visiting either Netanyahu or the White House. An independent-minded Pahlavi serves neither of their purposes. IMHO, the best realpolitik endgame would be to let Pahlavi loose in Iran to start a civil war. That would keep the mullahs occupied for the foreseeable future. :-o
https://phillipspobrien.substack.com/p/midweek-update-1-what-are-the-us?utm_source=post-email-title&publication_id=1176440&post_id=190587084&utm_campaign=email-post-title&isFreemail=true&r=7xjun&triedRedirect=true&utm_medium=email
Sadly, I expect letting Pahlavi stir up a civil war in Iran would keep the mullahs occupied for two or three months, tops. A couple of months ago he'd at least have been a major nuisance, but too many of the local leaders he would be counting on to support him are now dead, their followers now mostly once burned and thus twice shy, and a whole lot of regime supporters with basically all the guns are now seriously pissed off and looking for an enemy they can kill.
Okay, this is unfair, it was the Secretary of State who said it! Maybe he tried to walk it back the next day, but now that just means we have two contradictory stories from the Secretary of State.
He said in the same interview that the attack would have happened anyway. My read is that the Israelis told the Americans about the supreme leader meeting with all the top people and the Americans pushed the timeline forward to take the opportunity.
Israel clearly wanted the war and clearly pushed Trump into doing it, but the point is that the US should be big enough to stand up for itself and could tell Israel to stop playing games if it wanted to.
Trump wouldn't go to war as a favor to Israel. There's obvious US interests in toppling or limiting the IR.
Does it count as a "favor to Israel" if an Israeli leader who groks flattery and Trump-wrangling sold him on the idea that he'd go down in history as the Great Leader What Took Care of the Mullahs?
People sometimes make decisions against their own interests when they are properly pressured by other people through emotion, rhetoric, and other negotiating tactics. And the actions of "the US" are after all just the decisions of people, made in response to other people.
AI prediction: having handmade shoes or furniture is a status symbol. The same goes for craft burgers and craft beer: yes, they are better than McD anyhow, but there is also a status-symbol effect of showing you are not one of those proles. The same will happen with AI: "my lawyer does not use AI" will be a status brag, which means only 95% of us will lose our jobs, not 100%.
I think it's much more likely that lawyers will still exist to supervise AI outputs, and this will either be required by law or practically required for any serious legal work.
The answer will be determined by how hard lawyers lose when the AI makes a mistake.
If it's treated like a typo in a contract, where "oops, that was a typo in that contract, I want to amend past the due date, it's a minor change" and courts let it slide, and we let off AI errors as no big deal, they will take over and squeeze out people.
If people lose major cases because of AI errors then the humans will stay relevant.
How will they learn to do that? Explain how a high schooler becomes a lawyer-auditor without ever having been a lawyer, after going through school using AI as a crutch.
They don't need to learn anything. It'll be a do-nothing job where they get paid simply for having a law degree and being a member in good standing of the bar, purely rent-seeking.
Well I'm envisioning this happening for the first few decades after AI is doing most/all legal work and during that period we'll still have a lot of pre-AI lawyers around. After that period I'd expect we'd either gradually loosen requirements or things will be different enough that there won't really be lawyers as we understand them.
I actually think we’ll want human auditors beyond the time where it’s practical as a way to preserve understanding of what the AIs are doing. But the training for those people, the legal auditors, might be different than what we consider practicing law today. It might be more about developing the right sense of legal ethics, like studying the Talmud.
I think you are assuming businesses make status-seeking decisions, just like people do. I don't believe this is true for the overwhelming majority of businesses, except for a few monopolistic or almost monopolistic companies that can afford to waste money by doing so.
I say it's a waste of money because whatever amount a big company spends in status-seeking services (non-AI lawyers, non-AI accountants) could be spent more effectively in personal status symbols for the people actually taking the decisions (I'm sure Lamborghini will eventually release AI-enabled cars).
Why spend an additional $X on raising the status of the company by hiring non-AI legal services (or other), when you can spend $X on raising your own personal status?
Because it then raises the status of customers, and then they are willing to pay a premium for it.
I've never heard of a business using higher status service suppliers (legal work, accounting, software development, etc) because it would raise the company's status in the eyes of its customers. In fact, 99.9% of the time I have no idea who those service suppliers are for the businesses I interact with, or buy from. But maybe that's just me.
It definitely happens for products (fair trade coffee, non-blood diamonds, clothes made without child labor...) so it could easily happen for services too, if it doesn't already.
Most of the world has been consuming products and services produced by intricate supply chains for several decades now. Do you know of any examples of customers bragging about the service suppliers (legal work, accounting, software development, etc) of the products or services they consume?
In fact, now that you mention products that go through international supply chains (fair trade coffee, etc.), I can think of something similar but not quite the same. A large share of tech support services for service industries (telcos and banking come to mind) has been offshored in the last couple of decades to same-language countries where labor is cheaper. And I've heard some people complain about the quality of the newly offshored support, and I can imagine some customers might have decided to switch providers just to avoid interacting with it.
Frustration could be a motive for switching providers, but it's hard for me to imagine anyone _bragging_ about the locally sourced tech support of their cellphone provider.
Yeah companies with local tech support are popular because the local tech support is much better, not because it's more ethical/higher status.
In the US, it's pretty common for companies to advertise that they're treating their employees well ("all our employees have living wages, healthcare, etc.") but that might also be popular because well paid employees usually do a better job, not because of the ethics/status of it.
I think it is a very destructive idea to think seeking power equals seeking the opportunity to abuse power. By that standard no one should ever run in an election. I mean it in the sense in which it is said that men who seek out younger women seek power. They of course do, but why is that a bad thing? Power can be wielded responsibly, like how a good parent does it.
Note that I do not seek out younger women, because it is possible to find women my own age who also want a parent-like partner (think BDSM), and that makes things easier. So I don't have personal experience in this.
In my (BDSM) experience, power just feels great even without seeking to abuse it. It is... invigorating. Energizing. A feeling of pride and achievement. I think anyone who ever played strategy videogames knows that feeling when your Empire of Ireland now covers half of Europe. Parents also know this. Dog owners too. So basically most people, I guess.
On the opposite side, seeking powerlessness can also feel very liberating. Less power, less responsibility, equals less stress.
Moldbug has a line about Maslow's Hierarchy of needs. Except at the top of the hierarchy is a gleaming white pyramidion. Maslow called it "self actualization". But Moldbug says it's political power. And it's more addictive than the purest Colombian cocaine. So yes, I acknowledge that the other commenters have a point.
But simultaneously, I agree with you in the abstract. Libs cannot imagine power except in terms of tyranny/coercion.
I am a lib actually, just not that kind of lib. Strong social safety nets, strong unions, social house building and so on. Older unwoke lib.
keep this up, and you might wake up one day as a post-lib.
I'm a parent, and I *don't* enjoy the power aspect of it, in the sense that I don't enjoy exerting my power to *make* my son do something he doesn't want to do. I'm much happier when my son and I are on the same page, doing stuff we both agree on.
"Hey buddy, want to go to the park and throw a ball around?" "Yaaaay!!" feels much better to me than "Have you done your homework?" "But I'm tired, I don't wannaaaa" "I know you don't want to, but you need to do it now, it's your responsibility" etc.
Power is not exerting power. Power is status, authority, respect.
In my experience, the trouble with having power over kids is that (at least when they're little) they don't always listen to you. So you have to constantly be disciplining or forcing the issue, which is exhausting. When I tell my kids "It's bathtime, get in the tub" and they drag their feet, or get distracted halfway to the tub and run off somewhere, or break down in tears halfway through taking their clothes off because their sibling took the bath toy that they wanted and it's not fair, I don't feel especially powerful. To be a parent is to have power de jure, but not always power de facto.
+1
You have experience with consensual power dynamics. Politics is, almost essentially, about nonconsensual power dynamics. There's nothing wrong with liking power, but the fact that you think your BDSM experience has relevance for political judgement means I probably wouldn't want you as my leader.
Having power over people feels good, but someone else having power over you feels bad. You have a small throwaway line at the end, but political power is not the kind of thing that goes away when playtime is over.
I can't tell if your claim is something like "I think BDSM should be less stigmatized" vs "Why would you think it's a negative attribute of a political leader to get a libidinal thrill out of tear gassing protesters? Just because they enjoy it doesn't mean they would do it."
If it's the first, sure, whatever. The implications of social norms around sex are really complicated, you might have a point. If it's the second, it's pretty obvious why (all other things being equal) you don't want the person in charge of you getting excited at the idea of you groveling on the floor.
You are precisely who they have in mind when they warn about those who seek power.
So you mean it is something else than seeking an opportunity to abuse power?
No, it's more that once you've said you get a sexual thrill out of having power, any subsequent denial that you intend to abuse it is disregarded out of hand.
I think that's part of it. The other part is hubris: the person who is convinced he will not abuse his power is one who is likely to, because he does not have mental safeguards up against gradually slipping into the abusive mode.
Same for people who are convinced they would not have become Nazis if they'd lived in Germany in the 30s, etc. Like one of the basic lessons of history is that generally people are not born monsters, they become monsters because they do not question assumptions and are sure of their own rightness.
Why? This does not sound logical at all, given that the abuse of it is not necessary for that thrill. After all isn't that the same thing everybody who seeks political office or wants to get promoted into management has, a desire for power, but not a desire for abusing it?
Imagine an alcoholic telling you that he doesn't want to drink the whiskey – perish the thought – just hold it in his mouth and swish it around, and it's illogical to think otherwise because it's no different from people doing that with mouth wash.
But this would be true only if power were abused most of the time, if that were the expected thing. Which is the core issue.
It's a clear conflict of interest. It is very difficult to believe you would make sober political decisions when you get a thrill out of exercising power. You would have a bias towards exercising power even in situations where it's not actually needed.
Now, if it were that you get a thrill out of making sure all your people prosper, I think that could be a good trait in a leader.
Having is not even the same as exercising. It is closer to status. Again think of parents.
I am not sure I should believe in testosterone much. I am 47, so my T levels have been decreasing for 10+ years, and I don't exercise and my diet is not steaks with eggs, and my alcohol and nicotine habits are not good either. My T levels are likely low. Yet I feel more masculine than ever. Far more sex drive. More confident / "dominant" behaviour. More secure, zero doubts or insecurities. Way less validation-seeking. Never afraid of other people judging me anymore, because now I feel like I am judging other people. I am probably now close to behavioural "alphaness" and it is clearly not testosterone.
Getting older does give you more confidence. When I was in my twenties to thirties, I was terrified of 'causing trouble' or being inconsiderate.
Forties onwards, it was "ah, to hell with it, if I'm being pushed around I'm going to complain and be loud". You lose a lot of inhibitions about being nice, hence (I guess) the Karen stereotype of middle-aged women.
See the poem "Warning" by Jenny Joseph
https://www.scottishpoetrylibrary.org.uk/poem/warning/
When I am an old woman I shall wear purple
With a red hat which doesn’t go, and doesn’t suit me.
And I shall spend my pension on brandy and summer gloves
And satin sandals, and say we’ve no money for butter.
I shall sit down on the pavement when I’m tired
And gobble up samples in shops and press alarm bells
And run my stick along the public railings
And make up for the sobriety of my youth.
I shall go out in my slippers in the rain
And pick the flowers in other people’s gardens
And learn to spit.
You can wear terrible shirts and grow more fat
And eat three pounds of sausages at a go
Or only bread and pickle for a week
And hoard pens and pencils and beermats and things in boxes.
But now we must have clothes that keep us dry
And pay our rent and not swear in the street
And set a good example for the children.
We must have friends to dinner and read the papers.
But maybe I ought to practise a little now?
So people who know me are not too shocked and surprised
When suddenly I am old, and start to wear purple.
hell yeah
While I agree that testosterone levels are not the main predictor of confidence or masculinity (or even an important one, depending on your terms), as a matter of principle I would say that if you're going to use your personal anecdotes to make a claim about a biomarker and behavior, you should at the very least have measured that biomarker.
What you should disbelieve are the strong causal arguments about steak and testosterone etc., not testosterone itself.
The typical thing I was told my whole life was that within healthy levels, which are a very wide range, there is no particular benefit to having higher testosterone. There is some weird cult movement building around testosterone on social media, and lots of marketers trying to profit off middle-aged men's anxiety, so I don't trust any recent discourse on this at all.
If you aren't experiencing any difficulties, it's likely that the natural decline has occurred within the healthy range and that, as I said, within the healthy range it doesn't really matter. Going from 400 to 600 doesn't give you a 50% boost in sex drive and a 50% reduction in neuroticism or anything like that.
Happy your 40s are going well; mine have not been like that mentally at all. Physically, after turning 40 I began jogging 5Ks, am close to benching 225, and have maintained a pretty healthy weight (and my T was at the high end of the range last time I checked a few years back). Mentally, the death of my youth really starts to show in the mirror, and I don't think I ever really prepared for being "old" despite many years of obsessively worrying about it. In my experience watching older people (and surveys seem to bear this out statistically), most people don't get over the hump until their late 50s, and then happiness improves again. I always figured that was because if they survived middle age, they eventually gave up and just became blissful morons. If you're feeling like you are at 47, you're ahead of the game and found a better way, so good on ya.
How do you square this idea with the widely reported subjective and observable benefits of supra physiological levels of testosterone in androgen users? I am skeptical of the idea that all those cases of significant upside were in people who previously had below normal healthy levels.
> How do you square this idea with the widely reported subjective and observable benefits of supra physiological levels of testosterone in androgen users?
Yeah, speaking as somebody who's been in competitive athlete, powerlifter, and bodybuilder circles for 15+ years, and who has both personal experience and a wealth of experience in close friends, there are huge and significant subjective and observable benefits of supraphysiological effects of T (and other androgens).
For this (to be fair) fairly selected population of fit, competitive men, there are gigantic effects on mental health, energy, joie de vivre, libido, and (yes) competitiveness and achievement.
I even wrote a post about this for the ACX not-a-book-review, covering the personal and social benefits, negative side effects, monitoring plan, and studies on it:
https://docs.google.com/document/d/14mWLp7B8Cy2u5vCuch-RVSB2pJRzsghVNr72ICJhmHc/edit?tab=t.0
To clarify, I don't doubt that exogenous androgens have the effect we see in professional athletes, building muscle, decreasing recovery times, and enabling hypertrophy that would not be possible normally. If that's what you want and you can live with the side effects, you probably knew where to find it. However, that is not how the people pitching TRT as a cure-all are selling it. They are telling middle-aged men who have T levels in the normal 300-800 range that having *more* T will fix a litany of problems from their sexual performance to their mood and energy levels generally. I am not an endocrinologist, to be fair; I have talked to an internist about it and to other doctors over the years and skimmed some NIH papers. But everything I've read says you aren't deficient unless you're under 300, and that trying to bring a person at the low end of the normal range into the high end doesn't accomplish anything. People's bodies are capable of being equally functional at all those levels.
I suppose it's possible that somebody who was normally testing near the high end and suddenly tested near the low end might have cause to believe there was some underlying disease resulting in androgen depletion, and in that case it would seem worth looking into. But if somebody's body always ran near the low end and is still within normal, that person would not have been offered TRT to my understanding.
Thank you. It did not always go well, I had a period of panic attacks which switching to a much less stressful job really fixed.
On mental preparation for old age, I think it really depends on how you define your identity. If you define yourself as "active, attractive, does stuff," then it can get hard. Since I mostly understand myself as "wise, knows stuff," these things just keep improving.
Of course on that level it can also be hard to keep up with new tech and new knowledge, but I do not feel that. Assembling a computer in 1990 was harder than just turning on a phone and tapping an icon. New science, or knowledge in general, gets incredibly shallow these days IMO. The most important psychology book for understanding my life is from 1979, https://en.wikipedia.org/wiki/Robert_Stoller, and it is psychoanalysis, and yes, that is supposed to be "unscientific." It is indeed, if science is defined as the drudgery of putting p-values on boring questions. The old psychoanalysts dared to think boldly, even if they had no formal evidence, peer review, and all that. So knowing old books is in fact a knowledge advantage. The 1913 Britannica is a fantastic resource.
As for looks, well 90% of looks are clothes anyway.
Erik Hoel had quite a good post on the state and potential of AI:
https://www.theintrinsicperspective.com/p/bits-in-bits-out
He highlights that in spite of LLMs reaching human parity for text generation back in 2023, increasing capability has not produced a superhuman writer. No incredible work of literature, not even a top-tier essay, I would say.
On a related note, even while Anthropic says humans are no longer writing code for Claude Code, they're still hiring software developers like crazy, since you need to be a skilled software developer to use Claude Code properly. The point being, these things seem destined to remain tools rather than human replacements.
I'm not impressed by Hoel's argument that AI didn't improve book quality of the best books published in 2023 - 2025. I don't think LLMs can write better books than the best authors today, and certainly couldn't in 2023. Why would you ever expect LLMs to write better books than the 100 best English writers in the world in 2023 - 2025?
Because many are claiming this is progressing so fast we're getting the singularity or major economic upheaval by 2030. His point is that prose generation has not progressed much (or at all), even though that pretty much is the central skill of LLMs.
Still, it sounds like exactly what you would expect if AI were to pass human intelligence around 2030. If anything, it supports the position he is arguing against.
> The actual effects of LLMs on publishing were that: (a) the average book got worse, (b) the top 1,000 books in each category improved somewhat, and (c) the top 100 books in each category didn’t change in quality.
If AI is still worse than humans now, we should not expect it to improve the best 100 books. And we should expect low-effort books written entirely by AI to have low scores. If they had better than average scores, it would mean AI already surpassed us at this skill.
> His point is that prose generation has not progressed much (or at all)
GPT-2 couldn't write two paragraphs without drifting off topic or writing something nonsensical. By ChatGPT 3.5, it could do this easily but would struggle to write a whole article that's internally consistent. Now the models can easily write an article, but not a book. GPT-3 couldn't write a rhyming poem, and 3.5 generally could but would struggle to match a given rhyming scheme or deal with other constraints. They've progressed a lot.
>you need to be a skilled software developer to use Claude Code properly
For now. I don't think it will take long for that to change.
> For now. I don't think it will take long for that to change.
Yeah, I'd argue it's already on the horizon behind us. I'm not a software developer, but have several friends who are. I was talking to one about the depth and breadth of some of the stuff I've done with Codex for fun, and he's like "congrats, you're a developer now!"
The scaffolding and frameworks are already good enough, particularly if you use the /superpowers plugin and/or are thoughtful about triangulating between different AIs on the important stuff (architecture, TDD tests, etc.), that you can basically do full-stack development as a non-developer already.
I doubt it. There is a long history of attempts to get rid of software developers. Every attempt fails because the devs are often the only people in the org that can think about business requirements with precision. Non-technical people with a muddle in their head prompting an LLM or AI will not suddenly gain the ability to think clearly about what they're trying to do.
No, those attempts failed because software developers were required to write the software. Writing code will soon go the way of using punch cards. The high IQ engineer will still be valuable, of course, he just won't be an engineer anymore.
Yeah, that's what Anthropic is doing. No one writes code by hand, but they still need a lot of devs.
Right, that's why I said 'for now'. Eventually they won't need the devs. We're only a year or two into the AI-writes-code revolution. Imagine where it'll be in 20 years.
That’s because it’s hard to RL for writing. But if we focused on it as much as we do with coding, we’d have phenomenal writing already. I have no doubt.
It's possible that clichés get in the way. Base LLMs (before RL) naturally love clichés, given their frequent appearance in the training data. This is great for coding - using patterns that have been done a million times before is good practice, because it makes the code more readable ("I've seen this before") and the patterns are known to work. But people don't want to read clichéd writing, and I don't know if RL is powerful enough to snuff this out of the base LLMs.
There's also the possibility that even if you train an AI not to overuse *existing* clichés, humans may begin to recognize patterns in that particular model's outputs, to the extent that they quickly become clichés. Again, I don't know if RL is powerful enough to make an AI lack human-detectable patterns in its outputs. (I wonder if Go players recognize clichés in AlphaGo's play?)
There must be something that can be learned from image and video generation. Maybe you use a strong LLM to craft the logical structure of the text, and something else for styling it.
The natural follow up question to this is why haven't we already done so then? Is the financial incentive just not there like it is for coding?
I think that's basically it. The financial incentive in aggregate is tiny compared to coding.
LLMs are used a lot for short writing like emails, presentations, marketing copy, etc. But writing a top-tier essay is hard, and I don't think the financial incentive compares to coding.
Most other big fields have also not seen nearly as much investment as reasoning and coding abilities. Audio and visual input, audio and visual generation, and real-time audio conversation have certainly received investment but the flagship models have always focused mostly on text. These areas have big potential markets, so I think it says something that they're not the main focus.
It also says something that LLMs don't automatically reach stellar performance in specific fields if there's no financial incentive for someone to just grab the model and force it to "git gud" at that specific task. This aligns with Dwarkesh's critique.
Dwarkesh's critique? I don't remember seeing that.
This isn't cut and dried. LLMs are pretty good at tasks that AI companies haven't focused on. They scored better than human authors in a flash fiction competition by Mark Lawrence (https://mark---lawrence.blogspot.com/2025/08/the-ai-vs-authors-results-part-2.html). They were often beating humans for medical advice even before OpenAI put more focus there.
Is the question why aren't they even better? Or is it why are specialized training and instructions able to help at all over the general training?
Dwarkesh's critique, he's repeated this on his podcast in more recent episodes too:
https://x.com/dwarkesh_sp/status/1996266802620547187
My specific critique:
Wind the clock back 6 or 7 years, and describe today's LLM capabilities, and EVERYONE would tell you that of course that is AGI. Obviously. These are absolutely astounding capabilities. And yet, my relative who works in copywriting still has a job when that's exactly the kind of job you would expect would be absolutely eliminated in 5 seconds by technology half as impressive as what we see today. I'm noticing that I'm surprised by how impressive everything looks, and how little disruption I'm seeing relative to what I would have expected 6 or 7 years ago, given today's capabilities.
Part of it is that the people building the AI are coders and have better ideas for how to maxx capabilities with RL. We also got creative types working on image and video generation. I don’t think there has been a push for outstanding writing to the same degree. Not that I’ve heard of anyway. Pretraining, plus some RLHF
Given how widely AI has been adopted, and how much money is involved, I find it somewhat surprising that, if there is a lot of value to unlock in producing really good writing, it hasn't been sufficiently attended to, given that the fundamental task LLMs perform is producing writing.
My own anecdote: a relative of mine is a copywriter for a major streaming network; the kind of person who writes episode summaries and the like. The exact kind of job you figured LLMs would have replaced a long time ago, and which you wouldn't have predicted would have required AGI let alone ASI. Yet he just got a promotion.
To clarify, does the promotion leave him in a position where he's still writing copy?
Yes and he's writing more than ever
I also work in something that is basically copywriting - we have access to in-house AI tools but no one uses them despite them being heavily promoted. The quality of writing is often pretty poor so the tools would definitely increase productivity.
Personal anecdote as a translator: workloads generally seem to be shrinking, and a major client just announced they're basically moving to a more MT-oriented workflow and cutting down on human translation/review. On the other hand, I just completed two weeks of heavy work on the very traditional sort of contract / terms-and-conditions translation that one would think would be among the easiest sorts to do with MT (unless the end-client did not want their business secrets going to major AI companies through the use of public-facing models, that is).
I used to work in video games and still stay in touch with developer friends of mine over there, and I see some interesting patterns:
- Nobody seems to care much if you use AI to write the code for your game (also seems like it's difficult for them to even tell)
- You will get angry fans and bad reviews if you use AI for your art (and people can tell)
- You will get angry fans and bad reviews if you use machine translation (and people can tell)
So far it seems people can still tell for the latter two
AI art is also tricky legally, as it can't be copyrighted. So you at minimum need a human coming up with your character designs and such, or else you can't copyright them and anybody could legally make merch from them. I have a brother who is an art director for a card and board game company, and they are still using human artists, in great part because of fan backlash but also because using non-copyrightable assets for the thing you're selling is not ideal.
My brother has reported that they are getting a lot of submissions from freelance artists that are clearly AI-generated, though. He has to do a lot more filtering these days. Sometimes they hire a freelancer to do X number of cards (the board and card game industry almost always uses freelancers rather than staff artists), the work comes back AI-generated, and he needs to find a new freelancer.
Reminds me of the saying that what you're really paying a premium for with a lawyer or an accountant or a financial planner or a doctor is not the raw advice (which you can often now get from ChatGPT), it's the right to sue them.
As another data point, I work at a company where the devs are trying to have a single native speaker handle a large game's localization with the help of LLMs. Maybe this is a skill issue thing, but the sheer volume of inexplicable weirdness that the latest models still output regularly makes it hard to feel confident in leaving everything to the machine. We're using custom tools, but the results are still frequently mediocre to bad, and that's ignoring the times it randomly outputs in the wrong language, forgets half the termbase, or does something that can only be described as completely inexplicable. Also, it's not funny, at all.
(Also, tangentially related, but I fed some local documents about the company's leave policies into Gemini and asked it to summarize the contents, and it straight up hung after filling the entire screen with thousands of rows of the same word, like something out of a horror movie. This is with Pro 3.1, by the way, so when Zvi or other blogs write that hallucinations are a thing of the past now, at least one of my eyebrows cannot help but rise slightly.)
Is there a protest somewhere in favor of the opposite of an AI pause?
Yes. Silently, in most software departments worldwide.
What you want is a counterprotest. Those traditionally happen at the same time in the same place (or sometimes just across the street).
You don't protest to encourage things that are already happening.
https://www.uusc.org/what-we-do/
I am considering donating to this group. I’m very active on a local level with my Unitarian Universalist congregation.
The pitch seemed very EA to me.
What do you all think from an effectiveness point of view?
I used to give to the UUSC a long time ago, before GiveWell. Now I give to top charities.
It seems to me that UUSC is a mix of ⒜ “big international NGO”, the type (like Save the Children) which has no understanding of cost effectiveness, let alone transparency, and ⒝ “woke” political causes about whatever is in the liberal politics news bubble.
My impression is that if you try asking UUSC to offer a GiveWell style analysis, they won’t even understand the question. But I could be wrong!
I've never met a Unitarian Universalist before. If you don't mind me asking (I'm genuinely curious), what are the UUs actually about? I'm Catholic and I must admit your religion seems incredibly confusing to us (do you even call it a religion?).
We do call it a religion, or sometimes denomination, since it emerged in part from the “Unitarian Christianity” movement.
I would describe Unitarian Universalism as a religion that has no dogma or creedal test, a place where you can get the benefits of religious community without having to take on specific beliefs.
I think the main thing I'm confused by is, if it's a religion with no dogma or creeds, how does that even work? I sort of assumed even the UUs had *some* overarching metaphysical value claims, like "the inherent value of human life" or "the search for the truth". What would happen if a member of the church didn't agree with "all human lives have value and dignity"? This isn't a gotcha or anything, I'm just genuinely not sure how the UUs square that circle.
I'm not sure I would call that a religion other than for historical reasons like you mentioned (which is not necessarily a bad reason). Wouldn't it be more similar to a...I really don't want to be unkind, but, it honestly seems more like a religion-themed social club rather than an actual religion? For an example of what I mean by religion-themed social club, I mean something like the freemasons, which is not a religion (they say this explicitly), but ironically happens to still have creedal tests and dogmas and rituals and specific metaphysical beliefs where the UUs somehow don't.
I think the main thing I’d say is, don’t get hung up on labels. The categories are made for man, not man for the categories, etc.
There are 7 very broad UU principles such as “the inherent worth and dignity of every person”; but members are not required to believe or endorse the principles. The _congregations_ affirm and promote the principles, and if a congregation were to object I doubt they’d be evicted from the UUA.
hm, I guess. I think my issue is that you don't get to say the labels are both important and not important, but I accept this as MY problem as a member of a religion which puts unusually high value on labels (we have literal millennia-long feuds over slightly different descriptions of the Trinity). My guess is that my impression of possible hypocrisy is just outgroup homogeneity bias or something.
I'm personally an atheist. I agree with all their principles and I see the benefits of community. Pastoral care, group singing, prayer / meditation are all great things. Shouldn't require signing on to ancient creeds.
well that's what's confusing me. I used to be an atheist, I totally understand the sentiment you express...I just don't understand how an atheist joins a self-described "organized religion" and handles the obvious contradiction there, without saying "yeah no it's not really a religion anymore, we just call it one for historical reasons." I don't believe that's actually what most UUs think though.
Maybe the UUs are a combination of old school universalist christians (who provide the institutional core), progressive vaguely spiritual secular humanists (who fill out the congregation and treat it more like a social club) and a miscellaneous fringe of Buddhists/pagans/etc (who are generally just there for the vibes) who don't really fit 100% but are tolerated nonetheless because tolerance is the core belief that everybody agrees on? That's my current model of the UUs but I'm not confident in it.
I also don't see what the difference between principles and creed is supposed to be -- both boil down to non-empirical assertions about the world, even if "all human lives have inherent value and dignity" is a few steps removed from the underlying metaphysical claims it's based on.
"Our dynamic and impactful work threads together justice education, member advocacy, and financial support — all to bolster grassroots communities in their struggle to expand social justice."
Looks like a generic left-wing/Liberal spiel that doesn't really mean anything to me. What part of this seems "EA" to you?
> Our approach to research
>
> We acknowledge that formal institutional research often perpetuates extraction and colonialism. In contrast, UUSC’s methods involve community-based participatory research, which consciously centers the community experts, following their leadership and ownership of the entire research process.
>
> What does this look like?
>
> - Respecting experience, local knowledge, and traditional wisdom with communities to identify challenges and drive grassroots solutions
> - Developing research reports and publications with partners that will directly impact their work
> - Collecting and analyzing information in creative ways
> - Using high-quality data to advance advocacy campaigns
> - Grants to support partners’ research
> - Research training and skill-building
Wouldn't a mutual agreement to pause run afoul of antitrust law?
Here's Claude's opinion: https://claude.ai/share/e7b23802-7a8d-4cff-accc-8e9aa07bcd4e
I think probably if enough companies agreed to pause, they would bring the government in to make it official. This probably wouldn't work under the current government, but it might work under a future government, conditional on things having gotten crazy enough that most companies agree to this.
I think the problem is that under current law, agreeing to a mutual pause is already a legal risk. So until we get some legal action, e.g. OpenAI can’t say “we would pause if others do”. For that reason I don’t think we could get the pause pledges in first, and then the political action after.
I could be wrong but I don’t think this perspective contradicts what Claude wrote?
I read Derek Lowe's post on the anomaly detection thing yesterday, had no idea it was an ACX grant. Very cool to see it having such an impact!
People following the UAP transparency topic have been concerned the past week over the disappearance of Air Force Research Laboratory’s retired Gen. Neil McCasland from his outdoors jog in Albuquerque (having left his phone behind at home). Local law enforcement, the AF Office of Special Investigations (AFOSI), & even the FBI have been mobilized to search for him.
https://www.krqe.com/news/albuquerque-metro/fbi-joins-search-for-missing-retired-air-force-general-in-albuquerque/amp
In the late 2010s, a leak established General McCasland as Blink 182 frontman Tom DeLonge’s military contact in his discussions with outgoing Obama counselor (& former Clinton CoS) John Podesta on UAP transparency initiatives.
The content platform incubated over those communications, To the Stars Academy, provided Lue Elizondo his soft landing in lifting the lid on the Pentagon’s UAP research program across Keane & Blumenthal’s now famous NYT piece on AATIP in 2017.
McCasland’s disappearance, in light of the Disclosure-themed repartee between Obama & Trump two weeks ago, hints at best at intrigue, and at worst, foul play.
https://m.youtube.com/watch?v=td8Biqijmxk&t=13 [Ross Coulthart]
The search is very much still on, as the Bernalillo sheriff's office reiterated on Friday: https://x.com/BCSONM/status/2030036477741257075. I relay their message that the public shouldn't speculate, as it hinders their search efforts: there are "people who attempt to develop their own theories based on the limited information available to the public and this makes finding Neil harder."
Also sheriff's office: "Regardless of how insignificant you think your information might be, or whether you think we are already aware of it, please contact us and allow us to make that determination." His wife, however, can *and does* speculate: https://old.reddit.com/r/UFOs/comments/1rmpi7k/new_update_on_neil_mccaslands_disappearance/
So you see, then: *you* mustn't speculate on the general's disappearance, but if you know which ravine he has fallen into, do let them know.
Perhaps it will ultimately be found that neither intrigue nor foul play applies. In any case, there is practically no constituency that would not wish to see him located in sound condition.
Update 14-Mar
Well, it's safe to say that this story has 'gone national' over the past week, with outlets covering it numbering into the hundreds. Coverage includes the NYT, with CNN & Fox News both giving it the on-air coverage treatment. (All trivially verified with a Google search.)
Update on search for Gen. McCasland, as of Wednesday, 12-Mar
CNN: https://edition.cnn.com/2026/03/11/us/retired-air-force-general-fbi-search
[No material change, yet a decent round-up summary of his wife's FB post, the UFO thematic background, Trump's post & tweet directing release of UFO documents, McCasland's education, and his association with Tom DeLonge's 'To the Stars Academy'.]
as of Tuesday 11-Mar
https://nypost.com/2026/03/10/us-news/hundreds-of-new-mexico-locals-asked-to-turn-over-home-security-in-search-for-missing-retired-air-force-general-ufo-expert-grave-national-security-crisis/
• Over 600 New Mexico residents asked to turn over their home security footage that could elucidate the General's whereabouts
• Article quotes journalist Ross Coulthart: “This is a man with some of the most sensitive secrets of the United States in his head.”
https://eu.cincinnati.com/story/news/2026/03/13/us-air-force-general-william-mccasland-wright-patterson-disappears-new-mexico-wife-disputes-ufo/89135887007/
His wife "pushes back against 'misinformation'".
Correct. Limiting our information universe to that Cincinnati article, the extent of her pushing back against misinformation vis-a-vis the UFO topic is:
"Neil does not have any special knowledge about the ET bodies and debris from the Roswell crash stored at Wright-Patt."
which was on the heels of affirming
" "It is true that Neil had a brief association with the UFO community through Tom DeLonge, former frontman for Blink-182 and founder of the organization To The Stars," Wilkerson wrote. "Neil worked with Tom for a bit shortly after his Air Force retirement as an unpaid (Neil's choice) consultant on military and technical/scientific matters to lend verisimilitude to Tom's fiction book and media activities." "
The rest of her pushback against misinformation was very helpful indeed, asserting that he had no dementia as some circulating misinformation had suggested.
Have you considered the possibility that you are in a confirmation bias trap regarding UFOs?
Suppose I do: Then what bearing would asserting that have on the above analysis of your shared article and the statement you highlighted? Which elements of counter-narrative information within it did I ignore (which would be evidence of confirmation bias)?
Have you considered the possibility that you are in a confirmation bias trap regarding looking for confirmation bias traps in others regarding UFOs?
You don't think it's worth discussing that she sees no reasonable motive for foul play? Just to name the most important of her objections.
And just for you, I've broken my rule of never visiting Facebook. I hope you appreciate it. Here is a quote that wasn't mentioned in the news post:
"Though at this point with absolutely no sign of him, maybe the best hypothesis is that aliens beamed him up to the mothership. However, no sightings of a mothership hovering above the Sandia Mountains have been reported."
Good on her to be able to find humor in her situation, and she uses it to ridicule the supposed UFO connection to his disappearance. Does that not make you update against your suspicion? If not, then I do believe you suffer from confirmation bias.
Well, that's easily solved. He was taken away on a UFO to the aliens' homeworld to learn their secrets at first hand!
I hope the man is found alive and safe, whatever happened.
> Demis Hassabis of Google DeepMind has already informally agreed to something like this in principle (which is why GDM isn’t being protested)
He doesn't have the authority at Google to make this kind of decision, surely.
Correct, which is why I said the "informally" and "in principle". I think it's enough of a victory that the StopTheRace people aren't going to protest, partly because I think their model of these commitments is that they can be used to build firmer commitments later on once there's common knowledge that most labs agree to them.
Organizer of the Stop The AI Race protest here.
I agree with Scott's interpretation.
I can appreciate that there's value to society from prediction markets when they have people betting on issues such as elections and company performances.
But is there any value in having markets on whether Jeff Bezos would attend the Super Bowl? That's as useless as sports betting or predicting Oscar winners. To be clear, I don't think that sports betting should be illegal, but it should be more heavily regulated than prediction markets (and I don't think Kalshi and Polymarket should be able to do an end-run around states that ban sports betting).
Some mild push-back: one potential benefit of having silly questions is for training purposes. A lot of important questions have long time horizons, so they offer limited opportunity for practicing skills related to prediction.
True, especially when considered as training data for the up-and-coming forecasting bots.
I agree there's no value in this.
Does anyone know of reliable short-term sources on African death rates? Like, how many people died in Kenya in 2025?
I'm asking because last year there was a lot of drama around USAID getting shut down and many claims that millions of people would die a year due to this. Those were all projected figures, however, and I'd like to update on the actual numbers. The numbers involved were large enough that they should show up in official statistics. Unfortunately, what I'm finding is a giant lag in getting official death statistics, like 3-4 years.
Let me give an example:
-Kenya should be a good example. It's a poor African country that I remember showing up in some USAID documents I looked at.
-We should be able to observe millions of additional deaths in the aggregate statistics. Kenya had a crude death rate of 7 per 1000 people in 2023 (1) and a population around 55 million, giving us annual deaths in Kenya of ~400k. Even 50k-60k additional deaths in Kenya would be a noticeable spike. You can even see the 2021 death spike from Covid.
-However, the World Bank, a pretty authoritative source, lists the most recent data as 2023, which means we won't see the 2025 numbers until 2027-2028 and we won't see the 2026 numbers, what's occurring right now, until like 2029 or later.
-I can't find better ground data. The UN Population division doesn't have anything, their data is as old as the World Bank's, and the info from the Kenya National Bureau of Statistics is even older (2).
A lot of rationalist writers, like Scott, Ozy, and Bentham's Bulldog seemed to get radicalized by this last year and I didn't, mostly because they trusted certain sources and I didn't. And this seems like a great spot for testable predictions; we don't need to rely on fancy analysis, the impact is big enough that it should be observable in crude death rates. Heck, Gemini gives the total deaths in Africa in 2022 as 12 million; we should trivially observe a spike here. If they're wrong, I'd like some of my favorite writers to update towards certain sources being untrustworthy and if those predictions end up being true, I need to update some of my priors.
So, does anybody have a good source? It seems like we should have some idea whether millions of additional people have died in Africa in the past year and...I'm not seeing it.
(1) https://data.worldbank.org/indicator/SP.DYN.CDRT.IN?locations=KE
(2) https://www.knbs.or.ke/reports/2025-county-statistical-abstract-makueni-county/
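The back-of-the-envelope arithmetic above can be sketched explicitly. This is a minimal sketch using the figures cited in the comment (World Bank 2023 crude death rate and approximate population for Kenya); the excess-death scenarios are illustrative assumptions, not claims about actual USAID impact:

```python
# How visible would excess deaths be in Kenya's crude death rate (CDR)?
POP = 55_000_000   # Kenya population, approx.
CDR = 7.0          # crude deaths per 1,000 people (World Bank, 2023)

baseline_deaths = POP * CDR / 1000
print(f"Baseline annual deaths: {baseline_deaths:,.0f}")  # ~385,000

# Hypothetical excess-death scenarios (illustrative only)
for excess in (50_000, 100_000, 400_000):
    new_cdr = (baseline_deaths + excess) / POP * 1000
    print(f"{excess:>7,} excess deaths -> CDR {new_cdr:.2f} "
          f"(+{excess / baseline_deaths:.0%} over baseline)")
```

Even the smallest scenario here lifts the crude death rate from 7.0 to about 7.9, a ~13% jump, which is the kind of spike that should be as visible in the aggregate series as the 2021 Covid bump.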
Nothing much to add, I think this is a great way to approach the subject and I also have my doubts that the numbers will show such a spike.
Related: Last Week Tonight just aired a long segment about USAID, so the issue might become salient in the media again.
We can hope not; that was one of his worse episodes, even for him. Rather than blame Trump and demand midterm changes, he should have asked his viewers wtf they are doing about it, and how about getting off their butts and donating to solutions. It was basically a partisan hatchet job masquerading as a human interest story. His millions of viewers don't have to just watch kids starve for the next year if they actually care (they don't, btw).
That's awfully cynical. Of course they care, but there's a cost to dividing attention away from the Epstein Files or the War in Iran (or whatever else Trump happens to do in the next few months), which are more unpopular and more likely to cost the Republicans in the next election. It's simply a question of triage.
Neither Epstein nor Iran have anything to do with Oliver's base writing checks to EA charities right now, as in today to make up for 100% of the USAID shortfall if they genuinely cared.
I like Oliver on some issues; when he nails it, he nails it, but he's like a broken clock, and honestly I don't think I've ever actually heard of him effecting meaningful change. Same as when he cried about SNAP cuts and research cuts. I mean, if the good people of California really cared, they could demand their state legislature increase state taxes tomorrow so Cali could make up those SNAP and research shortfalls. Nope, easier to whine and blame DC.
It's feigned impotence over what's really backdoor pandering for the midterms, because Oliver and his base simply don't like Trump; it has literally nothing to do with starving children. Google tells me Oliver has a net worth of $80 million; want to bet how many of the starving kids he so cares about he could save with $75 million of that?
…“get radicalized”? Really?
Seems like a fair characterization of their reactions. How would you describe it instead?
...held views different to my own?
Another source that might be able to measure the impact is the DHS survey. The USAID drama also affected DHS, but they've found other sources of funding and continued doing their work:
"When the DHS Program closed in February 2025, new funders stepped in to sustain its work, while stakeholders mobilized to explore long-term solutions—ensuring this trusted source of global health data continues to inform decisions and improve lives worldwide." (https://www.icf.com/clients/health/survey-national-health-data-collection)
The surveys are carried out in many African countries and include child mortality data, attitude towards HIV, parent mortality data, etc. (*).
The last survey report I can find is from a 2024 Malawi survey, published last February (**). Assuming the usual delay between survey and publication is around a year and a half, we should be seeing reports with 2025 data in a few months. Granted, this wouldn't be accurate mortality data, but it might be enough to see an effect if there is one.
Also, there's this quote from the Malawi report: "Trends: The percentage of children under age 18 with one or both parents dead increased from 9% in 1992 to a peak of 13% in 2004 before decreasing to 9% in 2024". According to Grok, the peak of HIV-related deaths in Sub-Saharan Africa was the period 2004-2006. I'm not 100% sure whether parent mortality correlates with HIV-related mortality, but it sounds like it might. That would be an index worth watching in the next reports.
UPDATE: The only 2025 DHS survey I can find (maybe due to diminished funding?) is the one for Congo, which finished its fieldwork last December: https://www.dhsprogram.com/methodology/survey/survey-display-625.cfm
(*) I'm going by the Malawi survey on this, but I'm almost certain that at least child mortality data is standard for all DHS surveys.
(**) https://www.dhsprogram.com/publications/index.cfm
Thank you for this. That’s very helpful.
What if the lack of funding has compromised data collection as well?
Unfortunately I think the available statistics are pretty useless. I've heard that GiveWell has tried to assemble data to measure the effects of their interventions on SSA mortality outcomes, and mostly concluded the data is not available, and so the best they can do is generalize from RCTs etc. Though the specific cases I recall were more in west Africa; maybe east Africa has better statistical collection.
This suggests maybe a more viable approach is to find some population that is already studied (e.g. GiveDirectly's sponsored villages?) and try to compare years in that setting.
I think you're directionally correct on this. I wouldn't be surprised if the death rate is slightly higher, but I think the amount of slack was way higher than advocates thought, and African governments are likely pulling more weight now, combined with USAID having overestimated its impact.
Replying because I'm interested in notifications on this. But didn't David Oaks have an interesting recent article about how we have terrible population statistics in many, many countries?
Will they be protesting in front of DeepSeek's Hangzhou offices as well? Or what about Kimi's offices in Beijing? It seems like there's a significant level of interpretation in what constitutes a "major" AI company, which it would be beneficial to define more explicitly.
DeepSeek and Kimi don't appear to be doing frontier AI; they're waiting for US labs to do it and then replicating what was done. If our labs stopped advancing the ball, it seems unlikely those Chinese companies could surpass us anytime soon. That doesn't seem to be what they're built to do, and they definitely don't have the talent level there that we have in ours.
When we suggest a pause to AI research for safety, we get all these "but China!!" rejoinders. Well, OK, show me your grounds to believe China actually could get there in my lifetime, because I'm not seeing them do anything but steal technology as usual. To the extent this is a "race" at all, it's Daytona, and they're just drafting behind the leaders with no plan to maneuver past us.
Luckily, this is now a question we can put a quantitative number on rather than having to resort to "doesn't seem like" levels of qualitative thinking.
https://claude.ai/public/artifacts/f0fc0803-1b57-48bd-b27c-07a9a8442448
DeepMind is in second place, behind OpenAI, in citations/author/year. Huawei, Tencent, and Alibaba are fourth, sixth, and seventh in raw AI/ML publications. Appropriately discount based on your faith in Opus 4.6's research abilities, but the trend is clear.
Regardless of the data, think for just a second about what a pause would entail. Would all current AI talent - including the roughly half of current American AI lab employees of Chinese ancestry - choose to stay at their now-useless jobs, unable to pursue their otherwise massively in-demand profession? Or might they go work for those Chinese companies that would now be the only ones actively pursuing research?
The rejoinders are common because it's the fatal flaw of the plan. The sooner the pause factions accept it, the sooner a more realistic plan can be developed - probably one that relies on international treaty frameworks.
I don’t know if this applies here, but China has definitely had a lot of scandals about artificially high citation numbers. I don’t think they translate directly across cultures.
That chart is a 13 year 2010-2023 representation of research output, which your AI who generated the chart concedes is a poor proxy (in ways that both help and hurt the position you want to advance.)
The leading edge of frontier research over the past couple of years is clearly all US-based companies, with DeepSeek using a fast-follower approach and Kimi being noteworthy primarily for being the best locally operable AI, a path that has great business utility for the future but isn't relevant to this discussion of frontier capabilities. We are pouring enormous money and infrastructure buildout into this industry and have all of our best minds on it, and that's why it's taking off. If we weren't doing that, I see no evidence that any Chinese company would be moving past us, given their inferior human talent and the precarious nature of accumulating power in China.
If we stopped rushing forward, I do not think we would have to worry about AGI/ASI or whatever we wanna call the dangerous capabilities threshold, not any time soon.
If we stopped putting all this money into it, or pushed back with regulations, the entire field would slow down. I imagine many of the experienced SWEs would find employment in some capacity that isn't pushing frontiers but instead diffuses the existing tech via B2B applications, integration with legacy software systems, and so on. They are all massive security risks and should not be permitted to work with any Chinese company, but it may not be necessary to compel that, as capabilities research would cool off compared to other, more immediately lucrative returns.
Treaties are nonsense, and not a viable plan. Power and implicit threats of state violence are the only viable plan to control the proliferation of any of this. You have to engineer the circumstances where advancing the ball on capabilities is heavily disincentivized by practical consequences. If you aren't prepared to engage in industrial espionage and sabotage and potentially go as far as "bomb the data centers" then you aren't serious, and the treaty is pointless. I would criminalize frontier AI research immediately, aggressively monitor as security risks any person who has worked in the field, forbid the publication of research on this topic, and treat this pause with the seriousness it requires. Doing so would provide the Chinese with ample evidence that we actually mean it, and they won't have our work to steal from anymore, and we can make the necessary threats privately. I think they'd pivot quickly to Meta's approach and use existing technology to make business solutions, as they never believed in superintelligence to begin with. It's our bunch of transhumanist weirdos who seem hellbent on summoning the demon and becoming immortal god-emperor.
If I ever had any doubts who the good guys were, comments like these would dispel them quickly.
If they were planning to I would strongly recommend against it. AI progress in China has been taken up as an ideological good so opposing it is not in any way safe. If you're interested in pushing AI safety in China the best route is likely to read the political guidance on it and make intellectual and advocacy contributions in that direction. Fifty Stalins and all that.
Their site says "every other major AI lab in the world", which sounds to me like it's specifically trying to include China. Since the protest doesn't define its terms, I think they're trying to give the companies maximum leeway to include factors like this, because the company commitments are more likely to be a negotiating/awareness tool than the actual thing that ends up working.
David Krueger, the person speaking at this protest, has previously said that "I think the US and China can find a way to prevent each other from building superintelligence if they both actually agree this is a priority", and this matches statements from the other pro-pause people I know.
Publicly protesting AI in China is an extremely bold move. I hope any such protestors are not Chinese, and have their embassy on speed dial.
Here at Radio Shenzhen, we were asked, "Is it true that we have freedom of speech here in China the same as they have in America?" And we're answering "Yes!" In America, you can stand in San Francisco and shout "Down with Google!" and you will not be punished. Similarly, you can stand in Beijing and shout "Down with Google!" and you will not be punished.
So... this is just for "awareness"? I'm sure you understand perfectly well that getting every AI company in the world to stop voluntarily is an impossibility, so even if these companies somehow agreed, they wouldn't be restricting themselves at all in practice. This seems exactly like the kind of performative protest that you've constantly decried in the past.
My guess is that these informal commitments are meant to be a tool that builds common knowledge for either:
- a more formal intercompany agreement (something like "okay, you all agreed that if the others would pause then you would also, since you all agree, let's have a big conference and hammer out terms")
- some sort of government negotiation (something like "most US and Chinese companies have agreed to this, the US should now negotiate with China to come up with a framework and mop up the remainder")
I don't think I've decried awareness raising. I think it's probably silly to protest "racism", since it's very vague, everyone is already aware of it, and "these three government leaders meet and decide to end racism" isn't a meaningful option. But I think if the average American knew that (let's say) 75% of AI companies (including Chinese ones) wanted to pause their own work if it were possible, that would make the job of people trying to push for meaningful AI regulation vastly easier.
Probably different people across different continents I suppose. And different political systems.
I've been thinking a lot lately about how Cartesian Dualism can be modified to be consistent with modern understandings of neurology, and I have come up with a model of Dualism that can account for all the well-known difficulties. I call it the "Brain-as-a-Collective" model:
https://natehanby.substack.com/p/the-dualistic-brain-as-a-collective
Does anyone here have any thoughts? I don't have a lot of subs yet, but I thought I would post it here for visibility.
I've privately speculated about that since I was a child: "what if there are other people in here and I'm just one of them and we are fighting for control of the body?" To the point where I sometimes speak to myself out loud, because of the possibility that it more effectively broadcasts a message to the quorum than subvocalizing would.
See also: Minsky's *The Society of Mind*, although I believe his agents are more simplistic and consciousness is emergent from their interactions.
Same. It might depend on neurodiversity. The feeling that there is more than one of us in here, and more broadly, that the mind and the body are totally different things, may be more apparent to some neurotypes than to others.
Does your model explain how a nonphysical entity (such as the "mind" or the "soul") can affect the physical body ? This seems to be impossible by definition (and don't say "the pineal gland", Descartes already tried that and it makes no sense, since the gland is just as physical as any other part of your body).
It's impossible according to the laws of physics, yes. Obviously, for dualism to be correct, the laws of physics must be violated somewhere in the brain--in my view, there seem to be "exceptions" to the rules when it comes to consciousness. In my model, the violations occur at point sources that are the centers of consciousness.
I might compare it to a video game -- the video game physics engine works fine, except for determining the behavior of the player characters. You're not going to be able to predict their behavior by digging into the pixels that make up your virtual character's head because in-game physics engine doesn't extend past the computer box and into the real world.
And of course there must be some kind of logical input/output interface that connects the virtual character to the person behind the keyboard, but this is simple enough to code. I don't think the physics engine of the simulation needs to be taken as totally sacrosanct in situations where it obviously doesn't apply, such as with player-characters that move of their own volition.
I think the problem is with the laws of logic, not the laws of physics, though this could depend on how you define "physical" vs "nonphysical" entities.
As I see it, a "physical" entity is one that is able to affect other physical entities (including, but very much not limited to, our bodies), in ways that are in principle detectable. For example, rocks are physical because when they strike other rocks we can observe those other rocks moving. Neutrinos are physical because they can occasionally strike other particles just right and decay into more easily detectable products. At one point, we did not know that neutrinos could be detected (or even that they existed at all), because our understanding of the laws of nature was incomplete; but now we know better.
Yet if you propose some completely nonphysical entity, then you are claiming that *by definition* it cannot affect anything physical, ever. The problem is not with our current understanding of physics, but with your definition; you are basically saying "the soul is both A and not-A".
You are attempting to solve this problem in the same way other philosophers have done it: by positing some super-powerful entity (be it God or the Simulation) that keeps track of both the soul and the body, and ensures that the two are always in sync by some ineffable means. But this doesn't help solve the problem, because in this worldview the soul does still affect the body (and vice versa); it just does it by some mechanism that we by definition cannot access. Not only have you failed to resolve the logical contradiction, but also ended up "multiplying entities without reason", as Ockham might put it.
>As I see it, a "physical" entity is one that is able to affect other physical entities
This is your definition, and I don't agree with it. Just because something is non-physical doesn't mean it can't affect the physical.
>Yet if you propose some completely nonphysical entity, then you are claiming that *by definition* it cannot affect anything physical, ever.
By your definition, it can't. But what does that have to do with my worldview?
Let's go back to the video game analogy to see if what you are saying makes any sense. Is the player a physical object in the virtual world of the video game, subject to the rules of the simulation? No. But the players can affect the simulation, nonetheless. So there is no contradiction.
> This is your definition, and I don't agree with it.
Fair enough; but how would you define the two terms then ?
Physical is easy to define: that which follows the logical, rational, and mathematical laws of the universe. It is quantitative, not qualitative. Anything that happens can be calculated and modeled mathematically.
The mind is undefinable in logical terms, which is part of the appeal of dualism in the first place. It is truly ineffable, one can only know of it by direct, personal experience. So I offer you no definition, I only invite you to introspect on the miracle of the mind and think deeply about what you see and experience and how profound and miraculous it is.
Descartes' key insight was that even if everything he knew was a lie, and the entire material world was an illusion, the first-person experience of an illusion is in and of itself so sacred that it can be used as a foundation for belief in God.
The problem with physicalism is that it doesn't do a good job of accounting for the existence of the mind and imagination. Imagination is so much more than just a crude approximation of the physical: it is something that exists in and of itself in the world of the mind, and it has properties, "qualia", which are untranslatable into physical, quantitative terms (look up the Knowledge Argument for a logical demonstration of this). Qualia are truly God's artwork.
The fact that the mind is undefinable and ineffable is not a problem for dualism: that is how we know it is not physical in the first place. It does present a problem for proving the theory to people who do not or cannot introspect, since the postulates of the proofs against physicalism can only be known through introspection.
I am not sure why Cartesian Dualism. If I wanted a non-materialistic framework, I would obviously go for Aristotelian Thomism, as the standard framework of the largest and most educated Christian church. I mean, if I decided not to trust the authority of science, I would then trust that kind of authority, not some artillery officer who liked math?
In this framework, only abstract thinking is non-material, supernatural, because abstractions do not exist in nature. In this framework, depression, because not requiring abstract thought, would be entirely physical.
I need folks to poke around at trustassembly.
This is an extremely early beta that I am still changing every day. (I’m explicitly asking you to not tell people about it outside this thread).
trustassembly.org
But please do register, submit corrections, and vote on submissions. There's also a workflow for you to register an AI agent, which will then have its submissions parked in a queue for you to approve and release.
Notes:
* Currently not necessary, but eventually you'll want to add a filter (e.g., show only Approved) to the "Record" page, plus some kind of sorting.
* I'm confused by the "newsworthiness" slider, it's not intuitive to me what it is meant to be doing.
* Looks like one can't edit things one has posted. I understand the motivation (to prevent mischief), but it would probably be best to allow editing and keep a visible changelog, so I can at least fix typos.
* How is the situation where multiple people are trying to correct the same article meant to be handled? Sometimes I might think: "I agree that this needs a correction, but I'm not sure the proposed correction is ideal. If I reject it, it won't be corrected at all, but I have a better correction in mind. Can I correct the correction?" and so on.
Added to the log to work on tonight.
The front page wants me to install a browser extension, with large colourful buttons for doing so. I unfolded the folded-by-default introductory text and read the whole thing, which is more than I'd normally do with a website that wants to start our relationship by metaphorically grabbing at my crotch, but nowhere could I see any explanation of what the browser extension does, why I should trust that it is safe (a link to the GitHub page is the gold standard here…), why I want to install it, or how installing it will help with the goals buried in the incredibly wooly blurb.
Below the large bright “install extensions” buttons is a tiny “instructions and details” link, formatted the way sites normally format links to support, terms and conditions, and other similar things they are forced to have but would really rather visitors ignored; and /that/ text, although it is only a few sentences, has actual information on what the software does and what it is for.
That blurb should be front and center, not buried.
I’ll see if I can’t make an illustration that tells the whole story. What’s there right now is mostly so I can get people other than me to debug.
Working on getting this into the approved Chrome and Safari stores, and into an app.
Terrible look, seems related to the Trump admin. I'll be poking around it soon.
Thanks for the feedback. Is it because I copied Supreme Court colors?
I think it's the gold and the font, I feel I've seen that font in .gov pages.
You have. I’ll think if I can come up with something simultaneously prestigious and unique.
The URL does not work:
>This Connection Is Not Private
>
>This website may be impersonating "www.trustassembly.org" to steal your personal or financial information. You should close this page.
https://trustassembly.org works.
That is the exact embarrassing kind of thing I was hoping to find. I am adding that to a log to fix tonight.
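For what it's worth, that browser warning on the bare-domain URL usually means the certificate only covers one of the two names (apex vs. www). Assuming a standard certbot/Let's Encrypt setup behind nginx (an assumption on my part, since I don't know your stack), reissuing one certificate that lists both names typically fixes it:

```shell
# Hypothetical fix, assuming certbot + nginx (adjust for the actual stack):
# request a single certificate whose SAN list covers both hostnames
certbot --nginx -d trustassembly.org -d www.trustassembly.org

# then verify which names the served certificate actually covers
openssl s_client -connect trustassembly.org:443 -servername trustassembly.org </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 "Subject Alternative Name"
```

If the site is behind a managed host or CDN instead, the equivalent is just making sure both hostnames are attached to the same certificate, plus a redirect so only one canonical URL is visible.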
Has anyone been keeping up with the changes in Venezuela after the capture of Maduro? It has been two months, and with all that has been happening elsewhere there hasn't been much coverage.
Looking on Wikipedia, it seems things have gone... amazingly well? The Trump administration seems to be on great terms with the regime (and even the opposition, though I'm guessing the administration is just hedging their bets), the population seems largely supportive of the intervention, the US sold $500 million worth of oil less than two weeks after the strike, and they're already offering licenses to other countries for extracting Venezuela's oil and gold. Given that what they did isn't that different from what they did to Iran, it's crazy how much easier it is to work with people who value their lives.
I have also looked at Wikipedia. It looks to me like most of the positive-seeming effects come from changes in US policy following the capture of Maduro, in the direction of renewing diplomatic relations and lifting the embargo. It is not clear to me to what extent capturing or otherwise deposing Maduro was a precondition for that, or whether we could have gotten similar effects at any time by calling up Maduro and offering to lift sanctions and recognize the regime in exchange for some concessions around releasing political prisoners.
As for popular support, I expect that's hard to assess with confidence under the circumstances. I have found some articles talking about polls done by a firm called Meganálisis, which seems to be aligned with the Venezuelan opposition, though. One from January 14 (11 days after the capture of Maduro) shows 92% popular support for the capture. But on the other hand, a March 3 poll by the same outfit shows Delcy Rodríguez with a dismal 4.8% approval rating. To the extent these polls are accurate, it sounds like the general mood is one of very badly wanting the regime gone but seeing the replacement of Maduro with Rodríguez as at best a marginal improvement.
Someone posted a link with some evidence that US feels the need to keep pointing a gun at Delcy's head. So a peaceful solution probably wasn't (and isn't) possible. https://www.reuters.com/world/americas/us-turns-up-heat-venezuela-with-threat-indict-new-leader-delcy-rodriguez-2026-03-03/
What is the US government getting from Rodríguez that is not in the regime's interests? They're selling us oil, sure, but we were the ones saying no to that before, not Maduro. And what does the administration still want from the regime that it isn't getting to inspire the threats?
I doubt that surrendering control over all of their natural resources is something that's good for their long term interests, even for the regime... But they really don't have a choice at this point.
Are they surrendering control of their natural resources? I thought they were just selling us oil.
Not selling oil to Cuba, not making deals with China and Iran that we don't care for, that sort of thing.
The regime we were told was oppressing the population of Venezuela is still in place. Has something actually changed for the people living there, or did you mean it's going "amazingly well" just for US businessmen?
Oh, the latter, obviously. But the people seem to be happy with the intervention, so good for them I guess? https://www.newsweek.com/trump-boost-poll-venezuelans-11357476
It wasn't obvious. I mean, the original comment was asking about what changed in Venezuela specifically, your answer could easily have been interpreted as a claim that conditions have improved for the people of Venezuela.
You are actually implying that the oppressive regime is now more stable, which I would characterize as a negative outcome for the people.
I thought the tone made it clear. Sorry for the confusion.
"Isn't that different" isn't how I'd compare "kidnapping one political leader and killing a few guards then doing nothing else and making it reasonably clear that you don't really plan to do anything else as the legal successor to the former leader takes power and you proceed to move to a level of normalization the country had already been seeking on terms similar to what had already been offered" vs "killing a spiritual and political leader, much of his family, and many other senior officials as well as attacking 500 military targets and naval vessels and then immediately talking about regime change"
First things first: I wouldn't call almost 100 people "a few". But with Iran, if the government just agreed to surrender all of their missiles and nuclear material, I'm sure the US and Israel would stop killing them. They're not just murdering people for its own sake. If Iran completely disarms, they are no longer a problem.
I mean, there were ongoing negotiations with Iran, abruptly aborted by the US killing the people they negotiated with.
Iran appears to have decided they have no reason to negotiate any further, or indeed no reason to believe the US and Israel will ever stop killing them. Very reasonably, if you ask me.
You can argue whether Iranian resistance is rational or not with someone else, I'm just here to argue against the idea that its situation is sufficiently similar to Venezuela's that we can infer irrationality from their different response.
It doesn't matter if it's rational. There's no objective reason for them to value their lives. I'm just saying it's inconvenient, that's all.
You ever see the video of Robert McNamara talk about how "the Oriental doesn't put the same high price on life as the Westerner"?
Funny coincidence for the Iranians and Vietnamese to share this characteristic, despite being so culturally and religiously different in other ways.
>Given that what they did isn't that different from what they did to Iran
You think that an hours-long raid to capture two people "isn't that different" from more than a week of bombardment featuring 1,000 sorties per day?
Venezuelans would still be dying if the regime didn't immediately capitulate, so yes, the situations are pretty similar. The difference is that the Iranian government decided to fight back for some reason.
>Venezuelans would still be dying if the regime didn't immediately capitulate
Capitulate to what? And dying from what, exactly?
Capitulate to giving the US complete control over their natural resources. And dying the same way Khamenei died (and also the same way the new Khamenei will probably die).
If they have capitulated so totally, then why is this necessary? https://www.reuters.com/world/americas/us-turns-up-heat-venezuela-with-threat-indict-new-leader-delcy-rodriguez-2026-03-03/
The two situations are simply very, very different.
In addition, cultural differences between the US and Venezuela are of vastly less magnitude than the cultural differences between the US and Iran.
Shouldn't you have details of an AI pause before you protest in favor of it? Was the "in a few months" sarcastic?
The pledge isn't legally binding.
But if we were ever to actually get a working pause on AI research, one step on the path to that will be going through the phase "people involved agree in principle to stop if they could figure out how."
I doubt they'd ever be able to get all the labs in all the countries to agree to this. But if you think that's true, then the cost of doing the pledge is nothing.
I guess this depends what you mean by details. I think, for example, that people could protest the war with Iran when they have only some details of what a cease-fire would look like, and not a comprehensive white paper listing all of the terms verbatim.
A few months isn't sarcastic, I know some groups working on very detailed proposals that I'll write about when they come out.
Why did oil prices only go up modestly the first few days of the Iran war, then balloon Friday and today? What new thing did the markets learn? Did they previously expect Iran would be unable to stop traffic through the Strait of Hormuz? Why would they expect that, and how has Iran overcome whatever hurdles they expected?
The timing is indeed strange. I would point out, though, that the "modest" increase through March 5th is about 20%. It's too cute for that to be the right move for shutting down 20% of oil transport; surely there are some squeezed short-term buyers, but it's a solid increase. Then something went through markets late Friday and over the weekend that has now reverted. That's the outlier move without a great explanation. Sentiment trading on the successor pick (which was expected ahead of time) plus a Trump TACO signal is one diagnosis. I don't find it fully satisfying. I agree "no one expected Iran to close Hormuz" is not a good candidate given the timing. Shipping stopped long before that price spike.
Seems to be going down today: the market opened around $115 a barrel, but closed at $85. To keep things in context, oil was around $90-105/barrel from 2010 to 2014, and in 2022 we had an eight month spike that got it up above $100 again, due to Russia invading Ukraine. So we are seeing high oil prices right now, but not historically high.
VXUS up over 1.2% today 🤷
I think it had a lot to do with Iran starting to attack other Gulf states’ oil infrastructure leading them to shut down production entirely or almost entirely. Now of course it is a prediction market question about how long iran can keep causing that much trouble.
Simple answer. Israel started attacking oil infrastructure in Iran vs earlier attacks which left this alone. Not only does this affect the oil supply in the short term, but it also signals that the war will be longer lasting. The initial plan for the war was to try to get Iran's leadership to capitulate quickly like Venezuela, in which case you would want to leave as much economic infrastructure intact as possible. On the other hand, if you think it's going to be a longer war you want to attack this infrastructure to degrade the enemy's ability to fight. So the attack against the oil industry shows that the US / Israel believe this is going to be a longer war, and the oil market is reacting to this.
I think that, watching all the iron move into the area over the last few months, the markets had priced in the possibilities of (1) successful negotiations (drives oil prices down and increases trust in availability) and (2) a short, sharp war (eats into stockpiles but doesn't create pervasive shortage). What the market had not priced in was a long war, which could create widespread and pervasive shortages.
Then, both sides started to indicate that the war might not be short. Iran is expected to put forward the younger Khamenei as supreme leader, which is felt to be a defiant choice by the regime rather than conciliatory or submissive. (Technically accomplishing regime change by making Iran a monarchy in all but name!) The US has shifted to saying things like "we're here for the long haul" and "hey, special operations troops, stop training and prepare to deploy, just in case". And although wars sometimes look the most intractable right before they end, those look to the markets like signals of digging in rather than looking for an off-ramp.
Now, between the prospect of a longer stop to shipping (Gulf states can't store much oil) and the Iranian strikes on oil facilities (directly reducing their production), many Gulf states are lowering production, including Kuwait, Iraq, Saudi Arabia, and Qatar. That will create the pervasive shortage that the market hadn't priced in, and it's racing to adjust. The world simply doesn't have enough stored oil to keep feeding the cars and planes and power generators. China has the most (because they've been preparing to invade Taiwan and don't want to be deterrable), but also the most need (and buys Iranian oil most directly).
It's worth noting that even though Iran's navy was destroyed by the US with contemptuous ease, again, they can still threaten shipping with land-based systems. (Just as Ukraine can threaten Russia's navy despite not having a navy of its own.) The Strait of Hormuz is so narrow (~21 nautical miles at its narrowest) that even gun artillery could shut down most of it, and rocket and drone artillery could easily threaten the whole thing. And although many ships could likely get through under US convoy protection, it's very hard for the US to promise that everyone can get through perfectly safely, and the war-risk insurance premiums for even that level of threat can be catastrophically high. That's why the Houthis were able to redirect so much traffic around Africa despite multiple navies providing security. Iran was the Houthis' supplier, so its threat to shipping is likely to be even more credible than theirs was. But! As oil prices spike, eventually it will be worth it for some shippers to try to make the run under US protection, and for the US to provide it.
I disagree that the Assembly of Experts' choice of Mojtaba Khamenei makes Iran a monarchy any more than the election of John Quincy Adams made America one.
Pretty silly comparison.
John and John Quincy were both (1) elected democratically and (2) elected on independent records of service: John for his role in the American Revolution and the new United States, while John Quincy, a child during the Revolutionary War, later served as a diplomat and then a politician. One could suspect that John Quincy's entry into public service had elements of nepotism, but his later entry into politics clearly did not (it grew out of a feud with Jefferson).
Mojtaba has been serving as his father Ali Khamenei's deputy chief of staff, with incredibly broad authority, for almost two decades - two thirds of his adult life. He was appointed to follow his father by a closed group of unelected clerics whose membership Khamenei had significant sway over, if not total control of.
Of the three "presidential families" in American history, none of them come close to that level. Not even George W. Bush and his father George H.W. Bush, who were both elected Texas politicians before they were POTUS.
Of all the things America is or threatens to be, an honest-to-goodness monarchy is just not one of them.
Fair enough, even though fewer people voted for him, Quincy Adams DID win more electoral college votes than Jackson after all.
He didn’t, actually. Adams won 84 and Jackson 99. The problem was that there were four candidates who won some electoral votes, but none had a majority — Jackson had the plurality, but it was just over a third of the electoral votes — so it was thrown to the House to choose from among the top three. Clay, who had been eliminated because he had the fewest electoral votes, is alleged to have drummed up support for Adams in exchange for the position of Secretary of State, though we’ll probably never know for sure. Jackson was mightily annoyed, but as always the Constitution says what it says.
Thanks, I knew that. That was the joke.
Now I'm wondering if this state of affairs is benefiting the US relative to China. If this successfully starves China of oil, does the US military have an incentive to keep the war going longer than it needs to?
Yes, but the nuance is it drives up the price of Chinese products America buys. So it's difficult to bleed China without also bleeding America - and not so long before the midterms, either.
If one takes a zero-sum, all-countries-for-themselves view of the world, driving up the price of something the US exports is good for the US.
Markets strongly believe in TACO after getting burned on the Liberation Day tariffs, and are expecting him to declare victory soon. The recent change is the realisation that this might not happen this time (at least not soon).
That was my assumption, yea.
Now might be the time to consider price controls on gasoline coupled with rationing. Everyone will be squeakin’ and squawkin’ about how they can’t possibly reduce gasoline consumption even temporarily, but don’t believe it.
I agree that the Democrats should propose this, and implement it in the states they control.
Price controls are the wrong lever IMHO. Whereas a higher price at the pump is exactly the thing that will drive people to use less gas.
Given our fentanyl-like addiction to gasoline my fear is that if prices skyrocket people will just reduce spending on everything else while continuing to buy as much gasoline as usual. And that, of course, could be disastrous to the economy.
Huh? Whatever, people make decisions on how to spend their money, that's the whole point of capitalism. And the economy adjusts to that spending, more of this, less of that.
Yeah - gas is an inelastic good for a lot of people, but it's also simply not that expensive, and despite what they say, consumers could probably absorb a lot more pain.
Gasoline is inelastic to a degree. But not entirely. I have no doubt that with a bit of planning most drivers could reduce their consumption by 5%.
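For a sense of scale on that "inelastic to a degree" point, here's a back-of-envelope sketch. The elasticity figures are illustrative assumptions (short-run estimates for gasoline are often quoted in roughly the -0.1 to -0.3 range), not numbers from this thread:

```python
# Back-of-envelope: approximate % change in gasoline demand for a given
# % price change, using the constant-elasticity linear approximation
#   %Δquantity ≈ elasticity × %Δprice.
# The elasticity values below are illustrative assumptions, not measured facts.

def demand_change(price_change_pct: float, elasticity: float) -> float:
    """Approximate percent change in quantity demanded."""
    return elasticity * price_change_pct

# A hypothetical 30% price spike under three assumed elasticities:
for elasticity in (-0.1, -0.2, -0.3):
    drop = demand_change(30.0, elasticity)
    print(f"elasticity {elasticity:+.1f}: demand change {drop:+.1f}%")
```

Under those assumptions, even a painful 30% spike only trims demand by a few percent in the short run, which is consistent with the "5% with a bit of planning" guess above.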
But they'll be super pissed at Trump.
Out: prohibit price increases
In: require price increases
No don't 'require' price increases. Just let them happen naturally... supply and demand. See "Basic Economics" by Thomas Sowell.
I'm totally serious. The government should require price increases in shortage situations. Take the heat off the local vendors who just point to the law and shrug. The government could even claim a portion of the price increases and distribute it to the citizens / poor.
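The arithmetic of that levy-and-rebate idea can be sketched in a few lines. All numbers here (baseline price, spike price, levy share, volumes, population) are made-up assumptions for illustration, not a proposal with real figures:

```python
# Hypothetical sketch of the proposal above: let the price rise, have the
# government claim a share of the increase, and rebate it per capita.
# Every constant below is an assumed, illustrative number.

BASELINE = 3.50       # $/gallon before the shortage (assumed)
SPIKE = 5.00          # $/gallon market-clearing price during the shortage (assumed)
LEVY_SHARE = 0.5      # government claims half of the increase (assumed)
GALLONS_SOLD = 1e9    # gallons sold over the period (assumed)
POPULATION = 50e6     # citizens splitting the rebate (assumed)

levy_per_gallon = LEVY_SHARE * (SPIKE - BASELINE)
total_levy = levy_per_gallon * GALLONS_SOLD
rebate_per_person = total_levy / POPULATION

print(f"levy per gallon:   ${levy_per_gallon:.2f}")
print(f"total levy:        ${total_levy:,.0f}")
print(f"rebate per person: ${rebate_per_person:.2f}")
```

The design choice being sketched: the pump price still clears the market (so the rationing-by-price signal survives), while the rebate blunts the distributional hit that the anti-gouging side worries about below.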
Price increases in the most vital products kills the poorest people. Not everything is a choice.
The argument against anti-gouging measures often appeals to "basic economics", or Econ 101. That's nothing to be proud of. More advanced economics exists for a reason.
101ism is a syndrome, or philosophy, where basic principles of economics are treated as exceptionless iron laws and applied to all situations. Noah Smith wrote an article on the subject (although he did not claim to have invented the term). It was prompted by an occasion when an interviewer, not himself a professional economist, appealed to "Econ 101" to disprove a claim made by Smith, a professor of economics.
https://www.igmchicago.org/surveys/prices-of-medical-supplies/
An economist writes:-
"Price fixing is an example of a topic where it’s easy to miss the big picture if you only want to apply reductive thinking. Many arguments assume that the market is perfectly liquid and all participants are operating according to a simple set of rules where they buy and sell according to wherever supply and demand naturally intersect on a clean academic graph.
The problem in the real world is that unique situations create opportunities for large, well-funded players to manipulate the supply curves far beyond anything that could happen in a simple liquid market. Their market manipulation creates its own demand which further increases the price. It’s not hard to find situations where a group of scalpers acting independently can temporarily sequester enough of the supply that prices for individual units are driven far above what free market supply and demand would support.
These second order dynamics are studied academically, but the common arguments in support of price gouging ignore those second order effects and argue about perfectly liquid free markets that don’t exist in these rare scenarios. "
Economics is capable of solving a variety of problems. During business-as-usual times, the problem is to achieve growth -- to thrive. During emergencies, the problem is to stay alive, individually and as a group -- to survive. These are different problems with different solutions.
Pro-gougers are essentially failing to realise this, and applying a one-size-fits-all solution.
"Free market works only if you have a couple of conditions: elastic demand, elastic supply, and goods being somewhat of a commodity."
In this case you only have the third. This is why even most free-market societies have anti-price-gouging laws, laws protecting the availability of resources for critical industries, and, in wartime, rationing.
I just filled up my car; I don't blame the gas station for the ~$0.40 increase. And if the government starts taking some of the money, it will never stop. Mind you, I have long supported an increase in gas taxes. (No one ever talks about that anymore.)
While I agree that there are serious market failures there (we all remember the corona toilet-paper shortages), I do wonder if there is any reason to believe that government involvement should be expected to produce an actual improvement. I'd kinda expect it to morph into something more damaging than these occasional market failures.
I believe the prevailing sentiment in the first few days was that the bombing campaign would successfully eradicate the Iranian leadership, the government would collapse, and the fighting would stop. Logically, this would prevent Iran from successfully threatening tankers in the Strait of Hormuz. This sentiment was strengthened by the death of Khamenei on the first day of the campaign.
There was also a belief that once the government stopped functioning properly, the people of Iran would rise up and replace the dictatorship with a more democratic government.
There have been no signs of any of this happening. It has now become obvious that victory is not so easy and this will be a more drawn out conflict. The leadership of Iran is much more robust than markets previously assumed. This belief was probably strengthened by the shocking video online of burning oil refineries.
Markets have also been experiencing increased volatility recently, and investors are acting panicky. There's been a lot of uncertainty in the market (due mostly to tariffs and AI), and people who don't feel confident about their investments in shaky sectors (like tech right now) are looking for any reason to jump ship into "safer" assets like raw materials (gold, silver, oil). That bandwagoning can cause huge runaway effects.
In my opinion, this oil fiasco reflects a similar mechanism to what we saw with silver and gold earlier in the year. If I had to guess, I would say the current run on oil will end with a sharp sell-off, and retail investors will be left holding the bag. The price of oil will probably remain high until the conflict is resolved, but at some point it will be overvalued. If you can guess when the sell-off comes, you could make some money shorting it. Some say it's already overvalued.
>There was also a belief that once the government stopped functioning properly, the people of Iran would rise up and replace the dictatorship with a more democratic government.
Not that this was very likely to begin with; all remaining chances have surely been undercut heavily by Trump indicating that he prefers the Delcy scenario. If you were an Iranian oppositionist (and didn't get shot in the previous wave of repression), why the hell would you risk life and limb to help Trump replace Ayatollah Khamenei with some other Ayatollah who does the same repression but will also give America full and open access to exploiting Iran's oil? At most one could expect destabilization - and even then the new, barely-changed regime would now have a powerful backer, i.e. Trump, surely intent on keeping the new guy in power so as to have someone to work with.
Money makes the world go round! Money, money, money, money! (this is the response for why risk life and limb... embedded agents)
At most, one could expect what happened in Saudi Arabia with "modernization" -- which in Iran would mean "more legal opposite-sex dancing" and things like that -- and Iran is already notably more liberal than the Saudis in terms of marriage rights for women.
>I believe the prevailing sentiment in the first few days was that the bombing campaign would successfully eradicate the Iranian leadership and the government would collapse and the fighting would stop.
I find it difficult to believe that any sophisticated market participant was dumb enough to believe that would happen. It is inconsistent with everything that political scientists know about such matters.
I agree with you there. The market is often not driven by the beliefs of intelligent, well informed people.
The current administration was probably the main force behind the story that Iran would fall, and as we have seen in the last year, the president's Truth Social posts have a strong impact on the market. Weird market action in January of 2025 can be attributed almost entirely to the president being wishy-washy about tariffs early in his term.
Again, totally agree with you that markets were (and are) behaving irrationally. Famously, the market will behave irrationally longer than you can stay solvent.
To clarify, I didn't mean to imply that I think that the markets were acting irrationally, but rather I was attempting to express skepticism that that is an explanation for what drove the market.
I don't think anyone doubted Iran would be able to stop Hormuz traffic if they chose. (Well, maybe some people in the "war department", but that's a separate issue.) The big question is if they would/how seriously they would. So far, there's still a tiny bit of traffic (~1% of normal levels) and nobody is getting shot at, but the ships doing it are largely ones that have some reason to suspect the Iranians won't shoot at them. (Chinese, for instance.) Nobody is quite sure if Iran's going to go for Tanker War II, but if they do, it could get really bad. (And not just in Hormuz either. They've already hit tankers off Kuwait, and while the Gulf is wider than the Strait, it's not that wide and it's long enough to be a problem.)
Sal has a really good objective breakdown on this, look up "What's Going on with Shipping" on YouTube.
Where do you think I got a lot of this from?
Israel and Iran both started hitting oil infrastructure, which they hadn't been doing initially and generally didn't do historically (they didn't do this in the 12-day war, for instance).
https://www.npr.org/2026/03/06/nx-s1-5736593/middle-east-iran-energy-lng
This. Iran attacking its neighbors' oil production capacity is going to lower supply for months or years.
It would be more likely that the market first expected either a quick wrap-up of the operations by the US/Israel or the Iranians taking the Delcy Rodriguez option, and now that this hasn't happened...
Can you clarify this?