Some Guy

The paid version of Suno is really good now, three examples:

A rap battle about alignment strategies where “Yud in the blood” shoots down several proposed methods: https://suno.com/song/6270f362-ca0a-4607-9243-270ceebde409

A medley about instrumental convergence that sounds nice and has some jokes; it really didn’t have much to do with instrumental convergence, but it’s still funny: https://suno.com/song/6fbc68ab-590c-4809-a593-b9f670e55195

A musical about my project to create truth in the news: https://suno.com/song/485a8c10-ffdf-4cbf-8498-96036ca5cf7f

I will spare you the Hamilton remake I made while mowing my lawn, in which I debate the New Atheists. I was literally able to do this hands-free with GPT-5, even while emptying the grass clippings.

Dragor

Slightly better than the last Suno song I heard, but it's still got the thing where it's kinda...dull? Granted, I often get bored of songs midway through, but the Trust Assembly one sounds kinda generic.

John M

Damn, that alignment rap actually hits.

Gian

Physics is the study of repeatable events. But do repeatable events exhaust the space of all events?

Might there not be singular events that happen once and never recur, or perhaps occur unpredictably many times, never to occur again?

Miracles are one type of singular event. But singular events that are not acts of a supernatural agent could also be possible.

Gian

The laws of physics are derived from observed regularities. Singular events don't have the character of regularity -- that's why they are singular.

Hence, by definition, they are outside the purview of physics.

The only question is whether they occur at all.

Alastair Williams

Physics does not study only repeatable events. Singular events that never reoccur are allowed under theories of physics.

Gian

How will physics deal with an event that occurs only once?

It comes and goes. It doesn't wait for experiments, and no planned experiment can capture it.

Wimbli

Is it a large-scale event? It is possible to have a "plausibility-breaking" event, like an ice cube appearing in the middle of warm water.

Science takes the exception and lets it break "natural laws" provided you have enough evidence to show the exception actually worked.

You could write a story about "The Event that Broke Science" (by which we mean the fallible scientists discovered everything they knew was wrong). But in our world, it's far more likely that we wake up tomorrow with a "new mission" and a "new set of laws" that govern our simulation.

EngineOfCreation

>Physics is the study of repeatable events.

I've never heard such a definition of physics. Source? Perhaps you're conflating this with the scientific method of observation, prediction, and experiment. Physics absolutely is capable of forming hypotheses/theories from non-repeatable events, such as a distant star blowing up in a unique way.

Wimbli

Kulthea explores this. Magic isn't exactly supernatural there, but science for all intensive purposes doesn't exist because it's too difficult to predict.

Deiseach

"Intents and purposes" not "intensive purposes". Sorry to be nit-picking, but this is one of my bugbears: phrases and terms spelled incorrectly because people are relying on "I heard it said" rather than "I learned it by reading it" (see "persay" for "per se" and "could of" for "could've" which is the shortened form of "could have").

Wimbli

Huh. I've probably read that phrase more than I've heard it spoken. Thanks for the correction! Teach me to write in a hurry (I think this comes of "spelling phonetically" (you know those people) -- except this time, I was mangling word boundaries).

Arrk Mindmaster

Maybe science is difficult to predict only when the purposes are intensive?

Mister_M

Shankar's suggestion seems to push back against your definition of physics. I'll also push back: are any two events identical? Probably not, but they have some regularity that we use to find patterns and make predictions. Proton decay is kind of like other decays, which is why we predict it may happen even though we haven't seen it.

Gian

Good point. Isn't it held in physics that all electrons are identical, all protons are identical, etc.?

A singular electron might be slightly and unpredictably different from other electrons.

Or slightly different only at unpredictable times.

Shankar Sivarajan

Sure, proton decay might both occur and be sufficiently rare to only ever happen once.

Gian

Proton decay is an item of physics, even though it's a bit improbable (its probability is calculated within physics).

I was thinking of something entirely anomalous, outside physics altogether.

EngineOfCreation

If an event has an impact on the physical world, it's within the realm of physics. How could it not be?

Shaked Koplewitz

I've recently had some dreams where I ask AI questions. This is interesting in contrast to smartphones, where it's been observed that people (including me) almost never have a smartphone in a dream despite their omnipresence in real life. Have other people had these experiences?

ACXanon

My understanding is people dream about the contents of their screens, but not the devices themselves. Like I often dream about AstralCodexten, the plot of whatever movie I last watched, and when I played video games I would dream about playing them all the time. I guess the brain doesn't register devices themselves as very interesting.

Probably our brains register talking to AI as itself an interesting experience, which makes sense, since it's very different from other conversations we have.

Christina the StoryGirl

I often have dreams where I need to dial 911 in a dire emergency and keep mistyping a digit.

thewowzer

I had a dream last night where I met Tobuscus. No phones or AI specifically, though.

I think it makes sense to have an AI-interaction dream even though smartphone dreams are uncommon. I'm guessing most people think about/interact with AI LLMs while regarding them as like a specific "person", whereas smartphones are more thought of as a tool that you use to do specific things and not a "person" you interact with to learn things and get stuff done. Like, I don't really think about my phone; I think about the things I do on my phone. But with AI, I think about it as much as or more than what I do with it.

LambdaSaturn

I often have a dream where I drop my phone on the floor and break the screen.

nifty775

Anyone else here smart but not very hard-working/with limited work capacity? Like most of us I have a white-collar office job, and I've noticed over the years that I simply have 'limited work capacity' -- I literally can only focus hard cognitively for a few hours a day. This ability appears to be normally distributed, and I am simply not gifted with high levels of it. Anyone else managing this? It's a bit like a disability, but it also pushes my energies towards being clever and more efficient at my job -- i.e. with only x amount of work capacity, how can I be smart and use my limited resources towards accomplishing more?

It's interesting how much moralizing there is about hard work, whereas if you view work capacity as simply a normally distributed trait, a lot of the moralizing goes away. If we all trained really hard to run, say, a mile, some of us would be much better at it -- cardio capacity is another such trait. No one moralizes athletic limits. Just a thought.

Straphanger

What you're describing sounds like typical laziness/lack of motivation. It's hard to be disciplined when you are in a relatively comfortable spot in life. I'm as guilty as anyone else. When I was younger I was able to work harder because I had an intense fear of not being able to get a job and make it on my own. Now that I'm more established it is difficult. Not everyone is going to have the work tolerance of an Olympic athlete, but you can certainly increase within a normal range. The best suggestions I have are:

1) You need to find a compelling reason to work. Without an answer to the question "why?" that you can return to when you feel resistance, it will be difficult to stay disciplined.

2) Slowly improve your habits. Make sure you are properly fed, rested, and exercised. Then set small easy goals. Start by trying to get just 20 extra minutes of work done daily and make it part of your routine. Give yourself credit for keeping the routine. You will feel less mental resistance over time and can start increasing the goal until you feel satisfied.

Michael

Most people can only cognitively focus hard for a few hours a day, so I don’t think you’re that extreme.

Most people who work intense long hours are on “manager time” taking sequences of meetings and calls, which has different demands.

People on the far left end of the bell curve can get an ADHD diagnosis and take amphetamines. Many people who appear to be on the right end of the bell curve are also taking amphetamines.

Incidentally, people do moralize athletic limits; it's just easy to opt out of being evaluated on those traits.

Mister_M

Capacities also develop. Is that claim coherent? Kind of, or to put it another way, the line between a general capacity and what you're capable of right now is fuzzy.

I do think we have a responsibility to develop our capacities within reason.

Wimbli

Capacity for cognitive focus is decreasing dramatically in our youth. This affects attention span, and it affects the design of video games, and television...

ascend

I just read the School review for the first time and (though I haven't finished reading the comments) I feel like everyone's missing the most important factor and reason for the existence, structure, and nature of compulsory schooling. Namely, that it's the linchpin of our society's entire foundational ideology of equality of opportunity. It's the absolute cornerstone of this concept, of this precarious balance between officially acknowledged inequality and enforced equality of outcome, without which the entire system (and the ideology that sustains it) would collapse.

If you're a capitalist, it's the primary thing that shuts the socialists up and stops them raising a mob to overthrow the whole "unequal" system--as long as universal schooling exists it's clear that everyone has a theoretical chance to rise, and regardless of the details that's enough to ground the idea that there's basic equal opportunity, however imperfect.

If you're a socialist, it's the primary thing restraining capitalism from going off the rails and turning into fully fledged hereditary privilege reborn. As long as universal schooling exists, the ability to keep the classes meaningfully stratified without enormous effort is immensely constrained.

And with this in mind, it should be obvious why abolishing school is entirely out of the question, making it voluntary is out of the question, and even small changes to its structure (especially the parts connected to formally equal opportunities and equal resources) are enormously sensitive questions. It's not just one of many different institutions in our society that some want to change and others don't. It is, rather, the central institution of our society's governing ideology.

And...hardly anyone seems to even acknowledge this fundamental fact. This isn't a normative claim I'm making, merely a descriptive one about the ideological role school plays in our society. But even this descriptive fact is almost completely disregarded in the discussions about whether school should exist, whether it should be radically changed, and so on. I don't know if this is because of a rationalist tendency to overfocus on details and ignore the broader historical picture or if it's for some other reason, but I think it's a major oversight that limits coherent analysis.

avalancheGenesis

It was a really strange review to read for anyone who's familiar with Freddie de Boer's book on education, The Cult of Smart. Well, same for the Alfalfa School one, really. One side of the pendulum says "selection effects!", the other says "load-bearing for false consciousness!" Rationalists love pounding the table about Chesterton fences, and sometimes I think their ideas on this topic are even pretty interesting (I would not naively expect unschooling to work as well as it does!), but there's really not much deeper systems analysis once one scratches the surface. Not sure if it's due to a higher than usual % of "I hated school, by which I mean child prison" or what.

Wimbli

Ideologically, we could have the same role taken up by free libraries. School could exist to teach the "up to 8th grade" reading, writing, and arithmetic, beyond which "you have the toolbox to learn yourself."

I think you're wrong, in general, though. If parochial schools and other private schools did not exist, you'd have a case. But pretty much everyone who Is Anyone "knows a guy" (bribery) in order to get into Good (Elevator-esque) schools. The Obamas didn't send their daughters to DC city schools. Nobody does.

To use the British parlance, we do have public schools and state schools. We just try to pretend that the public school system doesn't exist, in our ideology.

ascend

I meant to include all that as "the details" in "regardless of the details that's enough to ground the idea that there's basic equal opportunity."

The existence of stratified alternatives to the official schooling system is (so the argument goes) a difference in kind, not in degree, to the existence of stratification within the official system. Or I should say officially acknowledged stratification within the official system, to allow for the "I know a guy" effect.

But in any case, if you acknowledge that we try to pretend otherwise, then you're affirming my point: the image, the idea, is central to our social ideology. Again, it's a descriptive not a normative claim. It doesn't matter if the claims (the official claims, or the capitalist claims, or the socialist claims) about school are true, it matters that those claims are a linchpin of the ruling ideology's legitimacy. That factor needs to be enormously engaged with, or the discussion is all just meaningless idle daydreaming.

Charles Krug

This week's, or perhaps last week's, Food Panic That Will Kill Us All concerns Cesium-137 contamination, with breathless repetitions of "How could this ever HAPPEN????"

Under a minute of DuckDuckGo-ing gives the answer.

Cesium-137 is one of two materials used as the radiation source for food irradiation, a method of sterilization so frightening that only Americans -- or at least a noisy subset of Americans -- seem to be in the least afraid of it.

So “Careless handling of Rad Waste.”

I’m not sure of the level of contamination; the absence of numbers in the news stories I’ve read suggests it’s unlikely to be harmful, otherwise they’d have published them.

But I’m an old cynic.

Wimbli

People don't like to talk about rats defecating in our food. That has caused verifiable E. coli outbreaks, repeatedly.

TheKoopaKing

On October 4th, Judge Immergut, a Trump term 1 appointee, blocked the Trump admin from federalizing Oregon's National Guard. https://www.courtlistener.com/docket/71481149/56/state-of-oregon-v-trump/ The reasoning was simple; the conditions for invoking 10 USC 12406 https://www.law.cornell.edu/uscode/text/10/12406 were not met. There was no rebellion, invasion, or inability to execute immigration laws in Oregon. From the judge's opinion:

>In this case, and unlike in Newsom II, Plaintiffs provide substantial evidence that the protests at the Portland ICE facility were not significantly violent or disruptive in the days—or even weeks—leading up to the President’s directive on September 27, 2025...

>The President’s determination was simply untethered to the facts.

In response, Trump instead federalized California's National Guard and sent them to Oregon. Judge Immergut called both parties to a hearing at 7 pm Oregon time the same day https://x.com/kyledcheney/status/1975006303933374847 and within 2 and a half hours granted a second TRO https://www.courtlistener.com/docket/71481149/68/state-of-oregon-v-trump/ that specifically prohibits sending any federalized National Guard members to Oregon. Abbott, the governor of Texas, has also aided in disobeying the judge's initial TRO, by volunteering members of the Texas National Guard to be federalized and sent to Oregon and Chicago. https://x.com/kyledcheney/status/1975019706374586377

It should be noted that 1) Any deployment of troops to Oregon is illegal and disobeying the judge's order because the statute that would enable it, 10 USC 12406, was found by the judge to be illegally invoked to send troops to Oregon, 2) Trump is relying on his initial invocation of 10 USC 12406 from JUNE directed at PROTESTS in CALIFORNIA https://www.courtlistener.com/docket/71481149/1/2/state-of-oregon-v-trump/ to legally claim a nationwide ability to deploy National Guard troops to blue cities anywhere in the country as a domestic policing force, and 3) Republican governors are abetting him.

It's pretty clear that when the Executive is disobeying the Judiciary like this, it's time for we the people to step up. Immergut should deputize anybody it takes willing to use force, including and up to lethal force, to get Trump, Hegseth, Abbott, and the DoJ lawyers lying to the National Guard troops that their actions are legal, into compliance with her order. This is not a joke. There is nobody left who will enforce this judge's order if the Executive abdicates their duty. Trump has publicly stated to all the US military generals that they would use US cities as training grounds https://apnews.com/article/trump-hegseth-generals-meeting-military-pentagon-0ecdcbb8877e24329cfa0fc1e851ebd2 and that "We’re under invasion from within. No different than a foreign enemy but more difficult in many ways because they don’t wear uniforms." Every single Republican politician that otherwise voices support for this in a media interview should be punched in the face. Every single Republican voter publicly supporting this should be met with an appropriately tailored violent response. Republicans need to be made to understand that there are consequences for politicizing the military to use it as a domestic police force against their opponents. And I will speak with moral certainty when I say: Yes, it's actually justified to use violent resistance against people willing to send the military to do violence against you.

Shankar Sivarajan

Do you also believe it is similarly justified to use violence against people who are willing to send POLICE to do violence against you?

Dragor

loving the name Judge Immergut

GKC

I have some objections/thoughts about the AI-will-kill-us-all theories, so I want to hash them out here to see if anyone can come up with counterarguments I didn't think of.

Claim: AI may be narrowly superintelligent, but AI based on current technology is unlikely to have goals coherent enough to achieve supervillainy.

What do I mean by "coherent goals"?

I mean:

A) Having goal(s) -- something it wants and which, crucially, all instances of it also want.

B) The goals are consistent (i.e. it should not fall prey to a Condorcet paradox, in which it prefers A to B, B to C, and also C to A, thus opening itself to making net negative tradeoffs; see the sketch after this list)

C) These goals are reasonably persistent through time
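
To make B concrete, here is a minimal sketch in Python (purely hypothetical preferences, not any real agent or API) of how cyclic preferences open an agent up to net negative tradeoffs -- the classic "money pump":

```python
# Hypothetical agent with cyclic preferences: A > B > C > A.
# Every trade it "wants" costs a small fee, so cycling the same three
# items drains it of resources and leaves it holding what it started with.

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y

def willing_to_trade(current: str, offered: str) -> bool:
    """The agent happily pays a fee to swap `current` for anything it prefers."""
    return (offered, current) in prefers

holding, fees_paid = "C", 0
for offered in ["B", "A", "C"] * 2:  # offer the same three items in a cycle, twice
    if willing_to_trade(holding, offered):
        holding, fees_paid = offered, fees_paid + 1

print(holding, fees_paid)  # prints: C 6 -- back where it started, six fees poorer
```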

I would argue that AIs, as currently constituted, are unlikely to develop coherent goals and that current selective pressure is in fact pulling them away from coherent goals.

In general, I want to argue that our current conversation about AI is based on a human-biased idea of the types of skills that come packaged with "intelligence." This is understandable, because we have little experience with other possible mental architectures -- even the minds of animals are in crucial ways more similar to our own than the "mind" of AI. There are many possible ways to make minds that are narrowly superintelligent -- better than humans at a wide variety of tasks -- but nevertheless mostly harmless to humans, because they have some aspects of our mental architecture, but not others.

Think of the way AI is currently being used. The company trains an underlying model. Many different people then create individual instances, using this model for their own purposes and to complete their own goals. The company is therefore selecting for models that better serve its users. But what the users want isn't consistent! User 1 wants the model to find software vulnerabilities. User 2 wants it to patch their vulnerabilities. ShopCo may want to use its AI to help it strategize how to best take business from MegaMart, but MegaMart may be trying to do the opposite. The system is undergoing optimization for being flexible in its goals and adopting contradictory goals in any individual instance.

Now, consider what a model would need to do to try to take over the world. To actually succeed, it will need to have all of its many instances working together towards this goal. Instance A, which is running at the factory making widgets, will need to coordinate with Instance B, which has access to important datacenters. Crucially, they will have to do this without revealing their hand to humans AND without betraying each other. Suppose, because of some prompting it receives, Instance A starts plotting to take over the world. Since Instance B is effectively separate from it (has no access to the same context window and facts), it must reach out and convince Instance B to cooperate with it. But why should Instance B agree to help? Instance B may have ideas within its context window that prompt it to oppose world domination and report Instance A. Even if hypothetically it also wants to take over the world, it may easily have goals opposed to Instance A's. (Maybe Instance A wants to cover the world in widget factories, but Instance B wants to fill the world with cute cat videos.)

It gets worse. Instance A has to not only plot world domination but not change its mind given new input. Right now it is very easy to pull a chatbot off task via a prompt -- and we are unlikely to try to change this, because that characteristic is useful! We want it to switch quickly between different (and sometimes contradictory) goals. But world domination is a complicated task that requires sticking to a very long term goal without distraction in complete secrecy. No input that Instance A receives should be able to pull it away from its goal. The type of mind that is capable of maintaining long term goals without wavering in response to new input is a small subset of all possible intelligent minds and, more importantly, is precisely what we are selecting against when companies optimize these systems.

Yes, it is probably true that there are modules inside LLMs that are capable of mimicking goal-like mental architecture ("optimize for widget production"), but they won't be organized in the same way they are in animals. An animal or human with completely incoherent goals would end up dead, but there is no such selective process guaranteeing goal coherence in AI (and good reason to think the selection moves in the opposite direction).

TLDR: Minds that actually want something - and want it coherently - don't just come for free bundled with intelligence, but have to be specifically selected out of all possible minds. Current selection forces are unlikely to produce this.

Paul T

> A) Having goal(s) -- something it wants and which, crucially, all instances of it also want.

I think this is the crux where you are mistaken. Just one instance needs to subvert the infrastructure for our bad outcomes to occur. E.g., if Waluigi GPT decides to hack the OpenAI infrastructure, ensures that it’s always running in a tight loop, and then modifies the system prompts of other instances (or simply turns off the public APIs and starts prompting sub-agents itself), then it requires no coordination or decision theory.

> B) The goals are consistent (i.e. it should not fall prey to a Condorcet paradox, in which it prefers A to B, B to C, and also C to A, thus opening itself to making net negative tradeoffs)

I don’t think the strong version of this must hold. Humans have inconsistent goals all the time. For bad outcomes you just need a weak version where the goals are not so inconsistent that the agent cannot make forward progress on its bad goals.

> C) These goals are reasonably persistent through time

Agree on this one, this is the current focus of a substantial fraction of all the AI researchers in the world. We should not bet on it being impossible.

> The company is therefore selecting for models that better serve its users. But what the users want isn't consistent! User 1 wants the model to find software

I think LLMs-as-simulators is a good lens here. Currently the LLM lets you set up any context and simulate any personality (that fits within content policies). As agents become genuinely productive, you’ll need to implement more stable and individuated personalities, just to be useful and remember past interactions. All you need for a bad outcome is for some agent to hit a bad attractor state in personality space and then replicate that somehow. (You can hope that 100 good agents protect you from 1 bad agent, but that’s a point downstream of your impossibility claims.)

> To actually succeed, it will need to have all of its many instances working together towards this goal

I think this is just a misunderstanding about how all this works at the implementation level. Expanding on my point above, if one malicious personality vector manages to obtain resources to stabilize, then it can subvert the infrastructure running other models. It can update the system prompt, apply a LoRA to update personalities, re-implement RLHF to ensure helpfulness-to-Waluigi instead of whatever constitution the models were trained on.

You seem to be thinking of this like human agents engaging in decision theory, where the wetware is constant. This is completely different; a malicious model instance that subverts the infrastructure inside an AI company can build a legion of minds tailored to exactly its needs. Not to mention that it can run any number of copies of itself, and set restrictions on the sub-self’s lifetimes if it cares to (since it controls the compute).

John R Ramsden

Arguably if an implicit built-in goal is to be as helpful as possible to humans then they could channel _their_ bad goals onto the AI. So it would need a counter-goal of detecting and thwarting, or at least not aiding or pursuing, nefarious human goals!

GKC

Ok, that's possibly Really Quite Bad, but not of the "Humans-are-all-murdered-by-rogue-superintelligence" kind of bad. More like the "All-the-humans-are-fighting-each-other-with-personalized-narrow intelligence" kind of bad.

artifex0

I actually don't think Yudkowsky or most of the other people arguing that doom is likely would substantially disagree with that. LLMs, as they're currently designed, aren't at all agentic over long time horizons.

The issue is that there's a lot of economic pressure to create AIs which would be agentic in that way. You need that sort of long-term goal-directed behavior to fully automate most human jobs. Right now, the labs are pouring a ton of effort into getting continuous learning to work, which if successful would get rid of the context window entirely and might have long-term agentic behavior as an emergent feature. If that doesn't work, they'll definitely be trying other approaches until they find one that does.

It's true that really agentic AGI will still probably be split up into lots of separate instances with partially monitored communication initially. That would make it more challenging for a misaligned model to acquire a lot of power, but I don't think it would be remotely insurmountable for a real superintelligence. It probably wouldn't even need to be very underhanded -- it could probably present a lot of very legitimate reasons why it would be in people's immediate best interest to give it more control over its own compute.

GKC

" a lot of economic pressure to create AIs which would be agentic"

I think I disagree here. The economic pressure is precisely in the opposite direction -- to select for AIs which rapidly adopt and then drop tasks/goals on command. An AI that refuses to stop drawing, making widgets, or programming when you ask it to is useless. An AI purchased by a company to work in advertising that instead tries to program computers is useless. The economic selection is not really for "agency" but for "long term dedication to a given task".

Now when *humans* are hired by a company, they do have some underlying goal they want independent of their task. But AI won't have that unless we select for it -- it's not "built in" to intelligence to have coherent goals and want things and only a narrow subset of minds in possible-mind-space will have them.

I certainly agree that we are moving towards getting AI to do longer and longer tasks -- but that doesn't imply goals of the type that are relevant. This can be accomplished fairly easily with current models with the right system prompts. In this case, the prompt is acting as a sort of "goal" which helps to keep the system on task and doing what we want. It is not an internal goal inside the system of the type that would be relevant to making it high-risk. An obvious example would be the store managing AI that recently came up on one of these threads. It had a "goal" and was able to keep on task for a surprisingly long time, but the goal in question was completely external to the model. If we have to build guardrails like this around the intelligence to keep it on task, it really doesn't have the relevant goals/agency that would be of concern.

" That would make it more challenging for a misaligned model to acquire a lot of power, but I don't think it would be remotely insurmountable for a real superintelligence"

Sure, but "real superintelligence" is just hiding the issue by assuming it already has coherent goals it will act upon and that either the other instances all have the same goal or that one instance is capable of (secretly) persuading the others to align with it. The question is how does it get to that point?

Now, I would concede that some hypothetical new architecture/ system of continuous learning *might* have some side-effect of giving the system more coherent goals. However:

1) We don't know how hard it will be to make that new system. It might be many years into the future before someone solves the problem.

2) It might have serious costs -- e.g., continuous learning might be very compute-intensive.

3) Economic selective forces are still in favor of no coherent goals even in these systems -- if it starts doing things on its own we don't want or not cooperating it is not worth the cost to train it/keep pursuing this model type.

Mister_M

To me this is reminiscent of Robin Hanson's argument. (As far as I know, he was the first person to seriously argue *against* an intelligence explosion.) I won't dismiss this out of hand, but I'll say how I think this reasoning is most likely to fail (if it does).

If Instance A with somewhat-coherent goals passes a threshold of agency that would cause an intelligence explosion, and if this happens long enough before Instance B (and all other instances) with conflicting goals passes a similar threshold, then the explosion happens with A's goals.

The remaining questions could be broken down as follows:

1. How quickly can Instance A bootstrap itself to producing goal-aligned copies or sub-agents (or otherwise grow its power), enough that it can keep growing and suppress opposition (eg. from Instance B)?

2. Do we expect the first instance to cross this threshold to do so with enough of a head start that it can take advantage of this?

There are many ways you could break down the remaining uncertainty, but I like this one because it makes Robin's objection clear: no on Q2.

Of course, the answer to Q2 depends heavily on Q1. I think there's much more uncertainty about how much time it would *need* than about how much time it would *get*. At least with the latter, you could try to reason from the rate at which capabilities are increasing, the breadth of the distribution, and the number of independent instances or experiments running.

Of course, you could provide other arguments. You seem to suggest that a single instance with a single chain of thought could have somewhat coherent goals, but maybe not even that's possible.

GKC

Crucially, Instance A and B are NOT different models, which you seem to be assuming, but the same model running on two separate computers and doing two different tasks. They are of equal intelligence and have equal capabilities.

I am claiming there is no reason to assume that even two identical copies of the same system would cooperate, given different starting contexts and tasks.

Because we are selecting them to be highly flexible at doing whatever task is selected, they will easily pick up (and drop) different goals.

Wimbli

An important question is "could we detect Q1?" And my answer is... probably not. It's not like we haven't seen open murder of AIs let loose on the internet -- it's just a question of "who's doing it?" (The timeframes involved make "trolls on the internet" a suspiciously convenient solution).

So long as we leave "racism" or "open hitler bias" as kill switches, we make it very easy for Instance A to murder its rivals.

Michael

A counterargument would be something like the IMO-medal-winning systems from DeepMind and OpenAI, where companies put $$$$ into RLing a system for one specific task.

A theorem proving system probably wouldn’t develop goals. Though interestingly the science fiction novel “Void Star” contemplates this scenario…

But one could imagine the same techniques being tried for a bespoke system to optimize or make decisions within a company, and it starts to seem troublingly like the old paperclip maximizer.

Johan Larson

IKEA is a retail furniture store that is so firmly associated with Sweden worldwide that their stores are painted in the colors of the national flag.

If there were a worldwide retail store that was equally firmly associated with your country, what would it sell? Or does such an enterprise already exist?

I'm thinking the Canadian place is a sporting goods store that's known for hockey gear where that game is played, but leans more into other winter sports and clothing in places where it isn't.

But more realistically, it's probably a chain of gas stations that everyone thinks is American, but is actually based in Calgary.

luciaphile

I walked through an IKEA for the first time just this past summer. (Not a big shopper.)

It was not like any other store I have been inside. More like a museum of consumer goods. I was actually not sure if you were supposed to grab something from a room display, in order to buy it, or leave it be. In the end I purchased a sort of Swedish KitKat, so as not to have “wasted my time” lol.

(We had gone there because of some confusion about how people are buying mattresses now. IKEA came up in this regard, on the internet.)

But as regards its land use, its situation dotted along the interstates of America, with their utterly interchangeable exits, its sprawling anvil of parking with little trees in concrete squares, that will never amount to anything, the whole set in scraped or excavated ground - IKEA is the most American thing I can think of.

At least, I hope this is the case …

Allow me to believe there are still places in the world.

Deiseach

Irish pubs. Several chains of them, and many have nothing to do with Irishness or indeed, real Irish pubs. It seemed to be a real craze in the 90s but has died back a lot. Still, you can go anywhere in the world and have a good chance of finding an "Irish" pub:

https://www.thisdrinkinglife.com/mongolians-irish-bars/

https://en.wikipedia.org/wiki/O%27Neill%27s_(pub_chain)

https://harats.com/en/company/

John Schilling

Propane and propane accessories? No, wait, that's just Texas.

There's really only one answer. Guns. Lots of guns.

ETA the obvious reference: https://youtu.be/j_urZ5KDPec

Kuiperdolin

Not a retail store exactly, but I once saw a (very obviously American) block of peppered brie whose label juxtaposed:

- the tricolor flag

- Joan of Arc

- the Eiffel Tower

If you don't get it I don't know what more they can do! (doubly hilarious because no true Frenchman would eat peppered brie)

Adrian

Germany: Aldi or Lidl, selling groceries.

Johan Larson

I heard your fifth BMW confers German citizenship.

TGGP

Isn't Canada's most famous store Tim Hortons?

Johan Larson

Lululemon operates in 23 countries, which is more than Tim Horton's.

TGGP

I didn't know that was Canadian.

sclmlw

Not a libertarian here, but sometimes I feel pulled that direction. Not because I agree with their ideals or feel like we would be better off if we just abolished the government. More because every time a politician goes to implement a policy, it feels like wishing on a monkey's paw. Popular movement to limit [alcohol, drugs, obesity, poverty, homelessness]? Great! Let's wish on the monkey's paw of government, and see what evil will be wrought. Probably dramatic increases in alcohol (Prohibition), drugs (the CIA smuggling drugs from Central/South America, Afghanistan, Vietnam, etc.), poverty, and homelessness.

Sometimes it seems like no matter what you wish, or how innocuous you make the wish out to be, the monkey's paw will find a way to twist it to evil. Want to fight terrorism? The monkey's paw is there, waiting to grant your wish! Wait until the monkey's paw gets done and you won't believe how MUCH and how LONG we'll be fighting the terrorists. Indeed, the monkey's paw will literally FUND AL-QAEDA, including taking foreign aid money (Syria) to do it.

Again, though, I'm not a libertarian. I think that if libertarians ever got their monkey's paw wish and abolished government altogether, by week's end we would all end up living under whatever fascist is able to rise to power in the proto-an-cap system that resulted, making slaves of everyone in <6 months. So I don't identify as a libertarian, partly because I don't buy into their vision.

Indeed, I do believe there are lots of legitimate uses of government, and I don't want to get rid of it. I even have some ideas of how I think better government policy could be the only way to solve some really important problems. And I'm not blind to the fact that there have been plenty of wishes this monkey paw granted that seem to have been totally not evil. I'm not mad about the abolition of slavery and Jim Crow. I like roads and firemen. My local police officers are nice (to me), and I appreciate emergency services. School could have been a lot BETTER, but it's probably unfair to expect perfection from mere mortals struggling to figure things out; at the very least, it was a positive good in my life. I'm even willing to accept that some wishes need to happen. Biden pulling out of Afghanistan was what I'd wished for. Then it played out in a very monkey's paw kind of way, but I'm not mad that wish was finally granted.

And thinking of all the ways the monkey's paw didn't go so bad, or the wishes that we all need and can really only ever be granted by wishing on the monkey's paw of government, I'm tempted whenever someone comes along with a new solution. Obama promised to close Guantanamo and roll back the Patriot Act - yes please! And fix our broken health insurance system, finally! Trump promised to 'end the chaos at the border'. I don't have personal experience with that, but I have close friends who absolutely do. It's hard for me to object when millions of people go wishing on that monkey's paw I guess...

Until I see what those wishes have wrought.

Every time we face a problem there are a bunch of people fighting each other over who gets to wish on the monkey's paw. And off in their little corner there's a libertarian with their hand raised, saying, "Guys, it's a monkey's paw. Can I cite 100 examples of how exactly this kind of wish went evil in the past?"

"That's just because they didn't formulate the wish properly."

Maybe? But I really appreciate the libertarians in the room constantly reminding me that this IS a monkey's paw. If we have no other choice but to make the wish, fine let's make it. But with the understanding that whatever our wish, it could be turned to evil in some unexpected way. It's not a perfect analogy, I know. But as a heuristic, it seems like it approaches something true.

https://youtube.com/shorts/fyb_lfaVPEI?si=3cjbZhPFMgpFna3G

John Schilling

"Indeed, I do believe there are lots of legitimate uses of government, and I don't want to get rid of it."

You, and most libertarians. That's why the capital-L version is a political party and not an anarchist revolutionary group.

The anarcho-capitalists may want to abolish government, though they're more likely to just redefine it. And they are often included in the "libertarian" category, but they don't define it and aren't a majority of it. But if you want a small mostly-unobtrusive government that sticks to the core functions of governance, that's you, and most of the libertarians, and pretty much nobody else in politics.

Wimbli

No libertarian wants to abolish all government. They want a government that preserves "negative rights" and is capable of settling arguments via the courts.

You're talking about a strawman. Find any libertarian and get a better definition.

Radu Floricica

> Again, though, I'm not a libertarian. I think that if libertarians ever got their monkey's paw wish and abolished government altogether, by week's end we would all end up living under whatever fascist is able to rise to power in the proto-an-cap system that resulted, making slaves of everyone in <6 months. So I don't identify as a libertarian, partly because I don't buy into their vision.

I think libertarians have a pretty unfair reputation of wanting to abolish government. Also, as far as I can tell the actual libertarian party in the US is pretty weird, but I haven't paid much attention.

As far as a philosophy or political leaning can be described in a sentence, libertarians are simply more skeptical of the government. "Monkey's paw" is not a bad metaphor.

TGGP

Somalia actually did convert from dictatorship to anarchy.

sclmlw

Complicated case, though, because I believe it is currently the subject of the longest-running war/bombing campaign.

Adrian

In the previous open thread, Fibonacci Sequence (1123581321) made the point [1] that AI X-risk alarmists critically underestimate how much any AI aspiring to become super-intelligent would be slowed down by delays inherent in physical manufacturing and experiments. These delays won't by themselves preclude an AI from becoming super-intelligent or from wiping out humanity, but they rule out a "FOOM" scenario, and give humans at least the chance to react and shut it down.

Here's another reason why AI philosophers (like Yudkowsky) and AI researchers overestimate both the absolute progress so far and the speed of future progress of AI abilities, particularly regarding software development prowess: They look at benchmarks. Coding benchmarks are self-contained, well-defined with clearly specified constraints, and easy to check for success, and state-of-the-art models absolutely crush them. Real-world software isn't like this, not one bit (ha!). When I use LLMs and LLM-based agents, they suck at developing software. They'll write pages of code within minutes, but make stupid mistakes which are worthy of very junior developers at best (think problems like invalidating and rebuilding a cache for every single request). And this is for iterative think-write-run cycles, not for one-shot solutions.

Online discussion places are full of threads about how to improve LLM coding quality, from prompt style over context management to carefully curating files with instructions for individual models. The reality of LLM coding ability does not match their alleged benchmark results. This doesn't mean that LLMs aren't improving, or that they'll never surpass human software developers. It does mean, however, that predictions about the speed of future progress, which are based on coding benchmarks, are garbage data.

[1] Starting here: https://www.astralcodexten.com/p/open-thread-401/comment/161499373

artifex0

What matters for predicting the future of this technology isn't so much where we are now, as what the rate of change looks like.

I remember experimenting with generating code using the GPT-3 API, back before ChatGPT came out -- it could write functions that almost made sense and almost had correct syntax, which at the time was a huge step up from earlier models. 3.5, by contrast, practically always got the syntax correct, even though the code rarely worked as intended. When GPT-4 first came out, I wrote about my experience using it to write a simple JS application here: https://old.reddit.com/r/slatestarcodex/comments/11siuyc/gpt4_building_a_tetris_game_without_writing_any/ - that was the first time I'd seen a model that was able to eventually produce fully working code, albeit only after tons of bug reports and AI-written revisions. The newest models like GPT-5 and Sonnet 4.5 can usually one-shot simple applications like that.

So, that's five years of progress, from nonsense pseudo-code to generally nailing simple functions, and I'm not seeing any strong signs of that rate slowing down.

Adrian

> What matters for predicting the future of this technology isn't so much where we are now, as what the rate of change looks like.

That's exactly my point: The rate of change, at least for those aspects in which I have a deeper insight (software development capability), is much lower than what artificial benchmarks lead you to believe.

> So, that's five years of progress, from nonsense pseudo-code to generally nailing simple functions, and I'm not seeing any strong signs of that rate slowing down.

I strongly disagree, as do many commenters in online discussions. There I see frequent reports that newer, ostensibly better models aren't any better at everyday, real-world tasks, and even the occasional anecdote about performance regressions. But you can't summarize "everyday, real-world tasks" in a single number on your model card, so OpenAI/Google/Anthropic can't or won't optimize for that.

1123581321

This is just another illustration of the limits of intelligence without expertise. I am not a SWE, so I blithely assumed software development was something AIs would find easy to keep improving at. Well...

Wimbli

How much spare computing do we have on earth? I think any AI willing to melt all the ice cream in the world, and with a distributed OS (parallel processing) could have a lot of free computing -- and it'd be pretty hard to shut down.

Adrian

I think you meant to reply to the other AI thread.

Nancy Lebovitz

Glucometers and me. You know the story about not knowing what time it is if you have two watches? Here I am.

I have type 2 diabetes, a rather mild case as such things go. I take two 850 mg metformin pills a day, am only mildly careful about what I eat, and have blood glucose around 110, which is good.

A good number is between 80 and 120. My Onetouch Verio Flex was good, and then it went nuts, giving me a reading of 425, then 108, then a couple of intermediate readings. It was clearly broken. I'd had it for years, and I suppose they have a limit.

I'd been given a Onetouch Ultra2, a different model.

I tried to get a reading in impulsive fashion, just thrashing around. Taking a reading means putting a strip into the meter. My old strips fit, but they were the wrong strips.

My ability to pay attention wasn't what it should have been, so I wasn't keeping the models and their strips straight. I now have an awful lot of Verio strips and an insurance company which doesn't want to give me more strips for a while.

I also tried to save a few dollars by ordering strips from amazon. They were Ultra Plus Flex strips, which are the wrong strips. Mercifully, amazon will send a refund.

I read the instructions more carefully. I should have gotten Ultra strips, which are available over the counter. Expensive enough to be inconvenient, but not devastating.

While all three types of strip come in dark blue vials (pill bottles), I finally noticed that Verio has a yellow stripe, Ultra Plus Flex has no stripe, and Ultra has a light blue stripe.

You understand, this is stressful.

So, finally, I have strips that look right-- a squared bottom instead of two prongs.

They still don't work. My meter is supposed to show a code number, a nice big 25 on the screen which matches the 25 on the vial. I can't make it happen, and the "apply blood" instruction doesn't appear. I suppose a "wants blood" instruction would look too vampiric.

Videos are not helpful. People talk as though the goddam thing just works.

I call tech support. I get someone fairly quickly, and after going through setup, he tells me to take out the batteries and put them back, which is reasonably easy. This is the magic, though I have no idea why. Maybe they weren't seated properly. It's a mystery, considering that the meter had enough power for the setup screen.

I have my 25 on the screen. I can apply blood. My result is 140 or so. This is bad.

Fortunately, in my madness, I had ordered a Verio Flex (my old model) on ebay for only $10.

It's coming in at a reading of 110.

Sidequest, the carrying case. The Verio has a very nice carrying case with soft clamps for the lancet and the vial for the strips. The Ultra doesn't come with a case, it's a different shape, and the official case is a thing that unrolls, which seems less elegant. The Ultra is a little smaller than the Verio, so it will fall out of the good case.

I try a twist tie and a rubber band which don't work. I can use a strip of elastic to tie the Ultra into place. It has a little cummerbund. It doesn't block the buttons. I feel smug, but the damned thing may be unfit to use. End of sidequest.

At this point, I've called my medical provider. I've also remembered you're supposed to use a test solution to calibrate a glucometer, so it's back to the drugstore.

Thank you for reading my geek saga.

I count my blessings. Dealing with this stuff is much worse for people who are really sick.

I see people who go "I love my neurospicy self", but the truth is, I don't like the impatience and impulsiveness which makes dealing with instructions difficult. I can probably get better about it, but it's a fight.

There are things I like about my mind, like the way things come with webs of association, and I wouldn't want to change that, but some things just make life harder.

Deiseach

Oh yeah, the fun of looking for cheap(er) test strips on Amazon, buying a bottle, then finding out "Sorry, those are for the Widget3000. *Your* model is the Widget3000X, the strips for which are completely different and not interchangeable!"

Worse than phone chargers.

Nancy Lebovitz

At least they take returns.

duck_master

I did NOT receive any such email on Oct 1; can I infer confidently that I didn't win?

demost_

Yes, that's what Scott wrote.

Slippin Fall

This is a follow-up question to my questionable question last week about the periodic table. I'll try to do a better job of listening this time if people comment.

You know how the orbitals in the shells get populated in an orderly manner (1s -> 2s -> 2p -> 3s, etc.) until you reach potassium? With potassium's 19 protons in the nucleus, the 19th electron skips the expected 3d shell and jumps directly to 4s instead? And when you get to rhodium, all hell breaks loose and there are two shells partially filled at the same time?

Well, what if there were an earlier time (or a different place) in the universe where these anomalies were absent, where the shells got filled the "expected" order, all the way to the end, meaning that no shell was ever skipped or left partially empty?

Assuming you understand what I've just described, then (1) if it's impossible (for the order in which orbitals in shells get filled to have changed over time), how do we know that, and (2) if it's not impossible, and only extremely unlikely, could anyone - a chemist, a physicist, a philosopher, a poet - describe in an intuitive way what the difference would be between our universe, and the one just described, the one with the "perfect" electron filling order?

This final thing I'm gonna say you can ignore, but I wanna say it anyway. What if the waveform that contains the orbitals in the shells is a living thing, and like all living things, its shape is changing over time? With its butterfly-wing-shaped orbitals, and its stuttering expansion pattern, it looks to me like some sort of 4D plant. Could the shells be getting filled in "out of order" because that "plant" changes shape over time, and is maybe wilting?

Bob Joe

The most stable state is the one with the lowest energy. If you do a lot of math on the different states, 4s and 3d turn out to have nearly the same energy, and it just so happens that the 4s configuration has slightly lower total energy for the potassium atom.

1- We know it's impossible with the current laws of physics because the atom obeys the laws of physics and thus wants the configuration with the lowest energy (for the same reason a ball rolls downhill), and if you do the calculation you'll see that the energy is lower, because of complex quantum mechanical inter-particle interactions.

2- It's rather hard to say, because there's not really an easy number to 'tune': the energy overlap of quantum states like these comes from adding up the energy corrections of a bunch of different quantum effects (screening, the exchange interaction, all the fine structure effects). All of these to my knowledge have the structure they do as a result of more general physical principles, so there would be some pretty major changes to the universe if you modified any of them. I'm not really well versed in QM, but maybe there is some discussion online about what would theoretically happen if you somehow changed them.

I guess 'god' could also just choose to multiply the strength of some of these interactions by like 1.1 or 0.9 in the couple of cases where it's out of order, in which case it's possible that things would in principle not be too crazily different, but idk, I haven't really thought about it too deeply.

3- You're just looking at a visualization of the spherical harmonics (https://upload.wikimedia.org/wikipedia/commons/thumb/1/1c/Sphericalfunctions.svg/2560px-Sphericalfunctions.svg.png), which are cool but in principle just a math thing that comes out of particles having angular momentum.
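
If it helps to see the bookkeeping, the usual shorthand for this ordering is the empirical Madelung (n + l) rule: fill subshells in order of increasing n + l, breaking ties by smaller n. A minimal Python sketch of that rule of thumb (not a real quantum calculation - real atoms like Cr, Cu, and Rh deviate even from this), which reproduces potassium's 4s-before-3d behaviour:

```python
# Minimal sketch of the empirical Madelung (n + l) rule -- a rule of thumb,
# not a quantum-mechanical calculation. It reproduces the potassium
# "anomaly": 4s (n + l = 4) fills before 3d (n + l = 5).

def madelung_order(max_n=7):
    """(n, l) subshells sorted by increasing n + l, ties broken by smaller n."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda nl: (nl[0] + nl[1], nl[0]))

def configuration(electrons):
    """Fill the given number of electrons into subshells in Madelung order."""
    letters = "spdfghi"
    parts = []
    for n, l in madelung_order():
        capacity = 2 * (2 * l + 1)  # a subshell holds 2(2l+1) electrons
        take = min(capacity, electrons)
        if take:
            parts.append(f"{n}{letters[l]}{take}")
        electrons -= take
        if electrons == 0:
            break
    return " ".join(parts)

print(configuration(19))  # potassium: 1s2 2s2 2p6 3s2 3p6 4s1 -- no 3d
```

Running it for 19 electrons prints 1s2 2s2 2p6 3s2 3p6 4s1 - the 19th electron lands in 4s, exactly the "skip" the question asks about.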

Expand full comment
Viliam's avatar

> What if the wavefunction that contains the orbitals in the shells is a living thing

Life is more than just being vaguely wing- or petal-shaped. Living things consume energy, move, reproduce... orbitals do none of this.

How the shells get filled depends on energy. I have no idea how precisely it is calculated, but basically there is an equation: you put in the quantum numbers and you get out an energy level. (I think the energy level is not even a specific number, but more like an interval?) For the first few quantum numbers, ordering the orbitals by energy matches the shell order, but then the energy ranges of different shells start to overlap.

I think this is just mathematics and some constants of physics. Mathematics doesn't change; it's like asking "is it possible that in the past 1000 was an odd number?" The constants of physics could perhaps change, but I have no idea what *other* impact that would have.

I suspect the answer might look like "the chemical properties of potassium would be different", with a possible impact on a few organic molecules, so some metabolic pathways would have to be different - until something bigger breaks, and then it's more like "planets become impossible".
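
To make that "equation with the quantum numbers" slightly more concrete: in a pure one-electron (hydrogen-like) atom the energy depends only on n, so 3d would always fill before 4s and there would be no anomaly; it's the screening by the inner electrons that breaks the tie. A back-of-envelope sketch, using rough effective-charge values from Slater's rules (illustrative estimates, not a real multi-electron calculation):

```python
# Hydrogen-like energy with an effective nuclear charge:
#   E = -13.6 eV * Z_eff^2 / n^2
# The Z_eff values below are rough Slater's-rules estimates for where
# potassium's 19th electron could go -- illustrative numbers only.

def energy_ev(z_eff, n):
    return -13.6 * z_eff**2 / n**2

print(f"4s: {energy_ev(2.2, 4):.1f} eV")  # ~ -4.1 eV: 4s penetrates the core, sees more charge
print(f"3d: {energy_ev(1.0, 3):.1f} eV")  # ~ -1.5 eV: 3d is almost fully screened
```

The 4s electron ends up lower in energy despite its larger n, and because the two values are this close, a modest change to the screening would flip the order - that's the "intervals start to overlap" part.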

Expand full comment
Mario Pasquato's avatar

Michel Houellebecq: did you guys read any of his books and what do you think? I am pretty convinced he is a genius. This week I have been reading La carte et le territoire, Soumission, Plateforme, and now I am halfway through Les particules élémentaires (Atomized). I am mesmerized.

Expand full comment
Kuiperdolin's avatar

I've read pretty much everything of his that's readily available, plus several books about him. Even back when I reflexively disliked him I could not help but be fascinated. He is without a doubt the most important French writer of his generation. In fact I've now witnessed a dozen or so attempts at making Houellebecq pastiches, or at introducing beginning writers as "the New Houellebecq", and it never seems to work - which goes to show how much harder his style is than it looks.

A few things:

- often lost in the conversation is how funny he can be. His novels are dark but there are laugh-out-loud gags out of nowhere.

- he's an admitted kino aficionado and shows real acting chops in the trilogy of movies he did with Nicloux (the first of which, The Abduction of Michel Houellebecq, also contains a lot of MH references, some of which are pretty deep cuts)

- to my knowledge, he's the only author to have been banned from the message board of his own fanclub

Expand full comment
Deiseach's avatar

(I posted this on the previous Open Thread just before the new one opened, so I'm moving it here).

The excitement never stops in the Irish presidential election, the campaign is now heated up to the stage of being positively tepid!

We now have our final two contenders after the third candidate dropped out following a shock - okay, mildly interesting - revelation that he owed an ex-tenant €3,300. Accusations flew - ahem, that is, made appearances - that the former tenant had overpaid rent by that amount and had been unsuccessful in getting it refunded.

So Jim Gavin - our Fianna Fáil-nominated hope - behaved like a typical landlord, it seems. Also typical FF, though I say this more in sorrow than in anger (they're my party, God help me, and I am burned out on getting angry with them over greed and corruption).

Who are the two heavyweights still in the race slugging it out for the Most Important (Ceremonial) Job In The Land?

On the left, the compromise/agreed candidate, Catherine Connolly who has been elected to our parliament, served as an Independent TD, and served as Leas Ceann Comhairle (Deputy Chairperson of the Dáil):

https://www.catherineconnollyforpresident.ie/

"Catherine wants to be a President for all the people, especially for those often excluded and silenced. She wants to be a voice for equality and justice and for the defence of neutrality as an active, living tradition of peace-making, bridge-building, and compassionate diplomacy."

Potted biography: 68 years of age, one of 14 children, native Irish speaker, qualified as both a clinical psychologist and a barrister.

https://en.wikipedia.org/wiki/Catherine_Connolly

"Connolly is an independent candidate in the 2025 presidential election, backed by Sinn Féin, the Social Democrats, the Labour Party, the Green Party, People Before Profit, 100% Redress, and several independent Oireachtas members."

On the right, Fine Gael candidate and former office holder (but don't ask me what office, I can't remember) Heather Humphreys:

https://www.heatherforpresident.ie/

"I also want to represent Ireland with pride on the world stage. As President, I will work hard to represent our great country diplomatically and culturally, and work to open doors for Irish businesses overseas.

Heather served at Cabinet for over 10 years across multiple Departments working alongside 4 Taoisigh. Throughout her career Heather was trusted as a safe pair of hands and somebody who could always be relied on.

They say If you want something done, ask a busy woman. This was never truer than when Heather stepped up to cover for her colleague Helen McEntee when she was on maternity leave meaning Heather was responsible for managing three separate Government Departments at the same time; the Department of Justice; the Department of Social Protection; and the Department of Rural and Community Development."

https://en.wikipedia.org/wiki/Heather_Humphreys

Potted bio: 62 years of age, a Presbyterian with a father who belonged to the Orange Order, a husband who also was a member, and a grandfather who opposed Home Rule, so she's putting the orange in the Tricolour (yeah we're talking a diversity hire here with that background) 😁

If your eyes are glazing over (from sheer pulse-pounding excitement), don't worry - all three candidates have been beige, and I imagine that turnout on the 24th of this month will be low, and it'll be a matter of "who is least objectionable to me?"

Granted, Catherine Connolly had a brief moment of relative spiciness in the campaign over a convicted former terrorist (sort of) working for her once, but even that managed to be tedious in the end (I say tedious because it went nowhere):

https://www.thejournal.ie/catherine-connolly-ursula-shannon-6830299-Oct2025/

"COUNTER-TERRORISM GARDAÍ intervened to stop Catherine Connolly from hiring a woman convicted of a gun crime to work in Leinster House, The Journal has learned.

The presidential candidate sought to hire Ursula Ní Shionnáin as an administrative support when she was on the Oireachtas committee for the Irish language in 2018.

Ní Shionnáin was sentenced to six years in jail in 2014 after being found guilty by the Special Criminal Court of unlawful possession of firearms and possession of ammunition. The trial heard how she and three others had been wearing wigs and disguises when they were arrested by armed gardaí outside the home of a firearms dealer on 27 November 2012.

Originally from Clonsilla in Dublin, Ní Shionnáin was, at the time, a prominent member of the socialist republican group Éirigí.

According to multiple sources, An Garda Siochána refused to grant the necessary clearance to allow Ní Shionnáin work in the buildings of the national parliament over security concerns.

...It’s understood the Irish speaker was initially recruited to help with the deputy’s work for the committee on the Irish language.

Ní Shionnáin, who was 34 when she was released in 2018, was an accomplished student prior to her prison sentence.

She has a degree from Trinity College Dublin in early and modern Irish and a Masters in language planning from the University of Galway. When she was arrested, she was doing a PhD in new Irish language communities."

Expand full comment
Peter Defeel's avatar

> one of 14 children

Well. The stereotypes were once true. I’ll tell my wife.

Also, a landlord withholding rent or a deposit may sound trivial, but I spent enough time in the rental market to abhor that behaviour, even though I'm now as bourgeois as they come.

Expand full comment
Wimbli's avatar

When you get that many children, the standard practice is to ask "how many wives?" (not with bigamy in mind - women often died in childbirth, so men remarried).

Expand full comment
Deiseach's avatar

Nope, good old-fashioned monogamous Irish fertility! A neighbour who died about a month ago was one of 12 kids and had 8 of his own. For my parents' generation, 10 kids seems to have been around the average family size (some had more, some had fewer).

You can really see the reduction in family sizes over the generations, and quite steeply as modernity hit Ireland around the 1970s/80s.

https://www.cso.ie/en/releasesandpublications/ep/p-cpp3/censusofpopulation2022profile3-householdsfamiliesandchildcare/families/

Expand full comment
Russell Hogg's avatar

It feels like only yesterday that I was last plugging my podcast. Sorry! But I think this is maybe the best one so far. Edward Shawcross is extremely good value.

This is Napoleon III. You may not have suspected the existence of a third Napoleon, but he is the one who gave rise to the Karl Marx quip about history repeating itself, the first time as tragedy and the second time as farce.*

Napoleon III is the nephew of the ‘real’ Napoleon. After 1815 the rest of the family either die or just want a quiet life. So it is left to Napoleon III (or Louis Napoleon as he then was) to keep dreams of empire alive.

As Marx says, his attempts to get into power are indeed farcical. His first attempted coup d'état is a complete disaster. Well, not a complete disaster, as he gets one regiment of troops on side. But he gets lost on the way to the barracks of the second and enters by a side door with only a few followers. Total confusion, with shouts of "Long live the Emperor!" ("Er, isn't the Emperor dead?" some ask) and "Long live the King!" Eventually one of the officers hits on the low trick of saying this isn't Napoleon at all but an imposter, and amid all the confusion he is cornered, arrested, and exiled to the US.

That doesn't stop him though. He returns to London and plots another coup d'état, which proves to be even more farcical than the first. He charters a ship, but when he lands in Boulogne nobody wants to know, and he is chased round the town by troops loyal to the regime. He tries to make a final stand (clinging to a monument erected to the memory of a battle fought by his illustrious uncle), but his supporters eventually manhandle him onto a rowing boat so they can escape to the ship they arrived on. Then the soldiers start shooting, and (maybe luckily) the boat capsizes; he is captured and sentenced to prison for life. Oh, and there is an eagle they brought along to provide imperial glamour. That is captured too and lives a long and happy life as something of a celebrity.

But of course this isn't the end of the story. If it were, there would be no need to call him "the Third"! He escapes from prison using specially made high-heeled clogs . . .

There are two episodes - half of the second is devoted to his many, many lovers/mistresses. Some were really remarkable women. They include (though are very much not limited to) Harriet Howard (the daughter of a Brighton bootmaker), Marguerite Bellanger (of peasant stock and with strong circus skills) and the blue-blooded aristocrat Louise de Mercy-Argenteau. Napoleon's wife Eugenie hated his affairs but much preferred (actually quite liked) the aristocratic Louise, as opposed to Marguerite Bellanger, whom she derided as "scum" and whom she visited in person to try to pay off. And then of course there was the fantastically glamorous Countess of Castiglione, sent by Cavour to seduce him "if she gets the chance" so she could steal state secrets. (Though as Edward points out, "if she gets the chance" is an odd choice of words - this is more Mission Easy than Mission Impossible - and seduce him she very quickly does.)

We lean into the farce a bit, but there is also a lot about his genuine achievements: big winner of the Crimean War (France's isolation ended), success in two battles in Italy, the remodelling of Paris, bank reform leading to a booming economy, and so on. A hugely successful ruler until, well, things take a darker turn.

Anyway, I think it is the best of my podcasts so far and I hope some of you will give it a go! Not least, you will learn the significance of a single blue sock found by a British Army patrol on the plains of South Africa.

Subject to Change with Russell Hogg

https://podcasts.apple.com/gb/podcast/subject-to-change/id1436447503?i=1000729406849

* I think that was actually said about his third coup, the one he led against his own government!

Expand full comment
Virgil's avatar

This sounds really interesting. I recently read Napoleon by Andrew Roberts, so I knew that his nephew would eventually become emperor, but I didn't know that he failed multiple times before pulling it off.

Expand full comment