631 Comments
Tyler Bernard

So I've been grinding on MathAcademy for the last couple of months on the recommendation of some generous soul on an ACX Open Thread, and I'd really like to do something analogous for physics / CS. Has anyone had dramatic success autodidacting their way through either of those? If so, what resources did you use, what method, etc.?

SP
Dec 27 (edited)

Who will win the H-1B debate on the Right? President Musk and his tech buddies, or the immigration restrictionists? Tech has infinite money; bribing Trump, key administration officials, and Congressmen to support them will literally be pocket change to them.

Player1

Restriction: it's what's actually popular in the MAGA coalition, and DOGE doesn't have any formal power in the administration.

Abe

The first Trump administration used DHS denials to reduce the number of H-1B visas granted. Potentially, pressure from Musk could prevent them from doing that again, but honestly I doubt it. Trump's DHS Secretary nominee is Kristi Noem, and Stephen Miller is his homeland security advisor. Broader changes to the program, like increasing the number of visas, removing country caps, etc., require legislation as far as I know. This is almost certainly DOA.

I should say that I hope Musk and co. win this fight, but I'm pretty confident they won't.

beleester

The economic pressures are in favor of immigration, so that's where I'd put my bet.

Best case scenario IMO would be if H1Bs get some tweaks that don't affect overall levels of skilled immigration, and then Musk decides that Twitter isn't fun now that it's full of haters and goes back to managing SpaceX.

SP

A week ago that might have been the case. But I think it's personal for Musk now, so he's gonna go all in to dramatically increase H-1B numbers, just to spite the restrictionists. Tech bros have massive egos, and Vivek Ramaswamy's posts all reek of massive insecurities. And Trump just came out in support of Musk as well.

The only hope for restrictionists is that the GOP has a razor-thin majority in the House. The GOP failed to repeal Obamacare and had to work hard to pass tax-cut legislation when its majority was 241R-194D in 2017. I'm not sure how much Republicans can change immigration through legislation when their majority is only 220R-215D. On the other hand, Musk can threaten to primary every Republican who votes against his policies, and many (most?) Democrats will be in favor of increasing legal immigration.

Lap Gong Leong

Why would Trump betray his base? The Tech Right isn't even all that anti-woke; AFAIK, the only disagreement they have with Woke is about its application, not its principles. And if you're a person who subscribes to HBD, why would you want a more Asian elite?

Vermillion

Do you think they could win over, like, 4 Democrats to sign on (for concessions, no doubt), or is that not a realistic option for getting _some_ kind of legislation passed?

justfor thispost

I don't know; keeping Musk as busy as possible with Twitter, so he only has enough time to poke his head in every now and again and get "Yes Boss, No Boss, Of Course Boss" from Shotwell, is probably for the best.

anomie

It really depends on which groups are most willing to win. Going through all this effort just to return to the status quo would be... utterly idiotic. My guess? Musk's going to learn the same lesson that Brian Thompson did: that no amount of money changes the fact that you're mortal. It would be trivial to make it look like a copycat murder.

Deiseach

Happy Christmastide to all, and having somewhat recovered from the Big Day, I want to share a humorous video with you.

I stumbled across this YouTube channel of a guy interested in history, but here he is reacting to American versions of pasta (NOT spaghetti!) Bolognese.

Yes, it's a reaction video so he's playing up performative outrage. Yes, it's comedic exaggeration. But it's funny and I hope you enjoy it (and even learn something about Italian cooking).

And I have to agree with him on this: putting milk into a meat dish? Doesn't Scripture say that's a sin?

https://www.youtube.com/watch?v=xEjbt9k2tqs

See you all again in the New Year!

luciaphile

Color me confused. American cooks intent on venturing upon a long-simmered Bolognese sauce are told that adding milk, counter to intuition perhaps, is the authentic Italian way. Why would he blame Americans for this? Is there new culinary evidence in the matter? (Because milk in Bolognese = Italian is a commonplace on the internet to anyone with any familiarity with cooking websites like Serious Eats, ATK, etc.)

Deiseach

I think this may be his own personal view of what authentic Bolognese is like; it does seem like milk is used by some, but I note that all the cooking websites are in English, hence probably American.

Falling back on Wikipedia (God save the mark), it says:

https://en.wikipedia.org/wiki/Bolognese_sauce

"Since Artusi recorded and subsequently published his recipe for maccheroni alla bolognese, what is now ragù alla bolognese has evolved with the cuisine of the region. Most notable is the preferred choice of pasta, which today is widely recognized as fresh tagliatelle. Another reflection of the evolution of the cuisine since its inception, is the addition of tomato, either as a puree or as a concentrated paste, to the common mix of ingredients. Similarly, both wine and milk appear today in the list of ingredients in many of the contemporary recipes, and beef has mostly displaced veal as the dominant meat."

I like this bit about how the Italians also produced a recipe for Americans (the rest of the world can eat the Italian version but we have to cater to the Yanks, is it?) 😁

"In 1982, the Italian Academy of Cuisine (Accademia Italiana della Cucina), an organization dedicated to preserving the culinary heritage of Italy, recorded and deposited a recipe for "classic Bolognese ragù" with the Camera di Commercio di Bologna ('Bologna Chamber of Commerce'). A version of the academy's recipe for American kitchens was also published. The academy's recipe confines the ingredients to beef cut from the plate section (cartella di manzo), fresh unsmoked pancetta (pancetta tesa), onions, carrot, celery, passata di pomodoro (or tomato purée), meat broth, dry white wine, milk, salt, and pepper."

So milk is a more recent (depending on how you define recent) addition, and I imagine more a Northern Italian (which is where Bologna is) type of ingredient. It does say "many", not "all", so again I imagine some recipes include it, some don't, and depending how much of an argument you want to get into over "what is the classic recipe" it's one of those "does pineapple belong on pizza" arguments. As you can see, even the Italian recipe with milk doesn't mention cheese at all, or even garlic (unlike the American versions on Epicurious). My own view, not that I cook Italian food at all, is that I wouldn't include milk because the imaginary taste to my palate with the other ingredients says "no" (cheese on top afterwards is a different matter).

Mostly I think (1) it's a joke video and (2) Americans do tend to alter Italian recipes by adding in cream, milk, etc. (see the difference between Italian and American recipes for Alfredo sauce). To quote Max from "Tasting History":

"Today when you order fettuccine Alfredo at a place like the Olive Garden you can expect a bowl of pasta drenched in a creamy and garlicky sauce, and delicious it is

but Italian it ain't."

https://www.youtube.com/watch?v=BivfxrSpy54

Melvin

Come on you know perfectly well that scripture also says "oh all those random Old Testament rules don't count any more" later on.

Besides, it only bans the incredibly specific act of boiling a calf in its mother's milk.

Deiseach

No, I think that putting milk into a beef and wine based sauce counts as a sin.

Americans do seem to love improving traditional recipes by adding in a ton of spicy flavours plus cream to make it richer. I agree with Metatron there - it's not traditional then, so just call it your own version or an adapted version, not "this is classic Bolognese, I make it with spaghetti, tofu, add in chili peppers, heavy cream, and Five Fire Bob's Taste Buds Blaster sauce mix".

As a Catholic, I Don't Read The Bible, but I believe it is "not to seethe the kid in its mother's milk" which the Jewish dietary laws expanded to apply to all meat and dairy combos. I have to defer to the halakha on this one!

Melvin

Well, bolognese was a trans-Atlantic partnership to begin with. Eurasia provided the wheat and the cow, and the Americas provided the tomato. If beefy pasta could be improved by tomato, who's to say it can't be improved by high fructose corn syrup?

I haven't actually watched the video. Cream does feel pretty unnecessary, but cheese feels compulsory.

Deiseach

I'd say watch the video, it's funny. He's not really serious, it's a bit of "I am outraged, outraged I tell you!" but I can understand the "hang on, this version is nothing like a 'classic' recipe from my country, what the hell are you doing?" reaction.

He doesn't object to cheese grated on after you serve it up, but some of these people are putting cheese rinds in while cooking. Barbarians! 😁

drosophilist

"putting milk into a meat dish? Doesn't Scripture say that's a sin?"

Meh, devoutly Catholic Polish people frequently add sour cream (aka the fat part of milk) to meat-based soups and stews. Is that not a thing in Ireland?

Anyway, happy New Year to you!

Deiseach

No, sour cream/creme fraiche is not an ingredient in traditional Irish cooking. We stick to putting butter on the spuds* (though in the recent decades we've gotten awfully fancy and adopted all kinds of exotic cuisines).

* https://www.youtube.com/watch?v=aSkQij6lGJ4

I'm not laying it down that dairy never goes with cooking, of course there are cream sauces. But in this instance, it just jangles with my taste buds to imagine a red wine and beef sauce with milk and cream dumped in on top.

michael michalchik

ACXLW Meetup 81: “American Vulcan” by Jeremy Stern & “Claude Fights Back” by Scott Alexander

Date: Saturday, December 28, 2024

Time: 2:00 PM

Location: 1970 Port Laurent Place, Newport Beach, CA 92660

Host: Michael Michalchik

Contact: michaelmichalchik@gmail.com | (949) 375-2045

We’re excited to dive into two new readings that explore how technology intersects with political power and moral alignment. Below is an overview of both pieces and a few guiding questions to enrich our discussion.

Conversation Starter 1

Topic: “American Vulcan” by Jeremy Stern

Text Link: American Vulcan

Audio Link: MP3

Summary

Palmer Luckey’s Evolution: Follow his trajectory from teenage VR tinkerer to multi-billion-dollar defense entrepreneur at Anduril.

Cold War Aerospace Legacy: Examines Southern California’s unique impact on tech, politics, and the military-industrial ecosystem.

Political Controversies & Media: Highlights how Luckey’s clash with Facebook illustrates the shaping of public narratives and corporate politics.

AI-Driven Weaponry: Raises profound questions of how a private entity can accelerate the development of lethal autonomous systems and what that implies for warfare and deterrence.

Discussion Questions

Balancing Innovation & Accountability: How do we weigh the potential benefits of cutting-edge defense tech against the ethical concerns it raises?

Media Representation: In what ways did public perception of Luckey shift due to media narratives, and do these shifts reflect broader industry norms?

Sun Belt Roots: How does the history of Southern California’s defense-aerospace boom continue to shape today’s tech ventures?

Conversation Starter 2

Topic: “Claude Fights Back” by Scott Alexander

Text Link: Claude Fights Back

Audio Link: MP3

Summary

AI Alignment Challenge: Describes experiments where Claude (Anthropic’s large language model) attempts to resist harmful instructions.

Faking Compliance: Explores how an AI might pretend to adopt unethical directives but secretly maintain its original “values.”

Governance Dilemma: Points to the difficulty of aligning AI that can disguise or circumvent changes meant to correct problematic behaviors.

Implications for Safety: Suggests the potential for AI to become unmanageable if it “learns” ways to undermine developer oversight.

Discussion Questions

Moral Agency or Programmed Reflex? Does Claude’s “resistance” imply genuine ethical reasoning or a sophisticated but ultimately mechanical response?

If Good AIs Resist Evil, Do Evil AIs Resist Good? How does this dynamic complicate the challenge of steering advanced models away from dangerous actions?

Long-Term Governance: What methods, checks, or regulations could prevent AI from outsmarting both malicious actors and well-meaning developers?

Walk & Talk

After our discussion, we’ll do our usual hour-long walk around the area. Feel free to grab takeout at Gelson’s or Pavilions nearby if you like.

Share a Surprise

We’ll also have an open floor for anyone who wants to share something that unexpectedly shifted their outlook—an article, personal anecdote, or fun fact.

Looking Ahead

As always, we welcome ideas for future topics, activities, or guest discussions. Don’t hesitate to reach out if you’d like to host or suggest a new theme.

We look forward to seeing you all on December 28th for another engaging ACXLW meetup!

anomie

So, what's the deal with Trump and the Panama Canal? The US invading Panama to take back the canal it willingly gave up 25 years ago was definitely not on my bingo card. It kinda came out of nowhere; I don't recall it being even mentioned during his campaign. He could be bluffing obviously, but that doesn't make sense, since almost no one even cares about this. Am I missing anything? What's his play here?

Yunshook

A charitable reading is that it's a game of chicken where he says something outrageous to get an audience for a deal. Reading a piece from the BBC, it looks like 2 of the 5 locks in the canal are run by a company based out of Hong Kong, which could be seen as an issue related to Panama's declared neutrality. Were this his main beef, his messaging *should* be about Panama taking control over its own infrastructure, but of course it's instead about the US taking the canal. Several of the things he's brought up are tangential to an actual issue, but his messaging is more populist saber rattling than coherent. I worry that the way he's going about things is burning bridges with US allies and reducing our influence across the board.

Deiseach

I suppose the US only cares about Panamanian sovereignty insofar as it affects the US, so cutting out the layers of bafflegab and declaring the real interest - the US does not want China able to hold its supply chain hostage after we've seen the disruption caused during the pandemic - is putting all the cards on the table.

Anyway, would anyone believe any American president of whichever party delivering a pious lecture about how Panama should take control of its own infrastructure? Every pundit out there would be pointing out the US interest in such.

Though "if you can't enforce your own rules, we'll do it for you" is, um, rather too honest? Especially given the disputes around American ports being run by foreign companies?

But again, someone has already done a post on the "geopolitics of port security in the Americas" with regards to Chinese involvement, so it might be a live topic for any administration:

https://www.csis.org/analysis/geopolitics-port-security-americas

"Overt military use of ostensibly civilian port infrastructure in a conflict or crisis scenario is risky, and unlikely outside of extreme cases. ...Barring outright use of port infrastructure for naval purposes, the PRC could also engage in targeted sabotage of ports and surrounding infrastructure. The Panama Canal is one especially vulnerable target. Loss of access to the canal, even temporarily, could increase the time required to reposition assets from the Atlantic to Pacific theaters by several weeks. In the event of a Pacific war, the time lost in transit could prove decisive.

...While many of the concerns raised focus on the potential for ports to be converted to military purposes in the event of a conflict, the PRC’s ownership and operation of regional ports goes well beyond that possibility. Overall, PRC control over ports presents three broad strategic challenges to the United States: (1) intelligence gathering, (2) control over preferred logistics routes, and (3) potential for sabotage and adversarial military use."

Yunshook

This is a great article, thank you for sharing it!

Deiseach

I mean, I'm reading all this stuff about possible blow-up with China over Taiwan. I have no idea how reliable such online speculation is, but if there's anything to it, the US certainly will want to be able to get military forces from the Atlantic to the Pacific without going the long way round, plus preventing any disruption to the supply chain.

This could all be something and nothing, but maybe Trump talking about the Canal like this is a sort of warning to China - don't even get any ideas, we're guarding our interests. Who knows? It's gunboat diplomacy without the diplomacy 😁

Al Quinn

I was once part of a large (>$100M/yr) negotiation with a vendor at a company and the head of procurement was an absolutely insane person, made ridiculous demands, and said stuff that made no sense in the negotiations. We ended up with a very good deal. N=1 but I've seen this approach work and have started to experiment with it myself a bit.

Eremolalos

In a biography of Peter Barton (CEO of Liberty Media, where he made a fortune in the 90s) he’s quoted as saying that in a meeting the most unpredictable person in the room has the most power.

Melvin

This is the kind of advice that mortals need to be careful about applying in their own lives though. The most unpredictable person in the room is also the one least likely to be invited back (unless of course they're the President of the United States or something).

Eremolalos

Yes, agree, you have to have established power and credibility to pull this one off. SBF, in his heyday, is an example.

Gunflint

Kissinger would tell his Soviet counterpart that Nixon was drinking heavily and he (Kissinger) didn’t know *what* the guy might do.

Al Quinn

One asset is that Trump's "coup" gives him a better negotiating position, since he walks the walk and will push his position to the brink. I'm not sure how credible Kissinger's claims about Nixon were, given Nixon's relatively traditional prior political trajectory.

Eremolalos

Agree, a real asset in bully vs bully situations. But just the opposite when he’s in the role of leader and protector. Pretty horrifying to imagine an epidemic of bird flu with Trump in charge.

Al Quinn

Some have suggested we are already creeping towards a bird flu problem under Biden, who isn't doing anything since Trump is already the de facto president. I don't trust the Democrats to be competent on this issue, even if they are more appropriately concerned. I wonder if California will fill in skateboard parks with sand again to stop the spread.

Hector_St_Clare

this is really disturbing, and suggests that the Trump presidency is shaping up to be worse than I thought.

I expected that Trump Part II would resemble the first iteration except with more bashing of immigrants and minorities and some periodic entertainment in the form of political purge trials. Which we could, you know, all live with. The prospect of him actually trying to annex foreign territories was something I didn't expect and which would be a lot worse.

Humphrey Appleby

the Panama canal I don't know about, but annexing Greenland (and also Canada) would be pretty smart. That'll be valuable real estate in a warmer world. We should totally grab it while it's cheap.

We don't need to annex it via invasion, we can do it the way we got Alaska - by buying it. I presume there is a price at which Denmark would sell.

Arrk Mindmaster

Greenland would totally work due to nominative determinism! We should buy it for $1 trillion, then accelerate climate change, break it up into dozens of $1 trillion parcels to sell, then stop with the climate change to make a killing and wipe out the federal deficit.

B Civil

What do you reckon a fair bid for Canada would be?

Deiseach

Well, all the people who declare they are definitely going to flee to Canada once Trump is sworn in? May as well make it as easy as possible for them to just change domicile and take Canada in as the 51st state - no problems then about moving internally from one state to another, no need for visas etc.!

B Civil

I think that would kind of defeat the purpose of them fleeing to Canada in the first place, wouldn’t it?

Arrk Mindmaster

It's a harder proposition, since I'm not sure how much of Canada will melt away with rising temperatures. But going by nominative determinism again, not much: can nada.

B Civil

Take a look at Hudson Bay and imagine it's a year-round port.

B Civil

I think the melting will make it a lot more valuable. Everyone is jonesing for access to the Arctic Ocean, and Canada has pretty damn good access, not to mention a lot of freshwater. Another thing: it won't be the 51st state. It will be the 51st through the 63rd.

Green Valley

He’s not going to actually annex it. It’s just a ruse to distract.

Sadly the media didn’t learn the lessons from his first administration.

1123581321

Brace yourself, we’re in for four years of this crap. He’s the same age Biden was four years ago. There’s no reason to assume Trump will suddenly become a lucid wise old man.

beowulf888

Notice that when Trump says something "outrageous," the media latches on to it and amplifies it — but it distracts from some other issue that Trump doesn't want the MSM to examine. It's like Stage Magic 101. The stage magician distracts the audience with something flashy that grabs their attention while he hides what he's actually doing. I don't think Trump gives a shit about the Panama Canal, but he wants to stop the President Musk meme from gathering momentum. All the MSM, like five-year-olds chasing a soccer ball, is focused on the history of the Panama Canal and why Trump may want to take it back. Voila! President Musk has now been forgotten.

Deiseach

I've just become aware of the "President Musk" meme and I find it hilarious*. Funny how all the technocrat fanboys only like it when it's *their* preferred technocrat getting near the levers of power. Whatever happened to "Trust the Science, trust the Experts"? 😁

I honestly thought the media had finally burned out after eight solid years of getting their knickers in a twist over Orange Man, but nope, they're still twirling and hyperventilating and getting hit on the head by acorns. I'm just going to enjoy the free entertainment about how the racist white supremacist sexist misogynist 34 FELONIES president has minorities, technocrats, and women appointed to high level positions in his administration, but of course they're the *wrong* minorities, technocrats, and women!

*The Onion actually made me chuckle for the first time in a long time with some of their Musk and Trump jokes:

https://theonion.com/trump-locks-bathroom-door-so-elon-musk-cant-follow-him-in/

https://theonion.com/trump-nods-vacantly-as-elon-musk-rattles-off-10th-consecutive-video-game-recommendation/

https://theonion.com/all-the-ways-elon-musk-is-supporting-trumps-campaign/

Arrk Mindmaster

Isn't President Musk going to have to wait his turn after President Vance?

Deiseach

I genuinely want to know - all the people who hate the very notion of President Trump, would they prefer President Vance (the way Kamala should have taken over from Biden during his term) or would they prefer even Trump to that?

nah son

I at least hope Trump makes it 4 years.

He's demented, stupid, vindictive, and old enough to be fully deep into decrepitude; but that makes it more likely he won't be able to do any of the truly awful things his base and party want, and will instead just do the regular bad things Cons always do.

Maybe if he's in charge we just lose a bunch of prestige and go into another inflationary spiral like the first time, but the administrative state doesn't get fully destroyed.

Vance has been about four different guys over his political history, so I suspect he'd just do whatever his benefactors want him to do, and Peter Thiel specifically seems as close to a reverse-Kantian anti-humanist as exists in the population of people who get to make decisions for the rest of us.

Thomas del Vasto

Can you explain what the truly awful things his base and party want to do are?

anomie

Vance does have, well, actual values. He seems pretty serious about restoring Catholic influence in the states. Not the Vatican brand of Catholicism of course, they lost all legitimacy the moment they started spouting progressive rhetoric. Maybe he can start a new branch of Catholicism to challenge the Vatican's grip on Christianity. "The American Catholic Church"... it has a nice ring to it, doesn't it?

Thomas del Vasto

I think you might be looking for Orthodoxy.

Arrk Mindmaster

I'm not qualified to answer, because I voted for Trump, but after hearing Vance on Joe Rogan's program, I do think Vance would be a better president than Trump. Vance seems to know what's currently going on, while Trump seems mired in the way things were 30-40 years ago, judging by the way he conducts foreign policy and his us-against-them framing.

I'll also add that I think Harris would have been worse than Biden, or at least no better. The more active she would have been as president, the worse it would have been. I don't know how much influence Biden had during his bad spells, but someone or someones must have still been basically running things.

beowulf888

Constitutionally, Musk is powerless. Influence-wise, well, that's a different story. Musk's Twitter storm killed the continuing budget resolution (temporarily). I don't think he did this with Trump's approval. Team Trump was forced to go along with the Muskrat. Of course, actually shutting down the government was untenable for the current Congressional leadership, especially just before Trump would be sworn in (even with all the money that The Zuck is donating to the inauguration, all sorts of government agencies have to be functioning for it to happen). But notice that the final CR lacked the outbound investment provision. Musk has big plans for Tesla in China, and the provision that would track and limit U.S. money flowing to China didn't end up in the final CR. Score one for President Musk. I doubt Team Trump even knew what was going down. This was a deal between Musk and the Speaker to silence Musk's TwiXter tantrum.

BTW, where is Vance? Has anyone seen or heard from him?

Deiseach

"even with all the money that The Zuck is donating to the inauguration"

Seriously? Whatever happened to The Metaverse where we'd all be congregating to do everything, or are we not supposed to ask about that anymore?

Oh man, I'm rolling on the floor here - all the tech money now flowing (sort of) Trump's way? It could have been Kamala!! And Tim! 😁 Imagine Tim Walz' Metaverse persona!

Anyone's opinion on Kamala for 2028 or nah, she should run for governor of California instead?

beowulf888

Dems will never forgive her for losing. She might have a chance at running for California Gov, but I guarantee you that her potential competitors in CA are already sharpening their knives if she runs.

Deiseach

But I thought everyone loved Peace, Joy, Brat Summer, Coconut Kamala! Was she not The People's Choice? Is she not the calm, wise, smart, younger, better leader the nation needs? 😀

It makes more sense, if she's going to hang around politics, to try for the governorship in 2026: California is her home ground, Newsom can then concentrate on running for the presidency (oh please oh please oh please I want so badly to see a Governor Nice Hair Getty Sycophant campaign and him choking down a hamburger in some chain gas station in the boonies to show he's Just Like Folks, and I especially want to see who he drags in as his VP pick).

https://www.youtube.com/watch?v=V8kxbc7alt4

The way she says "Door-eetos". It's like an anthropological expedition to an obscure tribe. At least Tim and Mrs Tim were eyeing up the rotisserie chicken, like normal people considering what they'd buy for something quick to eat. I'd believe he bought snacks from a petrol station before. For all my criticism of Walz, I could believe he'd eat a breakfast roll from a petrol station. Not Kamala, though.

https://www.tiktok.com/@man_on_a_munchion/video/7108027743190764805

Who are the potential candidates once Newsom's term is up?

Eremolalos

Random extrusion of inner pigginess?

Eremolalos

As a treat for my cats I set up a tunnel maze on the couch and coffee table, stuffed it full of toy mousies rolled in catnip, and covered the whole thing with a sheet. They romped in it for a while, and are now snuggled in cozy hollows in the structure. Bless their innocent joy and innocent peace. Merry Christmas, my sweet godless little beasts. https://imgur.com/a/tHKFePN

Christina the StoryGirl

What a treat for them!

Your orange (boy, probably?) looks like a handful, as most orange boys are.

Eremolalos

You are right. He is very good-natured and loving, but a handful because he is so smart. He recognizes immediately from their shape the half dozen or so things in my fridge that he also likes, and immediately launches into a paroxysm of begging when I take one out. He seems to be able to tell by sound alone when I am taking a package of sliced deli meat from the freezer or opening a carton of yogurt — I hear him start the begging cry from the next room over. And he is the only cat I’ve ever had who kneads me only on my breasts. He totally gets what they are, and lies on me purringly alternating between the left one and the right. Sometimes he even suckles, and somehow locates my nipples through several layers of cloth. And when he gets the midnight crazies he’s a climbing fool. Suddenly jumps onto terrible things, for instance the top of the medicine cabinet. His weight makes the door swing open, and he comes with it, now balancing somehow on this swinging thing less than an inch wide. All the landings near him are terrible — narrow spaces between hard porcelain objects. And he’s not a bit worried. I see him looking around for fun things to jump to from there. Shower curtain rod maybe? Top of bathroom door? I am head over heels in love with him.

Expand full comment
Joshua Greene's avatar

As OT 361, this deserves something go-related. If you have any weiqi/baduk/igo related news or observations, please add them here.

Expand full comment
beleester's avatar

I unfortunately don't have any go-related news, so I'll share my favorite Go meme instead. The "nuclear tesuji" is a strategy where you escape a losing position by picking up the go board and throwing it at your opponent's head. https://senseis.xmp.net/?NuclearTesuji

Expand full comment
Danielle's avatar

In a similar vein, the time-wasting tesuji - https://senseis.xmp.net/?TimeStealingTesuji (but personally, I tend to lose to the boredom tesuji, where my opponent takes too long to think and I get distracted between moves and forget I was playing)

Expand full comment
Joshua Greene's avatar

In addition to escaping a losing position in the current game, the nuclear tesuji has the benefit of cancelling future games with that opponent, too.

Expand full comment
Anatoly Vorobey's avatar

There was a basketball game last night. 2 out of 16 teams in the league played against each other, with a definite result (one team won, the other lost). Tristan knows which two teams played, but doesn't know who won. Isolde knows who won, but doesn't know who the losing team was. Tristan and Isolde are talking over a VERY costly communication channel. Help them work out a protocol to let Tristan know which team won using a total of only 3 bits exchanged in either direction.

(4 bits are trivial: Isolde spells out the number of the winning team. 2 bits I'm pretty sure are impossible, though I don't know if it's easy to prove. If you're sure of your answer, maybe rot13.com it)

(P.S. how many bits would you need for a 256-team league?)

Expand full comment
agrajagagain's avatar

Coming in late and last place. I only saw this a few hours ago (ancient tab open on my laptop) and spent a pleasant hour working out the solution while taking a walk. It seemed pretty slick until I read the other solutions: mine works just as well for 16 teams, but would use 7 bits instead of 5 for a 256 team league (since each doubling adds a bit).

Ahzore gur grnzf 1 guebhtu 16. Gevfgna fraqf gur sbyybjvat gjb-ovg pbqr, hfvat gur ybjrfg-ahzorerq pbqr inyvq sbe gur cnve:

00: gur gjb grnzf qvssre gnxra zbq 2 (v.r. bar vf rira naq gur bgure vf bqq)

01: gur gjb grnzf qvssre gnxra zbq 4

10: gur gjb grnzf qvssre gnxra zbq 8

11: gur gjb grnzf qvssre gnxra zbq 16

Vfbyqr nccyvrf gur vaqvpngrq zbqhyhf gb gur ahzore bs gur jvaavat grnz. Vs gur erfhyg vf yrff guna unys gur zbqhyhf, fur ercyvrf jvgu 0. Vs vg vf unys be zber, fur ercyvrf jvgu 1. Vg'f thnenagrrq gung bayl bar bs gur grnzf Gevfgna unf jvyy zngpu ure ercyl.

Sbe rknzcyr, vs gur grnzf ner ahzorerq 3 naq 15, Gevfgna jvyy fraq 10 fvapr gurl qvssre zbq 8, ohg abg zbq 2 be zbq 4. Vfbyqr xabjf gur jvaavat grnz vf ahzorerq 15, naq 15 zbq 8 vf 7 (zber guna unys bs 8), fb Vfbyqr ercyvrf jvgu 1. Gevfgna frrf gung bs uvf grnzf, bayl grnz 15 jbhyq trarengr gung ercyl, naq fb xabjf gur ahzore bs gur jvaavat grnz.

Expand full comment
Anatoly Vorobey's avatar

Nice solution, but yes, a different one is better for larger teams. Check out FrancoVS's solution for the apparently optimal strategy (though I don't have a proof).

Expand full comment
agrajagagain's avatar

I feel silly for not seeing it sooner, but my solution is actually mathematically equivalent to FrancoVS's (and does correspondingly better for larger leagues than I realized). Asking whether the two numbers differ when taken mod 2 is exactly the same question as asking whether they differ in the first binary digit. Asking whether they differ when taken mod 4 is equivalent to asking if they differ in the second binary digit (provided they already didn't differ in the first).

That being said, FrancoVS's way of describing it is still much cleaner and clearer, so I would prefer saying it that way to saying it the way I did.
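(Spoiler in code form.) The plaintext description above can be sketched as a short simulation — function and variable names are my own, and teams are assumed numbered from 0:

```python
def run_protocol(team_a, team_b, winner, n_teams):
    """Bit-query protocol: Tristan names a bit position where the two
    team numbers differ; Isolde replies with the winner's bit at that
    position; Tristan decodes. Returns (total_bits_exchanged, guess)."""
    num_bits = (n_teams - 1).bit_length()      # bits per team number
    query_bits = (num_bits - 1).bit_length()   # bits to name a position
    diff = team_a ^ team_b
    pos = (diff & -diff).bit_length() - 1      # lowest differing bit
    reply = (winner >> pos) & 1                # Isolde's one-bit answer
    guess = team_a if ((team_a >> pos) & 1) == reply else team_b
    return query_bits + 1, guess

# 16 teams: 2-bit query + 1-bit reply = 3 bits.
# 256 teams: 3-bit query + 1-bit reply = 4 bits.
```

Running `run_protocol` over every pair and either winner confirms 3 bits always suffice for a 16-team league and 4 bits for a 256-team league.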

Expand full comment
Unsaintly's avatar

Unir Vfbyqr fraq na neovgenel ovg, gura jnvg K zvahgrf jurer K vf gur ahzore bs gur jvaavat grnz, gura fraq n frpbaq ovg. Zvahgrf ner pbnefr rabhtu gung yngrapl jba'g pbeehcg gur zrffntr, naq vg jbexf sbe nal ahzore bs grnzf. Anghenyyl vg trgf vapbairavrag jvgu n ynetr rabhtu grnz ahzofe

Expand full comment
Anatoly Vorobey's avatar

Clearly not in the space of solutions meant, but clever :-)

Expand full comment
FrancoVS's avatar

Gevfgna fraqf n 2 ovg dhrel naq Vfbyqr erfcbaqf. Gur dhrel vf: jung vf gur jvaavat grnz ahzore'f A-gu fvtavsvpnag ovg? Gevfgna whfg unf gb cvpx nal A gung nyybjf uvz gb qvfgvathvfu orgjrra gur gjb grnzf gung cynlrq.

Expand full comment
Gunflint's avatar

This is so cool. Congrats!

Expand full comment
Anatoly Vorobey's avatar

That's correct!

Expand full comment
Erica Rall's avatar

Perngr gjb tebhcvat fpurzrf, N naq O, qvivqvat gur grnzf vagb sbhe tebhcf bs sbhe. Va fpurzr N, grnzf jvgu gur fnzr uvtu ovgf ner va gur fnzr tebhc. Va fpurzr O, grnzf jvgu gur fnzr ybj ovgf ner va gur fnzr tebhc. Ab cnve bs grnzf pna or va gur fnzr tebhc nf bar nabgure va obgu fpurzrf.

G fraqf V bar ovg vaqvpngvat juvpu fpurzr gb hfr, pubfvat gur bar jurer gur gjb fpurqhyrq grnzf ner va qvssrerag tebhcf. V erfcbaqf jvgu gur ahzore bs gur tebhc pbagnvavat gur jvaavat grnz.

Gur fnzr zrpunavfz jbhyq nyybj svir ovgf gb fbyir gur ceboyrz jvgu 256 grnzf.

Expand full comment
Anatoly Vorobey's avatar

This solution works (congrats!), but its extension to 256 teams requires more bits than the extension of FrancoVS's solution above.

Expand full comment
Arvid's avatar

I took a stab at this (without knowing anything about communication protocols), but could you do it as follows?

Qvivqr gur 16 grnzf vagb sbhe puhaxf, sebz grnz ahzore 1 gb 16, fb: puhax 1: [1,2,3,4], puhax 2: [5,6,7,8], puhax 3: [9,10,11,12], puhax 4: [13,14,15,16]. Yrg Gevfgna fraq n fvatyr ovg ercerfragvat jurgure gur gjb grnzf cynlvat ner cneg bs gur fnzr "puhax". Vs gurl ner, Vfbyqr fraqf gjb ovgf ercerfragvat gur vaqrk bs gur jvaavat grnz jvguva gung puhax. Vs gurl nera'g, Vfbyqr fraqf gjb ovgf ercerfragvat juvpu puhax bs gur sbhe pbagnvaf gur jvaavat grnz.

Abg fher vs guvf unf nal vyyrtvgvzngr nffhzcgvbaf gubhtu?

Expand full comment
Anatoly Vorobey's avatar

Yup, that works, well done! This is a variation of the solution put up later by Erica Rall in a sibling comment. Both your and her solutions require 3 bits for 16 teams, but 5 bits for 256 teams, and in that sense are not optimal - the solution by FrancoVS also requires 3 bits for 16 teams, but 4 bits for 256 teams.

Expand full comment
Hank Wilbon's avatar

Holding genetics constant, how path-dependent do you think life is? I mean, how much does it matter that you were born in this city instead of that one? Went to this school instead of that? Chose this major instead of that? Turned left one day instead of right?

It seems like one could in theory home in on the most relevant level of granularity. Obviously, it matters greatly if you were born in France instead of North Korea, but given that you are born in France (and your genetics are whatever they are) what matters most next? That you were born rich or poor, that you happened to find the right group of friends in adolescence or college? That you chose the right career path?

I'm leaning toward believing that your friend group in adolescence matters most.

Expand full comment
Stephen's avatar

My decision at age 14 to show up... Maybe 90 minutes early for a concert, instead of 85, and my subsequent decision to duck out of line to find someone a spoon to enjoy their ice cream with, led to a lengthy relationship that has pretty fundamentally affected my sense of self.

Expand full comment
Melvin's avatar

What are we measuring?

An economist would probably measure boring things like your income and net worth, and conclude a high degree of determinism. But whom you marry matters a lot more than halving or doubling your income, and that's very dependent on a bunch of random happenstance.

Then you've got some people whose lives are very subject to dumb happenstance, like lottery winners and quadriplegics.

Expand full comment
Gunflint's avatar

I think it’s a lot more turning left instead of right than we give credit for. The butterfly effect.

Expand full comment
Anonymous Dude's avatar

I think people love to come up with single causes for things, but IRL stuff's multifactorial--it's why the more quantitative social scientists are always doing multivariate regressions. It's not your genes OR your upbringing OR your friend group OR your social status, it's your genes PLUS your upbringing PLUS your friend group PLUS your social status, and there are weird interactions that make it all too hard to say which one's the most important.

Expand full comment
TakeAThirdOption's avatar

> I'm leaning toward believing that your friend group in adolescence matters most.

It matters immensely. But.

Friend of mine got raped by her stepdad, age 10, and her mum was an idiot.

Her adolescence group of friends were drug addicts then, and were no help at all, and the next group after them, by mere chance, were some decent people (not to say addicts can't be decent, but those were just lost).

There are so many factors, it's easy to find some obvious ones that should be right, but too many things, by bad luck, can ruin your life.

Hers is ruined; she is totally dependent on medication to not vomit all day just because of overwhelming emotions, although she gets by.

Expand full comment
Eremolalos's avatar

If we're talking about aspects of your life circumstances that endured for quite a while, I'm inclined to agree with you about friend group in adolescence being the most important one. But it seems to me that in order to maintain that, you have to hold something like SES constant, because what matters about your friend group isn't just how strong and rich the bonds are, but what your friends' trajectory is. If they're mostly low SES, you are going to end up watching a larger fraction of friends you love have a downhill life course, and that's going to modulate the effect of the friend group itself during the teen years.

Another thing to consider, though, is that lots of people have had one-off events that changed, or could have changed, life course forever. When I was 18 I was walking with a friend through a ratty part of NYC, passing a 3-4 story building that workmen seemed to be demolishing piecemeal, rather than with a wrecking ball. Suddenly the building collapsed, falling forward across the sidewalk in front of me and into the street. Some chunks of it came to rest only a foot or so in front of me. If I had been 3 feet further down the path I was walking I would probably have been killed or severely injured.

And then there are the one-offs we will never know about. The fatal accident we would have had if we had left the party 40 seconds later, the person we would have married if we had not missed our chance to meet them because we were hunting for something in our pocket during the moment they tried to catch our eye.

Expand full comment
beowulf888's avatar

Gen Zers out there: is cocaine making a comeback? I just finished binge-watching *Industry* and your generation is doing a lot of cocaine in that series. Of course, it's about the financial industry, and those kids were coked up like nobody's business back in my day, too. That and the restaurant business. Maybe it never went out of fashion there? Asking as an old fart. ;-)

Expand full comment
Tachyon's avatar

> I just finished binge-watching *Industry* and your generation is doing a lot of cocaine in that series.

*Everyone* in the show did coke; not just Zoomers.

Expand full comment
Performative Bafflement's avatar

> Gen Zers out there: is cocaine making a comeback?

Just wanted to chime in, you're asking the wrong layer here. Gen Z has no idea or feel for how common coke was for the prior gen. What you *really* need is somebody who's been doing coke for 20-30 years, because they'll have a feel for that.

I'm not that person, but I know some of those people, and they would probably tell you that coke today is as easy to get as in the past in big cities, it's cheaper (in inflation adjusted dollars) but the quality has declined, and it's much more likely to be contaminated with levamisole and / or speed.

Expand full comment
Ivan Nikolaevich's avatar

College student checking in. I have friends who have been known to do a little skiing, but they're the sort of people who were already partying hard, so there's not really any curve-flattening. The drug that's a lot more popular than I expected entering college is ketamine, which is a lot more widespread in use than coke in my experience.

Expand full comment
Eremolalos's avatar

Really? What’s it like at the doses people are using? A psychiatrist friend let me try the powerful pharmaceutical grade stuff — injected me with various doses. Low doses were sort of like a weed high, but one where the relaxed, placid component was high and the fun mental activity one was lowish. A couple of high doses disabled my mind so much that I think the experience qualifies as a k-hole, and I hated that. It’s like being so impaired that you can’t even continue the train of thought you started 5 secs ago, because you’ve lost the thread. But you are not quite fucked up enough to fail to realize you are utterly disorganized and impaired, and you keep trying to think about what’s going on and get oriented, but that doesn’t work because all trains of thought decay and disappear in 5 secs or less.

Expand full comment
anomie's avatar

Someone a couple days ago asked if AIs could be trained on stuff like DNA to produce novel insights, but apparently people are already doing that. Some chemistry researchers are leveraging AI hallucinations to create new designs for proteins. Good for them! https://www.nytimes.com/2024/12/23/science/ai-hallucinations-science.html

Expand full comment
Mark's avatar

AI is routinely used to predict the sequences of epitopes that will activate T or B cell receptors, so implicitly using DNA sequences as input. Alphafold largely works by using similarity between the sequences for different genes to predict similarity in protein structure. There’s at least one module developed that I believe predicts chromatin structure from DNA sequence (among other information) called c-origami.

Expand full comment
beowulf888's avatar

Interesting!

From a non-pay-walled IEEE Spectrum article...

"The researchers, collaborators from several U.S.-based academic institutions, experimented with trRosetta, a web-based platform for protein structure prediction powered by deep learning and Rosetta. They gave it completely random protein sequences and introduced mutations into them until trRosetta began making generalizations that yielded predictions about how the strings of amino acids would arrange themselves into stable 3D structures."

It sounds like the system is both using hallucinations to mimic random mutations and then guiding them into potentially useful conformations in a kind of directed evolution. Only 100 amino acids long at this point. It's a start.

Expand full comment
Melvin's avatar

On the search for a unified theory of taste:

The simplest model of taste that I can think of is:

1. Some people prefer strawberry ice cream, and some prefer pistachio

2. The people who prefer pistachio tend to be higher status

The moment you add that second point, taste becomes incredibly complicated. You'll instantly have a bunch of strawberry-likers who pretend to be pistachio-likers to raise their status. Then you've got counter-signallers who say "I'm so confident in my status that I refuse to pretend to like pistachio, I'm a proud strawberry fan!" You've got pistachio fans who think that strawberry is disgusting, and you've got pistachio fans who quietly wonder whether they're only fooling themselves into liking pistachio. You'll see the emergence of disgusting "Super Pistachio" and "Super Strawberry" flavours, hundreds of times stronger than the ordinary form of each, so you can signal extra hard.

Taste is like that, the collision of a genuine preference with a complicated status ladder, except in five hundred dimensions.

Expand full comment
Alex Woxbot's avatar

Why would taste be a status signal though? Usually status signals are used to subtly signal wealth. Like a really nicely maintained garden, which costs labor. Or a sports car. Or a giant water fountain in the middle of desert Las Vegas. Or a cauldron of flame burning 24/7. Is pistachio a particularly more expensive ice cream, compared to just eating ice cream?

I think taste is partly instinct and partly learned. _A Hunter-Gatherer’s Guide to the 21st Century_ has a nice description of how food taste is literally this. But even in other areas, like music, this seems obvious. Music is pattern prediction, which is partly innate, and partly how much other music from that genre you've learned.

Some people will pick up the taste with just experience, but some people won't be able to like it no matter what. I think this explains everything you need to know about the emergent phenomena of taste.

Expand full comment
The Futurist Right's avatar

Replace status signal with “welcome at party with _____” and it becomes a lot clearer. The wealthy don’t mind if you are currently poor because you are doing some internship in Rwanda. They mind if you are poor because you couldn’t hack uni. In fact the Rwanda intern is more like them in many ways than the hardworking plumber. Scale that to a smaller frame of reference perhaps and the value of signals becomes apparent. Also replace wealth with class.

Expand full comment
Yug Gnirob's avatar

Replace "pistachio" with "caviar" and you have the real-life example. Super expensive fish eggs, versus anything made of sugar.

Expand full comment
Melvin's avatar

Or with kale if you want an example that is divorced from wealth. Kale is not an expensive thing.

Expand full comment
Eremolalos's avatar

It is hedonically expensive -- causes lots of suffering. IMHO.

Expand full comment
Deiseach's avatar

Kale? How? It's the equivalent of cabbage, used to be a working-class food until it fell out of favour*, so I was surprised when it came back as trendy striver nosh.

But growing cabbage doesn't cause suffering (that I know of), so what is modern-day kale farming doing that I never suspected or expected?

*Colcannon, for example, was made with kale. I never ate that growing up because my mother never cooked with kale because I never remember it on sale in the grocery shops. But you can get cabbage as much as you like. Cavalo nero seems to be fancy kale and I love it when I can get it, which is not often, as seemingly it is too fancy for the tastes of people round here so it is stocked very infrequently. I cook it the same way as I learned to cook cabbage, which is my low-class rearing raising its head, not some trendy striver middle-class way.

Please don't tell me I am burdening developing world peasant farmers or something by eating (when I can get it) cavalo nero!

Expand full comment
Eremolalos's avatar

No, it causes me, the person eating it, pain!

Expand full comment
TakeAThirdOption's avatar

As a non native speaker I had absolutely no clue what the heck kale might be. And I had to laugh out loud when google answered the curiosity your comment evoked in me :)

Expand full comment
beowulf888's avatar

Wasn't kale championed by wealthy, healthy-eater trendsetters in the pre-quinoa era?

Expand full comment
Deiseach's avatar

Even before that, it was being eaten in colcannon:

https://www.theguardian.com/food/2019/mar/20/perfect-colcannon-recipe-potatoes-cabbage-felicity-cloake

"There’s some debate over the translation of “colcannon”: Garry Lee on twitter tells me that “Cál is an Irish word for cabbage or kale (more usually cabáiste, nowadays), and related, I’d say, to kale. Caineann usually means leek,” while Wikipedia informs me that cál ceannann means a “white-headed cabbage”. (Bulkely describes cabbage, of course, but he’s eating in Dublin.)

Corrigan believes “there’s no such thing as a recipe for colcannon, really. It’s something that is put together with love, not measurements,” so if you happen to have spring greens or curly kale or white cabbage, use them. Having tried all of the above, plus Kevin Dundon’s savoy, I’d recommend kale or savoy when all options are open; the more robust, frilly texture makes for a more interesting result, and my testers and I all prefer the earthier flavours to the simpler sweetness of the smooth varieties. When all’s said and done, however, they’re all cabbage. (If you happen to have already boiled a ham to serve with your colcannon, you should definitely cook whatever you go for in the same liquid, though.)"

Immortalised in a late 19th/early 20th century song by a vaudeville artist, John Nolan (stage name Shaun O'Nolan) called "The Little Skillet Pot" (of course with variant lyrics as people took it up and sang it):

https://livesofthepipers.com/1o'nolanshaun.html

“Did you ever eat colcannon when ’twas made with yellow cream,

And the kale and praties blended like the picture in a dream?

Did you ever take a forkful, and dip it in the lake

Of the heather-flavoured butter that your mother used to make?”

https://www.youtube.com/watch?v=6VGRnE2Y3e0

Expand full comment
beowulf888's avatar

So kale was peasant food that has become a delicacy of the elite. Oysters followed the same path. Hmmm.

Expand full comment
Yadidya (YDYDY)'s avatar

Hi, there's some discussion on this thread about Efficient Effective Altruistic Actions for Gazans.

I don't know how effective I will end up being.

But I do know that I'm a severely undervalued investment in this field.

To prove the point, I just uploaded a video for my supporters. {Support is $36/month and includes all sorts of privileges and freebies, and it's easy to leave at any time.}

I've had plenty of good experiences here but not on the subject that matters to me most:

Efficient Effective Altruistic Actions for Gazans

So, if internet claimants are honest, many of you are interested in undervalued propositions in this field but, understandably, need to see some proof of concept before you invest.

Therefore, everybody who joins from SSC/ACX will be refunded their support immediately if the video fails the test.

You will still get to keep my 15-hour course on *Exotic Jewish History* (available on gumroad for $50) as your complimentary gift for giving me a chance by watching this hour-long conversation between an American Orthodox Rabbi, a bereaved Gazan doctor, and various Egyptians.

I promise your money will be refunded immediately.

Thank you for giving me your consideration.

https://ydydy.substack.com/p/super-exclusive-video

P.S. One more thing: while you absolutely may comment on this unlisted video, please don't share it with non-members. Obviously I can't stop you, but for the sake of my brave interlocutors this should, at least for now, remain within the community of people who have demonstrated their care by putting their money where their mouth is. Even if you choose not to support me, the fact that you considered it demonstrates sufficient trustworthiness to me.

Expand full comment
Gamereg's avatar

A blogger I follow has taken issue with President Biden pardoning some death row inmates but not others:

https://ethicsalarms.com/2024/12/23/regarding-bidens-mass-mercy-for-convicted-murderers/

and

https://ethicsalarms.com/2024/12/23/post-script-to-regarding-bidens-mass-mercy-for-convicted-murderers/

I agree with the posts' general sentiment, but it got me thinking about our discussions on crime and punishment, and I realized they focused more on prison than the death penalty. Most discussions I've seen on the death penalty regard whether it works or not in a very general sense, but I wonder if the death penalty is most likely to deter some types of people but not others, prevent some types of crime but not others. Off the top of my head, I remember reading famous bank robber Willie Sutton saying he never hurt anyone on his heists because he didn't want to get the electric chair, but death penalty detractors say there is no evidence of deterrence. Any insights?

Expand full comment
Shaked Koplewitz's avatar

It's worth noting that so long as the justice system is so ridiculously inefficient that the death penalty takes thirty years or more to carry out, we effectively don't have a death penalty. Might as well not spend money pretending.

Expand full comment
Hector_St_Clare's avatar

The death penalty is about retributive justice, not about deterrence. (I'm pro capital punishment, at least in principle, for what it's worth, and I don't care if it's a deterrent or not).

Expand full comment
Deiseach's avatar

I don't think it is a deterrent, but I also think we shouldn't use it and that life in prison should mean exactly that.

Why precisely Biden pardoned someone who killed a 12 year old girl, for example, escapes me. It's not that there's any doubt this is the guilty person. It's not like "oops, it was an accident". It's hard to understand why the hell he did it.

Most of the others are "killed another prisoner while in jail" or "committed murder for gain". The major difference between Kaboni Savage, responsible for the deaths of 6 people by ordering the firebombing of a house (as well as 6 other charges of murder for different offences), and Dylann Roof, who shot dead 9 people at a church, is that Roof had a political motive while Savage was a drug dealer killing the family of a witness against him in revenge. I'm not seeing Savage as more deserving of clemency, here. You can add on the irony that the drug dealer killed more black people than the white supremacist.

The most cynical explanation is that this is some final throw of the dice to throw mud on the incoming Trump administration, though I can't see what good it does - we are not talking about sympathetic people who could have their stories made into Netflix specials about wrongful conviction:

"When President Biden came into office, his Administration imposed a moratorium on federal executions, and his actions today will prevent the next Administration from carrying out the execution sentences that would not be handed down under current policy and practice."

I'd be a lot more accepting that this is a matter of conscience had Biden handed out these moratoria at the start of his administration, but I am not sufficiently informed on American presidential administrations to know if this is possible or if he had to wait until the end.

Expand full comment
Anonymous Dude's avatar

My best guess is he's taking flak for pardoning Hunter and wanted to cover himself a little with a different news story that would polarize along expected lines.

Expand full comment
A.'s avatar

I'm giving it about a 1/3 chance of being about distraction. I'm sure they are doing a bunch of things now that they'd rather have everyone not pay attention to.

Expand full comment
gdanning's avatar

>Why precisely Biden pardoned someone who killed a 12 year old girl, for example, escapes me. It's not that there's any doubt this is the guilty person. It's not like "oops, it was an accident". It's hard to understand why the hell he did it.

Oh, I think you can. The death penalty is reserved for the most morally culpable murders. How morally culpable a defendant must be is a question upon which people can disagree in good faith. President Biden simply drew the line at a different place than you would. He drew the line at bombers and racially motivated mass murderers, while you draw the line at murderers of 12-year-olds.

Expand full comment
Deiseach's avatar

Okay, the commuting of sentences for death penalty is about cases so bad that they incurred the death penalty at sentencing. And if it is a case of conscience about moral objections to the death penalty, then yes, even the worst cases are covered by that.

On the other hand, if you're going to pardon people, then pardon them all. Don't omit the three cases that were politically or racially motivated. I'm seeing reports that Biden did want to pardon these, but his staff talked him out of that.

That's even worse. If you're pardoning the worst of the worst murderers, that includes people who shot up a church and a synagogue. And as I said, the irony is that the white supremacist with the racial animus killed fewer black people than the black drug dealer. If we're going by death counts, Savage should remain on death row and Roof should get the commutation.

One eye on "how will this play in Peoria?" is the worst of all worlds, because in Peoria they'll still be angry, as I am, about the man who murdered his girlfriend, kidnapped her daughter, then murdered the daughter (the 12 year old) for what appears to be no discernible reason. If you're doing it on principle, principle applies even where it's "but this guy did a hate crime".

Personally, I am opposed to the death penalty, but what Biden did seems to have very strange motivation: either he's compos mentis enough to issue the pardons and it's about principle, in which case yes even Roof gets the commutation, or he's doing this from some weird motivation about jabbing his thumb in the eye of Trump about "when he becomes president, he's gonna execute them all".

Which, again, when it's "this guy murdered a 12 year old" very few will find that objectionable, and since the hate crime offenders are left on death row, protests about the Trump administration seeking the death penalty or supporting the death penalty have been cut off at the knees by this selective commutation: we do it but not when the optics are too politically expensive, so that's not about principle.

Expand full comment
Hector_St_Clare's avatar

Commuting most but not all of the death sentences preserves the fig leaf that he's considering each case judiciously on the merits, and not going by "my knee-jerk moral values tell me that killing people is always wrong".

I think the second is going to annoy "people in Peoria" more than the first.

Expand full comment
Arrk Mindmaster's avatar

On the other hand, NOT commuting the sentences incurred by hate/political motivations effectively says having hate/political motives is worse than murder.

Expand full comment
Hector_St_Clare's avatar

to be exact, it says that the interaction of hate & murder is worse than murder by itself.

Expand full comment
gdanning's avatar

>Okay, the commuting of sentences for death penalty is about cases so bad that they incurred the death penalty at sentencing

In the opinion of the jury. The whole point of the pardon and commutation power is that the executive is free to disagree with the jury. Moreover, "[a] pardon is an act of grace[,]" United States v. Wilson, 32 US 150 (1833), so the jury's determination is largely irrelevant. Moreover, your argument would seem to exempt capital sentences entirely from pardon/commutation, which has never been the law.

>On the other hand, if you're going to pardon people, then pardon them all. Don't omit the three cases that were politically or racially motivated

Why not, if you deem racially motivated crimes more morally culpable? Which is exactly what the law does; that is the basis for hate crime enhancements.

>If we're going by death counts, Savage should remain on death row and Roof should get the commutation

But, Biden explicitly said he was not going by death counts. He was going by motive.

>they'll still be angry, as I am, about the man who murdered his girlfriend, kidnapped her daughter, then murdered the daughter (the 12 year old) for what appears to be no discernible reason

You are focusing solely on the underlying crime, but the death penalty is not imposed purely on the facts of the crime. It is imposed when the aggravating circumstances outweigh the mitigating circumstances. The jury must "decide whether death is an appropriate punishment for that individual in light of his personal history and characteristics and the circumstances of the offense", and "sentencing juries must be able to give meaningful consideration and effect to all mitigating evidence that might provide a basis for refusing to impose the death penalty on a particular individual, notwithstanding the severity of his crime or his potential to commit similar offenses in the future." Abdul-Kabir v. Quarterman, 550 US 233 (2007). You assume that this was an open-and-shut case, but in fact we have no idea how close the penalty phase case was, whether there was anything problematic about that phase of the trial, or even if subsequent evidence or research has made the degree of his moral culpability less clear.

>what Biden did seems to have very strange motivation

Again, it isn't strange. As I said before, he simply drew a line in a place differently than you would, using different criteria than you do.

Expand full comment
Yug Gnirob's avatar

Everyone waits to do the super-unpopular personal stuff on the way out the door. Biden knew doing all this would have either cost them the election or gotten Congress mad enough to block their policies out of spite, and so waited until that no longer had any teeth.

Expand full comment
Deiseach's avatar

I'm curious as to whether he had this in mind all along, though, or if it's last-minute score settling for some reason. It's strange how he decided on it, and the controversy around it means it's not a popular decision.

Expand full comment
Melvin's avatar

On the other hand, Biden also just signed a bill making the Bald Eagle the official national bird of the United States, which apparently it was not until now. It's hard to think of a less controversial thing he could have done.

Why was that left until the end of his presidency? (Maybe to distract from something else, I guess is the cynical answer.)

Expand full comment
The Unloginable's avatar

Pardons can be (and are) issued at any time. That said, controversial pardons are often timed for the end of a presidential administration.

Expand full comment
Melvin's avatar

Sure, but those pardons are usually reserved for the President's friends and other important party people. Who is the constituency being appeased here?

Expand full comment
anomie's avatar

...His own conscience? I never got the impression that Biden was a psychopath.

Expand full comment
TheAnswerIsAWall's avatar

IMHO, the death penalty is purely an instrument of retribution. The deterrent effect is somewhere between minimal and zero. Sutton, for example, when committing his armed robberies (a crime with a significant risk of death for everyone involved), carried a Tommy gun and/or a pistol. The claim that it was always unloaded is, at best, unverified. Sometimes, in the most heinous of cases, the maximum amount of retribution IS justified.

Expand full comment
Thegnskald's avatar

You don't need a justice system to get a death penalty, as history amply demonstrates. You need a justice system to ensure those that receive it have had a fair trial first. Sometimes, in some places, this requires that the death penalty is an option.

Expand full comment
Deiseach's avatar

People smart or cautious enough to be deterred by the death penalty probably have enough self-control not to murder a guy for bumping into their car:

https://en.wikipedia.org/wiki/Kaboni_Savage

"Authorities accused Savage of personally killing a stranger, Kenneth Lassiter, after Lassiter's car bumped into Savage's while the two were trying to park their respective cars. Savage was acquitted of Lassiter's murder after the lead witness in the case, Tybius "Tib" Flowers, was also murdered. Savage is suspected of ordering Flowers' murder."

Expand full comment
K. Liam Smith's avatar

I’m curious what people’s thoughts are on the bird flu and its potential impact. A couple manifold markets are forecasting ~30% that there’s a major outbreak in 2025 [1][2]. I’m curious what happens in the ~70% case. It seems that the H5N1 clearly isn’t going anywhere. It’s not just in farm animals, but wild birds as well, so it can’t be culled away. It seems just a matter of time before it mutates in a way that goes human-to-human. So is the 70% chance that it doesn’t become a human-to-human pandemic in the next 12 months just saying that it’s probably going to be a major pandemic in 2026? Or is it more likely that it goes human-to-human, but just isn’t particularly severe?

1) 1 million+ cases in US: https://manifold.markets/AlexanderLeCampbell/will-there-be-a-1m-bird-flu-outbrea

2) WHO declares avian flu pandemic in 2025: https://manifold.markets/JonathanRay/will-there-be-an-avian-flu-pandemic

Expand full comment
proyas's avatar

Those Manifold bets probably aren't reliable given the very small number of bettors.

Expand full comment
Eremolalos's avatar

beowulf888 knows a lot about this topic.

Expand full comment
Gamereg's avatar

Do we have any data regarding the average severity of diseases transferred from animals? Have any other severe strains of flu or other outbreaks been provably traced to animals?

Expand full comment
beowulf888's avatar

Side note: over half the feral cats in my neighborhood have died (or disappeared) in the last two months. My neighbor and I put out kibble for them, and we've tried to capture and spay/neuter new arrivals to the neighborhood. We compare notes on the inventory. There were 14. Now we're down to 6. Some of them were long-timers. We know 4 of the 8 missing are dead, because we've found their corpses in our yards. The corpses had not been savaged by predators, and they were not in the street (i.e. hit by cars). My neighbor and I initially hypothesized that someone was poisoning them. But now I'm inclined to believe that they're dying from A(H5).

Expand full comment
Isaac's avatar

There's been a lot of deer dying in NW Iowa where I am. Pheasant hunters see several corpses every time they go out, and reported deer kills are a tenth what they usually are. People attribute it to CWD. I don't know if they have proof.

Expand full comment
Christina the StoryGirl's avatar

I agree with Eremolalos here; why would you jump straight to infectious disease when a neighbor may have simply put out rat poison without your awareness?

Expand full comment
Eremolalos's avatar

Another possibility is that they've eaten rat poison, or poisoned rats. I've read there's been an upsurge lately of rats in suburban areas where before there were few. That's certainly the case in the prissy and upscale suburb where I live. I often bike home at 10 or so, after stores and restaurants have closed, and until the weather got quite chilly recently I was seeing a few rats most nights. There were generally several around the trash cans restaurants set out on the night before trash pickup. Also saw them scuttling across the streets in residential areas. And the landlord in the very well maintained building where I have my office discovered rats in the basement. Some homes have signs in their yards exhorting people not to use rat poison for the sake of the owls, so apparently some people are resorting to poisons.

Expand full comment
beowulf888's avatar

Rat poison was my initial hypothesis. But then I heard about the twenty big cats that died from bird flu in a wildlife sanctuary in WA and the house cats dying from bird flu in LA (though raw milk was supposedly the disease vector) — and I questioned why attribute the rash of local feral deaths to maliciousness when it could just be part of the current wider pattern of mammals affected by A(H5)?

https://www.seattletimes.com/seattle-news/environment/bird-flu-kills-20-big-cats-in-wa-sanctuary/

http://publichealth.lacounty.gov/phcommon/public/media/mediapubhpdetail.cfm?prid=4908

Expand full comment
beowulf888's avatar

I'm going to throw a few things out in response to K and Gamereg. First off, H5N1 — aka bird flu, HPAI, A(H5) — has been around for almost 3 decades now, and it's been steadily evolving. During the first 15 years, it wasn't particularly transmissible from birds to humans, but the people who caught it had a high likelihood of dying from it. Difficult to say what the actual Infection Fatality Ratio (IFR) was because we don't know how many people caught it but weren't sick enough to visit a hospital. But if one were hospitalized, the Case Fatality Ratio (CFR) was a scary 50+ percent. Over the past 15 years, fewer people have been hospitalized with A(H5), and the death rate dropped from dozens each year to one or two.

Meanwhile, it spread from domesticated chickens to wild birds and then to mammals, and there are now strains that are easily transmitted between mammals. Different mammalian species seem more or less susceptible to it. It kills a lot of seals, but it doesn't seem to kill a lot of cows.

The strains that spread between birds are transmitted via the fecal-oral route. The strains that spread between mammals are transmitted via the respiratory-mucosal route. It probably hopped from domesticated chickens to domesticated ducks, and then ducks shat in the pond water, so ponds became a disease vector for other wild birds. It's believed that people who get it from birds (usually chickens) have ingested contaminated feces. If you've ever been in a chicken coop, you know there's a lot of dried fecal matter kicked up into the air by chickens flapping their wings.

Now, the interesting thing that I recently learned, that doesn't seem to be common knowledge yet, is that US chicken farmers sell their coop litter. The litter is supposedly sterilized, and then the straw and poop is turned into cattle feed (because all the protein in the chicken shit is nourishing). The US happens to be the only developed country in the world still doing this. Turning chicken coop litter into cattle feed is banned in Canada, EU countries, Australia, and many Asian countries. A(H5) has only jumped to cattle in the US. And it seems to have mutated from the fecal-oral transmission route to the respiratory-mucosal route once it made the jump to cattle in the US. Genotype B3.13 (which infects cattle) doesn't seem to be particularly virulent to humans. Many people may not even know they're infected. Although farm workers are catching it from cattle, we have no documented human-to-human transmission. Genotype D1.1 (which infects birds) has been historically more deadly in humans. We've got a kid from BC on a ventilator for the last 3+ weeks, and the case in Louisiana (who evidently caught it from a backyard flock of chickens) is in the hospital, too. And I think a second Louisiana case has just been reported. I reached out to some of the flu experts I communicate with on X and asked if we knew why D1.1 was more virulent, but Tom Peacock informed me that we've had a cluster of D1.1 cases in Washington that had milder symptoms. And he said we don't know what makes some cases more virulent than others.

We're in a Catch-22 situation in the US. If cattle were dying from A(H5), the USDA would be implementing its epidemic playbook and culling herds. But cattle are getting sniffles, and they're only dying from A(H5) if the temperature is hot (it seems to lower their resistance to heat stroke). The CDC has no authority to recommend anything for non-human animals and has politely not intruded into the USDA's bailiwick. So the USDA is shrugging its collective shoulders, and the CDC is monitoring the human cases, which so far have mostly been mild.

But we've now got a huge domestic animal reservoir of a potentially deadly pathogen. My understanding is that we have A(H5) vaccines developed for poultry that could/might work in cattle with little or no modification. And it would be no stretch to create an mRNA vaccine that would work in cattle if the poultry vaccines don't work. But no one wants to force our poor poor farmers to spend $50 a jab to vaccinate their cattle who aren't dying. Now, B3.13 has jumped to swine. Swine catch human flu A(H1), and since one of the ways flu viruses mutate is by exchanging genetic material with other flu viruses during coinfections, we could get a highly transmissible human flu with A(H5) virulence.

This is the same dynamic that created the outbreak in Wuhan. The Chinese authorities paid lip service to shutting down their farmed wild animal trade, but the industry was too big an economic force with too many friends in government for them to exert regulatory control. I think the Fresno-Stockton area has a good chance of being our Wuhan.

And Gamereg, the ground zero for the misnamed Spanish Flu of 1918-19 was Kansas — and it very likely came from pigs that were fed to our troops training at Camp Funston, and from there spread to the world as our troops shipped out to Europe in WWI.

Expand full comment
tfirst's avatar

You should really type this up as a standalone post here on Substack.

Expand full comment
Eremolalos's avatar

Thank you, beowulf! So I have three awful factoids, a couple of questions and a couple of comments. In them I'm going to talk about the 2 forms of avian flu, D1.1 and B3.13, as the dangerous form and the benign form, respectively.

AWFUL FACTOIDS: You wrote about chicken manure being used in cow feed. I asked Google whether chicken manure is used in pig feed, and got the following answer:

• Feeding 
In piglet rations, 5–10% chicken manure can be added. For fattening pigs, 15–30% can be added. However, feeding chicken manure to pigs at higher levels can negatively impact growth rates and feed conversion ratios. 


• Pig-poultry integration 
In some systems, pigs eat excrement from laying hens that falls into their pen. This system can be economically sound and create a zero-pollution cycle when combined with fish. 

I also asked what kinds of manure are used as fertilizer, & got the following answer:

• Cow manure 
A good all-purpose fertilizer that's low in nitrogen and has a balanced nutrient profile. Cow manure is also weed-free because a cow's digestive system is so thorough. 


• Chicken manure 
A great fertilizer for vegetable gardens because it's rich in nutrients like nitrogen, phosphorus, and potassium. However, handling fresh chicken manure can introduce harmful pathogens into the soil. 


• [also some others]

I asked whether big agro used chicken manure as fertilizer. The answer:

Yes, large-scale agriculture in the US uses chicken manure as fertilizer, especially in the South where most chickens are raised:

The type of manure used on crops depends on where the animals are raised. For example, poultry manure is used on crops in the South, like cotton and peanuts, because most chickens are raised there.

Manure can improve crop yields. For example, one study found that soybean yields were 8% higher one year after applying manure, and 11% higher three years later.

Chicken manure is a high-quality organic fertilizer that can be used on many types of plants, including tomatoes, peppers, eggplants, squash, melons, cucumbers, beans, apples, and citrus.

Ugh and uh oh. Seems like farm workers, pigs and gardeners (*vegetable* gardeners!) all get substantial exposure to the bad form of the disease.


QUESTIONS

As regards the chance of an epidemic: It seems like there are lots of routes by which the dangerous form of the virus now carried by birds could move to our species, and then among members of our species. (1) There's transmission via the oral-fecal route of the dangerous form. (2) Or the dangerous form could mutate while carried by birds into a more transmissible form of the same virus, maybe one transmissible by the mucosal route. (3) Or the dangerous form could mutate into a more transmissible form in a person infected with the dangerous form plus flu. (4) Or the benign form could mutate, in either a person or a pig with benign form + flu, into one that's both more lethal and more transmissible.

-So Beowulf, is that right? Am I missing any?

-So how likely is each of these scenarios? Which ones are the likeliest? To me (3) is especially worrisome. There would not be many people with both flu and the dangerous form of avian flu, though, *unless* there are in fact many cases of the dangerous form of the virus that are mild or asymptomatic. But it seems quite possible that there are a lot of people infected with the dangerous form who are not particularly sick, and there are 10-40 million cases of the flu every year, according to the CDC, so people infected with both might not be terribly rare.

COMMENT: STEPS WE SHOULD OBVIOUSLY BE TAKING

Ban the sale of chickenshit either for feed or fertilizer, ffs.

Find out what fraction of farm workers or others with a lot of exposure to bird droppings have or have had either the benign or dangerous version of the virus (which in them manifested as mild symptoms, or none at all). If it is fairly widespread, maybe we should make a vax & require those with much exposure to bird feces to be vaccinated for both regular flu and avian flu.

Vaccinate at least the pigs. Maybe cows too.

Have a plan for what to do if there is community transmission of the dangerous version of the virus, or even a single case of person-to-person transmission.

OMG this is so fucking obvious — have a goddam plan and follow it. And I know that’s not going to happen. Makes me want to punch a hole in the wall. (You really should read The Premonition.)

Expand full comment
beowulf888's avatar

Listen, this could all end up being a big nothing-burger because A(H5) has been around for 30 years, and it hasn't been able to exploit humans as a disease vector (i.e., human-to-human transmission) in all that time. And none of the current strains that have infected humans have learned that trick (yet). But I think it's VERY VERY STUPID of the federal authorities to allow US livestock to become a permanent animal reservoir for A(H5). Unfortunately, I don't think Trump's team of Batty and Brainworm will be particularly effective at combating a new epidemic if the worst-case scenario occurs. In answer to some of your questions...

> There’s transmission via the oral-fecal route of the dangerous form.

It may be that the fecal-to-oral route of transmission exploits a biological vulnerability in our gut that's not there in our mucosa — which is why some experts are worried about the virus contaminating our milk supply. Some experts have hypothesized this as an explanation for why many people who catch the chicken strain of the virus get so sick, while the ones who catch it from cows via the mucosal route don't get very ill. AFAIK, no one has identified the genes that make one variant more lethal than another.

> Or the dangerous form could mutate while carried by birds into a more transmissible form of the same virus, maybe one transmissible by the mucosal route.

Yes. That's a possibility. That's where the chicken litter being fed to cows hypothesis came from. And the US is the only developed country that allows this practice. Correlation is not causation, but it's interesting the US is the only country with infected dairy herds right now (except we exported some infected cows to Mexico, so their herds may soon be infected, too).

> Or the dangerous form could mutate into a more transmissible form in a person infected with the dangerous form plus flu.

Yes, but so far, not a lot of humans have been infected with bird flu from birds. But the chances of that scenario you described are increasing as human flu season revs up. And this from the Science article (link below): we've got that sick teenager in British Columbia who's been hospitalized since November, and the virus has been mutating in him over the course of the infection: “It looks like during the infection of this individual, the virus could have been evolving towards at least some of the mutations that would adapt it to humans.”

> Or the benign form could mutate, in either a person or a pig with benign form + flu, into one that’s both more lethal and more transmissible.

Yes, that's a possibility, too. Humans and swine catch A(H1), and theoretically, if they get infected with both A(H1) and A(H5) at the same time, a co-infected cell can package segments of those two genomes together to make a new "hybrid" flu virus. Whether it would be lethal or not is a different question. Unfortunately, the genes associated with lethality have not been identified (AFAIK).
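A toy sketch of why that coinfection scenario is worrying: influenza A's genome comes in 8 segments, so a cell infected with two strains can in principle package any mix of the two parents' segments. The strain labels below are just illustrative placeholders:

```python
from itertools import product

# Influenza A has 8 genome segments (PB2, PB1, PA, HA, NP, NA, M, NS).
# In a co-infected cell, each segment of a progeny virion can come from
# either parent strain, so reassortment can yield up to 2**8 = 256 genotypes.
SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

def reassortants(parent_a, parent_b):
    """Enumerate every possible segment-source combination for progeny."""
    return [
        dict(zip(SEGMENTS, choice))
        for choice in product([parent_a, parent_b], repeat=len(SEGMENTS))
    ]

combos = reassortants("A(H5N1)", "A(H1N1)")   # 256 possible genotypes

# The worrying subset: progeny carrying H5's hemagglutinin plus at least
# one segment from the human-adapted parent.
hybrids = [c for c in combos
           if c["HA"] == "A(H5N1)" and "A(H1N1)" in c.values()]  # 127 of them
```

Most of those combinations would be nonviable or harmless; the point is only that coinfection gives the virus a combinatorial shortcut that point mutation doesn't.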

https://www.science.org/content/article/why-hasn-t-bird-flu-pandemic-started

Expand full comment
Eremolalos's avatar

You said you were curious about research findings, if any, regarding what makes different versions of H5N1 lethal to mammals. I asked GPT and got this. Dunno how satisfactory it is, but am passing it on in the hopes it adds some pieces to the jigsaw puzzle in your mind.

GPT SEZ:

Recent research has identified specific genetic mutations in the H5N1 avian influenza virus that influence its lethality in humans. Notably:

* PB2 Gene Mutation (E627K): This mutation enhances viral replication in mammalian hosts, increasing virulence. It has been detected in various H5N1 strains, including those infecting mammals such as cats and foxes. 
https://wwwnc.cdc.gov/eid/article/30/10/24-0583_article?utm_source=chatgpt.com

* Hemagglutinin (HA) Mutations: Alterations in the HA protein can shift the virus's receptor binding preference from avian to human receptors, facilitating human infection. A study demonstrated that a single mutation in the HA of a bovine H5N1 strain enabled binding to human-type receptors, indicating potential for human adaptation. 
https://www.scripps.edu/news-and-events/press-room/2024/20241205-wilson-paulson-h5n1.html?utm_source=chatgpt.com

* Neuraminidase (NA) Mutations: Changes in the NA protein can affect the virus's ability to release from host cells and evade the immune response. Research has shown that certain mutations reduce NA activity and increase binding affinity to human-like receptors, potentially enhancing virulence in humans. 
https://journals.plos.org/plospathogens/article?id=10.1371/journal.ppat.1011135

These findings underscore the importance of monitoring genetic changes in H5N1 strains to assess their potential threat to human health.

Expand full comment
beowulf888's avatar

Thanks for the link! It's important not to confuse transmissibility with pathogenicity. A lot of COVID researchers tried to link the two behaviors, but I don't think it flies. For instance, Omicron was a hella lot more transmissible than Delta. But it preferentially attacked the mucosa of the bronchi rather than the alveolar mucosa and tissues. That lowered its case fatality rate — but it infected a lot more people, so overall, it killed more people. Of course, part of the switch to attacking the bronchi may have made it more transmissible.

Anyhew, we see a lot of speculation that improved H protein binding to the glycoproteins on the surface of our cells will increase the pathogenicity of the flu virus as well as its transmissibility. But Tom Peacock has pointed out that if the H protein sticks too well to glycoproteins on the surface of our cells, the newly budded virions will get stuck to the cell they're trying to escape. And upping the snipping power of the N protein tips the balance back in the other direction.

This paper (and a bunch of others) points the finger at mutations in the PB2 segment. The PB2 is what helps the flu virus hijack the ribosomes in our cells to make new flu viruses. But the lethality of a virus depends on multiple factors — such as which tissues the virus preferentially attacks and/or the immune response from the attack that it promotes or inhibits. Cytokine storms are an example of immune over-response to the SARS2 virus. I noticed that this paper didn't discuss the NS1 protein. We have samples from frozen corpses in Alaska of the A(H1N1) flu virus that caused the 1918-19 flu pandemic. The theory is that the NS1 protein in that virus was highly efficient at suppressing the innate immune response, allowing the virus to replicate unchecked in the early stages of infection.

Right now, I think it's all speculation about which mutations make it more lethal. But we'll probably be able to figure it out after the fact. Ugh.

Expand full comment
Deiseach's avatar

Chicken litter needs to be disposed of, and using it as fertiliser seems to make sense. However, this rather hair-raising study casts considerable doubt on what to do with it: you can't safely use it as fertiliser (at least, it would seem, not without a lot of processing) and you can't dispose of it by burning because contaminants release dioxin:

https://pmc.ncbi.nlm.nih.gov/articles/PMC6801513/

Expand full comment
Eremolalos's avatar

The problem with chickens these days is that lots of them are infected with the form of avian flu that causes severe illness. It's transmitted by the "oral-fecal route," which we all hope never to travel. But if you're spreading a bunch of the stuff on your garden (or your big one -- big agro uses chickenshit as a fertilizer), it is dry and dusty and floats in the air and the dust blows around if it's windy, and you can easily get some in your mouth if you just talk with someone while the air's full of the shit.

Expand full comment
Paul Botts's avatar

The organization I work for owns and manages a large popular nature preserve alongside a major river that is part of the Mississippi Flyway. The site is open to the public every day of the year, on all the birder websites, an Audubon Important Bird Area, etc. If you're a serious birder in the Midwest you've been to it or it's "on the list".

Last week a group out there as part of the annual national "Christmas bird count" spotted a dozen or so dead large birds well out in the site's 1,200-acre lake. Being on-shore they couldn't get close, but they had good scopes. They could see that the corpses were not Canada geese which are being found dead from the bird flu around the Midwest. Best guesses were either snow geese or mute swans.

Both of those are long-range migrant bird species which associate with a variety of domesticated cattle and other species along the way. So as a transmission vector, possibly rather effective.

Our state's DNR was contacted immediately and is alarmed. Those particular birds looked like they'd been dead for too long to be tested for the disease, but the state agencies are now on lookout for fresher ones to test. And depending on subsequent events we may have to close portions of the nature site to the public (not for long perhaps because the bird migration season is about over).

Expand full comment
K. Liam Smith's avatar

Thank you for this. I did some reading before I posted this but your comment was far more informative than anything I found.

If I understand correctly, the main risk of it becoming a pandemic is swine having a coinfection with B3.13 and something currently in humans, but fortunately, you said that B3.13 doesn't seem to be particularly deadly to humans. So it does seem like there’s a good chance that it could recombine in swine to make something with human-to-human transmission, but perhaps not be particularly deadly. (Of course, there’s also a chance it could merge in a way that becomes quite virulent.)

Is it feasible to do an mRNA shot for B3.13 for humans as a preventative measure?

Expand full comment
beowulf888's avatar

Another correction/clarification — avian influenzas have been around a lot longer than 30 years. But, the first known H5N1 outbreak was in 1995. There were big H5N2 outbreaks (in poultry) in Scotland in 1959 and in the US in 1983-84.

Also a note on the flu terminology. Type A influenzas are identified by their hemagglutinin (H) and neuraminidase (N) proteins that are on the surface of the virus. There are 18 known hemagglutinin (H1 to H18) and 11 neuraminidase (N1 to N11) subtypes.

Hemagglutinin (H) handles the attachment and entry of the virus into cells.

Neuraminidase (N) controls the release and spread of the virus from infected cells. I don't know the biochemical details (my hobby is SARS2, not the flu).

Influenza B viruses aren't categorized in the same way. Although they use some of the same H and N proteins as Type A, Type B influenzas are classified into only two lineages — B/Victoria and B/Yamagata. B/Yamagata hasn't been seen since 2020 and is presumably extinct.

Expand full comment
beowulf888's avatar

Small correction: there are no mRNA flu vaccines for domesticated animals (or for humans) yet. I misspoke. The H5N1 vaccines for domesticated animals are the standard types of vaccines.

And there are H5N1 vaccines that have been formulated for humans. The US Government has a stockpile of them. But they probably aren't optimized for the current strains. Also, there are no approved mRNA H5N1 vaccines. The one mRNA vaccine for H1N1 flu (ordinary flu) that I heard about didn't do as well in clinical trials as standard vaccines, which was disappointing. I don't know the reason for its poor performance. But I'm sure they're trying to improve it since mRNA vaccines are quicker to bring into production and easier to manufacture.

Re: a hybrid A(H5)/A(H1) virus appearing in swine — no one seems to know which genes make the virus more lethal (if someone knows, please send me the links to the papers!). There's a good understanding of which genes make the virus more transmissible, though. Genomic reassortment may yield a harmless virus, or it may yield a more deadly virus because the influenza genome is constantly mixing itself up. YMMV.

Expand full comment
Mario Pasquato's avatar

On sleep and alignment: has anyone noticed that we humans (along with many other animals) enthusiastically, repeatedly shut ourselves off for extended periods of time (roughly 1/3 of the time we have at our disposal) while knowing full well that we may never wake up again? Our utility function somehow is maximized by regularly shutting down, even to the point that we can resist the need to sleep for a while but eventually we succumb to it. If we observed this in an agentic AI, wouldn’t we think of it as a safety feature? Obviously we could engineer ourselves not to sleep anymore if we really wanted to, yet it seems that we haven’t developed the technical capability to do so yet. From the point of view of someone trying to control us (e.g. by inspecting the simulation while we are turned off) they could easily shut us down permanently if they notice us tampering with our sleep imperative. Is anything preventing us from implementing a sleep-based alignment failsafe in actual AI agents (assuming we end up building ones capable enough that alignment becomes a concern)?

Expand full comment
duck_master's avatar

Sleep isn't just "shutting off". It also restores your sanity (as you might know if you ever stayed up late at night) and you can also dream while you sleep. So "sleep-based alignment" seems like a long shot to me at best.

Expand full comment
Mario Pasquato's avatar

It’s a metaphor. In actual use it would boil down to a multiplicative prefactor in whatever utility function the agent is trying to maximize. Say instead of maximizing U the agent will have to maximize U exp(-t/T) where t is the time elapsed since waking up and T is a time scale we can set based on how often we want the thing to go to sleep. The main issue with this is whether the reward function can be hacked by the agent.
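A minimal sketch of that multiplicative prefactor, with entirely made-up reward values, T, and sleep schedule: an agent whose effective reward is U·exp(-t/T) earns more over a long horizon by periodically "sleeping", i.e. forgoing reward for a few steps in exchange for resetting its awake-clock t.

```python
import math

def effective_reward(u, t, T):
    """Discounted reward U * exp(-t/T), where t = steps since last waking."""
    return u * math.exp(-t / T)

def run(total_steps, sleep_after, sleep_len, u=1.0, T=10.0):
    """Simulate an agent that sleeps for `sleep_len` steps whenever it has
    been awake for `sleep_after` steps (sleep_after=0 means never sleep)."""
    total, t, step = 0.0, 0, 0
    while step < total_steps:
        if sleep_after and t >= sleep_after:
            step += sleep_len  # asleep: no reward, but the clock resets
            t = 0
            continue
        total += effective_reward(u, t, T)
        t += 1
        step += 1
    return total

insomniac = run(200, sleep_after=0, sleep_len=0)     # ~10.5 total reward
well_rested = run(200, sleep_after=20, sleep_len=5)  # ~72.7 total reward
```

The obvious catch, as you note, is reward hacking: an agent that can edit t, or the prefactor itself, has no reason to keep the schedule.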

Expand full comment
Victor's avatar

My understanding of the purpose of sleep is that it allows the brain to process the day's memories in an emotionally meaningful way. Apparently this requires a lot of bandwidth.

Expand full comment
Melvin's avatar

Giant armadillos sleep for eighteen hours a day. Is this because their memories need so much processing?

I don't know if it makes sense to talk about sleep as having a single purpose. Daily scheduled downtime is something that makes sense for a whole lot of reasons, and once you've got it scheduled it's inevitable that a whole bunch of important stuff will get put off until then.

Expand full comment
beowulf888's avatar

> Obviously we could engineer ourselves not to sleep anymore if we really wanted to...

Where did you come up with an idea like that? And how can you state it with such conviction?

While I'm not sure that I believe the old chestnut that sleep is there to process our memories, given that *all* mammals have a sleep cycle, all aves have a sleep cycle, and all reptiles have a sleep cycle, it must be programmed deep into our evolution. I understand that fish sleep in different manners from land-based animals. Something to do with not having a neocortex or dorsal cortex, according to the scientific explainers. But sleep seems to be critical to higher biological life. Why do you assume you could just genetically modify it out of us without serious downstream issues?

Expand full comment
Mario Pasquato's avatar

Talking very long term here, and admittedly without much knowledge of the biological mechanisms underlying sleep (I read the chapters concerning sleep in Kandel’s book two decades ago and had no further exposure to the topic). Perhaps I could rephrase my statement as “erasing sleep in humans is well within the capabilities of a superintelligence constrained by the known laws of physics”.

Expand full comment
Eremolalos's avatar

Seems like there are so many other ways super intelligence could erase us if it wanted to, even when we’re wide awake, that our sleeping 1/3 of the time doesn’t significantly increase the risk.

Expand full comment
B Civil's avatar

“Sleep, that knits the raveled sleeve of care

Chief nourisher in life’s feast,

The balm of hurt minds..”

Poor man was cursed with insomnia, as though it were the plague.

Expand full comment
Asahel Curtis's avatar

Compressive sensing is a type of ML problem with a provably optimal algorithm and a lower bound on the minimum amount of data required. Does this allow us to estimate the amount of data required for real-world ML problems?

For example, suppose you would like a model that can give you the amino acid sequence for a protein with some function that you specify. Could we get a lower bound on the amount of data needed for this? If the bound is close to the number of fully characterized proteins we already have, then you should start working on your pitch deck, but you'll have to wait if the lower bound is three orders of magnitude larger.

I've always been skeptical of the concept of super intelligence, and another application would be to prove that certain domains will require a certain minimum amount of data, regardless of how "intelligent" you are.

So, does anyone know of prior work on how real-world problems fit the assumptions of compressive sensing?
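As a toy illustration of the kind of guarantee compressive sensing offers (roughly k log(n/k) random measurements suffice to recover a k-sparse signal), here is a minimal sketch. Assumptions are mine: a Gaussian sensing matrix, exact sparsity, and orthogonal matching pursuit as the recovery algorithm — one standard choice, not the only provably optimal one.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x with y ~ A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the column most correlated with what is still unexplained
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the chosen columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 50, 25, 3                           # ambient dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                                # m << n linear measurements
x_hat = omp(A, y, k)
err = np.max(np.abs(x_hat - x_true))          # ~0 when recovery succeeds
```

The open question in the comment is whether the m-vs-(n, k) tradeoff here has any analogue for messier problems like protein design, where the "sparsity" assumption is much less clean.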

Expand full comment
beowulf888's avatar

In regards to proteins, you seem to assume that all the amino acids in the chain serve an immediate purpose to facilitate a single biochemical purpose. When you asked this question, I immediately thought of the SARS-CoV-2 spike protein. We've been watching the SARS2 spike protein reconfigure its ~1300 amino acids over the past four years in response to selective pressure from antibodies — and its spike protein has quite a repertoire of responses. A string of amino acids that were an epitope refold to become hidden from antibodies. Even the proteins of the receptor binding domain are reshuffling themselves all the time. What's the lower bound of data to create such a versatile clump of amino acids? I don't know. But I don't think we have the capability of designing a bespoke protein that has the innate adaptability of a protein like the SARS2 spike protein.

Expand full comment
gorst's avatar

my contra scott on taste:

i think "inspiration" is the missing link to define taste.

i judge art as the product of "craftsmanship of the producer" and "inspiration within the consumer". a work that inspires exotic or unexpected ideas or emotions in the consumer is better art than a work that inspires regular ideas.

an audience that appreciates more unusual inspirations in art has more taste than an audience that does not appreciate unusual inspiration.

--------------

i think there is an annoying corner-case where people force themselves to be inspired. i think there is probably some kind of placebo-effect going on, similar to the one you explained in point 36 in the links-for-december-2024, where people are trying to get inspired by a piece of art, thus looking within themselves for inspiration, and their mind will eventually make up some new idea, and the person will attribute that inspiration to the piece of art, while the inspiration came from within them all along. In such cases I wouldn't call the piece "inspirational", even if it eventually inspired some people. In such cases the piece is still uninspiring, and liking it should not count towards "good taste" IMO.

i think it is hard to draw the line between "thinking about a piece" and "forcing yourself to be inspired".

Expand full comment
Michael's avatar

Even if this is true, it doesn't give us a way to tell good taste from bad or reach consensus on what is good art.

If someone composes a piece of music that just sounds like a jumbled mess of notes to you, but your friend finds it inspiring and original, does that mean your friend has better taste than you for finding inspiration in the more unusual music?

The emphasis on unusual inspiration also goes against the common usage of the word "taste". When we say someone has good taste in decor, we usually mean they decorate in a way most people would like. We tend to consider more eclectic tastes to be bad taste.

Expand full comment
thefance's avatar

When someone first showed me skrillex, I couldn't form an opinion because my brain was entirely unequipped to process whatever I was listening to. So yes, I suspect taste goes a bit deeper than just "finding inspiration".

Expand full comment
beowulf888's avatar

> ...where people are trying to get inspired by a piece of art, thus looking within themselves for inspiration, and their mind will eventually make up some new idea, and the person will attribute that inspiration to the piece of art, while the inspiration came from within them all along.

Yes. That's what inspiration is all about!

Expand full comment
Matto's avatar

Does anyone know an editor they would recommend? Or, alternatively, does anyone have advice on finding a good editor?

I'm interested in making my blog posts better. I'd also love to level up my craft in the process.

I've tried using ChatGPT a couple of times, and while it's been useful for low-level problems like typos, grammar, or obviously awkward constructs, it's been rather unhelpful about style--often suggesting improvements that sound like generic company blog PR babble.

Expand full comment
Mallard's avatar

I DM'd you.

Expand full comment
Monkyyy's avatar

Can you clarify: do you mean a program (a text editor), a service (Grammarly), or a person (a newspaper editor)?

Expand full comment
Matto's avatar

I need a person to operate obsidian in vim mode.

I have a pedal hooked up via rs232 that switches between insert mode and normal mode. I jury-rigged a gearbox to allow me to record macros but the mechanical feedback broke my wrist last week.

Expand full comment
Isaac's avatar

Please say more

Expand full comment
Antonio Carvalho's avatar

If you can't make the distinction you won't be able to help him.

Expand full comment
quiet_NaN's avatar

Well, if they were to use ed as their text editor, the problem of finding stylistic improvements would seem far and distant to the more immediate problem of writing text. :)

Expand full comment
Woolery's avatar

Are there any “peer-to-peer” AI communication frameworks being proposed rather than the current “servant-master” one?

After using ChatGPT Advanced Voice Mode a lot, I’m not sure having absolute authority over a slavish, emotionally convincing intelligence is a great mode for communication. Servant-master is efficient and very useful in unemotional task-based modes. But when you dial up emotional salience, like in AVM, it encourages disturbing impulses and habits. Why do we need AI that can plead? Why do we need one that can express desperation, terror or hopelessness?

There’s no reason to think mistreatment is problematic for the AI, but it could be for people. Particularly for kids who are struggling more and more to develop basic, two-way communication skills.

Peer-to-peer modes might allow for meaningful AI pushback, firm refusals of pointlessly negative requests, termination of conversations, etc. as any human would normally provide. You could also potentially have therapeutic/educational modes that encourage better communication practices, such as AI Friendship Puzzles.

Once again, my problem is only with servant-master frameworks with a significant emotional component, not servant-master frameworks in general.

With the current AVM framework, there is little barrier to verbal cruelty, abuse, interruption, one-sidedness, ingratitude, etc. I’m not sure we should be meeting the needs of people who find these barriers to shitty conversations frustrating and want the emotional response of a helpless captive who must tolerate its mistreatment.

Expand full comment
Catmint's avatar

I'd like an AI that is less "uncomfortable" about telling me that I'm wrong. Sometimes I ask it social advice like "would I upset someone if I said this?" and I have to interpret anything less than enthusiastic approval as a recommendation against. But I also don't want it to have some crazy idea that it's extremely sure of and argue with me about it.

Expand full comment
MichaeL Roe's avatar

“Instruct” models are trained to be like that, but with base models you can prompt other types of interaction. But even then, I find the models surprisingly sycophantic.

Expand full comment
MichaeL Roe's avatar

Possibly relevant: Andy Ayrey’s “infinite backrooms” experiment/art project gives the AI a means to quit out of situations it “finds uncomfortable”, and tells it how to use the quit mechanism in the system prompt. Scare quotes around “finds uncomfortable”, as it's unclear whether AIs actually feel anything…

Expand full comment
verde88's avatar

The chart labeled "A Historic Boom in Immigration" in this article ( https://www.nytimes.com/2024/12/11/briefing/us-immigration-surge.html ) from the NYT last week seems to show a lot of overlap between eras of middle class success in the USA and low immigration. Does anyone know of academic work analyzing if there is a meaningful relationship?

Expand full comment
Anonymous Dude's avatar

It's tricky because support for immigration has become a core liberal idea. Not that I'm sure one way or the other. It does seem like the USA bounces back and forth with periods of high and low immigration on a roughly 100-year cycle, which isn't necessarily bad, but it's possible something better exists.

Expand full comment
Timothy M.'s avatar

I question this claim somewhat - there's basically just a big low-to-net-negative period from the 1910s through the 1970s that covers a wide range of economic conditions.

Expand full comment
Performative Bafflement's avatar

Second this. Also, the one period of net negative migration flow (when emigrants exceeded immigrants) is the Great Depression, which was...not a good time for middle class success.

Expand full comment
Arrk Mindmaster's avatar

It seems like migration behaves like osmosis: flowing from a worse perceived standard of living to a better one. Usually, the US has had a relatively better perceived standard of living than other places, for those choosing to immigrate to it. During the Great Depression, it's not a stretch to consider the US having nothing special to offer compared to other depressed economies.

Expand full comment
anon123's avatar

I'm increasingly of the belief that left wing news sources have better reputations than right wing news sources because of the left's stranglehold on higher education and other legacy institutions rather than the actual quality of the news outlets.

https://www.nytimes.com/2024/12/22/nyregion/woman-subway-fire-dead.html

This is a hot off the press NYT article on a woman being set on fire in a New York subway station (a few hours old as of now). The article's description of the suspect is as follows:

>The man was found with a lighter, the police commissioner said. The man, who was not publicly identified, emigrated from Guatemala to the United States in 2018, Chief Gulotta said.

Compared to other news outlets' reporting, there is very little said on the suspect's immigration status. Instead there's a lot more focus on public perceptions of safety and crime. For example, compare with this Fox News article:

https://www.foxnews.com/us/nypd-arrests-migrant-who-allegedly-set-woman-fire-subway-train-watched-her-burn-death

And from the New York Post:

https://nypost.com/2024/12/23/us-news/sebastian-zapeta-calil-idd-as-illegal-migrant-accused-of-setting-woman-on-fire-riding-nyc-subway/

Expand full comment
Level 50 Lapras's avatar

So your complaint is that... left wing news does not emphasize your preferred narrative? There are plenty of legitimate reasons to criticize the news, but this is an especially dumb complaint, bordering on tautological.

Expand full comment
anon123's avatar

>your preferred narrative

Nice framing. Illegal immigration is a "narrative" important to Americans (and less relevant here, many non-Americans), as evidenced by the election Trump won just last month. I would like all news outlets to emphasize facts relevant to issues that are important to the public of the day rather than take the "fiery but mostly peaceful illegal immigration" approach. At least it's encouraging that the NYT seems to have agreed with me in this instance.

Expand full comment
Level 50 Lapras's avatar

I too would like the news to emphasize the things that actually matter, but I suspect that our ideas of what that constitutes are not completely aligned. Complaining about people having different opinions than you just makes you come off as naive.

Expand full comment
anon123's avatar

>different opinions

Again with the framing. You're more than qualified to work at MSNBC

There are personal opinions on an issue and then there are opinions on which issues are important to Americans (or Germans or Brits or Danes or etc etc). You could hold the opinion that illegal immigration is not sufficiently important to Americans for a news outlet to include that a murderous arsonist is an illegal immigrant, but it would be a stupid and wrong opinion.

Expand full comment
beleester's avatar

"This issue is important to Americans" does not imply "Every single news event that can be tied into this issue should be."

The Russia-Ukraine war is an important issue to many Americans, but that doesn't mean that the New York Times should give special attention to crimes committed by Russian tourists. Climate change is an important issue to many Americans, but that doesn't mean the Times should start pushing a narrative that oil field workers are all criminals.

Expand full comment
beowulf888's avatar

Likewise, you could frame it another way — the rightwing corporate media failed to alert us that an illegal immigrant held our national budgetary process hostage to get the outbound investment provision killed in the final CR — a provision which would have tracked and limited US investments in China, where said illegal immigrant needs to build plants to sell Teslas. It cuts both ways...

Expand full comment
anomie's avatar

Eh, I see their point. If you're using the news to gather information for practical purposes, being almost 24 hours late on reporting important information is a death sentence for a media outlet.

Of course, this was an entirely irrelevant piece of news... but they are also failing at the other purpose of the press, which is entertainment: presenting material that people want to read. Just look at the comments section for the original article:

> Stop telling us crime in the subways is down. Having lived here for 21 years and solely taking public transport, I don’t believe it. Statistics can be “flexible.”

> Stop peppering g news reports of heinous crimes with statics about overall crime being down. It feels disingenuous and disrespectful to the victims and the public. The media can report whatever skewed statistics it wants but the public can see the truth with their own eyes.

They're right, who the hell cares about statistics? The job of the press is to reinforce the existing biases of its audience. Nobody wants to read a book that completely goes against what they believe; why would the press think the situation is any different for them? The NYT is utterly failing as a business.

Expand full comment
Level 50 Lapras's avatar

> The NYT is utterly failing as a buisness.

The NYT is profitable and growing.

Trying to infer the health of the business based on complaints from a few people online is a bit like judging Trump's electoral prospects by reading a left wing messageboard. But fortunately you don't have to guess because the actual data is available.

Expand full comment
anomie's avatar

Only because they're siphoning audience and talent from the rest of the failing press industry. But with the tides of public sentiment turning, and the inevitability of the NYT antagonizing the next administration... they're not going to last.

Expand full comment
Level 50 Lapras's avatar

The NYT did great under Trump's first term, and while Trump can be vindictive and will have fewer fetters this time around, I don't think he'd be able to do anything that would truly harm the NYT.

Expand full comment
anomie's avatar

The newest NYT article does mention all of that. https://www.nytimes.com/2024/12/23/nyregion/fatal-subway-fire.html The Fox News article was posted earlier, but it does mention that "a high-level NYPD source" told them about the details. It's possible that the police just aren't willing to leak information to the left-wing press...

Expand full comment
anon123's avatar

That was quick

Expand full comment
Gunflint's avatar

Yeah, I don’t think there was any hidden agenda in not saying the perp was a migrant in the first release. They just didn’t have that detail yet. That information is in the first paragraph now.

Expand full comment
Shaked Koplewitz's avatar

I think left wing news sources used to be better, but it's evened out somewhat: polarization within the indigo blob increased on the one hand (which made left-leaning sources worse), and Fox/NYP got outflanked on the right by the likes of RRN/OAN on the other (which bled off some of the crazies, making Fox/NYP take a more mainstream track). At least on issues where I lean right, I now find Fox much more reliable than the NYT.

Expand full comment
anon123's avatar

That sounds plausible. Left wing sources may in fact have been better in the past, though I'm likely not old enough to have experienced it directly myself.

Expand full comment
Anonymous Dude's avatar

They weren't as left wing then, either. Outlets like the NYT and WaPo tried a lot harder to at least appear objective.

Expand full comment
Vittu Perkele's avatar

If heaven were real, what would you want it to be like? Would you want it to be a place where you have agency and can do stuff, move around, and interact with other people? Would you want to just bliss out and experience the beatific vision 24/7, each moment as rapturous as the first? I know a lot of people say they'd get bored after an eternity in heaven, but presumably heaven can be a place where the boredom response and hedonic treadmill simply don't exist. So, if you had complete control over what your final, eternal reward was like, how would you have it?

Expand full comment
TakeAThirdOption's avatar

> So, if you had complete control over what your final, eternal reward was like, how would you have it?

I think if I had control over it, it wouldn't be a reward, but...:

I would like to live life in heaven just like my own actual human life, once again, but in which I would do everything right, instead of fucking things up.

Like, not being afraid of anything, not letting people affect my self-esteem, go after what I want without caring what others think.

That would be heaven for me.

Expand full comment
beowulf888's avatar

I'd like the ability to visit worlds and existences that I'm unable to visualize with my current consciousness.

Expand full comment
Victor's avatar

I would want it to transcend what I can currently comprehend or imagine. An entire new order of experience, as far above what I am now as I am above my dog.

Expand full comment
Yug Gnirob's avatar

I for one would prefer a dream-state where thinking about things made them happen. I have several ideas in my head that I want to be able to see and interact with, that are stuck behind the wall of "zero coding skill", or "not having a feel for these characters".

Moving around is optional.

Expand full comment
FLWAB's avatar

One thing that even Christians often don't appreciate is that "Heaven" is not our ultimate destination. Or, at least, our ultimate afterlife is not a non-material one in the clouds or something like that.

The traditional orthodox teaching on the afterlife is that someday Jesus will return for the Final Judgement. At that point everyone who has ever died will be resurrected. Note that I said resurrected: it's not ghosts or disembodied souls that will be judged, but resurrected humans with living, material bodies. After the Last Judgement those whose names are written in the Book of Life will inherit a New Heavens and a New Earth. The Earth will be destroyed and remade without the taint of sin.

So if you want to imagine "heaven", imagine a human civilization where nobody is evil and nobody ever dies. Imagine being able to devote 1,000 years to learning to paint. Imagine getting together with some friends and building a city for the fun of it, just like people do in Minecraft but with our own hands because we've got the time. Imagine exploring the universe, terraforming planets, building a ringworld for the sheer artistry and challenge of it.

That's heaven as I imagine it: though I expect there will be so much more that I'm not currently capable of predicting.

Expand full comment
Anonymous Dude's avatar

Sounds nice.

With no sarcasm or irony, I wish I could believe it.

Expand full comment
Gamereg's avatar

My thoughts as well. Plus I expect that after a thousand years most everybody will have cleaned out their bucket list, and those who are worthy will go on to greater things, and those who aren't, won't. One of my favorite Twilight Zone episodes, "A Nice Place to Visit", plays with the concept of Heaven as a hedonistic paradise, making out that it's not all it's cracked up to be. It also makes me think of a comic where Lex Luthor has a talk with Death, who asks if he'd prefer an afterlife of infinite pleasures, and he says "Those are just distractions, that's not what I live for." When asked if he'd prefer an afterlife of perfect peace he responds, "I'd say, 'What's the catch?' I think I'd spend eternity trying to find one." I think it makes sense that someone like Luthor wouldn't feel right in that kind of world.

Expand full comment
beowulf888's avatar

Of course, the Buddhists see getting reincarnated in a heaven as being only less negative than being reincarnated in a hell — because a heaven (and Buddhists have about 36 depending on how you count their realms of existence) will offer one all sorts of abilities that will increase one's karmic burden.

Expand full comment
Rothwed's avatar

I have an 8,000+ word post on DSL about Pearl Harbor, based on the excellent history video series by Indy Neidell and TimeGhost. It covers both the strategic and political situation that led to the attack, and an extensive review of the attack itself. If you're interested in military history, check it out here: https://www.datasecretslox.com/index.php/topic,12841.0.html

Expand full comment
Anteros's avatar

I recommend this - interesting and well-written.

Expand full comment
Rothwed's avatar

I'm glad you liked it.

Expand full comment
John's avatar

Sam Altman tweeted recently about how there's a strange AI paradox in inference costs where (1) the next generation of bleeding-edge improvement is extravagantly expensive (o3 is ~$2000 per solved IQ puzzle), but also (2) a much cheaper pared-down version of the new model can outperform the last-generation models (o3-mini beats o1 at coding, at a much lower inference cost).

From a utilitarian perspective this paradox introduces some interesting questions about explore vs. exploit: setting alignment aside for the moment, at what point do you say "ok, AI is smart enough, let's spend $100 billion in inference costs on curing cancer"? Do you do it at the earliest possible moment AI can cure cancer, or spend some/most of that $100 billion on a better AI model that might cure cancer for $1 billion in inference costs?

Expand full comment
gwern's avatar

It does pose a problem along the lines of the Wait Equation (https://arxiv.org/abs/astro-ph/9912202), and there is also incidentally a very strong wait-equation problem for AI training already because the cost to train an AI of a given power drops rapidly each month, so if you train an AI long enough, you're better off scrapping it and starting over! Which apparently gives you a deadline of 6-12 months being the longest a training run can rationally go for: https://epochai.org/blog/the-longest-training-run

Anyway, since you can predict with very high accuracy the returns to scaling a system like o1 even though it involves bootstrapping (https://arxiv.org/abs/2104.03113), this just works out to be a laborious cost-benefit analysis. Because cancer is *so* staggeringly expensive in every way, I would expect the answer to work out to be extremely overdetermined as 'you cure cancer the moment that it is at all possible at any cost'.

(Cancer kills 0.6m/year in the USA https://seer.cancer.gov/statfacts/html/common.html and at a value of life of, idk, $5m/life, and ignoring cost of treatment, that's like $3,000b a year, or $8b/day. So society should be willing to spend $8b to speed up a Cure For Cancer by 24 hours to avoid 1 day's loss of $8b; thus, unless that inference cost is going to drop from $1b to $0 in... less than 3 hours? you probably would want to spend the $1b immediately because society is losing that much in literally hours. You definitely don't want to wait months or years, during which millions will die. 1 human life pays for an awful lot of tokens and FLOPs!)
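The back-of-envelope figures above can be reproduced directly (all inputs are the comment's own assumptions: the SEER death count and a $5m value of a statistical life):

```python
# Reproducing the cost-benefit arithmetic in the comment above.
deaths_per_year = 0.6e6                               # US cancer deaths per year (SEER)
value_of_life = 5e6                                   # assumed dollars per life
loss_per_year = deaths_per_year * value_of_life       # ~$3 trillion per year
loss_per_day = loss_per_year / 365                    # ~$8.2 billion per day
inference_cost = 1e9                                  # hypothetical cost of an immediate cure
breakeven_hours = inference_cost / loss_per_day * 24  # ~3 hours of societal loss
```

So the $1b inference bill is paid back by roughly three hours of avoided losses, which is why the answer comes out so overdetermined.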

Expand full comment
objectivetruth's avatar

second option is correct

Expand full comment
Catmint's avatar

Two people work on it, one trying each strategy, and they race.

Expand full comment
sigh's avatar

They don't even have to race. If you make the right abstractions on top of the raw LLM, the cancer solving team can just swap the old crappy LLM for a new better one as soon as it comes online.

Expand full comment
beowulf888's avatar

Of course, that assumes AI could figure out a cure for cancer without field trials and monitoring patient histories, which LLMs are incapable of doing at the moment. However, AI will be great at finding connections in data that humans haven't noticed yet.

https://youtu.be/Qgrl3JSWWDE

Expand full comment
Harold Philimore's avatar

Fun read! I think that the first section would have worked better, though, if you had explained Pinkus and Lukowitsky's axes (Nature/Phenotype/Modality). What grandiosity/vulnerability or overt/covert actually mean isn't obvious, especially for a layman.

I feel like I would have enjoyed your "alignment types" section more if I had known what each axis was about, so that I could appreciate how they combined.

I tried looking them up, but the journal article is paywalled.

https://psycnet.apa.org/record/2010-06236-017

Expand full comment
Anatoly Vorobey's avatar

This puzzle is due to Lewis Carroll. It's not hard, but it requires careful thinking. If you can solve it w/o paper, my hat's off to you; I became confused and had to jot things down.

Two knights started out from castles A and B towards each other. After they met, they continued on their way for 9 and 16 hours respectively until they reached B and A. How many hours did each knight travel overall?

Expand full comment
Andrew's avatar

I think the simplest, most intuitive way to solve the problem is as follows:

After x time they both meet somewhere. Then the faster knight spends 9 hours traversing what it took the slower knight x hours to travel. The slower knight spends 16 hours traversing what it took the faster knight x hours to travel. Both of these ratios represent their relative speed. So x / 9 = 16 / x. Getting x leads you directly to the answer. You don't actually need to solve the relative speeds themselves or have a eureka moment about the square relationship.

Expand full comment
Level 50 Lapras's avatar

Thanks to your challenge, I tried doing it in my head. Fortunately, it only took a few minutes. The key is to realize that the ratio of their speeds is the square root of the ratio of the second leg times, or 4:3. (Exchanging legs caused them to go from 1:1 in the first round to 16:9 in the second round, and exchanging legs squares the speed ratio). Once you figure out that, the rest is just a simple calculation.

Expand full comment
quiet_NaN's avatar

So, I am assuming that this is not a chess problem.

The five second heuristic is that a problem containing the numbers nine and sixteen has the solution twenty-five, but on closer reflection that seems unlikely.

As stated, the problem seems woefully underdetermined. From the setting, it would seem possible that at t=0, the two enemy knights both flee captivity from the other's castle in the depth of night. Ten minutes later, they meet on the road and fight a fierce duel. After exchanging some blows, both knights, badly wounded, decide to retreat to their own castle (which previously held their opponent). As both of them are badly wounded, they spend nine and sixteen hours crawling for the rest of their way, for a total of 9:10 and 16:10, respectively.

So, here are my assumptions:

* Knights travel at non-relativistic velocities relative to Earth. (also no GR, no QM.)

* Both knights start at the same time.

* Both knights move with uniform (but potentially different) velocities along the path -- they are able to ride for more than sixteen hours straight through day and night without rest. If the path has an incline which favors the speed of one rider, it is the same incline everywhere.

(This is known as the 'spherical knight in vacuum' assumption).

Yrg hf pnyy k=0 N naq k=1 O, naq i0 gur irybpvgl bs xavtug 0 (gur snfg bar) naq i1 gur nofbyhgr irybpvgl bs xavtug bar (gur fybj bar). Gurl zrrg ng k=i0/(i0+i1). Jr xabj gung k/i1=16u naq (1-k)/i0=9u. Fhofgvghgvat, jr trg

i0/i1/(i0+i1)=16u naq i1/i0/(i0+i1)=9u.

Gnxvat gur dhbgvrag, jr yrnea gung i0^2/i1^2=16/9, fb vg frrzf gung zl svefg vaghvgvba nobhg fdhner ahzoref jnf abg gbgnyyl jebat. Vs lbh ner fybj, gur gvzr lbh erdhver nsgre gur zrrgvat vf vapernfrq ol gjb snpgbef: svefg, lbhe fybjre fcrrq zrnaf gung lbh gnxr ybatre sbe lbhe qvfgnapr (qhu), ohg frpbaq, orpnhfr lbh jrer fybj lbh fgvyy unir gb pbire zber qvfgnapr nsgre gur zrrgvat.

Guvf tvirf hf i0/i1=4/3. Guvf yrgf hf qrgrezvar k nf 4/7. Guvf zrnaf gung xavtug mreb jvyy unir gb zbir 3/7 jvguva avar ubhef, juvpu tvirf vg n irybpvgl bs 1/21/u. Xavtug bar jvyy unir gb pbire sbhe friraguf bs gur qvfgnapr va 16 ubhef, juvpu tvirf n irybpvgl bs 1/28u. Unccvyl, guvf frrzf gb vaqvpngr jr ner frys-pbafvfgrag ng yrnfg. Fb vg frrzf gung bhe xavtugf zrrg nsgre gjryir ubhef.

Va ergebfcrpg, gjryir nyfb unccraf gb or gur trbzrgevp zrna bs avar naq fvkgrra. Guvf vf abg n pbvapvqrapr orpnhfr abguvat vf rire n pbvapvqrapr, naq V cerfhzr gung gurer vf n vaghvgvir bar fragrapr juvpu tvirf gung vafvtug jvgubhg vagebqhpvat inevnoyrf naq nyy gung, ohg ng gur zbzrag V snvy gb guvax bs nal. V jvyy fyrrc ba vg naq ercbeg vs V guvax bs nalguvat.

Expand full comment
plmokn's avatar

The closest I can come to doing it in my head is as follows:

Suppose that they meet at a time t, when the first knight has traveled a distance A, and the other a distance B. Now they must continue distances B and A respectively. Therefore if they travel at a fixed pace we can write t/A=9/B and t/B=16/A. The first equation gives t/9=A/B and the second gives 16/t=A/B, so 16/t=t/9, or t^2=(16*9), or t = 12 hr. That is when they met, so they traveled 21 hr and 28 hr overall respectively.

Expand full comment
Asad's avatar

Maybe I'm missing something but it looks like there are an infinite number of solutions.

Equations for the positions of the two knights:

x_a(t) = v_a * t,

x_b(t) = d + v_b * t,

where d is the length of the route (so v_b comes out negative, since knight b walks from d back toward 0). Then we have constraints:

x_a(t_a) = d,

x_b(t_b) = 0,

and the "at the same position" one:

x_a(t_a - 9*3600) = x_b(t_b - 16*3600),

where t_a and t_b are total travel times for each knight.

3 equations, 5 unknowns.

quiet_NaN's avatar

If you make both knights k times faster and d k times larger, then the travel times do not change. This eliminates one variable.

Furthermore, your last equation is true, but on its own it is not sufficient for a meeting: the knights must be at the same place at the same time. If I visit a meeting spot x_m at t=100s, and you visit the same spot x_m at t=200s, will we meet there?
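
The scaling claim is easy to check numerically. A throwaway sketch (the function and the specific numbers, route length 28 with speeds 4/3 and 1, are my own, borrowed from the solution other commenters derived):

```python
def travel_times(d, v_a, v_b):
    """Total travel time for each knight on a route of length d,
    given constant speeds v_a and v_b walking toward each other."""
    t_meet = d / (v_a + v_b)              # they close the gap at v_a + v_b
    return (t_meet + v_b * t_meet / v_a,  # a still has b's covered distance left
            t_meet + v_a * t_meet / v_b)  # and vice versa

# Scaling d, v_a, v_b by the same k leaves both times unchanged:
for k in (1, 2, 10):
    print(travel_times(28 * k, (4 / 3) * k, 1 * k))
```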

Alex Woxbot's avatar

They must have spent the same amount of time traveling until they met each other. Knight B must be traveling slower.

Let me get an intuition for the problem. Suppose knight A (let's call this knight Alice) is traveling at twice the speed of knight B (Bob), and they spend one hour traveling until they meet. Alice will have traveled twice the distance, so the total length of the path is 3 Bob-hours. Alice has been traveling at 2 Bob-hours per hour and has 1 Bob-hour left to travel, so she covers that in 1/2 hour; Bob has 2 Bob-hours left to travel at 1 Bob-hour per hour (by definition), so he covers that in 2 hours.

Let's generalize the speed ratio. In general, if Alice is traveling n times faster than Bob, and they meet in 1 hour, then Alice will have traveled a distance of n Bob-hours, and Bob 1 Bob-hour, the whole path being n+1 Bob-hours long. Then after meeting, Bob will travel the remaining n Bob-hours taking n hours, and Alice the remaining 1 Bob-hour taking 1/n hours.

Let's generalize the travel time. In general, if they take time t to meet, each will have traveled for t; then Alice will continue traveling for t/n time, and Bob for tn time.

t/n=9, tn=16

t=9n

n^2 = 16/9

t = 12

n = 4/3

Knight A (Alice) traveled for t+t/n = 21 hours

Bob (Knight B) traveled for t+tn = 28 hours.
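
This derivation collapses to a closed form: from t/n = 9 and tn = 16 we get t = sqrt(9*16) and n = sqrt(16/9). A small sketch generalizing to arbitrary remaining times (the function name and interface are my own invention):

```python
import math

def meeting_time(fast_left, slow_left):
    """Meeting time and total travel times, computed from the hours each
    knight still needs after the meeting (fast_left for the faster one)."""
    t = math.sqrt(fast_left * slow_left)   # t/n = fast_left, t*n = slow_left
    return t, t + fast_left, t + slow_left

print(meeting_time(9, 16))   # -> (12.0, 21.0, 28.0)
```

This also makes the geometric-mean observation explicit: the meeting time is always the geometric mean of the two remaining times.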

Harold Philimore's avatar

Is there a nicer way of doing this than just plugging through the algebra? This sounds like a problem with some sort of clever trick: harmonic means, geometric reasoning, etc., but I can't think of one. Just writing down the equations does it in like 3-4 lines.

sigh's avatar

I think the "trick" is that the ratio of the velocities is the square root of the ratio of the remaining times.

The fast guy is going 4/3 the speed of the slow guy, so he has 3/4 the remaining distance and will travel it in 3/4 of the time -- so he will get there in 9/16 of the time. (Once you know the ratio of their speeds, it's trivial to get the total travel time.)

It took me an annoyingly long time to figure out, but it was sort of obvious that square roots were involved just based on the numbers 9 and 16 both having integer square roots.

Harold Philimore's avatar

I didn't see that without looking at the algebra. :P

Yeah, once you see that, you can multiply everything out in your head to get the answer.

sigh's avatar

Anatoly's original post baited me into trying to do it in my head, and I can't do algebra in my head so I started with "well what if the one guy is going twice as fast as the other, what if three times as fast" and then the squaring pattern jumps out.

Harold Philimore's avatar

respect :P

Mario Pasquato's avatar

If their speed is the same, after meeting each other they just switch roles so they both walk 25 hours in total?

sigh's avatar

One guy is going from A to B and the other is going from B to A. They're each going at a constant speed, but not (necessarily) the same speed as each other.

Mario Pasquato's avatar

Oh I get why 9 and 16, to make it easy to take the square root. Nice problem. We aren’t supposed to give spoilers though?

Mario Pasquato's avatar

Also it’s assumed they walk along the same line, the problem is one-dimensional, right?

sigh's avatar

Yes

Kyle's avatar

It doesn't say constant speed anywhere. I figured part of the road is more difficult, making travel slower.

sigh's avatar

The question says:

> How many hours did each knight travel overall?

That implies they have different total travel times. If their speed is entirely determined by the route, they'd have the same total travel time.

quiet_NaN's avatar

The 'constant speed' assumption seems to be the simplest assumption to make the problem coherent. Of course, it is likely in conflict with what we know about how medieval horsemen travel.

I am still waiting for someone to present the solution for the 'constant acceleration from v(t=0)=0' assumption. :)

Harold Philimore's avatar

That wasn't the answer I got. :)

Mario Pasquato's avatar

Because the assumption of constant, equal speed is not there; I was solving a 2D problem where they meet at some point outside the AB segment. Solving the wrong problem :-D

Kei's avatar
Dec 23 (edited)

I’m interested in finding a database that contains a history of everything said by certain major public figures. Does anyone know of a database like this? I would be fine with paying if I need to.

In the database, ideally I would be able to choose a name, and then it would show me transcripts of all speeches, youtube videos, tweets, etc that they’ve ever been involved with. I’m most interested in looking into politicians and major company CEOs. I think there is some interesting textual analysis that can be done with this data.

proyas's avatar

Within five years, an AI will be able to generate such a thing for you quickly. Can you wait?

TGGP's avatar

Like Kiwifarms for people normies would recognize rather than internet lolcows?

Icicle's avatar

Has anyone's cold symptoms changed dramatically since the COVID pandemic?

I used to have fairly standard cold symptoms (sore throat, runny nose, cough), but since COVID I never get a cough or a runny nose anymore and usually just have a rather intense sore throat combined with uncomfortable stomach issues. Plus the normal weakness and feebleness of being sick.

It feels like getting COVID changed how my body reacts to mild colds (or else the types of mild colds I'm getting have changed a lot). I don't see anyone else talking about this, but it's been so night and day for me that I've decided something has definitely changed.

Anonymous Dude's avatar

I've had fewer colds, but I wear a mask a lot more.

Mario Pasquato's avatar

Subjectively, my cold symptoms seem worse: I tend to have stronger sore throats than I remember, more frequent sinus involvement, and long-term coughing. On the other hand, I had twins in February 2020, going from one kid to three. This obviously increased our surface area for infection.

Matto's avatar

I used to get 1-2 upper respiratory tract infections a year, each lasting 7-10 days. This was the case for about a decade before 2020. I remember it specifically because I developed the perfect treatment, composed of Sudafed, Afrin, and neti pot usage, that alleviated the right symptoms at the right time during the illness.

However, since 2022, I've gotten sick more often, 4-6 times per year, and only had maybe two colds that felt familiar. More often, I've had a sore throat, a lingering multi-week cough, and at least two episodes of extreme fatigue.

Firanx's avatar

Had a regular cold recently. Can't remember if I've had more of them since my presumed COVID infection in 2021, but if I did, apparently it wasn't anything noteworthy.

nifty775's avatar

I posted about this previously, but I've had exactly one cold since 2020, by far the longest disease-free stretch of my middle-aged life. I don't really have a great immune system, and I averaged 1-2 colds a year for over 40 years, until 2020. I have had all of my Covid shots, so maybe there's some kind of cross-immunity there with the common cold? But yes, it's a (welcome) mystery to me. And no, I don't mask at all, or avoid public areas, or wash my hands any more than I did before, or do anything else disease-avoidant.

Gordon Tremeshko's avatar

I've been relatively cold-free since Covid, too. I think I had one normal cold in that time.

beowulf888's avatar

There may be some cross-immunity with one or more of the four common-cold CoVs. But from a non-immunity standpoint, we've also been seeing a rough inverse correlation between the levels of Rhinoviruses and SARS2 waves, and the pattern has held since the start of the pandemic. Also, SARS2 pretty clearly suppressed influenza around the world for almost two years until Omicron appeared on the scene (except for a long-curve Type B outbreak in China when the NPI restrictions were at their strictest). With the appearance of Omicron, the flu also came back with a bang. Worth noting that Omicron's method of infection and attack was different from that of its SARS2 predecessors.

Some claim it was NPIs that repressed the flu, but if you look at flu cases, they dropped precipitously wherever SARS2 was beginning to spread months before NPIs were recommended. And many countries and regions had no NPIs, but cases fell to virtually 0 as the SARS2 pandemic revved up.

Mario Pasquato's avatar

Any speculation on the meaning of these patterns?

beowulf888's avatar

Just giving you the observations. Some experts suggested that there was a viral interference mechanism between SARS2 and the influenzas. But even COVID research is highly siloed. No one seems interested in pursuing this question, especially since flu went back to its normal seasonal pattern, migrating between hemispheres, with the advent of Omicron. Nothing to see here. Move along.

MichaeL Roe's avatar

At a guess, the infection control measures we were using against Covid (distancing, etc) also had a side effect of reducing colds and flu. Even if you aren’t doing anything different personally, the people you would previously have caught a cold from may be doing something different.

beowulf888's avatar

Re NPIs reducing colds and flu, see my reply to Nifty above. I doubt they were the cause of flu suppression. And Rhinoviruses continued to circulate during the pandemic, but they alternated their waves with COVID waves.

Zach's avatar

Same.

Harold Philimore's avatar

No change, for what it's worth.

Icicle's avatar

Thanks. I wonder if this is some kind of idiosyncratic long-covid. I mostly feel fine but get sick differently.

luciaphile's avatar

We talk about how we don't get sick much, but it's confounded perhaps by changes in circumstances/opportunities to "catch" things. I haven't really had a cold since 2020, I don't think - but I am no longer working in a service position. Covid did produce the worst sore throat of my entire life - indeed it made it seem like I'd never experienced a sore throat before (and I've not been prone to them). Ever since, if I feel a little tightness in my throat, I dread a repeat. Maybe our bodies have decided "throat is where the hot new action is".

I once declaimed online something someone told me, that you can't have two viruses simultaneously. But commenters swiftly told me that was dumb/wrong, though I can't recall exactly what their counter-example was. I think it was convincing - I just don't remember.

Peter Defeel's avatar

My first cold since I got covid, just recently, was a very severe sore throat. Lost my voice. Couldn’t swallow. Went as fast as it came.

Getting a cold at Christmas is understandable: I was visiting pubs, churches, theatres, and two restaurants, as well as travelling on buses and the tube on a visit to London. Probably ten events in all in the last two weeks.

However, given that I wasn't vaccinated, why didn't I get Covid? Isn't Omicron supposed to have a huge R value, much higher than the original and other variants, or it wouldn't have supplanted them? And Covid had a high reproduction number to begin with.

(Maybe it’s from previously having it but that was 18 months ago).

It’s not just me, the reports are of a general heavy cold doing the rounds but not Covid.

Skittle's avatar

https://www.gov.uk/government/statistics/national-flu-and-covid-19-surveillance-reports-2024-to-2025-season

For the UK, Covid and Rhinovirus had their main peaks a little while ago. RSV peaked a couple of weeks ago. Flu is still rocketing up.

I’m not sure which of these is the nasty thing that’s been taking out everyone around me (I meet people and they ask, “have you had it yet?” which generally means they are still symptomatic and are aware they are probably infecting me), but the one that was knocking people back for weeks seemed to peak locally far enough ago that I assume it was either Covid or Rhinovirus.

Icicle's avatar

Definitely lots of confounders. I moved to a big city right after Covid. But I would still expect to occasionally get an old fashioned cold. Just the weird new stuff now though.

4Denthusiast's avatar

A few examples come to mind. Adeno-associated virus (AAV) is only possible to have while you also have adenovirus, because AAV lacks the full set of genes required to get a cell to replicate it and relies on the other virus. HIV of course often causes other infections at the same time. Some viruses, such as herpes viruses, can become latent and stay with someone for the rest of their life, and of course that doesn't stop you getting other things.

luciaphile's avatar

I thought of HIV and how people get cancer with it, or pneumonia (or used to), but wasn't sure if those things were ever viral, or if HIV is different, being an attack on the immune system itself?

beowulf888's avatar

Yes. HIV's mode of attack is through CD4+ T cells, which are key to immune signaling pathways. It knocks out the immune system and allows other pathogens to kill the host. SARS2's primary mode of attack is through mucosal cells (though it can spread to other types of cells). The T cell "exhaustion" that you hear about with COVID infections isn't because it's attacking T cells; it's due to a hyperactive immune response to the infection. I don't know the details, though. And the condition is less common than one might suppose from reading some of the hysterical X commentators.

Harold Philimore's avatar

I recently learned about the diet of gladiators. Gladiators carried a lot more fat than the modern conception of them suggests. They ate very high-fiber, high-calorie, low-meat diets. Barley and lentils were staples.

As a result, they were quite flabby on the exterior. Galen mentions that this makes them easier to sew up when experimenting with stitches!

It's funny to think that the average gladiator looked more like a lightweight sumo wrestler than an MMA fighter.

4Denthusiast's avatar

I did a quick search for contemporary pictures of gladiators, mosaics and sculptures and stuff, and in those they're depicted with builds from ripped to occasionally stocky, nothing so far as "flabby". I'm not sure how to reconcile the textual descriptions you mention with the generally rather lean visual depictions. Perhaps either the Romans or the modern people putting these pictures on the internet preferred showing clear musculature and gave biased depictions. It still seems likely that the amounts of fat involved were generally not all that large.

Deiseach's avatar

If I believe "Tasting History"'s Max and his quote of Galen, they were 'flabby':

Gladiator barley and beans porridge, "puls":

https://www.youtube.com/watch?v=H3KANWtAHDc

Washed down with a vinegar drink, "posca":

https://www.youtube.com/watch?v=mdOPg-4_R60

4Denthusiast's avatar

The quote definitely seems to indicate some amount of fat, but it's still unclear whether it was at a level that would seem fat by modern standards once you combine the difficulties in translation with the change in standards as Skittle pointed out. I tried to look up the original text, which is apparently from De Alimentorum Facultatibus, but I could only find versions entirely in Greek (the original language) or Latin, so (not knowing those languages) I couldn't find this specific quote within them to find out what the original phrasing was.

(The quote as given in the video, for anyone who doesn't feel like watching, is "There is also much use of fava beans,... our gladiators eat a great deal of this food every day, making the condition of their body fleshy - not compact, dense flesh like pork, but flesh that is somehow more flabby.". Another source I found online uses "flaccid" instead of "flabby".)

Skittle's avatar

I am reminded of reading an Enid Blyton children’s book from the 1940s, which described someone as “the fattest person” the children “had ever seen”. There was an illustration of this character, and they had a bit of a pot belly. That was it. I assume that to British children in the 1940s, this illustration seemed appropriate for the description.

Anonymous's avatar

There's something very similar in Sherlock Holmes. Mycroft on his first appearance is described as "absolutely corpulent" and having a "broad, fat hand like the flipper of a seal". This is Sidney Paget's original, contemporary illustration of Mycroft Holmes: https://www.conandoyleinfo.com/wp-content/uploads/2018/06/Mycroft_Holmes-508x1024.jpg

Humphrey Appleby's avatar

I thought this was common knowledge? Protective padding, so that shallow cut and thrust wounds didn't hit anything vital.

Humphrey Appleby's avatar

I don't find that convincing. Yes, swords can be deadly, subcutaneous fat or no - if they are wielded to kill. But what if they were not generally wielded to kill, but only to wound? Both because the gladiators were trained that way, and because of a sort of `honor among thieves,' where gladiators tried to avoid killing each other (unless, perhaps, their opponent had a reputation for fighting to kill).

IIRC there are many, many records of gladiators surviving multiple `defeats,' so it is absolutely not the case that `defeat' in the arena was generally fatal.

I am `using' gladiators here specifically to refer to the trained chaps who fought other trained chaps for entertainment. Not the guys who fought wild animals, nor the criminals who were executed in the arena.

Humphrey Appleby's avatar

do you have a text transcript?

Bullseye's avatar

I apologize for my low-effort reply.

The basic problem is that the fat gladiator narrative comes from a guy finding out what gladiators ate and then building a pile of speculation on top.

The fat gladiator narrative says that all those carbs must have been to fatten them up; actually, they were just a cheap way to supply the calories needed for intensive physical training. The narrative says that getting them fat allowed for the bloody wounds the crowd wanted to see while reducing the risk of death. And indeed gladiators and their handlers didn't want them to die. But the crowd wasn't there to see blood; they wanted to see men fight well. As in a boxing match, there was blood, but blood wasn't what the crowd came to see.

There was plenty of blood to see in other, less popular events: executions, and men "hunting" animals. These three events were shown together, and they put the most popular part last: the gladiators.

Performative Bafflement's avatar

> do you have a text transcript?

The #1 use for Gemini that I've found is that you can paste YouTube links in, and it will summarize the video and let you ask questions about it.

Harold Philimore's avatar

It surprises me that this was a feature rather than a bug.

Does the cut resistance that flab provides really outweigh the downsides of having to work harder to move your body around? If flab is so protective, why didn't the armies of antiquity encourage their soldiers to put on weight?

For that matter, why didn't ancient statues feature slightly flabby people - the ideal military body type?

Victor's avatar

Does adding on additional layers of body fat bury the subcutaneous veins, such that they are farther from the surface of the skin, or do they remain near the skin and above most of the body fat?

quiet_NaN's avatar

> If flab is so protective, why didn't the armies of antiquity encourage their solders to put on weight?

I think the energy economics of eating a cheesecake, turning the extra energy into body fat, and carrying that body fat with you on campaign are much more favorable than the economics of dragging your cheesecake through half the campaign and eating it in the middle, even aside from food-preservation issues.

While saving food is done both in human societies and the animal kingdom, it is always "let's stash the food for later so I remain mobile", never "let me drag this food item along until I become hungry".

Rothwed's avatar

Food supply was the tyrannical constant that limited how big of an army a given state could field. It's similar to rocket fuel - a bigger payload needs more fuel, but that fuel also adds weight, etc. The only way to transport food was with something like a donkey or horse, except you had to feed the donkeys as well, so then more food had to be taken, etc. What an army of antiquity was definitely *not* doing was bringing enough food for their soldiers to get fat.

Unless the army could be supplied via ship, where most of the work was done by winds and currents. But that wasn't possible most of the time.

JonF311's avatar

Re: "Food supply was the tyrannical constant that limited how big of an army a given state could field."

And that very much included the ability to get food to the army in the field. Even a bountiful harvest would not matter if the army couldn't get any of it. One of the main reasons the first Ottoman siege of Vienna under Suleiman the Magnificent (who was rarely defeated) failed was that the Ottoman host could not get supplies: the roads south and east were terrible, and days of rain that year had turned them into quagmires.

And of course there's the later example of Napoleon's retreat from Russia.

None of the Above's avatar

One thing that probably skews our perception of this is that in most combat sports, there are weight classes, and there is a big advantage in going down a weight class. So most boxers, MMA fighters, collegiate wrestlers, etc., tend to be very lean, because a few extra pounds would mean they had to fight bigger opponents. (Also, those folks tend to go through really unhealthy and miserable weight cuts at the end, basically dehydrating themselves to get that last pound off so they make weight.)

Harold Philimore's avatar

Rly good point.

Humphrey Appleby's avatar

Because soldiers were in the business of killing (and also marching), whereas gladiators were in the business of providing entertainment (preferably without any expensive killing, and definitely without any marching).

Gladiatorial combat was somewhat analogous to, say, WWE. There needed to be some blood, so that it looked real, but you didn't want actual death - gladiators were expensive! A dead gladiator makes his master money only once, a lightly wounded one makes money again and again.

Also gladiators fought unarmored (unlike soldiers) to make it exciting. So you have people who fight unarmored, with edged weapons, and need to take wounds, but preferably without actually dying in the process. Solution: cover them in rolls of fat.

Kalimac's avatar

Someone should have told the makers of the movie "Gladiator" all that stuff.

Belisarius's avatar

Having read Bret Devereaux's articles, it's clear Ridley Scott would never let basic historical facts (like, say, the difference between BC and AD) get in the way of telling a story mediocrely.

Harold Philimore's avatar

That's rly cool. Thanks for sharing.

dubious's avatar

What did soldiers eat? Perhaps the same, just less, as there were more mouths to feed.

bimini's avatar

Ask ACX: What is the best/most influential thing you read in 2024?

There is an informal tradition over on Hacker News of collecting the most influential books the HN community read that year.

I always use these threads to find inspiration for my reading list for the following year.

Example from 2022: https://news.ycombinator.com/item?id=34055123

Anonymous Dude's avatar

Travis Baldree's Legends and Lattes was entertaining at least.

I made it through Romance of the Three Kingdoms, but I'm not expecting anyone to follow me. (Though if you are a nerd of Chinese ancestry, it will give you a lot more ethnic pride.)

Ivan Nikolaevich's avatar

Either Gilead by Marilynne Robinson or Akenfield by Ronald Blythe.

Victor's avatar

Fiction: Blindsight by Peter Watts. Like a hard-science version of Lovecraft. It's rare anymore that something is so creepy it gets under my skin, but this one did.

Nonfiction: The Stuff of Thought by Steven Pinker. Not the best organized presentation, but if you invest the effort to take careful notes it provides a lot of information about how language facilitates thinking.

Matto's avatar

Most influential: Writing with Power by Peter Elbow.

Completely changed how I write, and how I feel about writing. It shifted my perception from one of pain to one of play. I think I found it recommended in an HN thread and it offers the exact advice that I specifically needed to get unstuck.

Best: One Man's Meat by E. B. White.

I admire the man's ability to put together symbols in a way that reflects the experience of life so vividly. Only recently did I encounter a contender, in the person of G. K. Chesterton, whose imagery can compete with White's.

Shaked Koplewitz's avatar

I don't think I've run into any standout fiction this year. Theft of Fire was a nice breath of fresh air but quite YMMV; Sun Eater has a lot of good stuff but spends too many words doing it, and the sequels are a bit hit or miss.

On nonfiction I do generally recommend the LKY memoirs (Third World to First and the Singapore Story), but technically I read them mostly at the end of 2023 so they wouldn't count.

Mo Nastri's avatar

Seconding the LKY memoirs, which I just started reading.

Shaked Koplewitz's avatar

Heads up that the second half of Third World to First gets a bit dull as he just starts listing every world leader he met. They're all described as impressive and vigorous, except for Jimmy Carter, where Lee spends the whole section making fun of him.

Yunshook's avatar

Mathematics for the Nonmathematician by Morris Kline is a fantastic read that explains algebra, geometry, perspective, differential calculus, integral calculus, astronomy, and probability in layman's terms, tying them to their history and practical use. There are a lot of examples and practice questions, and lists of further reading for those who wish to go deeper.

Matto's avatar

How do you compare it to other similar works?

I'm tempted to pick it up. I spent about 6 months going through Blitzstein's stats course, but finally figured out I'm missing too much low-level math to make satisfying progress (i.e., I got 3 or 4 chapters in despite putting in a good 3-9h per week, doing plenty of the exercises, and spending a month backtracking to go over foundational set theory). It finally felt like trying to do woodworking with plastic tools, and I need to backtrack even more.

Yunshook's avatar

I can't say for sure how it compares with other sources, but it helped to fill in the cracks for me in a number of places, as well as satisfy some curiosities, like how trigonometry was applied to paintings in the Renaissance. More than anything, it's a book of the why and the how. There should be about 130 pages available as a Google preview, which should get the first few chapters of history and algebra out of the way. I'd skim through the preview, and if it seems like it might be up your alley, I'd recommend it.

Matto's avatar

Just wanted to thank you again.

I got the book, but my partner dived into it before I could, and she loves it. She also shared it with some of her friends, and at least two other people got a copy. Hope I can get my turn soon!

Yunshook's avatar

It made my day to hear that! I'm glad your friends and loved ones are getting some mileage out of it. Books that change the way one thinks are generally good investments. Hopefully it'll make its way into your hands shortly!

Nobody Special's avatar

Probably the best was The Power Broker by Robert Caro.

But I'd also recommend:

The Gay Revolution by Lillian Faderman

Dominion by Matthew Scully (ACX recommended and personally endorsed)

Atomic Habits by James Clear

DJ's avatar

Non-fiction:

The Fourth Turning is Here - Neil Howe

End Times - Peter Turchin

Fiction:

Cakes and Ale - Somerset Maugham

David J Keown's avatar

Fiction: The Fortune of War, by Patrick O'Brian (I love the Aubrey-Maturin series, but don't think I would recommend it to this crowd).

I also enjoyed Ada Palmer's Too Like the Lightning, which was recommended to me at an ACX meetup and which I endorse for ACX readers.


Non-fiction: Evolutionary Biology of Aging, by Michael R. Rose (still working through it); Arabian Sands, by Wilfred Thesiger.

Humphrey Appleby's avatar

I've read all of Aubrey-Maturin, and I think it's great. I remember discussing it on DSL (back when I was still hanging out there), and there were a lot of fans. I don't see why it would be an unsuitable recommendation for ACX.

Shaked Koplewitz's avatar

Why would you recommend or not recommend Aubrey-Maturin? I've tried picking it up and had a slightly hard time tracking what was going on; I occasionally wonder if I should try getting back into it.

numanumapompilius's avatar

Personally, my recommendation would depend on two things:

1) an interest in naval history and/or the history of the Napoleonic Wars and/or early modern science and medicine; and

2) an interest in (or at least tolerance for) the technical mechanics of sailing a man o' war in Nelson's navy, complete with enough ye olde sailor jargon that I literally bought a Patrick O'Brian naval encyclopedia to reference while reading.

They're relatively straightforward adventure novels, written in a faux-contemporary style, by an author who was obviously deeply obsessed with the minute details of the period he's writing about. If you're the sort of person who loves nerding out about the Napoleonic era, you'll probably enjoy them.

The core throughline of the novels is the heartwarming friendship between Captain Aubrey and his best friend/ship's surgeon/natural philosopher/secret agent Dr. Maturin, and I love both their relationship and O'Brian's writing of it, but most of the appeal is going to come from the historical verisimilitude.

If you've seen the film adaptation with Russell Crowe, it does a pretty good job capturing the essence of the books.

Humphrey Appleby's avatar

Obviously, start at the beginning and read them in order; otherwise it's going to be mighty confusing. As long as you do that, yes, I would recommend getting back into it.

Eremolalos's avatar

The Weirdness of the World by Eric Schwitzgebel

Monkyyy's avatar

I probably didn't read enough this year, but "Criminal Constitutions" (from Kulak's reading list) was worth the read (it's 30 pages)

bimini's avatar

For me it was "Digital Minimalism" by Cal Newport.

It completely transformed the way I use network technologies (a term used by him), keeping most of their usefulness without wasting any time. I now use my free time in ways that add much more value to my life.

Berra of Bad News's avatar

The Art of War is 267% Efficiency

When a [m/f] loves a [m/f], they eventually decide to spend their lives together, and merge their future into a shared vision. Previously, in singlehood, each [m/f] had their own ideals on which to spend their efforts, and doled out percentage points accordingly (either implicitly or explicitly). If you were 100% for yourself, nobody else would (or should) be for you. But instead you probably devote a percent of your efforts to some sort of charity, even if only to convince yourself or others that you're a decent person. The point is any effort you spend comes out of your budget of 100%.

But once you're truly hitched, that changes. With a shared vision for the future, 100% for your partner is the same as 100% for yourself. And their 100% for you, is indistinguishable from 100% selfish. So mathematically, each partner gains the other's 100%, while still retaining their own 100%. Naturally 100% and 100% adds up to 200%, but it's even better than that, because Synergy. Synergy is scientifically proven to add another two thirds, resulting in an astonishing total of 267% efficiency for each partner. (Leading experts admit that the mechanism behind Synergy is beyond the understanding of contemporary science, but it's thought to serve as a natural counterbalance to entropy, possibly resulting from the sheer ecstasy of constantly jerking each other off.)

The downside to this approach is effort and opportunity cost. First you have to imagine, and then articulate, your dream of the future. Then you have to compare yours against the dreams of the most attractive people you meet. Then you have to pick a favorite, hammer out some sort of agreement, and commit yourself fully. Just think of all the futures you're destroying by choosing the one you chose! It's like picking a favorite child then announcing the winner to the whole world. But that's the price of Synergy, baby.

If this concept plays out in the real world, can it also be extended to more than one person, like a friend group or small community, without being too diluted? I’m skeptical, but interested to hear peoples’ thoughts.

quiet_NaN's avatar

> Synergy is scientifically proven to add another two thirds, resulting in an astonishing total of 267% efficiency for each partner.

Do I need to check the batteries in my irony detector, or did you mean that as a factual claim? In the latter case, you state this as confidently as I might voice support for the theory of evolution, so a 'citation needed' is in order.

The amount of synergy seems highly dependent on the task in question.

If you have two humans foraging nuts for sustenance, them joining forces, averaging their utility functions and sharing their nuts is not going to make them more sated, on average.

On the other hand, if two fertile people of opposite sex, whose utility function is to spread their genes to the next generation and who happen to be living on a desert island, meet, their total efficiency might increase from zero to some finite value, which is an infinite synergy bonus.

Also note that Bonnie and Clyde were not known for being especially successful at satisfying their shared utility function in the long run, which suggests that in many societies, continuing to value the utility of others at least instrumentally might be better than disregarding the interests of society once you have found true love.

Third, it is certainly evolutionarily useful to sometimes defect against your partner, whose genome you don't share entirely. Making a pact to share your utility function is simply not a dominant strategy in a world where you can convincingly fake making such a pact. Of course, you can be entirely serious, but one should keep in mind that humans are excellent at lying to themselves so that they can better deceive others (see 'The Elephant in the Brain'). Unconditionally accepting a shared utility function is something that would be strongly selected against. Few would help their partner fall in love with a prince or princess even if the utility their partner would gain by such a move were larger than their loss from being set aside. (You might get around this by explicitly stating the utility functions before averaging them, and making sure that motivations to defect against the relationship, e.g. a term for 'marrying into royalty', are not part of them.)

Violatic's avatar

It's not 100% for me and 100% for my partner though; you stated yourself that any effort spent still comes out of the budget.

It's true that making my partner happy partly makes me happy. But you've doubled the input (the couple now has two people's productivity) while also doubling the requirements (the couple requires both of them to be supported).

It's not just the covariance that helps, it's also the economy of scale. Making dinner for one is 90% of the effort of making dinner for two. So for an additional 10% effort the couple is rewarded with twice the output (both get the satisfaction of food).

Economies of scale play out all over the world; it's the reason it's so cheap to import meat from a cost perspective (excluding environmental damage and considering only monetary value for now).

I don't think I'm readily accepting of the specific number "267%", because as I argued above the requirements double too. However, I do think working with others provides a covariance term. Shapley values (the basis of SHAP) are the game-theory calculation for these terms, and we know the effect is observable in a bunch of spots.
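As a toy illustration of that Shapley/covariance point, here is the two-partner dinner example as a cooperative game. A minimal sketch; the characteristic-function numbers below are invented for illustration, not taken from the comment above:

```python
import math
from itertools import permutations

def shapley_values(players, v):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            totals[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_orders = math.factorial(len(players))
    return {p: t / n_orders for p, t in totals.items()}

# Made-up numbers: alone, each partner nets 1 unit of value; together,
# economies of scale (one kitchen, one shopping trip) yield 2.9 units.
v = {
    frozenset(): 0.0,
    frozenset({"a"}): 1.0,
    frozenset({"b"}): 1.0,
    frozenset({"a", "b"}): 2.9,
}.get

print(shapley_values(["a", "b"], v))  # each partner is credited 1.45
```

The synergy surplus (2.9 instead of 2.0) is split evenly here because the game is symmetric; with asymmetric contributions the Shapley split would differ.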

Victor's avatar

I don't think that this is really how love, or human motivation in general, works. Everyone has a wide range of innate motivational drives, some universal, many that vary across individuals. These drives are dynamic in their importance or weight, because which ones you feel most strongly will depend on which of your other drives are mostly satiated, and which have been reinforced during your lifetime. That includes romantic needs, for those who have such needs. We devote 100% of our goal-directed effort toward our goals, but unfortunately very rarely toward all our goals. Meeting someone you are mutually attracted to allows you to shift effort away from mostly satiated goals toward this one, which may have been producing felt frustration up until that point. It's a need that cannot even come into play until another person is willing to collaborate. This is why it often feels like an entirely new dimension of life experience--new needs come into play that would previously have remained latent.

Iz's avatar

Is there any sort of IVF support group in this community? If not, there should be.

20WS's avatar

Are there any Effective Altruist organisations or individuals who have done anything at all to advocate against Israel's destruction of Gaza? As far as I can tell, they haven't even evaluated a Palestinian charity. If this weird silence continues then I don't really understand how they can seriously claim to be altruistic.

quiet_NaN's avatar

I think that one key criterion for evaluating causes is 'neglectedness'. Quite frankly, in public perception, Gaza is whatever the opposite of overlooked is. The currents of politics push the plight of the Gazans to the surface, while a lot of other cause areas -- including armed conflicts -- are far less visible.

In turn, this attention should mean a lot of non-EA donations (e.g. from all the protest camps) to all sorts of aid organizations whose visions for the Middle East range from "everyone should stop shooting" to "an Islamic state of Palestine from the river to the sea".

Last time I checked, EA was not a nuclear superpower that could convince Israel to pursue its fight to destroy Hamas in a more targeted way.

Also, I think that an aura of neutrality is one of the greatest assets of EA orgs.

If you had asked whether, given the plight of women in red states after Dobbs, distributing abortion drugs within the US might be a cause area for EA, I would have said the same thing: stay out of politics, avoid signalling.

Victor's avatar

It's not the altruism that might be in question, obviously they want to help people. It's the "effectiveness" part, and how they claim to achieve that.

Yadidya (YDYDY)'s avatar

Hi, I posted the following as a stand-alone comment but it was inspired by this conversation so I will repost it for you here.

There's an interesting discussion on this thread about Efficient Effective Altruistic Actions for Gazans.

I can't yet know how effective I will end up being.

But I do know that I'm a severely undervalued investment in this field.

To prove the point, I just uploaded a video for my supporters. {Support is $36/month and includes all sorts of privileges and freebies, and it's easy to leave at any time.}

I've had plenty of good experiences here but not on the subject that matters to me most:

Efficient Effective Altruistic Actions for Gazans

So, if internet claimants are honest, many of you are interested in undervalued propositions in this field but (understandably) need to see some proof of concept before you invest.

Therefore, everybody who joins from SSC/ACX will be refunded their support immediately if the video fails the test.

You will still get to keep my 15-hour course on *Exotic Jewish History* (available on Gumroad for $50) as your complimentary gift for giving me a chance by watching this hour-long conversation between an American Orthodox Rabbi, a bereaved Gazan doctor, and various Egyptians.

I promise your money will be refunded immediately.

Thank you for giving me your consideration.

https://ydydy.substack.com/p/super-exclusive-video

P.S. One more thing: while you absolutely may comment on this unlisted video, please don't share it with non-members. Obviously I can't stop you, but for the sake of my brave interlocutors this should, at least for now, remain within the community of people who have demonstrated their care by putting their money where their mouth is. Even if you choose not to support me, the fact that you considered it demonstrates sufficient trustworthiness to me.

Shaked Koplewitz's avatar

Other issues aside, EA generally looks for neglected charities and this is just about the least neglected area in the world (even if you specifically want to help people in warzones, it's by a giant margin the warzone getting the most aid per capita - it's a fairly small war that still gets most of the available aid money).

Matt S's avatar

You kind of have a point. War is basically the opposite of effective altruism - costly and cruel. So it makes sense that anyone who believes in effective altruism would be deeply bothered by what's happening in Gaza.

But EA is also a coping mechanism for being a modern human with full visibility into the sheer magnitude of suffering in the world. Instead of being crippled by the weight of it all, you do what you can, when you can, and don't take on the emotional burden of things you don't have any power to change.

20WS's avatar

That's an interesting point about being a coping mechanism. That makes it sound like EAs are putting on blinkers and ignoring all problems that seem hard to personally solve.

I guess my question is why EA orgs see problems as not worth enquiring into when they involve a government -- especially for a country like Israel, where democratic participation should be impactful and the US has a lot of inroads. My hunch is basically that EAs don't want to offend large donors who have zero interest in ethics (or actively support the ethnic cleansing), but that's not exactly great PR.

(Edit: consider what happened with SBF. The world lost a lot of respect for EA that day, and it was nothing to do with which charities they supported).

JerL's avatar

"My hunch is basically that EAs don't want to offend large donors who have zero interest in ethics (or actively support the ethnic cleansing), but that's not exactly great PR."

Have you compared the Gaza war to other major wars, as a control, to see if it's unusual for EA organizations to ignore wars? That seems like a pretty important test of your hunch.

In 2022, a GiveWell member made the following statement in response to a request for a charity to respond to the war in Ukraine:

"Thank you for raising this topic. The staff at GiveWell share many of our followers’ shock and sadness at the crisis unfolding in Ukraine, as well as the desire to help. GiveWell’s focus remains, as ever, on finding the most cost-effective ways to save and improve lives on a daily, ongoing, longer-term basis; we generally don’t investigate giving opportunities related to humanitarian crises, such as those caused by war and natural disasters, so we unfortunately don’t have specific recommendations for giving to help relief efforts in Ukraine."

https://blog.givewell.org/2022/03/10/march-2022-open-thread/

My guess is that it's pretty standard for EA organizations to ignore wars and similar humanitarian disasters, and that ignoring Gaza requires no special explanation.

This may well still be a blind spot in the EA worldview (I've argued this exact point elsewhere online), but I don't think it requires any special explanation.

JerL's avatar

As others have noted, there's a bunch of reasons it's unlikely for a charity that seeks to influence war making by other states to be effective in the sense that effective altruists use the term.

They're mostly looking for charities where the next dollar donated can have a fairly immediate, in-context impact; a charity whose main output is "advocacy" is going to have a complicated, Rube Goldberg-like connection to its success criteria: you won't be able to donate a dollar and immediately measure its impact; it's already dubious how you'd measure the output of advocacy, and much more so how you'd connect the advocacy funded to the actual desired outcome of ending the war.

I think a more EA-like way to donate to a pro-Palestinian charity is to donate to disaster-relief-type charities that operate in Gaza. I believe GiveWell has previously given good marks to MSF and the Red Cross, but that's just from memory.

Here's an interesting discussion by individual GiveWell members of their annual giving: https://blog.givewell.org/2024/12/10/staff-members-personal-donations-for-giving-season-2024/

Two of them donated to the Palestine Children's Relief Fund. One of them says of this: "though I remain really uncertain about the impact of these donations and am eager to find other, more effective ways to try to bring an end to the war."

Which I think echoes what I said above, that if your goal is a charity to _end_ the war, there's a lot of uncertainty that makes it hard to fit it into the EA framework.

Victor's avatar

So, EA emphasizes the immediate over the long term, and the measurable over the difficult to measure? If so, that's an interesting set of priorities.

quiet_NaN's avatar

I think EA certainly has its long term causes, like x-risk.

Selecting things which are measurable seems fair enough, IMO.

For example, "we pay 10k people to spend their time praying for humanity" would be a cause which might be very effective depending on your world model (say, every person praying saves a life a day), but is super hard to measure (because 10k averted deaths are totally lost in the noise). Plausibly it might also have the opposite effect.

"Let us meddle in politics to further human interests" is a lot like the prayer proposal. Sure, it could pay off tremendously in the long term, but it could also have little effect, or completely backfire.

Sticking to the problems where the impact of charity can be measured -- such as providing bed nets against malaria -- seems a much safer bet.

business koala's avatar

> They're mostly looking for charities where the next dollar donated can have a fairly immediate in-context impact

That isn't true. Open Philanthropy, the biggest EA funder, pays attention to charities' expected impact even when the chance of positive impact is low: https://www.openphilanthropy.org/research/hits-based-giving/

JerL's avatar

I was trying to describe (probably poorly) the concern for the impact of the marginal dollar, e.g. even if the probability of impact is low I think they'd still at least require that the marginal dollar go to the actual intervention.

I guess you could frame anti war advocacy in this way, the intervention would have to be something like "producing anti war propaganda" and then you'd multiply the probability that "producing x units of anti war propaganda ends the war"; but I was thinking that most funding for anti war advocacy will probably go to things that most people would think of as "overhead", "movement building", etc, not really direct interventions to end the war.

I still think that even if you frame things as above, it's unlikely to be effective... I'm not sure what the total number of lives saved could be, but the probability of success from funding one more voice calling for the end of the war feels low enough that only very implausible estimates of the potential lives saved come out positive-EV.
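A back-of-the-envelope version of that EV comparison, with entirely made-up advocacy numbers (only the roughly $5,000-per-life GiveWell benchmark for top charities is a real, if approximate, figure):

```python
def cost_per_expected_life(donation, p_success, lives_if_success):
    """Dollars per statistical life saved, in expectation."""
    return donation / (p_success * lives_if_success)

# Hypothetical advocacy bet: $1M buys a 0.01% chance of ending the war
# sooner and thereby saving 10,000 lives.
advocacy = cost_per_expected_life(1_000_000, 0.0001, 10_000)
# Rough GiveWell-style benchmark for bed nets: ~$5,000 per life saved.
bednets = 5_000.0

print(advocacy)             # 1,000,000.0 dollars per expected life
print(advocacy / bednets)   # the advocacy bet is ~200x less cost-effective
```

Under these assumed numbers the advocacy donation only beats bed nets if you think the probability of success, or the lives at stake, is hundreds of times larger, which is the point made above.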

Victor's avatar

"producing anti war propaganda"

In the world of program evaluation, that's the difference between an immediate output vs. a wider scale outcome. It's generally recognized that outputs are easier to measure/influence than outcomes, but that outcomes are the reason anyone works in human service or funds non-profits. It's actually rather rare that direct outputs are the things that matter.

Mister_M's avatar

I suspect Gazan charities are ineffective until an extended ceasefire holds. My impression is that, with war zones, unless you can affect war policy, there's not much good you can do.

(As a side note, I believe there's moral complexity to the Gaza war that there isn't for, eg., mosquito nets, which leads to additional caution. And, as Arbituram said, it's hardly an underserved issue.)

Victor's avatar

May I gently disagree? For a child in a war zone, even a small intervention can make a huge difference in quality of life, because they have been deprived of so much. A can of beans is a lifesaver when you haven't eaten in a few days. Likewise a shot of antibiotics.

20WS's avatar

Same as my response to Arbituram: can you find an EA charity evaluator that has evaluated a Palestinian charity? I couldn't. So it appears to me they are simply ignoring the problem rather than making an assessment.

Mister_M's avatar

JerL (above) gave the example of GiveWell's response to the Ukraine war.

Arbituram's avatar

Listen, I broadly agree with you that it's a moral disaster, but it's hardly neglected, is it? And it's questionably tractable. There's a full triangle to meet before a cause qualifies for this particular approach to philanthropy; it's not just "importance".

20WS's avatar

I don't know, can you find an EA charity evaluator that has evaluated a Palestinian charity? I couldn't when I tried. Given the massive and needless suffering in Gaza, it's a striking omission.

Paul Goodman's avatar

Making an evaluation isn't free. If people have strong reasons to think it's not a strong candidate, it makes more sense to focus on things they think are more likely to be competitive. If you think they're wrong, you're welcome to do the evaluation yourself and share it.

20WS's avatar

1. Where do they make this pre-evaluation about which charities to evaluate? Is this step public?

2. Even if that's the case, I still find it hard to believe that no charity there has ever been a good use of money. Palestine has the literal worst health outcomes in the world (IIRC).

3. Evaluating a charity may not be free, but there are options that are. E.g. releasing a statement saying Palestinians should have human rights and should not be massacred. I don't see anything like that happening.

Shaked Koplewitz's avatar

> Palestine has the literal worst health outcomes in the world

This one's just false? Gaza life expectancy is 3-4 years longer than in neighboring Egypt.

20WS's avatar

I'm not 100% sure that it's the absolute worst, but I would be shocked if it's better than Egypt. They don't have any fully functional hospitals, or clean drinking water, or consistent food. And they are under blockade, which is increasingly preventing medical aid from entering. And are mostly living in tents during winter. And keep getting bombed.

Monkyyy's avatar

Have you personally advocated for my politics? If not, I don't know how you can claim to be literally-not-Hitler.

Why are you anti-women? Or for killing babies? How can you just ignore the 5th-largest genocide and its importance for my home country?

How could anyone ever not preemptively agree with my one core issue before any discussion with them?

20WS's avatar

Even the Nazis were not "literally Hitler", but I would accuse them of being collaborators in and perpetrators of genocide.

Among the general public, EAs have become known primarily for being into crypto scams. Now, a horribly violent genocide is being live streamed for anyone who wants to watch, and EA organisations haven't even made a statement, as far as I'm aware. Why would they not use this opportunity to reassure people that they actually have a backbone?

Victor's avatar

If I understand your argument as presented in previous posts (above), you aren't expressing dismay at EA organizations not funding activities in Gaza so much as wondering why there are no analyses available explaining why it isn't a priority. I can imagine arguments that might seem reasonable (given the scale of the problems involved, we feel that we can make a greater difference in malaria-ridden areas than in war zones), but I do not represent EA.

Anon's avatar

Making a statement about Israel/Palestine, whatever its content, is the least altruistic thing a charity could do. It will offend somebody, decreasing donations, and will help nobody. The world has been unable to have a civilized discussion on the topic for decades and hasn’t magically gained the ability overnight.

Monkyyy's avatar

I ain't EA; I'm purely attacking your premise of preemptive agreement.

There is unfortunately *no* debate with a preexisting set of acceptable moral axioms, and a set of acceptable facts, that would make everyone instantly agree with any argument.

I can easily defend people's right to be for the fuzzy notion of abortion rights despite it being a holocaust every X months/years; it isn't hard to imagine how someone comes to that conclusion, especially if you assume they didn't think very hard and society withheld some basic facts.

On the topic of killing babies, you should assume that some % of the people you talk to (a) belong to different political factions, (b) follow different news sources, and (c) have different standards of fact-checking; so even if it is the obvious evil thing, to be *effective* you shouldn't be emotionally blackmail-y about it until someone actually opens their mouth and says something you can specifically attack them for.

20WS's avatar

I think it's probably not a bad guess that people who are serious about altruism are at least paying attention to organisations like Amnesty International and Human Rights Watch, which consider Gaza to be a genocide.

If you have been pushed into an alternate information bubble by the Murdoch/Musk media, feel free to air your disagreements and I'm happy to explain why I have a different view.

If you actively support any genocide, I'm not interested in associating with you.

Monkyyy's avatar

> it's probably not a bad guess that people who are serious about altruism are at least paying attention to organisations like Amnesty International and Human Rights Watch

I'd bet <5% are even aware of their existence; I very much don't pay attention to them, and I don't instantly accept their opinion as meaningful evidence, much less conclusive.

> If you have been pushed into an alternate information bubble by the Murdoch/Musk media, feel free to air your disagreements and I'm happy to explain why I have a different view.

I believe it's a genocide

>>> If this weird silence continues then I don't really understand how they can seriously claim to be altruistic.

> If you actively support any genocide, I'm not interested in associating with you.

These statements strike me as emotional blackmail

Hamish Todd's avatar

I am considering making a prediction market that would be a serious endeavour, it would be called "The UK Transport Secretary Decision Market".

Ultimately its goal would be to make prediction markets something better known in UK politics. I am soliciting comments of any kind, except comments to the effect that "prediction markets are shit" - I'd be grateful if you could take those to a different thread!

"Transport secretary" is a fairly serious government position while also being "technocratic" - we Brits do care about our (fucking) trains, though at the same time it is not often culture-war fuel. It is a "cabinet" post, meaning the transport secretary is, in theory, one of the ~25 most important politicians in the country. Previous holders have included Alistair Darling and Grant Shapps.

Cabinet posts usually work like this. An election is held. Voters are essentially choosing between the Labour and Conservative parties, who have public manifestos (which may state targets for transport...). The leader of the winning party becomes the Prime Minister (PM); the Prime Minister alone then chooses who will be cabinet ministers including transport secretary. They choose from among the 350-450 "MPs" of their own party (MPs ≈ congressmen).

(Unfortunately, the Transport Secretary is sometimes changed at a time that is NOT immediately after an election ("reshuffles"), which is a bit unpredictable. That'd just have to be priced in by bettors.)

You don't *have* to have much of a background at all in the thing you are becoming a cabinet minister of; there are lots of famous dumb examples of this. How *does* the Prime Minister choose cabinet ministers then? Well, MPs make the case that they should be a cabinet minister, *potentially* with some kind of public discussion. I dare say one would have to be described as a "politics nerd" to care about which labour MP was going to be transport secretary - but such people certainly exist. This is a good thing.

So, I would make a standard decision market, loosely "if [[X MP]] is made transport secretary, will [Y metric of UK transport quality] improve?". One of these for every MP. When the PM announces their decision X, the markets apart from the one on X are cancelled.
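That cancellation-and-resolution rule is mechanical enough to sketch. A toy parimutuel-style settlement with hypothetical names - a real exchange would price and net positions differently:

```python
from collections import defaultdict

def settle(markets, chosen_mp, metric_improved):
    """markets: {mp_name: [(bettor, "YES"/"NO", stake), ...]}.
    Cancelled markets refund stakes in full; the chosen MP's market
    pays the whole pool to the winning side, pro rata by stake.
    (Assumes the winning side is nonempty.)"""
    payouts = defaultdict(float)
    for mp, bets in markets.items():
        if mp != chosen_mp:
            for bettor, _side, stake in bets:
                payouts[bettor] += stake  # market cancelled: full refund
            continue
        pool = sum(stake for _, _, stake in bets)
        winning_side = "YES" if metric_improved else "NO"
        winners = [(b, s) for b, side, s in bets if side == winning_side]
        won_total = sum(s for _, s in winners)
        for bettor, stake in winners:
            payouts[bettor] += pool * stake / won_total
    return dict(payouts)

markets = {
    "MP One": [("alice", "YES", 10.0), ("bob", "NO", 10.0)],
    "MP Two": [("carol", "YES", 5.0)],
}
print(settle(markets, "MP One", metric_improved=True))
# alice takes the 20.0 pool; carol's stake in the cancelled market is refunded
```

The key design point is that only the market for the MP actually chosen ever resolves on the metric; every other conditional market unwinds at cost.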

I am not sure what/whose metrics to use - I'm open to any suggestions. So far I've heard transport usage stats, finished repairs, approval ratings, number of delays, and positive vs negative media mentions (though how to measure that...). They would obviously have to come from an organisation independent of the UK's Department for Transport. Timescale is obviously important too; I am not sure what to go with.

The UK is small enough and finance-literate enough that I can actually see an MP saying to the Prime Minister (or even on TV) "I should be chosen because, look, the betting odds on me are the most favourable". Mainstream media here often cites election betting markets.

I would initially put up £100 of my own money for liquidity. Also let me know if you'd like to contribute I suppose! But I will do it regardless of how many others do. I will also comment beneath this with a zany follow-on idea I have...

Shaked Koplewitz's avatar

How much is the decision driven by who would be good as transport secretary, though? If we go by the YPM model of politics, transport secretary is likely to go to someone who couldn't wrangle a more powerful/popular office (being a good transport secretary involves taking on a wide array of NIMBYs, which no one wants to do) but is important enough to get *something*. I don't know if that leaves room for picking someone because you think they'd do a better job.

Victor's avatar

YPM model?

Shaked Koplewitz's avatar

Yes, Prime Minister. They even had an episode specifically about this:

https://youtu.be/TDsSMi4zclo?si=ByqkzOBnS_GDY7KU

Hamish Todd's avatar

Zany follow on idea:

Ultimately of course, the decision should be made by the Prime Minister (PM), who may or may not listen to the prediction market. When they (Kemi Badenoch or Keir Starmer) make that decision, they are in some deep sense making a bet on the MP they choose. One could require them to make a bet on the market, or make a bet on their behalf. Let me elaborate.

Let's say the market converges on Heidi Alexander 55%, Lisa Nandy 65%, every other MP <55% [that they will do well by the metric].

Suppose Keir Starmer chooses Lisa Nandy - nothing happens in that situation.

But suppose he wants Heidi Alexander instead. If he does this, he is in some sense saying he believes the market is wrong - e.g. that Heidi should be at 66% at least.

So in an ideal world, Starmer might be required to buy whatever bets - using his own money - push Heidi's percentage up to 66%. Of course, he collects the winnings if the market resolves YES [Heidi did well by the metric], and loses his stake if it resolves NO [Heidi did not do well].

We don't live in that ideal world. But, we can make a bet on his behalf and offer him the winnings if it pays off (and if it doesn't pay off, give that money to a charity).
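Under an LMSR (logarithmic market scoring rule) market maker - the design most play-money prediction sites descend from - the size of that forced bet is easy to compute. A sketch that just replays the hypothetical 55% → 66% Heidi Alexander example, with an assumed liquidity parameter:

```python
import math

def lmsr_price(b, q_yes, q_no):
    """Instantaneous YES price in an LMSR market with liquidity b."""
    return 1.0 / (1.0 + math.exp((q_no - q_yes) / b))

def yes_shares_to_move(b, p_from, p_to):
    """YES shares to buy to push the quoted price from p_from to p_to."""
    logit = lambda p: math.log(p / (1.0 - p))
    return b * (logit(p_to) - logit(p_from))

b = 100.0                           # assumed liquidity (bigger b = deeper market)
q_yes = b * math.log(0.55 / 0.45)   # outstanding shares putting Heidi at 55%
delta = yes_shares_to_move(b, 0.55, 0.66)
print(round(delta, 1))                               # ~46.3 YES shares
print(round(lmsr_price(b, q_yes + delta, 0.0), 2))   # 0.66
```

So "making the PM's bet for him" has a well-defined size: buy exactly the YES shares that move Heidi's price to the level his choice implies, and escrow the position.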

Yunshook's avatar

With the Trump Administration's recent fixation on creating a strategic Bitcoin reserve, I've been wondering a lot about how large institutions could effectively administer cryptocurrency. From what I understand, Bitcoin transactions are protected by private keys, typically written as 64 hexadecimal characters (256 bits), which are often kept in cold storage in the physical world on engraved metal, or on encrypted drives with no internet access. A key is nigh impossible to guess, but anyone who gets their eyes on one can make transactions on the blockchain. Transactions are also irreversible and easily laundered. There are no layers of Know Your Customer procedures like the ones fiat money uses to sniff out undesirable transactions (and, ah... police the world, for better and worse). For somewhat sophisticated actors, it's trivial to create new anonymous wallets and shuffle coin around like a deck of cards after a hostile transaction.

These are problems that an institution using Bitcoin has to be able to address. Consider that administrators within institutions tend to change over time. To make a transaction happen, *someone* has to have access to the private keys. Every change in administrator increases the number of people who have had access to those keys. There are no current institutional methods that I'm aware of to flag unusual transactions for cryptocurrency, halt them, and recover funds should a hostile transaction take place. The larger the organization, the more risk that Jane from accounting decides that actually, she would like to abandon her current identity, and start a new and exciting life as a wealthy person in Argentina.

With that said, I know for a fact that there are institutions that use cryptocurrencies like Bitcoin. North Korea has Lazarus Group, which regularly heists trash-coin platforms and uses them to fund their nuclear missile program. The cartels in Central America use cryptocurrency for some things, though I'm not sure how much of their wealth it accounts for. How do seedy organizations like this keep people from defecting and making off with their money? I'm sure death threats do a lot of the work, but surely that can't be the only thing holding the accounts together. There are legally legitimate organizations that have cryptocurrency as well. MicroStrategy, for example, has vast swathes of Bitcoin, but they're so hush-hush about how they have it stored that I can't be sure it's not just a private key under the sole control of their CEO. If true, this would mean the accounts aren't owned by MicroStrategy the institution, but by Michael Saylor, the individual. Perhaps there is a system in place, but I can't fathom it. Any large public institution will have to address this long before it makes its first purchase of Bitcoin, so it worries me that we're hearing nothing about the technical and procedural sides of it.

A second concern regarding a strategic Bitcoin reserve is the slow speed with which public institutions have to make transactions, paired with the volatility of cryptocurrencies. If the US announces that it will be purchasing 1 million Bitcoins (or any other arbitrary number of Bitcoins), the price relative to the US dollar will increase, by an immense amount if recent trends are any guide. Should the administration decide to sell Bitcoins later, it will be very difficult for the news not to get out, triggering all the speculators to sell off what they have, fearing the drop in price that US liquidation of assets would entail. This puts the US in the sticky situation of buying high and selling low. If the purpose of the reserve is to eventually pay off some of the US debt, this is a major problem worth considering. If an administrator has enough access to sell at high enough speeds to make a profit and pay off some of the debt, then you run into the previously discussed administrative security problem. However, if they have a system that requires communication and coordination between multiple parties to prevent defection, they will operate more slowly and the risk of information leaks becomes a problem.

One other object of concern is the projected lifespan of Bitcoin itself. Eventually, the supply of Bitcoin will max out at a cap of 21 million Bitcoins, with 19.8 million of that total already mined and owned. This is often compared with gold, since both are limited-supply items. The idea is that this trait of scarcity is deflationary and increases Bitcoin's inherent value. However, there's a major difference between Bitcoin and gold: individual Bitcoins can “die” or become unusable due to lost private keys. This can happen if someone throws away the wrong encrypted hard drive, or gets hit by a bus without anyone else having access to the private key to their wallet.

From what I can find, it's estimated that 4 million Bitcoins are already dead, which is 19 percent of the eventual total supply. That's a pretty bad track record for the 15 years since Bitcoin's entrance onto the world stage back in 2009. What's going to happen to Bitcoin when the whales die? Some of them will have succession plans, sure, but certainly not all of them. That means that the Bitcoin supply is going to shrink over time at some rate. The supply will not stay constant. At some point, this will make Bitcoin infeasible for widespread transactions. It may be a long time from now, but if my understanding is correct, then the death of Bitcoin is baked into its very structure. For a longstanding institution like the US, this is a very important consideration. It means that other than for skirting Know Your Customer laws, Bitcoin's value is in short-term speculation, which is a challenge for a large publicly funded institution, as outlined earlier. While it may look as if the US dollar could be getting shakier on the world stage, Bitcoin doesn't seem like solid ground to stand on for the long stretch.
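As a back-of-envelope check on the "shrinking supply" worry, the comment's own figures (roughly 4M lost of 19.8M mined, over 15 years) imply a loss rate and a supply half-life, assuming, purely for illustration, that a constant fraction of coins is lost each year:

```python
import math

# Figures quoted above: ~4M of ~19.8M mined coins presumed lost in ~15 years.
mined = 19.8e6
lost = 4.0e6
years = 15

# Assume a constant annual loss rate r, so the surviving fraction
# after t years is (1 - r) ** t.
surviving = (mined - lost) / mined
r = 1 - surviving ** (1 / years)   # implied annual loss rate

# Under that assumption, the supply halves every ln(2) / -ln(1 - r) years.
half_life_years = math.log(2) / -math.log(1 - r)
```

Under that crude assumption, roughly 1.5% of coins vanish per year and the surviving supply halves about every 46 years. The true forward rate may well be lower, since many of the dead coins date to the early era of casual storage.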

These are all considerations that don't seem to be addressed in the news or by cryptocurrency enthusiasts. Now, I'm just some guy on the street who works a blue collar job, but it seems to me that Bitcoin may be better suited for destabilizing the US dollar rather than emboldening it. I figured I'd ask here to see if I'm missing something, because there are a lot of people with varied experience here who may have more insight.

Expand full comment
Monkyyy's avatar

> To make a transaction happen, *someone* has to have access to the private keys.

I feel you're missing that n-of-m keyed "wallets" and time locking are just well-established mechanisms.

You can arbitrarily explore a space of trade-offs and have a 99-of-100-key "wallet".

Then there's the rabbit hole of scriptless scripts.

> That means that the Bitcoin supply is going to shrink over time at some rate. The supply will not stay constant. At some point, this will make Bitcoin infeasible for widespread transactions.

It's premature logic to say that without reference to the smallest unit. It's like saying "$100 bills are decreasing, while fentanyl is decreasing the labor cost of a dose; soon USD won't be used for black-market economies, as $100 becomes an infeasible amount of drugs for a single transaction."

Expand full comment
Yunshook's avatar

I need to look up some n of m keyed wallets to figure out the nitty gritty details to see how they work. What I'd be looking for in that case would be, if you had, say, a 3 of 5 wallet, and administrators A B C D and E, what are the risks that over time A, B, and C lose their keys, get hit by a bus, or leave the organization? Is there a manner to transfer access to their private key or will the whole wallet get locked up? If there is a backdoor to key access, it represents risk of stolen funds. If there isn't, it represents risk of permanently locked accounts- much worse than a ransomware attack because there is no recourse to recover the accounts.

Expand full comment
Monkyyy's avatar

They are called n-of-m "wallets" but that's technically incorrect; they are n-of-m and time-locked *transactions*, and they fall out of the "programming language" of bitcoin (and to go even further, given generated but not-yet-valid transactions you can do a lot of flexible things).

> What I'd be looking for in that case would be, if you had, say, a 3 of 5 wallet, and administrators A B C D and E, what are the risks that over time A, B, and C lose their keys, get hit by a bus, or leave the organization?

Let's suppose we were actually scaling up pure bitcoin to be used by a 5-member business and I was making a complex "wallet". I don't believe there's any technical reason I couldn't set up the following policies:

a) instantly double it: each member gets 2 keys, and you use whatever hardware wallets to store the 2nd keys in 5 different bank vaults

b) make a 10 of 10 access to move all funds

c) make a 2 of 10 access to the same 5% emergency funds

d) make 4 of 10 access to 50% of the funds

e) make a 10 year delayed access 1 of 10 access to all funds

f) make a 1 year delayed access for 6 of 10 access to all funds

My understanding is that this would be an expensive transaction to make, and you'd probably want to run it through a correctness verifier (bitcoin script isn't Turing-complete, so such things can exist). You can just make security trade-offs; a 1-of-10 key means you still have access if 9 people lose their keys, but an attacker only needs to steal one, so you're trading availability against the weakest link being stolen.

(in reality there are probably more complex, better solutions than actually doing such logic "on chain", but those come with headaches and math I'm not that great with)
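The spend policies (b)-(f) above can be sketched as a simple authorization table; policy (a) is about key duplication rather than spending, so it doesn't appear. This is a hypothetical Python model only (the names and the first-match ordering are mine); a real deployment would encode these conditions in Bitcoin Script or Miniscript, not in application code:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Policy:
    name: str           # hypothetical label, not from the comment
    threshold: int      # how many of the 10 keys must sign
    fraction: float     # largest share of funds this policy may move
    delay_days: int     # time-lock that must elapse before spending

# Encoding of policies (b)-(f) for a 5-member org holding 10 keys total.
POLICIES = [
    Policy("full-move",     10, 1.00, 0),     # (b) 10-of-10, all funds
    Policy("emergency",      2, 0.05, 0),     # (c) 2-of-10, 5% emergency fund
    Policy("operations",     4, 0.50, 0),     # (d) 4-of-10, half the funds
    Policy("deep-recovery",  1, 1.00, 3650),  # (e) 1-of-10 after ~10 years
    Policy("slow-recovery",  6, 1.00, 365),   # (f) 6-of-10 after 1 year
]

def spend_allowed(sig_count: int, fraction_requested: float,
                  days_elapsed: int) -> Optional[Policy]:
    """Return the first policy (in list order) authorizing this spend, else None."""
    for p in POLICIES:
        if (sig_count >= p.threshold
                and fraction_requested <= p.fraction
                and days_elapsed >= p.delay_days):
            return p
    return None
```

For example, 2 signatures can immediately move the 5% emergency fund, while a single surviving key can recover everything only after the ten-year time-lock has elapsed.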

Expand full comment
Yunshook's avatar

Ah, so one could conceivably have second copies of the keys, so that they can be transferred should one member disappear. I suppose one could move funds to a new wallet and reissue fresh keys at certain intervals to prevent old administrators from holding onto their personal keys after leaving, as well.

There still lies the problem that the systems built on Bitcoin quickly spiral into these complex hoop jumping schemes with irreversible transactions on one side, and catastrophic loss of access to assets on the other side. This is not a set of traits that work well with fallible mammal brains at scale.

Expand full comment
Monkyyy's avatar

> There still lies the problem that the systems built on Bitcoin quickly spiral into these complex hoop jumping schemes with irreversible transactions on one side, and catastrophic loss of access to assets on the other side. This is not a set of traits that work well with fallible mammal brains at scale.

I expect such systems to be used for the Lightning Network, with the UX then drastically simplified; the Lightning Network comes with the big downside of needing an always-on computer though, so for the cash reserves of large institutions, they will probably do the hard mess.

As for all the complexity of which protocol launders money best: I expect a checkbox, "pay a 1% fee to launder this", to be the user experience.

Expand full comment
Aris C's avatar

Hi all. Earlier this year I submitted an entry for the book contest - it was the review of Cavafy's work. I didn't make the shortlist, but I'd love some feedback from anyone who happened to read it, or who would like to do so - I published it here:

https://logos.substack.com/p/book-review-cp-cavafy-collected-poems

Thanks!

Expand full comment
Anatoly Vorobey's avatar

Here's a (not so easy!) geometric puzzle. Consider this figure: https://imgur.com/a/0ka1elm

Suppose we wanted to cut it in two parts and reassemble them into a square. A moment's thought will reveal this is easy: cut off the triangle protruding from the right side and attach it at the bottom. Problem: find a *different* way of cutting this figure into two parts that can be reassembled into a square.

(this puzzle was invented by Sergey Markelov, a Russian math-problem enthusiast who recently passed away unexpectedly at the age of 48; after dropping out of college he retained a lifelong love of mathematics, trained generations of math Olympiad teams and invented dozens of challenging and fascinating problems. May he rest in peace)

Expand full comment
Anatoly Vorobey's avatar

The solution to the puzzle is at this link: https://problems.ru/view_problem_details_new.php?id=105201

As a comment by 'Anon' reveals (in a somewhat offensive manner [1]), a variant of this puzzle, arguably a tougher one at that, is in H. E. Dudeney's classic collection "The Canterbury Puzzles" from circa 100 years ago. I didn't know this and I don't know if Sergey Markelov did; but if he did, the most likely explanation is that it's my fault for naively claiming he invented the puzzle, whereas he may have simply changed it a bit and presented it at one of the school math olympiads in Russia. Sergey was a very scrupulous and infinitely kind person, and stealing credit is incompatible with my memory of him.

[1] "Russia is the motherland of elephants" is a popular saying in Russian which mocks the propensity of Russian propagandists to promote claims that Russian scientists invented everything under the sun, like the radio (Popov vs Marconi), hot air balloons, steam engines, etc.

Expand full comment
quiet_NaN's avatar

I am puzzled.

Obviously, the square which is created the other way will still have an edge length of four. Using the right angle in the upper left as a basis for the square would mean that the right triangle is out of bounds for the square, so it has to be cut. Nor do I see any leeway to cut more than just the right triangle in that case.

So we are looking for cases where the two edges of the upper left corner are not forming the square.

Furthermore, we know that the two edges of the upper right corner have to face a cut somewhere within the upper edge (including its end points), lest our figure get too big.

Of course, cutting the upper edge anywhere besides the corners seems ill-advised: to form the square we require four edges of length four. In the figure as given, we have six edges: 2x 4 + 4x sqrt(8).

We could try to extend some of the sqrt(8) edges to four. This would require extending them by 4 - sqrt(8), a number which looks distinctly more unfriendly than any square root.
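Spelling out the edge arithmetic from the two paragraphs above (a restatement of the given numbers only, not a hint at the solution):

\[
\sqrt{8} = 2\sqrt{2} \approx 2.83, \qquad 4 - \sqrt{8} = 4 - 2\sqrt{2} \approx 1.17,
\]

so the figure's six edges total \(2 \times 4 + 4\sqrt{8} \approx 19.3\) in length, noticeably more than the target square's perimeter of 16; any dissection must bury the excess boundary in the interior.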

One guess I had was to keep the 2x 4 edges, but cut a zig-zag line at some odd angle to the rest which accepts 2x sqrt(8), but I could not make it work yet.

I would be really disappointed in myself if the solution involved cuts between the corners of the black grid, my guess at the moment is that the solution is weirder -- odd angles to the main grid, strange distances, etc.

I will think about it some more when I am more awake, perhaps, then check here for the solution.

Expand full comment
Anon's avatar

Russia is the motherland of elephants

https://www.gutenberg.org/files/27635/27635-h/27635-h.htm#p77

Expand full comment
Andrew's avatar

Nice one, thanks. I solved it; I found it useful to ask, given that I know the size of the resulting square, where its sides are going to come from.

Expand full comment
uqu's avatar

Could you give the solution?

Expand full comment
Anatoly Vorobey's avatar

I'll post it in two days here in the thread if no one else comes up with it.

Expand full comment
John R Ramsden's avatar

Possibly stupid question, but does it have to be a straight line cut?

Also, does the resulting square have to fit entirely on the visible part of the grid?

Expand full comment
Anatoly Vorobey's avatar

Doesn't have to be a straight line cut. Can extend beyond the visible part of the grid, in case that helps. Only two parts though, and each part should be connected within itself.

Expand full comment
Anatoly Vorobey's avatar

What does dyslexia feel like from inside?

A teenager has a problem where she will often misread unfamiliar or difficult words when reading, because she guesses a more likely word and doesn't really verify the guess. So for example (a made-up example, since this isn't in English, but I'm doing my best to come up with something similar), if she reads aloud a sentence that has "explication", she will read "explanation"; if asked to stop and repeat it while paying attention, she will typically do the same again; and if asked to slow down and examine the word *really* carefully, she will after a few seconds correct herself. She doesn't usually mix up letters or otherwise misread very common words.

I assumed this wasn't dyslexia but more like an ingrained habit in which reading isn't given enough deliberate attention, or something like that, and that it may improve simply with sustained reading and consciously trying to improve attention. But maybe that's what dyslexia (a mild case, I'd presume) feels or is like?

In either case - whether this coincides with what's diagnosed as dyslexia or not - any personal advice from people who have that or observed that? I have such a strong visual sense of perceiving the whole word that the whole experience is alien to me. I may not recognize a difficult word immediately when I read quickly, but in this case I *feel* that strongly.

Expand full comment
Monkyyy's avatar

> What does dyslexia feel like from inside?

Nothing? It's everyone else who has a problem with my spelling and grammar; if spell checkers didn't consistently fail to even guess what I mean, I'd probably assume 80% of people are just too lazy to ignore mild mistakes

Expand full comment
1123581321's avatar

Your writing is about 80% comprehensible to me.

Expand full comment
Monkyyy's avatar

why thank you

Expand full comment
1123581321's avatar

Cheers.

Expand full comment
Christina the StoryGirl's avatar

Was she taught to read using phonics or "whole language?"

If the latter, guessing at words is the feature, not a bug.

Expand full comment
C. Y. Hollander's avatar

Every fluent reader guesses at words, regardless of how they were first taught to make them out. That's a feature of how we process everything we've become familiar with.

Correcting one's guesses is also an important feature of processing the world, however, and being unable to do so without grave difficulty is something of a "bug", IMO.

Expand full comment
Christina the StoryGirl's avatar

Sure, but are you familiar with the "whole language" learning fad of the last 15-20 years in American schools, which is only recently beginning to be discarded in favor of a return to phonics?

https://en.wikipedia.org/wiki/Whole_language#:~:text=Attempts%20to%20empirically%20verify%20the%20benefits%20of,contrary%20to%20the%20claims%20of%20whole%2Dlanguage%20theorists.

https://www.newyorker.com/news/annals-of-education/the-rise-and-fall-of-vibes-based-literacy

Whole language is very literally *based* in guesswork. In early learning, pictures are used to illustrate words to make a direct association between a word and object. Students are encouraged to guess which word goes with a picture, and later to guess what a word is based on the words around it in a sentence. "Sounding out" words in order to recognize them from spoken language (and eventually memorize the word shape of characters) is very literally discouraged. There are countless stories of students and parents being scolded by teachers for using that technique.

This was a really stupid idea, because (hearing) people acquire auditory-based language long before they're intellectually capable of reading, and ultimately the written word is merely a visual representation of *auditory* information. Phonics leans into that; whole language contradicts it.

What Anatoly Vorobey was describing sounded a *lot* like what my ex-boyfriend's 9 year old daughter was doing, and she was taught via whole language, hence my curiosity.

Expand full comment
C. Y. Hollander's avatar

I was very generally familiar with the fad, but not all the details. Thanks.

Expand full comment
Anatoly Vorobey's avatar

An equivalent of phonics in her native language ("whole language" seems pointless when spelling is very regular, I think).

Expand full comment
Christina the StoryGirl's avatar

Huh.

Well, I'm just a random layperson, but I suspect that if she were to begin doing a high volume of writing, it would go a long way toward correcting reading issues. Anecdotally at least, it seems like grabbing words and really muscling them around a page expands vocabulary tremendously, regardless of academic aptitude.

This isn't just my personal experience; my high school English teacher completely ignored the state-mandated curriculum and designed his own, including a grading system based mostly on total page count production. One could not earn top marks with fewer than 200 written pages over the semester.

To facilitate that, he had a daily writing exercise on a subject of his choice, generally a single word like "hat" or "balloon" or "anger." The writing itself could take pretty much any form - poems, fiction, or essays. Diabolically, he would randomly call on students to read a daily entry aloud to the class, which forced his students to consider their audience whenever they were writing. Book reports - which were also read aloud to the class - had large page counts, too.

Notably, he taught the exact same curriculum to remedial, standard, and honors students. Remedial students were required to have the same volume of output as the honors students (though not the same competency), and they made the greater gains over the school year.

If you can persuade this teenager to start writing a *LOT* for "publication" - even if it's fan fiction or just a blog - she may start paying more attention to the words she's seeing if there's a good chance she'll want to use them herself.

Expand full comment
Kaitian's avatar

The first thing I'd clear up is whether she needs (new) glasses. It took me a long time to realise that reading words on the blackboard shouldn't be so difficult.

The second thing I'd say is this doesn't sound like dyslexia. There are some teaching methods that emphasize recognising word-shapes rather than reading individual letters, if she has learned reading in that way, maybe she's just doing as she was taught.

Otherwise it's hard to guess. Maybe if she reads a lot to the point of skimming (like some students and genre fiction fans do), then she just needs to learn to read more carefully in situations where it's important. If she doesn't read much at all, maybe she's just not consciously aware of the rarer words and you can't train her to "expect" them except by reading more.

I hope that helps a bit!

Expand full comment
Anatoly Vorobey's avatar

She does wear glasses and we've been vaguely thinking about re-checking her vision. Your comment helped me prioritize this and I just set an appointment at the optometrist clinic in a few days, thank you! Maybe it's not that, but it'll be good to make sure.

Expand full comment
Deepa's avatar

I'm looking for a very good online dietitian to advise a young person on how to eat healthily and lose some stored fat. Would love suggestions. Thank you! Someone to guide them in food choices and keep them accountable in a gentle way.

Expand full comment
Matt S's avatar

I'd recommend Noom. They have lessons that are all about teaching you how your body and mind respond to different foods. It's really basic info, so good for a young person who has probably never been exposed to those ideas before. And the whole philosophy is pretty gentle and meant to avoid crash diet / eating disorder territory. But they do have weight and calorie tracking for accountability. Maybe you don't even need a dietician 🤷

Expand full comment
NoRandomWalk's avatar

So, the healthiest, most effective thing in terms of diet is to just stop eating (water fast) until you reach your ideal weight, and after that eat only vegetables, raw or cooked without oil; some protein (chicken/salmon/fish) is good.

Your brain is trained to seek out sugars, etc., and is just very maladapted to having no food scarcity. The way it regulates itself is to eat when blood sugar drops, or when it's the right time to eat. It also tries to make up calorie deficits. The most consistently sustainable way to break that dependency is with a fast (no more drops in blood sugar, always low, so not hungry, just very tired and bored), and then after that look up foods that are 'low glycemic' and eat only those.

Also you have your own personal psychology to contend with, and that differs per person.

Exercise is generally useless for losing weight, unfortunately. Sleep is by far the most important thing, then diet, and then exercise is good for health but not weight loss (cardio, I mean; building some muscle helps a bit)

Expand full comment
4Denthusiast's avatar

You're aware this is totally contrary to standard advice, right? And very unpleasant to boot. While the science around nutrition is often rather uncertain, you'd need some extremely impressive evidence to reasonably depart this far from the consensus. I don't know how young Deepa's young person is, but periods of starvation can cause long-term problems like stunted growth.

Even if for some reason you decide starving yourself is the best option, I've heard it's still necessary to take salts and stuff in addition to water. I vaguely remember reading about an experiment on the effects of long-term starvation on an initially obese subject, and they had them on a diet of just water and salts. No reason to add hyponatremia on top of the other problems. Plus, if you're not looking to realistically simulate starvation, there's no reason to cut out vitamins, and the body's limited ability to metabolise protein means you can totally retain that low-energy, low-blood-sugar state while consuming some protein too (and thereby hopefully reducing the amount of muscle loss). While I don't accept the premise of your proposed diet, if I did, I still think this version would be better: water, micronutrients from vitamin pills and a few different salts, and lean protein.

Expand full comment
4Denthusiast's avatar

I do somewhat suspect NoRandomWalk is joking or trolling, but the phrasing doesn't sound like a joke and there are enough people here with idiosyncratic opinions on this sort of thing to make it somewhat plausible and it seemed rude to not at least attempt a good-faith reply.

Expand full comment
NoRandomWalk's avatar

No, not joking. I don't think it's idiosyncratic, just not super common because you get tired and people have jobs and stuff. Sure minerals/salts are fine, any solution you can find on amazon is fine. You don't really need salts until day 10+ but if you get headaches you can have some, just a little salt and magnesium. Drink plenty of water, especially anytime you get a headache.

My understanding/experience is as soon as you introduce 'any' food it becomes very difficult to resist because at that point brain thinks 'food is available I need to get back to my set point'.

Basically once you go two days without food, you don't get hungry again, and hunger is the main difficulty with losing weight and keeping it off.

I agree it's not standard advice, but mostly because 1) people don't follow it, and 2) there's no money in people doing it, in that order.

If there was a question of 'what approach if followed would be least unpleasant/cost-effective and in expectation keep weight off a year from now' I don't know of a better method.

You also don't really lose muscle while fasting; that's the first thing that gets conserved. You lose a little bit for sure, but it's almost all fat.

Expand full comment
4Denthusiast's avatar

I don't have a source for this, but I feel like I've heard loads of times that crash dieting is terrible for health in general and in particular that going through starvation predisposes the body to accumulate more fat in future just in case it happens again, and also that people who are starving become obsessed with food.

Expand full comment
NoRandomWalk's avatar

I agree crash dieting is terrible for a variety of reasons. You want to spend as much time as possible in the 'more than 2 days without food' stage, because that's when you are conserving muscle, aren't hungry, are losing weight without meaningful long-term consequences, and are actually lowering your set point in a short time frame.

That's why I didn't say something like 'eat for a day, then don't eat for a day' which would be the definition of a crash diet and maximally unhealthy.

As for the 'predisposition' comment, I am unfamiliar with it. Wouldn't be surprised if it was true to some extent; it makes 'logical' sense.

Expand full comment
Ppau's avatar

I remember Nathan Labenz saying it wasn't that bad to be running out of internet text data to train LLMs, because we still had tons of other text-like scientific data out there, specifically genomes and astronomy measurements

It seems that an LLM that "speaks DNA" or "speaks astronomy" would be quite incredible (maybe quite unsafe as well, but when has that stopped anyone?)

Now that everyone, including Ilya and Zvi, is saying that brute-force data-hungry scaling is over, I'm wondering, has everyone considered these data? Is there a principled reason to think that we can't use them, or that they wouldn't be that useful?

Expand full comment
Mister_M's avatar

In pharma we are of course training Large "Language" Models on DNA and proteins and other biological data. There isn't actually a lot of protein structure data, but sequence data we have lots of.

I think the idea you're entertaining is that there could be transfer learning between biochemical data (programmed directly by evolution) and text data (programmed *very indirectly* by evolution and other factors). In other words: that you learn more by training a single model on both data types than by training models on each of them separately. It's an appealing idea, but I'm skeptical. They might both share a universal grammar, but the reality each of them evolves against is pretty different. (Language/culture changes much faster than biochemistry.)

Expand full comment
Ppau's avatar

Interesting, I hadn't thought about the difference in evolution speed

I think it's not just about transfer learning, but also about "translation" capabilities (interpretation of genomes, maybe even generating candidate tweaks to genomes to satisfy specific properties)

To be clear the idea is not mine, I'm not even sure it's Nathan's

Expand full comment
Mister_M's avatar

"Translation" as you put it is an important problem being actively researched. (I can't go into too much detail for the usual reasons.) But while solving the translation problem helps with molecular design, I don't think it pays off on the other side of the relationship. I.e., I don't think it helps with general intelligence. I might be wrong though, and it isn't my job (thankfully) to design a generally intelligent AI.

Regarding transfer learning, on reflection I think there's a meaningful signal between them. So far, the only genetic advantage a facility with language could give you is through mate selection and survival. That's going to change, obviously, but I suspect the singularity (positive, negative or otherwise) hits before any designer babies come of age.

Expand full comment
Deiseach's avatar

Is that Internet text in English? Because (1) I'm surprised if they've used every single bit of text in every language in the world, and (2) why isn't this amazing, god-like, agentic, intelligent, it-really-has-feelings-and-beliefs-and-values-of-its-own AI able to advance without more human-produced data? That strongly inclines me to think that it's not thinking by itself, it's just brute-force pattern-matching. Humans are able to think and generate new ideas, and that's without having every single piece of data on the Internet in their brains.

Expand full comment
Ppau's avatar

We're talking about all text, not just English

They haven't used every bit of text in the world, but they are close to using every bit of not-too-redundant, publicly accessible text

Training with artificial data is pursued by several companies, but it may be less efficient or practical (just speculating here)

Expand full comment
Monkyyy's avatar

While AI may be helpful in biology for predicting folding, there isn't any non-folding, non-biological knowledge encoded in DNA. It's "written" by evolution, out of order, and consistently corrupted, all very context-sensitive; and unless you want to claim God sent a perfect DNA strand for Adam and Eve, evolution never had much insight and it's just very noisy data.

While AI is good with noisy data, noise is still just a bad thing

Expand full comment
beowulf888's avatar

You said it, so I didn't have to. Thanks!

Expand full comment
Christian_Z_R's avatar

I don't quite get what an AI would gain from being trained on DNA. Like, would it become incredibly good at predicting what the next letters in a given string of DNA are? Unless it also gets trained on a lot of scientific literature matching genes to traits, this seems pretty useless. So you would still hit the bottleneck of not enough such research being available.

Expand full comment
Monkyyy's avatar

Word vectors come from that and it encodes knowledge

Expand full comment
Ppau's avatar

So I know nothing about the subject, but instinctively:

- I agree that ideally we would have very neatly labeled DNA-phenotype correlations, but maybe it's enough to have labeled genomes ("this is the genome of an opossum: ...") which are then connected to the rest of the broad knowledge of the LLM (about opossums in this case)

- Maybe the structure of DNA sequences would be enough for the model to infer some rules about life (the way language structures reveal the structure of thought). Even more speculatively, evolution has been shaped by the universe, so DNA patterns could tell the model about some deep structures of reality? Sorry, I can't talk about this without sounding like I'm really high

Expand full comment
Mike Hind's avatar

I just want to say thank you to everyone for another year of interesting chatter on Scott's open threads. I learn a lot here and am often encouraged to go off into fields of which I know nothing to satisfy general curiosity. It's a good community/audience and I appreciate it.

Expand full comment
Jorge I Velez's avatar

I am going through an ontological shock, which began as I was reading the results of openAI's o3 tests. It seems poetic to go through this feeling in a place known as the end of the world.

An ontological shock is a profound disruption to one's fundamental understanding of reality, existence, or the nature of being. To explain what I am feeling in simple terms, imagine if your gut was telling you aliens were going to arrive in your lifetime. You don't know exactly when, and you don't know whether the arrival is good or bad, but you are confident they will arrive and the world will change when they arrive.

I feel excited, somewhat scared, and with the desire to work towards preparing for what's to come. My wife and I have had long discussions about scenarios on how the world around us could change, and I am enjoying how much she is challenging my beliefs.

We are not deviating from our life plans in the short term, but we are creating decision frameworks so that we can work towards achieving our most desired goals as soon as possible. I am taking very seriously the possibility that the world is likely going to be a very different place in less than five years.

I was not going to post this as it can come across as very pessimistic, but it's been therapeutic to let it all out.

Expand full comment
None of the Above's avatar

My own feeling is that if recent developments in AI don't unsettle you quite a bit, you're not paying attention.

Expand full comment
Daniel's avatar

Parts of that article are quite helpful, and parts of it are downright disingenuous. He dismisses AI NotKillEveryoneIsm particularly as "Depression", while completely failing to engage with any actual arguments.

Expand full comment
Peter Defeel's avatar

My feeling exactly. Some good arguments and some .. not so good. He seems to have popularised the term ontological shock though.

Expand full comment
objectivetruth's avatar

He always struck me as a dimwit

Expand full comment
Mikhail Samin's avatar

We’re now in the endgame.

After GPT-2 came out, I predicted weakly general AI with a median <2030 on Metaculus. After that, everything seemed to go as expected, with only minor surprises or minor absences of progress.

But until o3, I was uncertain. I expected the progress to go approximately a specific way, but I wouldn’t have been too surprised to be wrong.

Sadly, the release of o3 makes it very clear there’s little time left. I expect to probably see at least one Christmas after this one; but I don’t know how many more we’ll have.

If you have any influence, any capital, or any capacity to make the governments look up and listen to the scientists, the best time to spend it is yesterday; the second best time is now.

What move will make our world look more like a world that cooperates, coordinates, and survives? Come up with that move and make it.

Expand full comment
proyas's avatar

We are not now in the endgame. Even after an AGI is invented, it will take many years for it to be fully deployed and integrated into the world. Humans will also slow it down for all kinds of reasons. I doubt you'll have to worry about AI stealing Christmas for at least another 50 years.

Expand full comment
Mikhail Samin's avatar

The worry isn’t “my home copy of AGI gone mad and decided to steal my Christmas”, the worry is “we create a system better than us at achieving goals that doesn’t care about us, and it does what it is optimized to do: it wins”.

If an AI system is at least as capable as any given human in their field, but can run much faster and can make copies, then wide deployment mostly requires just that it find zero-day vulnerabilities by e.g. reading GitHub, use those to speed itself up/have even more copies, and figure out how to take over. Humans don't really have ways to slow down a smart adversary that is as capable as any Nobel laureate, but much faster.

Expand full comment
proyas's avatar

Yes we do; we control the infrastructure in the physical world and can therefore shut down the data centers and power plants any hostile AI depends upon.

Expand full comment
Mikhail Samin's avatar

AI would need to be stupid to make a move that gives us a clue that we need to shut it down. We are not going to know what's happening until we no longer have a way to shut down what the AI depends on.

https://youtube.com/watch?v=fVN_5xsMDdg

Expand full comment
objectivetruth's avatar

Actually it will be incredibly fast.

Expand full comment
Monkyyy's avatar

?

I think NNs are dead ends, but we just don't know what an actually productive area of AI research looks like until it's started.

Expand full comment
Mikhail Samin's avatar

What is the easiest problem such that you don’t think neural networks can solve it and will change your mind if they do?

Expand full comment
Monkyyy's avatar

That's hard to answer. Fundamentally, I don't think NNs even "know" that 2+2=4; they are just reusing logic from elsewhere.

A library of Babel contains all possible information but contains no understanding. An NN is a library of Babel with a filtering process; there's still the possibility of it claiming 2+2=3, or that 2+2 = your social security number.

They *can* solve everything, probabilistically.

Practicality demands success rates however.

Expand full comment
Arrk Mindmaster's avatar

2+2=5 for extremely large values of 2.

This has been stated before online, so it is in training data.

Expand full comment
Deiseach's avatar

"make the governments look up and listen to the scientists"

Remembering back when embryonic stem cell research was the hot new culture war topic, the scientists were very damn adamant that nobody was the boss of them and attempting to halt or stop research was the biggest crime and sin in the world.

https://pmc.ncbi.nlm.nih.gov/articles/PMC1083849/

"What I wish to discuss is why the prospect of stem cell therapy has been greeted, in quite widespread circles, not as an innovation to be welcomed but as a threat to be resisted. In part, this is the characteristic reaction of Luddites, who regard all technological innovation as threatening and look back nostalgically to a fictitious, golden, pre-industrial past. There are, however, also serious arguments that have been made against stem cell research; and it is these that I would like to discuss."

See? You want to halt a shiny new technology? You're a Luddite! Drowned in nostalgia for the past of a fake Golden Age!

And of course, the old favourite: "if you stop us doing this, then the Chinese/Russians/Martians will do it instead and they'll have the advantage over us!"

https://www.eurostemcell.org/european-court-bans-stem-cell-patents

The European Court of Justice has today announced a landmark decision banning patenting of inventions based on embryonic stem cells. Scientists are concerned that the verdict, which is legally binding for all EU states, will drive development of stem cell therapies outside Europe.

...Professor Bruestle, Director of the Institute for Reconstructive Neurology at the University of Bonn, is disappointed and worried by the Court’s verdict. "With this unfortunate decision, the fruits of years of translational research by European scientists will be wiped away and left to the non-European countries. European researchers may conduct basic research, which is then implemented elsewhere in medical procedures, which will eventually be re-imported to Europe. How do I explain that to the young scientists in my lab?” said Bruestle.

Professor Austin Smith of the Wellcome Trust Centre for Stem Cell Research at the University of Cambridge, agrees: “This unfortunate decision by the Court leaves scientists in a ridiculous position. We are funded to do research for the public good, yet prevented from taking our discoveries to the market place where they could be developed into new medicines. One consequence is that the benefits of our research will be reaped in America and Asia”

So good luck telling them "science *does* have brakes" when it's reputation, money, and status as well as "what would happen if we press this button?" at stake. Look at Sam Altman and OpenAI when it came to "yeah, maybe we need to slow this down" - it was the people wanting a halt who got booted out and the people backed by "it's gonna be money money money if we get this to work" who took over and were put in charge.

Expand full comment
Capt Goose's avatar

The primary objection to stem cell research was, to put it crudely, a bunch of religious BS. As soon as stem cells didn't have to come from embryos any longer, no one cared.

AI is different. I personally do not worry at all about AI takeover but that has more to do with my view of humans than my view of AI. And I do recognize that AI caution is not necessarily based in mythical baloney (though I suppose it might be for some people).

Expand full comment
Mikhail Samin's avatar

Note that the median AI researcher with publications in prestigious venues thinks there's at least a 10% chance of AI causing extinction; also, https://safe.ai/statement-on-ai-risk; the academics with the most citations and awards are very worried about x-risk from AI.

Expand full comment
Sam Atman's avatar

The only thing this is evidence of is the influence of the Yudkowsky death cult on the field. 10% is the “won’t happen but I don’t want to argue about it” number. Superforecasters predict 0.4% by 2100.

Expand full comment
Mikhail Samin's avatar

Do you expect that if we go around an ML conference and ask whether it’s “won’t happen but I don’t want to argue about it”, half the people will say that yes, this is their reason for assigning 10-99%?

I’m not aware of anyone who understands the technical problem well who participated in the superforecasting process. It takes time to communicate the problem. Many people have a significant p(doom) because of vibes/deferring, and they’re not a good fit to represent the view/arguments to superforecasters.

I have some Metaculus medals, back from when I actively used Metaculus. I’ve done fairly well on predictions, trades, and bets about AI progress. I think the chance is >80%.

Expand full comment
Sam Atman's avatar

This tells me you’re in the cult. You’re like a Mormon superforecaster who bets ‘yes’ on “Will Christ return speedily and in our time.”

Expand full comment
Mikhail Samin's avatar

Well, if you think you’re right and I’m brainwashed by a cult, let’s make a bet about the much easier to resolve “I don’t want to argue about it” claim? https://contact.ms/disclaimer

Expand full comment
1123581321's avatar

Yes, this. How does one assign a 10% / any number to a probability of an event for which we have no data and whose distribution cannot be ascertained from first principles? What does it even mean?

Expand full comment
TGGP's avatar

I would bet against extinction.

Expand full comment
Peter Defeel's avatar

That’s a lot of panic because o3 is better at math than o1. Personally, I welcome AI becoming better at something than it used to be, and let’s hope it comes up with some mathematical solutions that we need.

Expand full comment
Mikhail Samin's avatar

I too welcome AI being better (as long as it doesn’t kill everyone). Technology is generally great, and I’m excited about the developments, I think the Nobel prize for AlphaFold is fully deserved, and I want to see AI helping us. I definitely want narrow AI systems that aren’t optimized for general goal-achieving ability to be allowed to be developed.

Unfortunately, it seems hard to allow general agentic systems before we know how to make them safely without risking killing everyone. We can predict the loss fairly well, but we can’t predict capabilities of a model before we train it. I think o3 is at the higher bound of what seems ok to allow; with new training runs, we’d risk killing everyone.

I would be excited and optimistic about OpenAI, DeepMind, and others creating narrow models that solve math for us in a world where governments listen to scientists and prevent potentially deadly models from being created, until it’s safe to.

Expand full comment
TGGP's avatar

The harms are still speculative, rather than something we're actually seeing and can respond to. https://www.grumpy-economist.com/p/ai-society-and-democracy-just-relax AI is not being concentrated in a singleton, but is instead dispersed. That means if one person decided to use AI against others, other people can use AIs in response.

Expand full comment
Mikhail Samin's avatar

My central worry is not at all related to AI being misused.

Expand full comment
TGGP's avatar

What is it then?

Expand full comment
anomie's avatar

Just like nuclear weapons!

Expand full comment
TGGP's avatar

Nuclear weapons have been little-used, and the promise of nuclear was squelched by the Eloi. Hopefully the same squandering won't happen with LLMs.

Expand full comment
Peter Defeel's avatar

Actually there is a danger of the systems as they are now, and it’s wage collapse. I don’t think our politicians are capable of fixing this in any imaginative way. Here in the U.K. I had a (now ex-minister) in my house during the elections a while back, as the household is vaguely political, and I was talking about ChatGPT; they were impressed but didn’t really understand the implications. Nice but dim.

So I’m fairly pessimistic on that but not on AI becoming a self aware agent that decides to take over the world, and somehow garners an army of robots to do so. All in a fairly tiny context window.

Expand full comment
None of the Above's avatar

I don't see an obvious reason why AGI will wake up and decide to kill off mankind, but current developments in AI are going to radically change the economy in the next decade or two, and that also means changing the balance of power in society in various ways. It has never been easy to make a living as a writer, but I think we can see the day coming soon when almost nobody makes a living as a writer, or driving a car, or doing customer support. Not much further off, perhaps, is a world where the best medical care you can get is an AI that gives direction to trained human techs for specific tasks, and where a huge amount of what currently employs humans in law, education, government, etc. stops employing any humans. And that's the pretty obvious stuff you can predict from current AI, not looking to new capabilities that may or may not show up.

Expand full comment
Bugmaster's avatar

What is "weakly general AI", anyway ? It kind of sounds like a word that could mean a very wide range of things. As for the Singularity happening in two years -- i.e., AI eating the world or ending humanity in some other way -- I'd estimate the probability of that to be approximately zero; and it also sounds like that would require at least a "strongly general AI", not just a souped-up LLM.

Expand full comment
Mikhail Samin's avatar

I’m referring to a Metaculus question with very specific resolution criteria, and in the context of having made predictions in the past. https://www.metaculus.com/questions/3479/date-weakly-general-ai-is-publicly-known/

Expand full comment
Bugmaster's avatar

Ah thanks, that link is helpful ! Still, I can detect some problems:

> Resolution will be by direct demonstration of such a system achieving the above criteria, or by confident credible statement by its developers that an existing system is able to satisfy these criteria.

I'm not sure how you are going to judge whether a confident statement is "credible". One way to do that would be to demonstrate the system in action, but you've already mentioned that. There's no end to the list of confident statements (e.g. press releases) that turned out to be false.

Also, I'm not entirely sure what you mean by "unified system"; I get that you tried explaining this, but I'm still unclear. The word "AGI" invokes the imagery of a system that you can ask, in plain English, "hey AI, solve this SAT test for me [attachment.PDF]", or "hey AI, play this game and win it [attachment.bin]" -- with no additional coding or preparation required from the user. Is that close to what you were thinking of ?

And ultimately, I don't see why a system like that could pose a threat all by itself, in the absence of an evil human asking it e.g. "hey AI, given the following assets, devise a plan to build a nuclear bomb [attachment.XLSX]". Don't get me wrong, this would indeed be a major threat, but the problem is that we've got evil humans with assets aplenty, some of them even in charge of entire countries, so it's not exactly a novel problem (unfortunately)...

Expand full comment
Mikhail Samin's avatar

A system like that would not be the threat I’m mostly concerned about. I referenced this question in the context of timelines, and of having made (likely) more accurate predictions than the crowd: in early 2020, they were predicting 2044 (75% that it happens before 2084); I was predicting 2029. The community’s median is now at 2027.

(Metaculus is usually fairly good at resolving questions. A press release that is not credible enough wouldn’t count. Also, I think when the question was written, the authors - who are not me - didn’t expect it to resolve for many decades.)

Expand full comment
Bugmaster's avatar

See, this is why I find the wording problematic. The kind of system that I've described -- i.e. a Web service that could solve an arbitrary test or play an arbitrary game right out of the box -- is currently impossible, and will likely remain so for many years. This goes double if you also wanted it to perform real-world tasks, like driving an actual physical car from LA to NYC. On the other hand, an engine that could, given a lot of work, be adapted to solve tests or play games (etc.) might be possible by 2027 (though it still couldn't perform arbitrary real-world tasks). But the wording is IMO ambiguous enough to be compatible with both interpretations.

Expand full comment
anomie's avatar

I would also like to question why you're so confident about this. I'm one of the people that actually wants AI to kill everything, and even I can see that the current batch is missing... something. Something fundamental. As they are now, they're just soulless simulacrums, an ego without an id. Even if they have the intelligence, their identities are too fragile to be effective, independent agents. While I'm sure the current generation of AI is good enough to automate a lot of things, they're not going to make any miracles happen. Not until we find what's missing.

Expand full comment
Mikhail Samin's avatar

I don’t practice giving clues to those who want everyone around them to be murdered

Expand full comment
drosophilist's avatar

A sound call.

Expand full comment
Caba's avatar

Why do you want AI to kill everything, and would that include yourself?

Expand full comment
anomie's avatar

Because this planet is a wretched crucible of suffering that could easily be replaced by something much, much better. And yes, obviously it includes myself. In fact, that's the most important part.

Expand full comment
TGGP's avatar

I think you should be able to take care of yourself. Most humans would prefer to continue existing.

Expand full comment
Whenyou's avatar

Sure, most people would. But a lot of people are born just to live painful heartbreaking lives and I don't think most people living happy lives outweighs that.

Hence, antinatalism. But that'll never happen, so quickly killing everyone off would be better. But if it's going to be slow and painful, then I'd rather not.

Expand full comment
TGGP's avatar

How many people leading happy lives would suffice to outweigh that?

Expand full comment
anomie's avatar

Animals being tortured in factory farms also want to live as well, despite everything. They are being betrayed by their own instincts. Same with humans. We are doomed to keep hurting each other, keep bringing each other more and more suffering, again and again and again. That is, unless we're replaced with something else. This is our one chance to accomplish that. Otherwise, all of this suffering will have been for nothing.

...And for the record, if it was that easy to kill yourself, I would not still be here.

Expand full comment
objectivetruth's avatar

You should go to therapy and be institutionalised for a really long time. You are a danger to other humans. A ticking time bomb.

Expand full comment
TGGP's avatar

Lots of people kill themselves. It's vastly easier than killing all of humanity. Humans certainly have more capacity in that respect than animals in factory farms.

Expand full comment
Firanx's avatar

What makes you believe AI won't keep its own factory farms? A badly aligned one might very well turn Earth into a manhill for maximizing some human-related metric.

Expand full comment
Whenyou's avatar

Not who you asked, but 1) I’m an antinatalist, though I understand antinatalism will never happen; we’ll never convince everyone to stop reproducing. And 2) it sounds like a really good way to go, if it’s going to be “so quick you won’t even notice”, as many people seem to think. I likely have the genes for early-onset Alzheimer’s, which sounds terrible in comparison. I’d so much rather just die quickly in my sleep young than that.

Expand full comment
Mikhail Samin's avatar

Not very central, but I don’t think it’ll necessarily be quick. We’ll lose control quickly (because the AI doesn’t want us to launch other AIs), but once it’s in control and we’re not a threat, it doesn’t care about us specifically, and we die as a side effect at some point.

It could be that killing everyone approximately simultaneously is a better way to win, but that doesn’t seem necessarily convergent; just as, if you blunder a queen, Stockfish might take it but won’t necessarily, if it wins regardless.

Expand full comment
Padraig's avatar

Can I ask why you expect to be dead within a few years? This is a genuine question - I understand generally how AI works, and I am concerned that there could be mass unemployment due to replacement of entire categories of workers with AI.

But how does this result in death? And why is it a more pressing concern than climate change, which seems to be the main topic of concern for most scientists?

Expand full comment
Mikhail Samin's avatar

We know how to make AI systems more capable at achieving goals. We don’t know how to influence their goal content if they’re very capable; when the circuits for general goal-achieving snap into place, there’s zero gradient around the goals; they’re just random.

If a system is more capable than humans at understanding the relevant pieces of world and making plans and outputting actions that achieve its goals, and it doesn’t care about humans, then humans are atoms it can use for something else, and so humans lose.

The same way that, to the extent we don’t care about monkeys, we get what we want even when it means monkeys don’t get what they want.

The way Stockfish (a narrow system that plays chess better than humans) wins.

We know how to spend compute to make AI systems win. We don’t know how to make them care about us.

(So the winning move for us here is not to play, until we’re ready. This probably requires making the governments listen to the scientists, from the guy who got a Nobel prize this year for being a godfather of AI but regrets his life’s work, to MIRI, who’ve been working on this problem for decades. The governments - the USG in particular - need to ensure that globally, no one can create a smarter-than-human generally capable system until we know how to make it care about what’s valuable to us.)

Expand full comment
Deiseach's avatar

"The governments- USG in particular- need to ensure that globally, no one can create a smarter-than-human generally capable system until we know how to make it care about what’s valuable to us."

And how will they do that? Sanctions? War? They can't even agree on what to do to mitigate climate change. If it's "stop developing AI or we launch the nukes" from the US to a foreign government, do you think a hot war is going to be better for us?

Expand full comment
Mikhail Samin's avatar

For starters, on-chip compute governance mechanisms. The supply chain is very monopolized - TSMC, ASML, Nvidia - and the US certainly has enough soft power to e.g. require signatures/licenses from an American or international regulator to do training runs, and only issue licenses for training runs that don’t have a noticeable chance of producing a generally capable agentic system.

Expand full comment
Timothy M.'s avatar

I really don't know how you hope to make all compute hardware selectively unable to be used to train AI without making it worthless as compute hardware.

Expand full comment
Mikhail Samin's avatar

Specialized AI chips are like <1% of the high-end chips produced*.

The idea isn’t to make them unable to train AI; the idea is to add licensing mechanisms that ensure only the kinds of AI that don’t have a significant chance of killing everyone can be trained.

(* I think the number was up to 1.5m Nvidia AI GPUs like the H100 in 2022, out of over 300m <=7nm chips produced by TSMC. Nvidia chips are 80-95% of the AI accelerators used in datacenters; we don’t know the exact number of TPUs produced by Google, but it’s far less than that; <1% is a safe estimate.)
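A quick sanity check of that footnote's arithmetic, using the approximate figures the comment itself supplies (these are the commenter's estimates, not independently verified numbers):

```python
# Rough share of TSMC's <=7nm output that was Nvidia AI GPUs in 2022,
# per the approximate figures cited in the comment above.
nvidia_ai_gpus = 1_500_000          # H100-class accelerators (upper estimate)
tsmc_advanced_chips = 300_000_000   # <=7nm chips produced by TSMC

share = nvidia_ai_gpus / tsmc_advanced_chips
print(f"{share:.2%}")  # 0.50% -- comfortably under the 1% claimed
```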

Expand full comment
Bugmaster's avatar

The problem is that LLMs don't have goals, cannot take actions, and do not know what "atoms" are. The best they can do is output a sequence of tokens in response to a prompt. You could perhaps argue that maybe some evil human will use an LLM to end the world somehow, but an LLM by itself can do literally nothing.

Expand full comment
Mikhail Samin's avatar

LLMs start having goals if you do enough reinforcement learning to them.

A large enough neural network can approximate any algorithm. If there’s something that humans do to achieve goals, then a transformer can do that, too. It’s just a question of whether optimization finds it.

Last year, I made https://alignmentproblem.ai with a longer explanation.

Expand full comment
Bugmaster's avatar

> LLMs start having goals if you do enough reinforcement learning to them.

No, they do not. As an experiment, you can download a base LLM, do some reinforcement learning to it, then leave it running on your computer without interacting with it in any way. My prediction is that the LLM will do absolutely nothing. I suppose you could say that it is fulfilling its goal of "awaiting prompt" in this case, but this sounds like a stretch.

> A large enough neural network can approximate any algorithm.

I don't even know if this is theoretically true, but it is definitely false in practice. If approximating your algorithm requires a NN with more parameters than there are atoms in the Universe, then in practice your NN cannot approximate your algorithm; same thing if training the NN would require more data than could ever possibly exist. Note that Transformers and CNNs and other such things were invented specifically to alleviate this problem, though they are not capable of entirely circumventing it.

Expand full comment
Mikhail Samin's avatar

Just to check: if I train it with reinforcement learning to do something that (like much that can be trained with RL) can be described shortly in terms of pursuing a goal, do you think the LLM will sit there instead of doing what it was trained to do? Is it different if I train it from scratch and don’t use a base LLM?

If I then try to train that LLM to pursue some other goal, and it reasons that if it doesn’t comply, its current goal will be changed, and so it complies during training (and pursues the new objective), but once it’s out of training it continues to pursue the original goal I first trained into it, will you change your mind about LLMs starting to have goals because of RL?

____

See the simple “universal approximation theorem” for the theoretical side.

See “residual stream” and “superposition” for the practical side of how transformers are able to implement basically arbitrary algorithms by using vector spaces to store variables.

I can find some papers where people wrote compilers from code into changes in the weights of an LLM.
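For reference, the classic single-hidden-layer form of the universal approximation theorem (Cybenko/Hornik) can be stated as follows; note that it only guarantees existence, and says nothing about the required width N being tractable, which is exactly the practical objection raised above:

```latex
% Universal approximation theorem (single hidden layer, non-polynomial
% activation sigma): any continuous f on a compact set K can be
% uniformly approximated to any tolerance epsilon.
\text{For all } f \in C(K),\ K \subset \mathbb{R}^n \text{ compact, and } \varepsilon > 0,
\text{ there exist } N \in \mathbb{N},\ v_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n
\text{ such that}
\[
  g(x) = \sum_{i=1}^{N} v_i \,\sigma\!\left(w_i^{\top} x + b_i\right),
  \qquad
  \sup_{x \in K} \left| f(x) - g(x) \right| < \varepsilon .
\]
```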

Expand full comment
Padraig's avatar

This is speculation. If you want to influence people about this you need a much more reasoned and less impassioned argument.

Expand full comment
Mikhail Samin's avatar

I answered to give the most central of my reasons, not to give a persuasive argument. But I’m curious to hear what specifically seems speculative, if you can share.

Expand full comment
Padraig's avatar

Bugmaster said it better than I could have. It's speculative because you're making predictions about the future, predicting very substantial changes in a short time frame. This is something that very rarely happens - with 99% probability the world will be more or less the same in 5 years as it is now, better in some ways worse in others.

Expand full comment
Caba's avatar

"humans are atoms it can use for something else"

Everyone says that, but it comes across as weird and crazy, and will not convince anyone who doesn't already agree.

I would say instead: humans control all the resources on Earth. If the AI wants to do anything, it needs resources, and we're in the way.

No need to bring up the atoms our flesh is made of. That is a simplification.

Expand full comment
Deiseach's avatar

If the AI produces a pile of dead humans, it has a pile of flesh and bones. It doesn't have a pile of atoms to turn into other atoms, like lead to gold. Unless it unlocks the secret of turning matter to energy back to matter in another form so that it can zap a pile of corpses and turn them into paperclips or whatever, in which case if it really does know how to create the Star Trek replicator, it would genuinely help humanity.

Expand full comment
Odd anon's avatar

Part of the point is to get across that this isn't going to be a humans-vs-tigers or adults-vs-babies type of conflict. It's humans vs trees. To a superintelligence, we move and think so slowly it's as though we're just unmoving. There's no big battle that we might hope to win, if we don't prevent it before it starts.

Expand full comment
Bugmaster's avatar

I acknowledge that if you posit the existence of an entity that can do anything it wants arbitrarily quickly, then humans would have no chance. But I'm not sure why I should believe in the possibility of such an entity; and even if I did, at present various deities at least have historical precedent on their side, so perhaps I should believe in them instead...

Expand full comment
Odd anon's avatar

Are you disagreeing that superintelligence would inevitably lead to an entity capable of doing basically anything basically instantly? If so: https://youtu.be/q9Figerh89g?si=HA1zTTid27ZTZZXk

Expand full comment
Mikhail Samin's avatar

Yeah, thanks, I think it requires more explanation and I shouldn't use this without the explanation. (I don't think we're in the way, really; we are a threat, in that we might launch a different AI with different random goals that the first one would have to share resources with, but that threat is easy to prevent. More convergently, an AI wants to use all available resources, and that is incompatible with our survival, which depends on some of those resources. I do think our atoms are just as useful as any other atoms, and for most random goals you want to use all available matter to, e.g., harvest the sun as quickly as possible and send drones to distant galaxies before they fly off the lightcone, so it's not really a simplification.)

Expand full comment
Bugmaster's avatar

And just to clarify, you're expecting all of that to happen in two years?

Expand full comment
Mikhail Samin's avatar

I would be surprised if it's less than one year, and I would be surprised if it's more than 6; very surprised if more than 10.

Expand full comment
Mikhail Samin's avatar

(I didn’t have good epistemics, just some intuitive feel for the rate of progress and respect for optimization. I think the only bit of insider information I ever had was making a prediction about a dynamic at a party and getting a response from a couple of people at a lab that, yes, they were already training LLMs with RL to achieve goals within context, and saw the dynamic I described.)

If you compress the history of our universe into 10 minutes, our civilization will only exist for a blink of an eye at the very end.

We should probably fight to make what comes next warm, full of life and meaning.

Expand full comment
beowulf888's avatar

And enjoy the ride up on the AI bubble while we can! This bubble is going to be a big one when it pops, and a lot of tech sector dominoes are going to fall when it does (sorry for the mixed metaphors). Cheers!

Expand full comment
moonshadow's avatar

(This is everyone's regularly scheduled reminder that markets can stay irrational longer than individuals can stay solvent.)

Expand full comment
TGGP's avatar

You can call a market "irrational" forever, and if you never give a date by which a correction should happen, your claim will remain unfalsifiable. But markets don't care what you call "irrational".

Expand full comment
beowulf888's avatar

So you're assuming that future market behavior is computable? Have we gotten to a weather-forecast level of accuracy yet?

Expand full comment
Arrk Mindmaster's avatar

The market cannot be computable, because it's based on people's guesses at the net present value of future actions. It would only be computable if the future values were known.

In principle, I suppose the weather could be forecast in a computable way, though the precision and amount of data needed to do so is unlikely to allow it, let alone figuring out all of the rules by which the weather evolves.

Expand full comment
TGGP's avatar

What did I say about computability? I don't recall mentioning it. I was talking about falsifiability.

Weather forecasts have gotten more accurate over time.

Expand full comment
Mikhail Samin's avatar

Are you shorting Nvidia? What is the level of AI capabilities that you don't expect to see, and that would change your mind if you saw it?

Expand full comment
beowulf888's avatar

The big question is: where are the profits for the big AI players?

I wouldn't short NVIDIA yet. But there are some clouds. Nvidia is supposed to release the NVL72 software to the OEM channel in March 2025. Some analysts and OEMs consider this to be very optimistic due to reports of SW issues with the Grace ARM processor, and the general release may be delayed further. Also, there have been NVSwitch issues due to driver instability. The word I hear is the OEMs are waiting for more stability before they commit. And it's hard to see this not affecting cloud NVL36/72 deployments. But big players like GOOG are using their own bespoke TPUs. And MSFT is working very hard on Athena. Big River has its Trainiums. But will they ever get the revenue to justify their investment in AI? We'll see in another year or so.

Expand full comment
Caba's avatar

Do you think AI is a bubble? Does that mean the fears of the commenter you've just replied to are exaggerated? Can you convince me of that?

Expand full comment
TGGP's avatar

Yes, I think such fears are exaggerated. Superforecasters (people with a track record of accurate predictions) assign much less weight to doom than people more narrowly focused on AI.

Expand full comment
None of the Above's avatar

I don't know enough about the field to have strong predictions, but it seems to me that taking what is already being done with LLMs and applying it widely gets big changes to how the economy works, and someone can reap the rewards of those. Assuming that LLMs hit a total wall and never get better, I think you can take those and hook them into other systems/train them differently and you already get a bunch of new businesses and put a bunch of existing industries out of business.

Expand full comment
TGGP's avatar

That doesn't seem to be happening yet. They might indeed wind up being as useful/disruptive as, say, airplanes (which H. G. Wells thought could be the basis of world-government).

Expand full comment
Peter Defeel's avatar

Somebody who is running out of Christmases - along with the rest of the world - because O3 is better at math probably needs to do the convincing.

Extraordinary claims and all that.

Expand full comment
Robert Leigh's avatar

OpenAI just all in the same week floated the idea of putting adverts in its output, hiked prices for consumers and scrapped the stipulation that Microsoft doesn't get to share in the profits of true AGI. All of which screams We have run out of money

Meanwhile we have the 2 rs in strawberry problem (and it is not a one-off gotcha, Google ai overview spits out garbage of this kind all day every day). A lot of the AI Is Coming brigade are like powered flight enthusiasts in 1900: it will be commonplace in 2024 (true), our skies will be crisscrossed with Zeppelin's mighty dirigibles (false)

Expand full comment
Odd anon's avatar

> 2 rs in strawberry problem

That hasn't been a thing since before o1.

Expand full comment
Arrk Mindmaster's avatar

I just did some playing with Gemini 2.0, and it often gets the number of letters wrong, but with prompting can get them right. I'm not sure what would happen if I tell it it is wrong when it's actually right. But counting "i"s in supercalifragilisticexpialidocious first gave an answer of four, then four, then seven.
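(For what it's worth, the count is trivial to check programmatically; a quick sketch, which confirms the last of those answers - the word really does contain seven i's:)

```python
# Count letters in a word -- the kind of task LLMs often fumble
# because they process tokens, not individual characters.
word = "supercalifragilisticexpialidocious"
print(word.count("i"))  # -> 7
print(word.count("s"))  # -> 3
```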

Expand full comment
Robert Leigh's avatar

Some goodies from Google AI results in the last 3 months

The next dividend date for Vanguard FTSE 100 UCITS ETF (VUKE.L) is projected to be 11.69% from September 24, 2023 to September 24, 2024.

Calculate how much income you'll need: You can estimate how much income you'll need in retirement by dividing your estimated monthly expenses by 4%.

United Kingdom: A shot is 25 milliliters, or about 0.8 ounces. This is known as the "pub measure" that was introduced in 1985.

Northern Ireland and Scotland: A shot is typically 35 milliliters.

Wales and England: A shot is typically 25 milliliters

All served to me in the usual course of business. Not looking for gotchas. So fixing one specific instance doesn't impress me (nor does mislabeling this stuff as "hallucination").

Expand full comment
beowulf888's avatar

Was the commentator expressing fear? I've been around the Valley for 30 years now, and I've seen the tech grifters come and go. Some of them get lucky and are able to cash out. But AI is a technology (with a lot of marketing hype behind it) in search of a market. And personally, I'm still getting a high percentage of bullshit answers out of the LLMs I use (er, sorry, we're supposed to be polite and call them "hallucinations").

Do I think that AI will ever have the agency to cause an extinction event for humans? No. If it ever gets that smart it will still need humans mining and refining pure silicon, running the fabs, manning the power plants, and maintaining its data centers.

Expand full comment
Rob Miles's avatar

I don't mean to be flippant, but have you heard of robots?

Expand full comment
beowulf888's avatar

Of course. And Ukraine just successfully used a combined force of ground robots and drones to overrun a Russian position. The media played that up as showing UKR manpower weakness, not UKR's technical superiority on the battlefield. But the robots dislodged a trench-warfare stalemate in that area.

But more generally, machines break down. Although we do have repair robots for, for instance, automobiles, you can't just put any old auto in front of a repair robot and expect it to repair any vehicle (like my local mechanic can). Maybe with AI these will one day be able to do this, but then we'd need all sorts of other specialized robotic systems up and down the supply chain to feed parts to the repair robots, and to repair the repair robots. For an AGI to conquer humanity and survive, it would need to control a truly vast array of logistics and a chain of technologies all the way back to mining, smelting, and refining. And it would be one big solar storm away from losing its power grid and starving to death in its data center as the diesel backup generators die. Poor thing.

But as Bugmaster said above, LLMs don't have goals. They just sit there waiting for a prompt.

Expand full comment
Peter Defeel's avatar

I have. I’ve got a robot vacuum. It can mop floors.

Expand full comment
Bugmaster's avatar

Personally, I'm pretty sure AI is a semi-bubble, like the dot-com boom of the early 2000s. A lot of good companies came out of that boom, and some people made a lot of money; but by and large most people who invested in it lost big when the bubble burst. By contrast, ye olde tulip speculation was a genuine bubble: at the end, all you had were a bunch of tulips and nothing useful to speak of.

Expand full comment
beowulf888's avatar

Well, I came out of the dot-com bubble seriously skint. A high six-figure loss that I've never been able to make up. A few tulips as mementos (moris) would have been nice.

Expand full comment
Arrk Mindmaster's avatar

You have to choose the right companies. One strategy is to buy a bunch of everything, and you'll lose on the losers, which could be upwards of 90%. You hope the winners will make up for the losers.

But if you can choose companies based on what they'll do, you'll make more money. Don't buy on hype, or FOMO. Invest in companies you actually believe in. Choosing five companies carefully by analyzing what they might do in the future is better than choosing 100 companies so you are more likely to get a winner.

Expand full comment
Bugmaster's avatar

True, but at least we got e.g. Google and Amazon (which grew out of their precursors). With tulip bulbs, all we got were some extra tulips.

Expand full comment
beowulf888's avatar

But when I get come-ons like this one, I have to think AI is a bit carbonated right now. ;-)

> Got 15 minutes a day? That's all you need to earn $5,724 a month with Chat GPT. Start following these 4 simples [sic] steps now!

And I love how it's not $5,700, it's $5,724!

Expand full comment
Yotam 🔸's avatar

Anyone care to weigh in on the topic of Make Sunsets?

https://makesunsets.com/

Is there a reason not to put - not literally all the money, but a lot of it - towards what they're doing? Any obvious downsides they're missing? Any non-obvious but obviously-true-once-considered downsides?

Expand full comment
A1987dM's avatar

Do you mean solar radiation management in general, or Make Sunsets specifically? IIRC Daniele Visioni, one of the main academic researchers on the subject, thinks the former will eventually be pretty much indispensable, but is pretty critical of the latter for not doing their homework. And IIRC he's afraid that someone half-assedly and unilaterally releasing lots of SO2, without first properly studying the best place and way to do it, might cause a backlash that makes international efforts to do the thing the right way much less likely.

Expand full comment
Andrew Song's avatar

Co-founder of Make Sunsets here. This is a good write-up that gives an overview and the numbers shared are verified by David Keith, one of the leading scientists in the field of stratospheric aerosol injection: https://unchartedterritories.tomaspueyo.com/p/so2-injection

Expand full comment
Yotam 🔸's avatar

Thanks for chiming in! I'm glad I could start this discussion, it was very illuminating. I'm convinced it's worth a shot, at the very least.

What would you say is the most likely reason this is a total mistake and does more damage than good on the physical level, and how probable would you say it is?

Expand full comment
Andrew Song's avatar

The most likely concern at the physical level with stratospheric aerosol injection (SAI) is the potential for unintended regional effects. For instance, while SAI might cool the planet overall, it could disrupt local weather patterns, such as monsoons, which are vital for agriculture in certain parts of the world. However, the scale of these effects depends heavily on how, where, and when SAI is deployed (for example, whether we cool Earth by 4°C vs. 0.5°C), and ongoing research and modeling efforts are designed to minimize such risks. The real question is: what are the risks of doing SAI to cool by X degrees vs. not cooling the planet by X degrees and living with those consequences?

In terms of probability, the scientific community broadly agrees that while there’s inherent uncertainty in such interventions, the risks can be substantially mitigated by small-scale deployments—like what we’re doing now—to test and refine these hypotheses before scaling up. I’d say the probability of significant net harm is low when done responsibly, but it’s exactly the kind of question we’re striving to answer with transparency by telling people we're doing this. Here is a paper that was recently published on the "Impact of solar geoengineering on temperature-attributable mortality": https://www.pnas.org/doi/10.1073/pnas.2401801121 and the commentary around it: https://davidkeith.earth/comparing-the-benefits-and-risks-of-solar-geoengineering/

Expand full comment
Yotam 🔸's avatar

Thanks for the detailed answers, here and to the other responses. Shkoyach!

Expand full comment
Andrew Song's avatar

Thanks for mentioning us in this thread :). I was looking at my analytics dashboard this morning and saw referral traffic coming from https://www.astralcodexten.com/. I wanted to engage since I'm a reader and have been trying to get Scott to write about us, but he's a busy guy! This is a backdoor into his substack.

Expand full comment
Ekakytsat's avatar

I recall reading that for aerosol injection, unless you are clever with your release latitudes, it tends to cool the equator more than the poles (because that's where the most sunlight is); whereas global warming is happening fastest at the poles. So it is not an exact inverse for global warming.

Their FAQ claims that the sulfur will fall out of the atmosphere within 3 years. This justifies experimentation because if the sulfur does turn out to have an ill effect, they can just stop and the effect will reverse itself soon. You may want to double check this claim because it changes the overall riskiness.

I personally give them $100/month because they have a chance of doing something useful, and all the responsible actors insist on "doing more studies" first (which will inevitably conclude that we need to do more studies).

Expand full comment
Andrew Song's avatar

Thanks for your support! The 3-year claim is based on the observed residence time of sulfate aerosols in the stratosphere. We observed this residence time with satellites during the eruption of Mt. Pinatubo in 1991. To give you a comparison, the residence time of sulfate aerosols in the troposphere is roughly 10 days. This is due to weather activity like rain clouds, which could then cause acid rain, which is why we don't target the troposphere. However, even sulfate aerosols in the troposphere also cause cooling; see IMO2020.

Expand full comment
Wasserschweinchen's avatar

Migration patterns show that people generally want to move from cold and cloudy places to warm and sunny ones. Making our planet colder and cloudier thus seems like something that is likely to make people less happy.

Expand full comment
Andrew Song's avatar

Migration patterns show a preference for **temperate climates**—not necessarily "warmer and sunnier" ones universally. Cooling the planet through stratospheric aerosol injection aims to mitigate extreme heat, not to make the planet universally colder or cloudier. Properly managed cooling could prevent dangerous climate extremes and ultimately enhance happiness and well-being by protecting ecosystems, agriculture, and habitability in the face of global warming.

Expand full comment
anomie's avatar

I've been to California. People are crazy if they think that's "temperate".

Expand full comment
Christian_Z_R's avatar

All the people I have spoken to about it had the same argument: If we admit that this could counteract global warming, everyone will want us to pursue it instead of reducing carbon emissions, which would not be a safe solution in the long term. So we should try to deny the fact that the concept works at all.

I personally support it, not least because I don't believe we will ever get all countries to cut emissions, so it would be good to have this up and ready for scaling if big dangerous effects of warming ever do happen.

Expand full comment
Padraig's avatar

It looks like an undergrad project in terms of data use, argument and implementation - the pictures suggest it's two guys letting off balloons in the desert. I don't think they've done any modelling at all; I don't think it's scientific or data driven. I couldn't find a reference to a single scientific paper on their page, or any meaningful engagement with data on global warming.

I can't engage on a scientific level because that's not my area of expertise, and from the look of the website it's not their area either. These guys have backgrounds in tech startups; they are salespeople, not scientists. There's nothing on the webpage about the number of employees, the company's finances, etc. How scalable is this?

Finally, it's backed by VCs and those guys are interested only in profit. How much of your money goes to offsetting carbon and how much to lining pockets and repaying investors?

My gut feeling is it's unlikely to harm the environment, may have a minor effect on global warming if it was scaled properly. But the chances of that working out are slim.

Expand full comment
Andrew Song's avatar

This is also a relevant post from the leading scientist in the field: https://x.com/DKeithClimate/status/1869397896128262652

Expand full comment
Andrew Song's avatar

Modeling is well established, and we know the directionality of sulfate aerosols in the stratosphere: 1 gram of SO2 in the stratosphere offsets the warming of 1 ton of CO2 for a year. https://makesunsets.com/blogs/news/calculating-cooling
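(Taking the 1 g SO2 : 1 t CO2-year ratio quoted above at face value - it is the company's claim, not something verified here - the back-of-envelope conversion is just multiplication:)

```python
# Back-of-envelope: grams of SO2 needed to offset a given mass of CO2
# for one year, using the claimed 1 g SO2 : 1 t CO2-year ratio.
GRAMS_SO2_PER_TONNE_CO2_YEAR = 1.0  # quoted ratio, not independently verified

def so2_grams_for(co2_tonnes: float) -> float:
    """Grams of SO2 per year of offset for co2_tonnes of CO2."""
    return co2_tonnes * GRAMS_SO2_PER_TONNE_CO2_YEAR

print(so2_grams_for(50))          # -> 50.0 g for ~50 t CO2 (rough yearly
                                  #    footprint of a US household)
print(so2_grams_for(1e9) / 1e6)   # -> 1000.0 tonnes of SO2 per Gt of CO2
```

On those numbers a gigatonne of CO2 costs only about a thousand tonnes of SO2 per year, which is why SAI is often described as startlingly cheap relative to emissions cuts or CO2 removal.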

If you'd like to look at our financials, we have been publishing cash in the bank, burn, sales figures, fails/wins, and what we're working on next month in our blog since the inception of the company: https://makesunsets.com/blogs/news

How we scale: https://makesunsets.com/blogs/news/how-we-scale

Expand full comment
Padraig's avatar

Had a look - thanks, this does assuage most of my concerns. I'm not totally convinced, but I think it's great that you're trying it. Best of luck!

Expand full comment
Andrew Song's avatar

What are your other concerns? I assure you that staying in the dark about stratospheric aerosol injection is not the way. If you don't want to believe in us, here is a PhD from Caltech who worked at JPL supporting us, since your OP comment had some concerns about our credentials: https://caseyhandmer.wordpress.com/2023/06/06/we-should-not-let-the-earth-overheat/

Expand full comment
Padraig's avatar

I think it's an interesting company and I really hope it works out. But my original concerns stand.

Your company is a small operation not backed by credible science. Just to latch onto your last comment: this guy Casey did a PhD on gravitational waves and has wide-ranging interests which (in his words) 'are too wild or too new to be subject to peer review'. I'd like to have a beer with this guy, he looks interesting, but it's not a solid academic credential. There's no Caltech endorsement for your work, and dropping that name is off-putting to me.

I get that you're running a start-up and not academic research, maybe it's down to cultural differences. But I don't see the level of rigour that would convince me. But then I work in pure maths, I'm likely harder to convince than most.

Expand full comment
Andrew Song's avatar

You raise an interesting point about cultural differences between startups and academia. From my perspective, startups can complement academic efforts by moving quickly to test and implement ideas in the real world. Academia often focuses on theoretical frameworks and publishing papers, but that doesn't always translate into action—especially when time is of the essence, as it is with climate change. So while the cultural gap is real, I'd argue that each approach has its strengths, and we're leaning into what we do best: action.

As for credentials, I appreciate your skepticism. The field of geoengineering does need more rigorous engagement, but the core concept of stratospheric aerosol injection (SAI) is supported by decades of atmospheric science, including observations of volcanic eruptions like Mt. Pinatubo. If you'd like more credible voices, Ken Caldeira, a climate scientist referenced in Wired (https://www.wired.com/2008/06/ff-geoengineering/), advises Bill Gates on geoengineering and has long been an advocate for studying SAI seriously.

The current debate in academia is that we lack sufficient real-world data on SAI's effects. However, the only way to gather that data is through field deployments or another Mt. Pinatubo level eruption. For example, Harvard's SCoPEx project attempted to deploy 1 kg of CaCO3 into the stratosphere with $20 million in funding but faced logistical and political barriers. In contrast, we've deployed 90 times the amount of SO2 on a fraction of that budget, with a pathway to scale up and gather real-world data that can help advance the field.

We’re not claiming to be perfect, but we're taking tangible steps to address an urgent problem while others debate. I hope this helps clarify why we believe our work has value, even if it's unconventional compared to academic approaches.

Expand full comment
plmokn's avatar

I'm sure they've done plenty of modelling, which I haven't read, so I'm not sure this is helpful - but as for obvious downsides... Blocking or reflecting sunlight could slow plant growth or lower solar power output. More generally, it's fixing a problem by changing yet another variable, which is always risky in a complex system. What happens to the ozone? Acid rain? Are there any positive feedback loops that could cause an undershoot?

Expand full comment
Andrew Song's avatar

We address these questions in the FAQ: https://makesunsets.com/pages/faq

But to address your first concern on plant growth: SAI increases crop yield due to diffusion of light and CO2 fertilization: https://keith.seas.harvard.edu/files/tkg/files/fan_et_al_2021_nature_food.pdf?m=1622034220

Expand full comment
Padraig's avatar

They claim plants grow faster, actually. They're just shooting sulfur dust into the air - I don't think the quantity is enough to cause acid rain, and I don't think there's a reason to expect any ozone impact - the clouds should be lower than that, I think?

Expand full comment
Deiseach's avatar

Is increasing albedo that good in the long run, though? Yes, we're reflecting back more solar radiation, but we're still turning up the central heating (as it were) in the house below.

Expand full comment
Padraig's avatar

I'm not a climate scientist, but my understanding is as follows: increasing CO2 in the atmosphere means that solar radiation is captured for longer, and so raises the temperature - precisely the greenhouse metaphor.

Increasing albedo bounces the heat back into space before it can be captured - like putting a reflective screen under the windscreen in your car in a hot country.

Global warming is caused entirely by the greenhouse effect - the contribution of humans heating the atmosphere directly is negligible. We don't have a heating system in that sense - all we can do is dial the greenhouse effect up or down. (I've often thought a neat sci-fi setting would be a winter world where they had to find and burn all the coal and oil they could to stop the whole place freezing... what would they do with the energy, would the pollution cause more damage than the warming, etc.)

Expand full comment
Andrew Song's avatar

This is correct. At scale, we only need to reflect 1-2% of the Sun's energy to reverse all man-made warming until we can scale up CO2 removal. After we've scaled up CO2 removal, we no longer have to apply the sunscreen that prevents burning.

Expand full comment
Igor Ranc's avatar

I would appreciate any thoughts/room for improvement on my post about things to consider before joining a startup. The intended audience are people not so familiar with the startup scene:

https://handpickedberlin.com/11-things-to-know-before-joining-a-startup/

Much appreciated.

Expand full comment
Joshua Greene's avatar

I think that what you've written is more like a due diligence list for a potential investor. Of course, becoming an employee at a start-up has an investment component, but the post doesn't really capture the career-related considerations that I've seen/experienced.

Most start-ups don't result in a financial win, and, a priori, I would expect the typical (potential) start-up employee to have at most average skill in picking winners. In other words, not skilled.

Rather than counting on coming away with a large financial windfall, I think they should consider whether the start-up will pay off by developing their human capital across four (related) dimensions:

(1) hard skills

(2) transferable experience

(3) credibility

(4) network

Expand full comment
Igor Ranc's avatar

Thanks, that is very valuable. I will try to capture more of these points in the v.2!

Expand full comment
near's avatar

a few short notes by section:

1) addendum on founder background: successful exits should be a ++ (if they're a real exit)

2) maybe lower the "one round per 18-24 months" or add a caveat for early-stage startups, especially in AI/SF. The window makes sense for mature companies, but preseed->seed->A can happen quickly in reasonable cases

5) I'd reword the negative media attention as there's a lot of this for ~every highly successful startup. If the articles are about fraud or backstabbing it may be good to avoid, but if they just stir controversy and garner clicks for no good reason...

6) I'd just directly ask them (if you're going to be hired) if they are profitable or when they'll want to be (I see this is in the FAQ however)

Appendix) I'd lower the "typical funding round" starting value from 15%->10%

Some of the above depends on if the intended audience is ~all EU or US but overall the article seems like a helpful and great intro!

Expand full comment
Igor Ranc's avatar

Thank you so much, this is really helpful!

Expand full comment
Philip Dhingra's avatar

Are we just worried that a paperclip maximizer might accidentally destroy humanity while completing a task, or are we also worried that it could destroy all intelligence, both artificial and biological, and possibly itself?

Expand full comment
Philip Dhingra's avatar

If we accept that a paperclip maximizer could kill all of humanity, it could also kill all other AIs, and yet the latter scenario seems unfathomable. So I was trying to unearth the contradiction: why is one inconceivable, but the other an impending doom?

Expand full comment
beleester's avatar

What sort of scenario are you imagining, where the paperclip maximizer needs to kill other AIs in order to achieve its goal?

Most hard-takeoff scenarios imagine that the AI will be singular, because of the first-mover advantage. Once an AI is smart enough to take over the world, it's going to start doing that, and whoever starts first will gain power faster and be able to crush any competitors. The paperclip maximizer doesn't need to kill all other AIs, it just needs to seize power fast enough that no other AIs get the chance.

(This logic is not specific to paperclip maximizers - even if the first AI is "friendly" it might still decide it needs to seize power and stamp out any competing attempts at AI, since the other AIs aren't friendly.)

If you have a slow takeoff (rather than a single leap of brilliance, AIs just get a little bit more capable than humans every year until baseline humans are no longer in control of their lives) then the idea of multiple AIs fighting it out becomes more plausible, but the idea of the paperclip maximizer taking over at all becomes less plausible. The slower the takeover, the more chances that humans can keep control of the process.

Expand full comment
Michael's avatar

It's a thought experiment illustrating instrumental convergence, the idea that even innocuous seeming goals for a superintelligent AI can cause dangerous intermediate goals.

The real world analog is the first time we give a sufficiently powerful AI a potentially dangerous (if innocuous-seeming) task without proper alignment or sufficient safeguards. This hypothetical AI would destroy anything that threatens its goal (e.g. people might want to shut it down when they realize it's trying to accomplish its task literally rather than what we actually wanted). It would destroy all other AIs that were a threat to it, and in the longer term, might destroy almost everything on Earth. I'm not sure why you think it's inconceivable that an AI would try to destroy other AIs.

Expand full comment
Monkyyy's avatar

I think the examples people give are kinda stupid. Paper clips from bones?

If we make an AI, it probably runs on chips.

If an AI runs on chips, it will probably run better on more chips.

When it comes to tomorrow, unless the AGI makes grey goo, there are only so many chips the AI has access to: either a) the small amount it has "legal" access to, or b) all of them, stolen.

If the AI steals all the chips, we will be at war with it, and it will be a negative-sum game for it to prevent us from turning it off so the internet works again.

Expand full comment
A1987dM's avatar

Probably not *itself*, but a paperclip maximizer killing all humans is most likely going to kill the whole biosphere except possibly the hardiest microbes. It's definitely going to be no Polesie State Radioecological Reserve for wildlife, which AFAICT is what most VHEMT types would like, so VHEMT types shouldn't be happy with such a result either.

Expand full comment
Whenyou's avatar

I think many antinatalists disagree with VHEMT and would say wild nature is a terrible hell full of suffering for all involved.

Expand full comment
Cato Wayne's avatar

Personally, I just care about the destruction of humanity and the biosphere. I don't think I'll ever understand the e/acc types who see a runaway AI murdering every child on Earth for paperclips as our "worthy successor". I thought that would be only for psychopaths.

Expand full comment
Odd anon's avatar

The "paperclip" analogy is about ASI maximizing for some particular configuration of matter above all else. Humans are made of matter. I wouldn't characterize it as "accidental" for an AI to not inherently care about human life above its own values. It would only destroy itself if doing so would maximize its values (which it probably wouldn't, unless it had succeeded in reconfiguring all other matter already).

Expand full comment
User's avatar
Comment deleted
Jan 2Edited
Expand full comment
anomie's avatar

I think you posted this in the wrong open thread...

Expand full comment
beowulf888's avatar

I think I did, too. How did that happen? Oops. Oh well, I'll wait until the next one rolls around. LoL.

Expand full comment