Open Thread 422
This is the weekly visible open thread. Post about anything you want, ask random questions, whatever. ACX has an unofficial subreddit, Discord, and bulletin board, and in-person meetups around the world. Most content is free, some is subscriber only; you can subscribe here. Also:
1: Are you interested in whether AIs are conscious, or what to do about it if they are/aren’t? The Cambridge Digital Minds group invites you to apply for their fellowship program. August 3-9, Cambridge UK, £1K stipend, learn more here, apply here by March 27.
2: Also from the European branch of our conspiracy: superintelligence alignment seminar in Prague, April 28 - May 28. Free tuition and lodging, possible help with travel expenses. Learn more here, apply here by March 8.
3: An ACX grantee, still in stealth mode, writes:
Feeder mice and rats are among the most numerous farmed mammals in the U.S., yet almost no one is working on alternatives. We’re building a CPG company developing snake food designed to replace conventional feeder rodents at scale. We’re looking for a GM/COS/Head of Growth to help build and scale the company—owning strategy, growth, operations, and core execution. This is for someone motivated by utilitarian animal impact and excited to build in a deeply neglected space. Depending on experience and comfort with ownership, this could look less like a traditional employee role and more like co-founding and building the company together. You can apply on LinkedIn here: https://www.linkedin.com/jobs/view/4374609335/. If you do, please leave a short note on how you heard about the role.
4: I was recently mentioned in a Harper’s article on Bay Area AI culture. I agreed to be included, it’s basically fine, I’m not objecting to it, but a few small corrections:
The piece says rationalists believe “that to reach the truth you have to abandon all existing modes of knowledge acquisition and start again from scratch”. The Harper’s fact-checker asked me if this was true and I emphatically said it wasn’t, so I’m not sure what’s going on here.
The article describes me having dinner with my “acolytes”. I would have used the word “friends”, or, in one case, “wife”.
The article says that "When there weren't enough crackers to go with the cheese spread, [Scott] fetched some, murmuring to himself, 'I will open the crackers so you will have crackers and be happy.'" As written, this makes me sound like a crazy person; I don't remember this incident, but given the description, I'm almost sure I was saying it to my two-year-old child, which would have been helpful context in reassuring readers about my mental state.
The article assessed that AI was hitting a wall at the time of writing (September 2025). I explained some of the difficulties with AI agents, but I'm worried that, as written, the piece makes it look like I agreed with that assessment. I did not.
In the article, I say that I “never once actually made a decision [in my life]”. I don’t remember this conversation perfectly and he’s the one with the tape recorder, but I would have preferred to frame this as life mostly not presenting as a series of explicit decisions, although they do occasionally come up.
Everything else is in principle a fair representation of what I said, but it’s impossible to communicate clearly through a few sentences that get quoted in disjointed fragments, so a lot of things came off as unsubtle or not exactly how I meant them. If you have any questions, I can explain further in the comments.
5: In What Happened With Bio Anchors, commenter David Schneider-Joseph makes a point I hadn’t heard before:
Cotra estimated "~2.5 OOM worse [than the brain], +/- 1 OOM", based on reference points like how much less efficient dialysis machines are than a human kidney, how much more efficient solar panels are than leaves, and the FLOP/watt efficiency of a V100 GPU. But most of those anchors had little to do with where ML algorithms were in 2020 when Bio Anchors was written, and would have given a very similar estimate for "present state of ML algorithms" 20 years earlier or 20 years later.
This is sufficiently interesting that I’m curious to hear from someone who engaged with Bio Anchors and forecasting more deeply than I did - did we all just miss this?
