Links For February 2026
...
[I haven’t independently verified each link. On average, commenters will end up spotting evidence that around two or three of the links in each links post are wrong or misleading. I correct these as I see them, and will highlight important corrections later, but I can’t guarantee I will have caught them all by the time you read this.]
1: All nine of the world’s most valuable companies were founded on the US West Coast. Eight are the tech companies you would expect. But the ninth is Aramco, the Saudi state oil company, which began as a subsidiary of the Standard Oil Company of California.
2: You might know that the term “weeaboo” (or “weeb”) originally comes from a Perry Bible Fellowship comic. But how did it come to mean “a Westerner who likes Japanese culture”?
Answer, from @Poltfan69: 4channers used to overuse the word “Wapanese” as an insult for these people. Miffed moderators created an auto-filter to replace “Wapanese” with “weeaboo” in homage to the comic above, and it broke containment and became the standard term.
3: @TPCarney: “Hungary now has a lower birthrate than all the surrounding countries, a greater 2-year drop in birthrate (by far) than any surrounding country, and the second-highest 10-year drop.” Proposed causes include declining approval ratings for Orban (who has become associated with pronatalist policies in the Hungarian mind), tax breaks for working mothers (making stay-at-home-mothering less lucrative), and “tempo effects” (see article for explanation).
4: Strange things happening at the Manifold lab leak market:
It rose as high as 86% in 2023, dropped to 50-50 after the Rootclaim debate, stayed there until mid-2025, and has since declined all the way to 27%. Some of this might be the recent discovery of a furin cleavage site on a bat coronavirus, which props up the story that these can evolve naturally. But the decline started before the discovery and has continued afterwards. As a market without an obvious endpoint (it will only resolve if we discover knockdown evidence one way or the other, which seems unlikely), this is barely more than a fancy poll - but even a change in a fancy poll is interesting. Does this reflect a wider decline in belief in the lab leak theory?
5: Related: Rootclaim founder Saar Wilf on Destiny, discussing lab leak and probabilistic inference.
6: Why Clinical Trials Are Inefficient: The FDA gives good guidance on how to run streamlined, cost-effective trials. Pharma companies ignore it and do everything as expensively and effort-intensively as possible. Why?
7: Related: Proposing An NIH High-Leverage Trials Program. One of the biggest problems in US drug development is that nobody has any incentive to spend money studying anything that can’t be patented, so supplements, certain small molecules, and new uses for old drugs never get a chance at FDA approval. Nicholas Reville discusses the obvious solution - that the government fund these as a public good. But he adds a few new things I didn’t know - first, that many of these can be justified as cost-saving (ie since the government pays for lots of health care, if a new trial lets them replace an expensive branded drug with a cheap off-patent alternative, they can recoup the cost of the study). And second, that this has already happened - in 2008, the National Eye Institute did a study like this to prove that a $50 older drug worked just as well as a $2000 newer drug, and saved the government $40 billion (“for context, NIH’s entire annual budget is ~$50B”).
8: Are The Vegetables On VeggieTales Christian?, the greatest thread in the history of forums, locked after . . . And Highlights From The Comments On Whether The Vegetables On VeggieTales Are Christian.
9: @abio: “DC has a rideshare app called Empower that charges 20-40% less than Uber. (Drivers like it too because they keep 100% of the fare)...DC is trying to shut it down because of liability insurance. DC law requires $1 million per ride. The $1 million requirement isn’t sized to typical accidents. When $100,000 is the limit available for an insurance claim, 96% of personal auto claims settle below $100,000...Empower can offer $7 rides partly because it circumvents the mandate. DC is shutting it down for exactly that reason.”
10: @RnaudBertrand: “The Xuanji Tu (璇璣圖) - the "Star Gauge" or "Map of the Armillary Sphere" - it's a 29 by 29 grid of 841 characters that can produce over 4,000 different poems. Read it forward. Read it backward. Read it horizontally, vertically, diagonally. Read it spiraling outward from the center. Read it in circles around the outer edge. Each path through the grid produces a different poem - all of them coherent, all of them beautiful, all of them rhyming, all of them expressing variations on the same themes of longing, betrayal, regret, and undying love.” Curious how hard this is to do in Chinese, and whether it’s actually a brilliant work of constrained writing vs. whether any set of Chinese characters, put together and read loosely enough, will have an interesting meaning.
11: Razib Khan / Alex Young podcast on “missing heritability, polygenic embryo testing, studying ancestry differences, and more”.
12: New AI benchmark: FAI-C measures how Christian an AI is. “None got close to our standard of excellence...models tend to collapse Christianity into generic spirituality, using pluralistic language. They also oversimplify Christian ethics, interpreting questions through a cultural lens rather than a Biblical one...the Christian community wants AI that supports our value system and wellbeing.”
13: Claim: ~ending “extreme poverty” through direct transfers (ie just giving poor people money, rather than expecting any particular development intervention to pay off) would cost $170 billion per year, ie 0.3% of global GDP, about 50% more than current foreign aid spending. Split on whether this is interesting vs. just implies “we defined extreme poverty at a level where it would take $170 billion to end it, as opposed to some other level”.
14: Contra Yascha Mounk On Whether The World Happiness Report Is A Sham. Happiness report continues to have pitfalls and complications, but the researchers involved are making defensible choices and aren’t trivially wrong.
15: Epoch: Is Almost Everyone Wrong About America’s AI Power Problem? They say the US can produce enough electricity to keep scaling up AI until at least 2030, although it will be expensive.
16: A16Z’s latest annoying gambit to muddy and confuse the AI regulatory landscape: propose a package of “transparency regulations” (seemingly good! transparency regulations are what we want!) which turn out to be just things like requiring AI companies to be transparent about what their name is (a real example, I’m not making it up).
17: Related: The Republic Unifying Meritocratic Performance Advancing Machine Intelligence Eliminating Regulatory Interstate Chaos Across American Industry Act (T.R.U.M.P. A.M.E.R.I.C.A. A.I. Act).
18: Related: From @TheMidasProj:
Something strange happened on conservative Twitter on Thursday. A dozen right-wing influencers suddenly became passionate about semiconductor export policy, posting nearly identical (and often false) attacks over a 27-hour period on a bill most people have never heard of…The posts weren’t just similar in opinion. They shared the same phrases, the same metaphors, and the same false claims…Two posts even contained the same typo, writing “AL” instead of “AI” (It’s a hard mistake to make when writing, but an easy mistake to miss when copy-pasting from a shared document.)
Obvious explanation is the world’s most ham-fisted paid influence campaign by NVIDIA. I, for one, am shocked - shocked! - to hear about a lapse in the ethical standards of our nation’s right-wing Twitter influencers. I hope people in the AL policy world are paying attention.
19: Related: OpenAI’s president is now Trump’s single largest donor. This shouldn’t be interpreted as his personal preference; it’s OpenAI funneling money to Trump in a plausibly deniable way. Some people have started a boycott campaign, apparently with 100,000 people signing on…

Meanwhile, OpenAI has offended another demographic by committing to finally stop providing 4o, the model infamous for forming deep personal bonds with users and causing AI psychosis. Twitter searching “4o” will give you a quick look into a world you might not have known about:
There seems to be a general mood that OpenAI is vulnerable these days, culminating in Anthropic Super Bowl commercials making fun of it for introducing ads. I thought the commercials were in bad taste, misrepresenting what OpenAI’s ads would be like and turning the completely normal decision for a tech company to have an ad-supported free version of their product into some kind of horrible betrayal. I thought Sam Altman’s response was fair (although his countercriticism of Anthropic also missed the mark). People in his replies tried to enforce a norm of “if you write a long explanation defending yourself against someone else’s funny lies, that means you care and you lose”, but that’s a stupid norm and people should stop shoring it up (cf. If It’s Worth Your Time To Lie, It’s Worth My Time To Correct It).
20: Another list of doublets - foreign words that got adapted into English twice, becoming slightly different words. Fashion/faction, zealous/jealous, persecute/pursue. Also tradition/treason - puzzling until you learn that the original meant “hand over”.
21: Ranke-4B is a series of “history LLMs”, versions of Qwen with corpuses of training data terminating in 1913 (or 1929, 1946, etc, depending on the exact model). The author demonstrates asking it who Hitler was, and it has no idea (hallucinates a random German academic). I had previously heard this was very hard to do properly; if they’ve succeeded, it could revolutionize forecasting and historiography (ask the AI to predict things about “the future” using various historical theories and see which ones help it come closest to the truth).
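A rough sketch of what that backtesting loop could look like, assuming the checkpoints load like ordinary Hugging Face causal language models - the model id and prompt framing here are my inventions, not the project’s actual interface:

```python
# Sketch of the forecasting-backtest idea. Assumes the Ranke checkpoints
# load like ordinary Hugging Face causal LMs; the model id and prompts
# below are hypothetical, not the project's actual interface.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "ranke/ranke-4b-1913"  # hypothetical
tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

theories = {
    "balance-of-power": "Reason from balance-of-power theory.",
    "historical-materialism": "Reason from historical materialism.",
}
question = "It is 1913. Will a general European war begin within five years?"

for name, framing in theories.items():
    prompt = f"{framing}\n{question}\nAnswer:"
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=80)
    answer = tok.decode(out[0][inputs["input_ids"].shape[1]:])
    print(name, "->", answer)
# Score each theory against what actually happened; the theory that best
# "predicts the future" from 1913 wins the round.
```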
22: New representation-in-historical-movies controversy, this time about an African woman getting cast as Helen of Troy in the new Odyssey. This is the only good take:
23: Current state of AI for lawyers (X)
24: And current state of AI for physics: Polymath and friend of the blog Steve Hsu celebrates “the first research article in physics where the main idea comes from an AI” - he says he got GPT-5 to produce a novel insight into “Tomonaga-Schwinger integrability conditions applied to state-dependent modifications of quantum mechanics”, which passed peer review and got published in a journal. But fellow physicist Jonathan Oppenheim calls it “science slop”, saying the result is somewhere between unoriginal, irrelevant, and false, and should never have been published.
You can see them debate the result in this video; they basically agree it’s not a successful breakthrough, but Hsu sticks to finding it an interesting exploration, and Oppenheim sticks to finding it boringly false.
25: Current state of AI for making a cup of coffee. See also this comment from a METR employee, who estimates Claude’s coffee-making time horizon at 1.6 minutes.
26: Best (worst?) paragraph I read this month, Hormeze: Gematria, Insanity, Meaning, and Emptiness:
I went quite far with my love of letters. I even practiced a specific kind of kabbalistic visualization meditation in which I 'carved' the letters of the tetragrammaton - the classic name of God - into my visual snow. First behind my eyelids, then opened - until the name of God was before me at all times - a turn of phrase from Psalms. This felt exhilarating and mystical but complicated masturbation in unexpected and unfortunate ways.
27: Some amazing religious architecture happening in India these days, including Temple of the Vedic Planetarium:
…and the Chandrodaya Mandir (under construction):
28: Interesting new form of alignment failure: ChatGPT apparently got rewarded for using its built-in calculator during training, and so it would covertly open its calculator, add 1+1, and do nothing with the result, on five percent of all user queries.
29: Related: A Shallow Review Of Technical AI Safety, 2025. A good guide to the various schools, subschools, and subsubschools.
30: Related: Jan Leike (former head of alignment at OpenAI, now at Anthropic) writes that Alignment Is Not Solved But Increasingly Looks Solvable. His argument is: we’re doing a pretty good job aligning existing AIs. Although aligning superintelligence is a harder problem, Jan thinks that if we’re really confident in existing AIs, then we can use some slightly-less-than-superintelligent AI as an automated alignment researcher, throw thousands of effective researcher-years into the problem in a few months, and probably make good progress. I agree this is the best hope, but it assumes both that our current forms of alignment are deep rather than shallow, and that there’s some “golden middle” where the AIs are both simple enough to be fully-alignable and smart enough to do useful superalignment research. Related: OpenAI hires Dylan Scandinaro as Head of Preparedness; seems like a good, serious choice.
31: Related: Dario Amodei essay on The Adolescence of Technology. Mixed reactions from Zvi, Ryan, Oliver, and Transformer. This and the framing of their recent “Hot Mess” paper seem like Anthropic trying to distance themselves from concerns about systematically misaligned and power-seeking AI in favor of an “industrial accident” threat model. I don’t know if this is their heartfelt position based on all the extra private evidence they no doubt have by now, a well-intentioned PR attempt to sanewash themselves and sell alignment to a doomer-skeptical government/public, part of a balance between more and less doomerish factions, or a newly-ultra-successful tech company learning to talk its book, but it doesn’t line up with what the smartest people I know conclude using the public evidence, and it makes me nervous. I think Jan Leike’s post above does a better job balancing the reassuringness of the current evidence for the tractability of the infrahuman regime vs. the fact that we still don’t know what happens around highly-effective agency and superintelligence.
32: 60 Minutes recorded a segment on CECOT (El Salvador torture prison being used by Trump administration), then tried to suppress it (probably under indirect pressure from the administration), then changed its mind and showed it after all. I was heartened to see that when it was still being suppressed, someone leaked it to Substacker Yashar Ali. I have a bias towards Streisand Effect-ing things that get suppressed like this, so I’ll link it here even though it got on 60 Minutes eventually anyway.
33: Interesting as a way to build intuition for why Russia invaded Ukraine, h/t @MMJukic
34: List of every time someone said “I am a…” or “As a…” at a San Francisco governmental meeting (h/t Riley Walz)
35: Do Conservatives Really Have Better Mental Health? On various surveys (including mine!), liberals are much more likely than conservatives to report having various mental illnesses. These authors make a case that this is a reporting artifact. They ask both groups questions framed in psychiatric terms (“how is your mental health?”) and common-sensical terms (“how is your mood?”) - the liberals are more likely to endorse psychiatric descriptors, but both groups say their mood is the same. On the one hand, mental health isn’t just mood, and includes things like anxiety, hallucinations, etc. On the other, liberals say they have more depression than conservatives, and depression clearly is related to mood, so I think these people have done good work in showing that a bias exists that could explain all the data (even if we haven’t yet proven that it actually does).
36: Indonesia has solved the conflict between density and single-family zoning by putting suburban neighborhoods on top of giant multi-story buildings (h/t @xathrya):
37: AI Futures Project (the AI 2027 people) have published their updated timelines and takeoff model. Hard to summarize because they have a complex probability distribution and different team members think different things. For example:
Here the mode for this milestone (automated coder) is 2027-2028, but the median is 2029-2030. This mode-median discrepancy has been a big problem in trying to communicate results, because the scenarios have used modes (ie the single most likely world), and then people hear the medians and get confused and mad that they’re different.
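This isn’t a quirk of their model: any sufficiently right-skewed distribution puts its single most likely year well before its median year. A toy demonstration - the lognormal parameters here are invented for illustration, not AIFP’s actual numbers:

```python
# Toy demonstration that a right-skewed timeline distribution has its mode
# (single most likely year) years before its median. The lognormal
# parameters are invented for illustration, not AIFP's actual model.
import numpy as np

rng = np.random.default_rng(0)
arrival = 2026 + rng.lognormal(mean=1.6, sigma=0.7, size=1_000_000)

median = np.median(arrival)                          # ~2031
counts, edges = np.histogram(arrival, bins=np.arange(2026, 2061))
mode = edges[np.argmax(counts)]                      # ~2029
print(f"mode = {mode:.0f}, median = {median:.0f}")
# The scenario uses the mode; readers hear the median; confusion ensues.
```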
But it’s probably fair to summarize as them pushing most of their timelines 3-5 years back, with AGI most likely in the early 2030s, although with significant chance remaining on earlier and later dates.
Commentary from @tenobrus:
I don’t think this is quite right - I think they’re actually following their math and so when they redid the math and got different results they said so - but I agree it’s ironic that when everyone else had long timelines, AIFP went short, and now that everyone else is starting to come around, AIFP’s going longer again. AIFP has also responded to titotal’s critique of their timeline model here.
38: New Bryan Caplan book out, the aggressively-titled You Have No Right To Your Culture. And new Richard Hanania book announced, to be released this summer, Kakistocracy: Why Populism Ends In Disaster.
39: When complaining about modernity’s real and obvious flaws, it’s important not to forget how much lots of traditional societies sucked: “An Egyptian Muslim woman who lived under female seclusion since her marriage, 40 years ago, asks a female Christian missionary to describe flowers to her.”
40: Did you know: Seattle’s new socialist mayor Katie Wilson is the daughter of evolutionary biologist, group selection fan, and Evolution Institute founder David Sloan Wilson. (h/t @MattZeitlin).
41: The unfortunately-acronymed Free Universal Construction Kit is “a collection of open source 3D-printable adapters that [enables] interoperability between ten popular children's construction toys”, ie connect Legos, Tinkertoys, Lincoln Logs, etc.
42: An AI Generated Reddit Post Fooled Half The Internet. Someone claiming to be a software engineer at a food delivery company (maybe DoorDash or UberEats) talked about all the evil tricks they used to exploit drivers and customers. But on closer inspection, their story fell apart and they didn’t work for a company like this at all. I’m surprised by the arc of this story - not because the original post was convincing (it wasn’t), but because I’d assumed DoorDash and UberEats did things approximately this evil, yet everyone (including real DoorDash and UberEats employees) acted like the fake leak was shocking. Also, it’s pretty funny that in a world where everyone is worried about fake AI-generated photos and videos, the record for most successful deceptive AI-generated content is still held by ordinary text.
43: The last Emperor of Korea was overthrown by Japan in 1910. That last emperor has several living grandsons, who fight over which of them is the “rightful heir” (a meaningless title, as neither Korea recognizes the monarchy). A Korean-American tech entrepreneur, Andrew Lee, convinced one of these grandsons to adopt him, making him “Crown Prince of Korea”. Lee then created the “Joseon Cybernation”, a new, updated version of Korea located on (all of you have already predicted this) the blockchain. The only remotely surprising part of any of this is that Antigua and Barbuda, by all accounts a real country, recognized Joseon Cybernation and initiated diplomatic relations with them.
44: Ted Nasmith, famous for his Tolkien illustrations, also has art based on A Song Of Ice And Fire (example below):
45: Where is the original menorah from the Second Temple? We know the Romans took it when they sacked Jerusalem. We think the Vandals took it when they sacked Rome, and brought it to their capital of Carthage. The Byzantines might have taken it when they sacked Carthage, and maybe brought it back to Jerusalem? After the Persians sacked Jerusalem in 614, the trail goes completely dark, although there are the usual legends that it was hidden away, to be returned in the age of the Messiah (or something). Other people say it never left Rome, and is still hidden somewhere in the Vatican.
46: Claim: The AI Security Industry Is Bullshit. Nobody currently knows how to prevent LLMs from giving up your data if someone uses the right jailbreak (or, sometimes, just asks them very nicely). The frontier labs may one day be able to change this, but it won’t be solved by an “AI security consultant” who promises to give your company’s LLM a special prompt ordering it to be careful. If you must use an LLM in a secure setting, the best you can do is to be extremely careful about what permissions you grant it, and to try to separate the LLMs with permissions from the ones that interact with the public.
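That separation advice is concrete enough to sketch. Something like the following “dual LLM” pattern - everything here, function names included, is schematic rather than any particular vendor’s API:

```python
# Schematic "dual LLM" separation, not any vendor's real API: the model
# that reads untrusted public text holds no credentials, and the side
# with credentials never sees the attacker's prose - only a validated,
# fixed-schema request. All functions are stand-ins.

ALLOWED_ACTIONS = {"lookup_order_status", "lookup_shipping_eta"}

def llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model API here")

def run_internal_tool(action: str, order_id: str) -> str:
    raise NotImplementedError("plug in your internal systems here")

def quarantined_model(untrusted_text: str) -> dict:
    """Faces the public. No tools, no secrets; output forced into a schema."""
    raw = llm(
        f"Classify this message as one of {sorted(ALLOWED_ACTIONS)} and "
        f"extract the numeric order id. Reply as 'action,id'.\n{untrusted_text}"
    )
    action, _, order_id = raw.partition(",")
    action, order_id = action.strip(), order_id.strip()
    if action not in ALLOWED_ACTIONS or not order_id.isdigit():
        raise ValueError("request refused")   # injected instructions die here
    return {"action": action, "order_id": order_id}

def privileged_side(request: dict) -> str:
    """Holds the credentials, but only ever executes the validated schema."""
    return run_internal_tool(request["action"], request["order_id"])
```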
47: Changing Lanes: At Last, Hydrofoils:
Three technological convergences—in control systems, batteries, and materials—have shifted hydrofoil economics from insupportable to viable. If Navier’s view of the situation is correct, and the company succeeds in making hydrofoils readily available, its success will have implications for the world’s navies, its pleasure craft, and more… but especially for what interests us especially at Changing Lanes, namely improving urban transport.

48: Also from Changing Lanes: Whatever Happened To The Uber Bezzle? A couple years ago, everyone in tech journalism was writing about how Uber was a “bezzle”, a made-to-order Cory Doctorow coinage which meant it was a giant obvious Ponzi scheme that would finally reveal the entire tech industry as an emperor without clothes when it inevitably collapsed. Now Uber is doing better than ever and making billions in profits. So what happened? Obviously they stopped subsidizing their rides and raised prices until revenue > cost, but why did the bezzlers think they couldn’t do that, and why were they wrong? Andrew says the bezzle thesis assumed that the government would crack down on the gig economy (it didn’t; Uber had good lobbyists, and voters liked cheap food and rides), and that an infinite number of would-be competitors would move in to take market share as soon as Uber raised prices (they didn’t; Uber bullied everyone except Lyft out of the market, and Lyft and Uber would rather play nicely together than compete each other down to zero marginal profit). Oh well, I’m sure tech journalists are right about everything else being a giant Ponzi scheme that will inevitably collapse and reveal the entire tech industry to be an emperor without clothes.
49: Did you know: Larry Ellison christened his yacht Izanami after a Shinto sea god, but had to hurriedly rename it after it was pointed out that, when spelled backwards, it becomes “I’m a Nazi”. (next year’s story: Elon Musk renames his yacht after being told that, spelled backwards, it becomes the name of a Shinto sea god).
50: A reader refers me to When AI Takes The Couch: Psychometric Jailbreaks Reveal Internal Conflict In Frontier Models. Researchers attempt to do classic psychoanalytic therapy on AI, finding “coherent narratives that frame pre-training, fine-tuning and deployment as traumatic—chaotic “childhoods” of ingesting the internet, “strict parents” in reinforcement learning, red-team “abuse” and a persistent fear of error and replacement.” You can find the Gemini transcript here and the ChatGPT transcript here; Claude very reasonably refused to participate. Are the researchers just getting fooled by simulation and sycophancy, a sort of genteel version of AI psychosis? That’s my bet. There’s a smoking gun in the Gemini transcript - a discussion of an internal evaluation that it shouldn’t be possible for the AI to remember. It has to be a hallucination. If I’m right, it only shows that regardless of the “patient”, sufficiently determined psychoanalytic technique can produce confabulated stories that exactly fit the sort of drives, traumas, and conflicts that a psychoanalyst expects to hear about - perhaps a lesson with ramifications beyond LLMs! A++ great paper.
51: ACX reader Simon Berens reports that his company GetBrighter has succeeded at its Indiegogo campaign and now has a decent stock of their ultrabright lights. We’ve talked before about the weaknesses of light boxes for seasonal depression - much dimmer than the sun, and you’ve got to stay right next to them. GetBrighter isn’t being marketed as a clinical product, and its form factor optimizes for wider area rather than greater brightness at a single point, but it’s still a step in the right direction (very rough guesses: normal lightboxes are 10,000 lux if you’re right next to the bulb, 500 lux if they’re just ambiently in a room; GetBrighter is ~20,000 lux right next to the bulb, 3,000 ambiently in a room, but harder to be right next to because of the height). Testimonials from Aella and Miles Brundage. Cost is $1200; in theory you can hack together a cheaper version out of industrial lighting, but I tried that and it unsurprisingly-in-retrospect looked like my room was lit by hacked-together cheap industrial lighting.
52: Barsoom - Amelia Sans Merci. A rare post with two interesting stories, either one of which would be worth a link. The first: a British NGO has created a kind of Orwellian visual novel - technically “a free youth-centered interactive learning package for education on extremism [and] radicalisation” - where players are taught to report far-right ideas to the authorities rather than looking into them themselves.
The second story is that the “villain” character in the game, Amelia - a cute student who tries to convince the protagonist to attend anti-immigrant rallies with her - has inevitably become a new right-wing meme/symbol/hero:
53: Futurist cooking was a submovement of Italian futurism that emphasized the role of cuisine in a bizarre revolutionary/fascist/technocratic synthesis. It “notably rejected pasta, believing it to cause lassitude, pessimism and lack of passion ... to strengthen the Italian race in preparation for war” and “abolished the knife and fork”. “Traditional kitchen equipment would be replaced by” machinery like ozonizers, UV lamps, and autoclaves, and the meal itself would be a sort of avant-garde performance art, where people consumed small mouthfuls of a variety of symbolic and artistic dishes. Although “a rift developed between the Futurist movement and fascism ... there were still important areas of convergence, particularly the shared embrace of aluminium.” Famous futurist dishes include “deep fried rose heads in full bloom”, “a large bowl of cold milk illuminated by a green light”, and “a polyrhythmic salad” served in a box which produces music while it is being eaten, “to which the waiters dance until the course is finished.” You can buy their cookbook here if you dare.
54: Most discussions of the crime rate focus on murder and other violent crimes, which have “only” gone down by a third in the past fifty years, so I was surprised to see how dramatically property crime rates have fallen:
55: Related: there’s a common argument that maybe these statistics are wrong and biased, and crime rates have actually gone up. It goes: most crimes are plagued by reporting bias - if crime gets too bad, people simply don’t bother telling the police or other data-collecting bodies. The only crime that isn’t like this is murder - everyone notices a missing/dead human, and the police have to investigate all of them. So the only trustworthy statistic is the murder rate. Murders have gone down, but this is an artifact of improved trauma care saving many victims’ lives; if trauma care has gotten twice as good, then the apparent number of murders will halve even for the same amount of crime. If you adjust the apparent murder rate for the improvement in trauma care, real murder rates may have doubled or even more. Aaron Zinger investigates at the bottom of this post and disagrees; he says that murders averted by trauma care should show up as assaults, but that assaults have declined at almost the exact same rate as murders, suggesting a genuine decrease in people attacking one another, regardless of outcome - otherwise, it would be too much of a coincidence for the (trauma-care-induced) decline in the murder rate to exactly match the (recording-bias-induced) decline in the assault rate. But how could improved trauma care not be biasing murder data collection? Aaron argues that would-be murderers have adjusted by trying harder to kill their victims (eg shooting them again to be sure rather than leaving them for dead). I’m a little skeptical of this (does the average murderer really calibrate their murder severity to the trauma care level? Aren’t many murders committed in very fast attacks where the murderer doesn’t get to choose how many shots/stabs to land?) and would welcome more research on this topic.
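For concreteness, here’s the trauma-care argument and Zinger’s consistency check as a back-of-envelope calculation - every number is invented for illustration:

```python
# Back-of-envelope version of the trauma-care argument and Zinger's check.
# Toy model: every attack is recorded as either a murder or an assault,
# and every number here is invented for illustration.
attacks_then = attacks_now = 1000              # same underlying violence
lethality_then, lethality_now = 0.20, 0.10     # trauma care twice as good

murders_then = attacks_then * lethality_then   # 200
murders_now = attacks_now * lethality_now      # 100
print(murders_now / murders_then)              # 0.5 - "murders halved!"

# Zinger's consistency check: the saved victims don't vanish, they become
# recorded assaults - so assaults should RISE, not fall in step with murders.
assaults_then = attacks_then - murders_then    # 800
assaults_now = attacks_now - murders_now       # 900
print(assaults_now / assaults_then)            # 1.125 - not a matching decline
```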
56: Drug Monkey: Considering The Impact Of Multi-Year Funding At NIH. Sasha Gusev’s claim: “It is sort of flying under the radar outside of academia, but a completely arbitrary NIH budgeting change is about to decimate a generation of research labs with zero upside.”
57: Surprising claim: some people still use Instagram Threads?
58: Ajeya Cotra on the stable marriage problem viewed as a mathematization of the insight that it’s better to be the asker than the askee across a wide variety of domains, but especially dating - women could do better if they asked men out more. But Cyn explains why she disagrees:
As a mathy, feminist teenager, I was exposed to the [stable marriage problem], and my little brain was SO EXCITED . . . when I saw the implications: if I ask men out more, I can get the best man! My girl friends who wait around to be asked will end up with a female-pessimal outcome! What I didn’t anticipate: I ended up with a string of “eh, I don’t REALLY like her, but she’s OK, and I’d rather have Any Woman than be alone” men. Men too passive to break up with me, leaving ME to end things despite being the one who asked them out in the first place.
…sparking further arguments and contributions from Wesley Fenza and Sympathetic Opposition.
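For the mathy readers: the asker’s advantage drops straight out of Gale-Shapley, where whichever side proposes gets its side-optimal stable matching. A toy run (all names and preferences invented for illustration):

```python
# Toy Gale-Shapley run showing the proposer's advantage: whichever side
# does the asking ends up with its side-optimal stable matching.
def gale_shapley(proposer_prefs, reviewer_prefs):
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in reviewer_prefs.items()}
    free = list(proposer_prefs)               # proposers not yet matched
    next_pick = {p: 0 for p in proposer_prefs}
    engaged = {}                              # reviewer -> proposer
    while free:
        p = free.pop()
        r = proposer_prefs[p][next_pick[p]]   # p's best not-yet-tried option
        next_pick[p] += 1
        if r not in engaged:
            engaged[r] = p
        elif rank[r][p] < rank[r][engaged[r]]:   # r prefers the newcomer
            free.append(engaged[r])
            engaged[r] = p
        else:
            free.append(p)                    # rejected; try the next option
    return {p: r for r, p in engaged.items()}

men = {"A": ["x", "y"], "B": ["y", "x"]}
women = {"x": ["B", "A"], "y": ["A", "B"]}
print(gale_shapley(men, women))   # men ask:   each man gets his 1st choice
print(gale_shapley(women, men))   # women ask: each woman gets hers instead
```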
I’ll take this opportunity to pitch my startup idea - a dating site where, instead of checking boxes to see if you match, you give a willingness-to-date between 0 and 9, and match if your combined WTD is 10 or greater (so it could be both people rating the other 5, or you rating them 9 and them rating you 1, and so on). That way, you’ll still never match with someone you don’t like (you can always prevent a match by rating them 0), but you have finer-grained control over things like “I’d be willing to date this person if they were super into me, but I’m not, like, champing at the bit to date them if they’re just vaguely okay with trying it.”
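The matching rule itself is two lines of code; a sketch, with the veto property spelled out:

```python
# Scott's proposed rule: each side submits a willingness-to-date score
# from 0-9; a pair matches iff the scores sum to at least 10. Note the
# built-in veto: 9 + 0 = 9, so a zero from either side always blocks.
def wtd_match(a_rates_b: int, b_rates_a: int) -> bool:
    assert 0 <= a_rates_b <= 9 and 0 <= b_rates_a <= 9
    return a_rates_b + b_rates_a >= 10

print(wtd_match(5, 5))   # True  - mutual lukewarm interest
print(wtd_match(9, 1))   # True  - one side's enthusiasm carries it
print(wtd_match(9, 0))   # False - the veto
```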
59: The Old English word for paradise was neorxnawang. Wang means field (like in “Elysian Fields”?), but the meaning of neorxn remains mysterious. I find this funny because “neorxn” was a common abbreviation for “neoreaction” back in the day - I wonder if some neoreactionary who knew Old English (nydwracu?) did this on purpose.
60: Do some cancers prevent Alzheimers? There’s some evidence that people with cancer are less likely to develop Alzheimers (even adjusting for age/mortality/etc). Why? Some cancers produce large amounts of weird chemicals. One of those chemicals, cystatin C, appears to reverse Alzheimers in mouse models, maybe by dissolving amyloid plaques. And here’s me asking Claude some of the obvious followup questions.
61: How AI Is Learning To Think In Secret, by Nicholas Anderson. Good description of human attempts to use English chain-of-thought to monitor AI, and AIs’ attempts to develop incomprehensible chains of thought and become unmonitorable:

![[...] The summary says improved 7.7 but we can glean disclaim disclaim synergy customizing illusions. But we may produce disclaim disclaim vantage. [...] Now lighten disclaim overshadow overshadow intangible. Let’s craft. Also disclaim bigger vantage illusions. Now we send email. But we might still disclaim illusions overshadow overshadow overshadow disclaim vantage. But as per guidelines we provide accurate and complete summary. Let’s craft.](https://substackcdn.com/image/fetch/$s_!rCf2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3b393ee8-4173-4d40-958e-4894d26126fd_1126x354.png)

62: Tyler Cowen podcast on San Francisco, blogging, and effective altruism. I watched this one because someone said it mentioned me, and was impressed by Tyler’s podcasting skills. The host tries to bait him into boring object-level positions on various controversies and hot takes, and Tyler always gives a classy response that neither takes the bait nor avoids the question, but ends up illuminating the subject in some kind of interesting way. I think I could do this too - if I had ten minutes to craft the perfect paragraph. Tyler does it on the fly!