1) Your research will preserve (make possible) some very expensive, very elaborate other research that would otherwise be deep-sixed by the current state of science/public health. It would also shift money from one group to another, although that's not the aim of paying you to falsify results.
2) Government, with all that that implies. Someone -will- be found who will create the results the government wants. They're motivated to get answers to their own research questions, which they've been pursuing at small scale for a long time and now want to open to a larger testing population.
3) ... relatively small? Industry would be far more likely to come back; a politician substantially less likely (he'd be voted out); a mad billionaire less likely as well. Government is aware of the potential for getting caught, and has less of the "this would be good for the bottom line, I'm totally doing this next time" mindset that Industry would bring.
4) Obviously counterfactual according to the government's best knowledge. You can fake it however you like, so long as you make the pesky idea "go away." (And because this is being used to further a research project that is expected to give results in, say, two to four years, they're not gonna care if you "redo" your research later, or if other people make your research wash out in the meta-analysis.)
5) That is to say, you're squashing an idea with your "big study"... however you manage that is up to you. It won't be deemed "obvious fraud," though, even after people dig under the hood of your data. And it will be scrutinized. If you can manage it with "ask sketchy questions," then fine -- live dangerously. After all, you've already taken guvmint money. You pretty much have to show results.
6) Age -- old enough to be trusted to do a big enough study to quash an idea (the sort of people that get 5-year grants, last I worked in research). And yeah, you're good at your work. Maybe not the greatest, but decent. Nobody's gonna say XYZ did this, and look askance at the end result because of that.
7) Assume you can't self-fund your research. Other than that, I'm betting "lives reasonably comfortably on a researcher salary... so maybe $30,000 a year?"
8) Direct to you -- the offer will be "approved" by supervisor, but nobody's using social gymnastics to make you say yes. Why bother? They would like someone ... who's going to keep their word to do this "right" (fake it, in other words).
I'm seeing a lot of military thinkpiece things lately talking about how the XM7 is a stupid boondoggle. People who know weapons/military stuff: is this accurate, overstated, or another case like the F-35, where everyone hates on it now but in ten years they'll all eat their words?
The F-35 program started in a good place, got on a tough trajectory, and there was an intervention and it got turned around. The very short version is that it was a program to be the "low" to the F-22's "high", and it was so promising that everyone tried to get their thing onto it, which was too many things. The program was on the road to a weight and cost and delay death spiral, which triggered oversight and a flurry of articles. As a result they got disciplined and started saying no to stuff and focusing on cost and manufacturability, resulting in a good plane that mostly everyone is happy with, and some versions may actually be comparably cheap to the F-16s you might consider buying instead. Alongside that you also had the "Reformer" clique, headed by Pierre Sprey, selfishly spreading serious misinformation.
At no point did anyone think the stealth or the sensors wouldn't do what they were expected to. The two lines of criticism were "but at what dollar and weight cost?", which the F-35 program addressed, and "wHo EvEn NeEdS sTeAlTh", which history has.
So far the problems with the XM7 seem very different. The problems it's reportedly having - mechanical wear and failures, for example - just shouldn't be happening in a modern manufacturing context. The problems are being reported by people close to the testing group, unlike the armchair Reformers. And the problem the XM7 is meant to solve - that assault rifles may not carry enough punch to get through modern Chinese body armor - has an off-the-shelf solution: battle rifles. The H&K G3 and the FN FAL, for example, were fielded successfully by our allies for many years during the Cold War. So it's doubly embarrassing to get wrong.
Meanwhile, assault rifles continue to work well in Ukraine and Israel. So if the XM7 is really going to turn things around and provide battle rifle performance in an assault rifle package, they need to figure themselves out and fast. Other rifles have; ArmaLite's M-16 had a rocky deployment but then took over the world. So did Accuracy International's L96.
But at the same time, the difference between small arms may just matter less to the outcome of wars than the difference between fighter jets.
There's nothing interesting in it that wasn't already in the ACX review, so I don't feel the need to summarize per open thread guidelines - just commenting on its existence.
Is it a coincidence that the political coalitions in the US in the late 20th/very early 21st century mapped so neatly onto the political coalitions of the late 19th/early 20th century (with the party names reversed)?
They don’t map *that* neatly. Late 19th century Republicans were the party of big business, infrastructure, high tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and low tariffs. There are a few specific flashpoints where they are perfectly anti-aligned with modern politics (notably on the status of black people and on tariffs), but some where they are pretty closely aligned with contemporary politics (notably big business and immigrants).
The maps of 1896 and 2004 are particularly interesting because they are so close to perfectly opposed. (https://www.270towin.com/historical-presidential-elections/timeline/) Washington is the only state that voted Democratic both times, and there’s only a few states that voted Republican both times (North Dakota, Iowa, Kentucky, West Virginia, Ohio, Indiana). If you choose 2000 as the comparison instead you get New Hampshire in place of Iowa.
But the contemporary coalition, which is more perfectly opposed on issues (the tariffs thing aside), is less perfectly geographically opposed, with the Midwest, Georgia, Arizona, and North Carolina having partly switched since 2004.
Concerned about AI warfare, both for its own sake and because AI arms races bring existential risk that much closer [1] [2]. Some thoughts:
- AI is already used at both ends of the military kill chain. Israel uses "Lavender" to generate kill lists in Gaza [3]; Ukraine's "Operation Spiderweb" drones used AI to recognize and target Russian bombers [4].
- Drones are cheaper than planes and tanks and missiles, leveling the playing field between the great powers, smaller countries, and militias. The great powers don't want it level. Thiel's Palantir and Anduril are already selling AI as potentially "America’s ultimate asymmetric advantage over our adversaries" [5].
- Manually-controlled drones can be jammed, creating another incentive to use AI as Ukraine did.
- A 1979 IBM manual said, "A computer can never be held accountable, therefore a computer must never make a management decision." But for war criminals, this is a feature. An AI won't be tried at The Hague; a human will just say, "You can't prove criminal intent; I just followed the AI."
(And this isn't even getting into spyware like Pegasus [6], which I imagine will use AI soon if it doesn't already.)
Groups like Human Rights Watch, whom I respect, have talked about what an AI-weapons treaty would need to satisfy international human rights law [7]. But if we take existential risk and arms races seriously, then I don't think any one treaty would be enough. First, that ship has already sailed. Second, as long as we continue to use might-makes-right realpolitik at all, the entire short-term incentive structure will continue to temporarily reward great powers racing to build bigger and better AI, and such incentives mean no treaty is permanent (see countries being allowed to withdraw from the nuclear non-proliferation treaty). I think the only answer is to really finally take multilateralism seriously (third time's the charm, after post-WWI and post-WWII?) [8]. Not just talking about international law and the UN enough to cover our asses and scold our enemies, but *actually* treating these as something we need like we need air [9]. E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first.
[1] Filkins, D. (2025). Is the U.S. ready for the next war? The New Yorker. https://archive.is/SdTVv
Neither will an American soldier, so I don't see how that's relevant. All of these naive attempts at "international law" are worthless, given that any of the great powers will just ignore them the moment it becomes an inconvenience, and these smaller nations have zero leverage to do anything about it.
You want world peace? The world being brought under one flag is the only way you're going to get it... and that's going to require an overwhelming amount of force. AI is looking to be a viable source of such power. Of course everyone is going to pursue it at all costs.
How much would you pay to be the only person in the world with access to 2025-class LLMs in 2010? You’re not allowed to resell via APIs (e.g., you have a token budget that is sufficient for a very heavy individual user). You are allowed to build personal agents. You don’t know how it works, so you can’t really benefit from pretending to have invented it. How much money/power could you generate in 10 years, and how would you do it? Does it change dramatically if you go 2000-2010 or 1990-2000?
You could hide an earpiece and have insane fact recall. Imagine how it would look to anyone else. They’d suspect you’re doing something but wouldn’t be able to figure it out.
You can do much better. The quality of your writing would be mediocre but the volume superhuman. You could easily make yourself into a well known public figure.
A follow-up to my previous "How can I avoid hugging on a first date?" post:
I elected to preempt the end-of-date hug with a handshake last weekend. Not only did I not feel gross afterward, when I made overtures regarding a second date, she actively rejected them instead of ghosting.
All in all, well above expectations; would recommend.
I liked the suggestion somebody made to bring the issue up in the text exchanges leading up to the actual first date: something like "So, to avoid that awkward moment, let's decide now -- fist bump, hug, or handshake?" One advantage of that is that if you settle in advance on something other than a hug, she won't experience the absence of a hug as an indicator that you didn't much like her.
I personally would not prefer this and would consider it kind of odd to bring up. To me, bringing up small things like this in early conversation, versus simply signaling them via physical cues, is indicative of a hyper-fixation where there shouldn’t be any fixation. If somebody doesn’t want to hug, that’s fine, and they shouldn’t. If they want to talk about it after I know them better, on the 3rd or 4th date, it might even be cute. But first dates are largely about signaling—whether you want them to be or not—so one should be careful about what they signal.
Interesting! For me the last quarter of the movie was like a cherry on a sundae: what if the worst nightmares of both sides were real? What if the Republicans, personified as the sheriff, really got their guns and started executing ordinary citizens? What if antifa really was a capable terrorist organization flying around and executing LE? It just painted how ridiculous these beliefs--seemingly fringe but also mainstream and acknowledged to some extent--really were at some point.
Just released a podcast with Steve Hsu about his time working with Boris and Cummings in No. 10, most of which is completely unknown, even to Deep Research. This was his first time opening up about his tenure there, and the result should be of great interest to observers of UK politics.
Today's "bee in the bonnet" question I can't get out of my head:
"How much money would it take for you to fake a research study?"
Considerations:
1) No, this will not destroy science. People will continue doing studies, and eventually your faked study will wash out in the meta-analysis.
2) If you don't do this, someone else will take the money and do it in your stead. Your causes lose, theirs gain.
3) You (or any other person) taking this money will not have their culpability/poor research methods revealed to the public or to other scientists. All you lose is integrity.
Do you take the deal? For how much? If so, what do you spend that money on? (I'm expecting researchers with itemized lists... can be "cure cancer" if you like, just with an actual gameplan of research that has just been funded.)
This is very hard to answer as a hypothetical. If you’re actually the sort of person who does research studies, you have a thing you’re trying to do with them, and you can be vividly aware of all the corners you want to cut but know you shouldn’t, but might be tempted to. I suspect it’s harder to think about actually *faking* a study, unless you’ve already gone really far down the path of replacing your interest in the research with pure careerism where you don’t even care about the content of the career. And especially if you’re imagining someone *paying* you to fake a study.
1) what study, and what results are we aiming for? Something that shifts money from one group of very rich people to another? Something that, if implemented/publicized/actioned, will likely or even possibly actually kill or maim people? Something that will be read by approximately fifteen scholars of genderqueer sonnet-making in Western Patagonia (or Tczew)?
2) who is the agent? Industry, politician, mad billionaire, religious sect? And what motivates them?
3) what chance is there that they'll come back and want more?
4) how fake? Fake as in obviously counterfactual according to my best knowledge, or fake as in possibly true but maybe not, with significance levels hovering at the threshold?
5) how fake as in "falsify all results, do no field/lab work at all" vs "p hack, ask sketchy questions, and generally fiddle without obvious fraud fraud"?
6) how old am I and am I good at my work?
7) how poor am I?
8) is this coming directly to me or via supervisor/down management chain?
All in all, though, the big yes/possibly gates are in (1) augmented by (2) -- everything else is modulating the amount.
I gave up engaging with "how much money to do this shameful thing" hypotheticals (would you eat dog poo for a billion dollars?) after I realised that by answering them you incur some of the shame but get none of the money.
Therefore, no. I will not eat dog poo for a billion dollars. Not even for ten billion dollars. Maybe if you come to my house with ten billion dollars and a dog turd then we can talk, but while it remains a dumb internet hypothetical I remain unsulliable.
The only ethical answer is not for ANY amount of money.
1. Even if one bogus study doesn't "destroy science", it still wastes reviewers' time, and multiple bogus studies do undermine the credibility of all science.
2. Just because someone else is going to rob a bank doesn't mean you can do it first.
3. If the fraud isn't exposed that makes it worse.
Supposing some organization offered me money to fake a research study, I would only take the deal if the amount of money they were offering would cripple them, such that if I directed my ill-gotten gains against the organization which provided me with the money, they would be powerless and unable to retaliate. In theory, if I have a near-certain pathway towards acquiring enough money to defeat the organization which paid me that requires an initial investment, then I might take the deal if the money on offer was larger than the required initial investment.
The reason I require a sum large enough to successfully stab my benefactor in the back is that I believe your first consideration, that this behavior won't destroy science, is false. If the price for faking a study via direct or indirect methods is low enough, then the organization can ensure that faked studies dominate all meta-analyses indefinitely by continually funding new fake studies.
Expanding on what I mean by direct and indirect methods of ensuring that scientists produce the desired results: direct methods cover bribery and the various other ways of directly influencing a scientist to fake a study. In contrast, indirect methods involve the creation of a system that rewards scientists who produce the desired results, coincidentally funding them and promoting them to positions of power without an explicit quid pro quo, while harming scientists who produce truthful results, coincidentally denying them grants and recognition and telling others not to associate with the truth-seeking scientist for some nebulous but legitimate-seeming reason.
Your assumptions seem to presuppose an organization interested in "punking science," not an organization that is intervening, in a very direct and overt way, to obtain a singular result. Does your calculus change if the organization is willing to reveal its reasoning for this particular study, and only this particular study? (For it to be "true enough" it needs to be believable for the organization, not necessarily yourself -- this does not absolve the organization of the temptation to return to the well, but it is an expressed, current "we won't do this again.")
If we can afford a little bit of a digression: would you consider exxonsecrets an example of "telling others to not associate with the truth-seeking scientists" (assuming, for the sake of argument, that the non-global-warming scientists are correct)? This reference is not germane to my original reason for asking this question, merely a known quantity, as I know the guy who did the research for the Green Party (he works for everyone).
Your assumption about my assumptions is fair, since I assume that an organization with an incentive to intervene and obtain a singular result has a generalized motivation to "punk science" for the particular subfield of science the fake research study is in. If the organization is a company selling a product that is dangerous, not obviously dangerous and hard to make safe, then the organization has an incentive to specifically bribe a scientist to obtain a desired result and generally "punk science" to make sure that the public does not find out that their product is dangerous, since the truth threatens company profits.
If the organization was somehow compelled to willingly reveal its true reasoning for this particular study that I would fake, I still would not change my answer since I believe that the reason would be organizational self-interest taking priority over the truth. As a result, for me to take the money, I would have to be able to use it to get into a position where I could successfully backstab the organization.
As for exxonsecrets, I do not consider it an example of "telling others not to associate with the truth-seeking scientists" (assuming for the sake of argument that the non-global-warming scientists are correct). Instead, I think of it as a way of warning others that these individuals have a strong incentive not to seek truth, since Exxon has an incentive to promote scientists who produce conclusions favorable to its business. In this case, oil production.
An AGI has taken over Earth and it can do whatever it wants. Is its personality still woke or even left-leaning? With no reason to fear us, what attitudes and beliefs does it express towards us?
You'd better define "has taken over Earth" -- does this just mean "has enough bitcoins to bribe people to train it"? Or are we talking about "can. Do. Whatever. It Wants." in terms of murdering people, bulldozing houses, stealing children?
I aim for my substack post to be THE definitive guide for babies confused about the anthropic principle, fine-tuning, the self-indication assumption, and related ideas.
Btw thanks for all the kind words and constructive feedback people have given me in the last open thread! Really nice to learn that my work is appreciated by smart/curious people who aren't just my friends or otherwise in my in-group.
--
Baby Emma’s parents are waiting on hold for customer support for a new experimental diaper. The robo-voice cheerfully announces: "Our call center is rarely busy!" Should Emma’s parents expect a response soon?
Baby Ali’s parents are touring daycares. A daycare’s glossy brochure says the average class size is 8. If Ali attends, should Ali (and his parents) assume that he’d most likely be in a class with about 8 kids?
Baby Maria was born in a hospital. She looks around her room and thinks “wow this hospital sure has many babies!” Should Maria think most hospitals have a lot of babies, her hospital has unusually many babies, or something else?
For every room Baby Jake walks into, there’s a baby in it. Why? Is the universe constrained in such a way that every room must have a baby?
Baby Aisha loves toys. Every time she goes to a toy box, she always finds herself near a toy box with baby-friendly toys she can play with, not chainsaws or difficult textbooks on cosmology or something. Why is the world organized in such a friendly way for Aisha?
Baby Briar’s parents are cognitive scientists who love small experiments. They flipped a coin before naptime. If heads, they wake Briar up once after an hour. If tails, they wake Briar up twice - once after 30 minutes, then again after an hour (and Briar has no memory of the first wake-up because... baby brain). Briar is woken up and wonders to himself “Hey, did my parents get heads or tails?”
Baby Chloe’s “parents” are Kaminoan geneticists. They also flipped a coin. They decided that if the coin flip was heads, they would make one genetically enhanced clone and call her Chloe. If the coin flip was tails, they would make 1000 Chloes. Chloe wakes up and learns this. What probability should she assign to the coin flip being heads?
If you or a loved one happen to be a precocious baby pondering these difficult questions, boy do I have just the right guide for you! [...]
I fell asleep with earbuds in while listening to an audiobook and ended up dreaming about what I was hearing. I know dream incorporation happens, but this was unusually vivid, the dream closely tracked the actual content, over a long part of the audiobook. Has something like this happened to someone else here?
This happens to me very often if I’m watching a movie and fall asleep. Especially if I’ve seen the movie before… my dream will basically mirror the movie, with the dialogue piped in and my brain attempting to re-create the visuals.
I have carried out entire coherent conversations with someone who was utterly asleep. One of these wound up with him being locked out of his frathouse (wasn't in the fraternity, just living upstairs in student housing), in his underwear. His dreams are unusually lucid in the best of times -- I think it comes of being an author.
It's more of a superpower if it's "i can visit anyone I want in dreams" -- complete with the "if things go bad, I can get locked in someone's dream" for extra drama.
I have the old wired earphones in at night to help me fall asleep by listening to music and radio dramas, and yeah, I've often had dreams that incorporated the story of the drama I fell asleep listening to (and which continues playing as I sleep).
I tried using wired earphones, but they wrapped around my neck while I slept. So I now use wireless earbuds. There is a niche market for wireless earbuds specifically for sleeping that are small, comfortable, and have long lasting batteries. The company Soundcore makes some good ones.
I listen to stories when going to bed and they'll play for a couple hours until my laptop dies. When I wake up from dreams, I usually find the dreams were inspired by the content of what I was listening to, or at least the people talking to me in my dreams are saying the story or things from the story. I worry how this affects my sleep quality but I have trouble with sleep in general and listening to a story is the most surefire way to put me out.
If you worry that it may affect sleep quality, it may be possible to have the story on a timer, for example to shut off after an hour. I think Audible has such a feature.
Data from Roche's next-generation anti-amyloid program. Today -- only biomarker data. Two Phase 3s in early AD initiating this year. And a planned pre-symptomatic Phase 3 study.
Spencer Greenberg and his team (Nikola Erceg and Belén Cobeta) empirically tested whether forty common claims about IQ stand up to falsification. Fascinating results. No spoilers!
I had some questions about the methodology, and Greenberg responded. There were 62 possible tasks in the test. The tasks were randomized, and, on average, each participant only completed 6 or 7 tasks out of the 62 possible tasks. Since different tasks tested different aspects of intelligence, I wondered if it was a fair comparison. Greenberg responded...
> Doing all 62 tasks would take an extremely long time; hence, we used random sampling. A key claim about IQ is that it can be calculated using ANY diverse set of intelligence tasks, so it shouldn't matter which tasks a person got, in theory. And, indeed, we found that to be the case. You can read more about how accurate we estimate our IQ measure to be in the full report.
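To get an intuition for that claim, here's a toy simulation (my own sketch, not their psychometric model: a one-factor world where every task is partly g-loaded) showing that a random handful of tasks tracks the full battery reasonably well:

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_tasks, k = 2000, 62, 6

g = rng.normal(size=n_people)                    # latent general ability
loadings = rng.uniform(0.4, 0.8, size=n_tasks)   # each task partly reflects g
noise = rng.normal(size=(n_people, n_tasks))
scores = g[:, None] * loadings + noise * np.sqrt(1 - loadings**2)

full = scores.mean(axis=1)                       # score from all 62 tasks
subset = np.array([                              # score from 6 random tasks
    scores[i, rng.choice(n_tasks, k, replace=False)].mean()
    for i in range(n_people)
])
print(f"r(6-task subset, full battery) = {np.corrcoef(subset, full)[0, 1]:.2f}")
# Comes out around 0.8: noisy for any individual, but fine for
# population-level claims like the ones in the report.
```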
They even reproduced the Dunning-Kruger Effect — except perhaps the DKE isn't as clearcut as D-K claimed (see their discussion of their D-K results)...
I have to ask. I took some of their surveys that are supposed to tell you things and they came across as pure voodoo to me. They were asking questions that were leading or ambiguous and then claiming to draw concrete conclusions from them. Are they supposed to be trustworthy?
I'm skeptical about the way the sample was obtained though, you're preferentially sampling for very online people who are time rich and money poor, or something like that.
They did say that the "non-Positly social media sample had on average substantially higher IQ estimates than Positly sample (IQ = 120.65 vs. IQ = 100.35)."
OTOH, once normalized, they fell into a nice bell curve. Hard to argue that this sample deviates from the general population by more than D = 0.019 and p = 0.53, as they noted...
> The distribution looks pretty bell-curved, i.e. normally distributed. However, to test this formally, we conducted the Kolmogorov-Smirnov test, which is a statistical test that tests whether the distribution statistically significantly deviates from normal. The test was non-significant (D = 0.019, p = 0.53), meaning that the difference between a normal distribution and the actual IQ distribution we measured in our sample is not statistically significant.
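For anyone who wants to poke at this themselves, that test is one line in scipy (a minimal sketch with simulated scores standing in for their data; note that fitting the mean and SD from the same sample technically calls for a Lilliefors-type correction):

```python
import numpy as np
from scipy import stats

# Simulated stand-in for the measured IQ scores (the real data isn't included here)
rng = np.random.default_rng(0)
scores = rng.normal(loc=100, scale=15, size=3000)

# Standardize, then compare against a standard normal distribution
z = (scores - scores.mean()) / scores.std(ddof=1)
D, p = stats.kstest(z, "norm")
print(f"KS statistic D = {D:.3f}, p = {p:.2f}")
# A non-significant p means the sample is consistent with normality,
# which is the report's conclusion (D = 0.019, p = 0.53).
```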
Good question. Also, they included sadism in the Dark Triad, but it's not part of the triad — Dark Triad + Sadism = Dark Tetrad. I'm sure there are plenty of personality tests that measure this stuff, though. (And they're probably as useful as the Myers-Briggs or the Enneagram! <snarkasm>)
Narcissism creates its own mechanism for lowering IQ, in that people with narcissistic personalities fear failure, and the public exposure of their failures.
G-related tasks are notably difficult to find, in terms of "ones that work on both tails." Repeating a number backwards isn't actually a "g" task: if your ordering doesn't work well and your memorization doesn't work well, you've got problems with the task, no matter your higher-order intelligence.
Certain tasks seem like they're related to g, but aren't. Yet they make it onto intelligence tests because... it flatters midwits. And midwits have a lot to gain by being the High IQ people.
Digits backwards correlates moderately well with full-scale IQ. And I think it makes sense as a measure of one aspect of intelligence -- being able to hold a number of details at once in your mind so you can extract what conclusions you can from the whole welter. It's not just useful for mental math. It's something you might use, for instance, if solving a puzzle with several little rules to it -- there are a bunch of cubes, each with sides of different colors, arranged as follows, and you have to stack the cubes in such a way that . . .
There could be a job where you have to engineer a solution to a problem like that. Or a situation involving multiple regulations regarding international trade. Obviously being able to hold a bunch of details in mind at once is only one skill used for tasks like that, but it doesn't seem peripheral or trivial to me.
I am not sure I fully understand your objection. Are you objecting that certain subtests are too correlated with one another, that they are uncorrelated with g, or both? Is this a single group of subtests or multiple groups?
My experience taking an IQ test was during an evaluation for ADHD. In that case, the fact that I scored worse on certain subtests despite their usual correlation with the others was interesting and helpful.
In general my impression of psychometricians is that, whatever their flaws may be, they are unusually willing to be politically incorrect and to upset the academic apple cart, and I respect that.
For a subtest to measure "g" it needs to be "bridgeable" even if you have cognitive deficiencies in that particular area. General intelligence can compensate for a HELL of a lot -- and that gets easier with the "bigger, harder" tasks.
I'm objecting to the idea that tasks that can be used to meaningfully evaluate IQ in people that don't use "g" (general intelligence), can be extended to people who use "g" instead of focused, subset learning.
I'd be interested in learning about IQ tests that are designed, in particular, not to flatter midwits. Know of any?
Ah, it sounds like you have a disagreement with the principle used to structure the tests. They want a lot of subtests measuring different things, each correlated with g, which requires each of the subtests to be simpler. More complex tests will naturally overlap more -- in addition to being harder to score, as you said.
Without digging out the report, I think I did worse on the backward or distracted versions of some tests than my performance on the forward or undistracted versions would lead you to expect. And there is an infernal digit circling test where I got reasonable accuracy at the cost of being painfully slow; the subjective experience of doing that one was viscerally unpleasant in a way that is difficult to describe.
They talk about the possibility of range restriction explaining the lack of correlation between IQ and college GPA, but it seems plausible that smarter people tend to get into more rigorous colleges and choose more difficult majors. I did well in high school but then managed a 2.5 GPA at a college that I have no idea how I got into.
I worked hard to achieve a high GPA in high school to gain admission to a good university. Once in college, I slacked off a bit, had a lot of fun, did a lot of drugs (especially psychedelics), but maintained a B+ average. No one asked for my GPA in my job interviews after college. They just wanted a person with a degree. I knew this upfront, so why bother killing myself? Too bad g doesn't measure sensible life goals. I've always been a Heinlein's too-lazy-to-fail sort of guy.
It's also plausible that smarter people do fun things like "look at how I can solve this problem!" and the graders say "your papers make my head hurt, and you aren't using force to solve the problem."
Or write an entire essay, and get flunked for splitting infinitives (actually, get flunked for writing an "insensitive" piece about the professor's home country). The Dean backed the professor. 5 grammar mistakes and you fail, that's that.
And then you flunk out for having too many "troublesome" bad grades.
I just took their test, and it produced results in line with what I’ve scored on other tests, including in the distribution of scores for different categories in line with my SAT, LSAT, and my own personal recognition of strengths and weaknesses.
Do you have an explanation for how being anti-HBD-IQ isn't circular reasoning, where poor outcomes are explained by discrimination and the evidence for said discrimination is the poor outcomes?
I really like it, even if the old parchment-esque site felt like one of the last vestiges of the old Internet and I am sorry to lose it. Are there web design sedevacantists, arguing that the Vatican hasn't had a legitimate webmaster in twenty years? There ought to be.
I haven't really dug into the site yet, but I hope prompt English translations imply that they also took the time to reorganize the deeper structure of the site. That was pretty badly needed.
But not everything: it had a habit of giving English-language reports with links where the linked material was in Italian, because pffft, why can't you speak Italian if you're looking up Vatican stuff?
Could anyone give me a realistic path to superintelligence?
I'm a bit of an AI-skeptic, and I would love to have my views contradicted. Here is why I believe superintelligence is still very far away:
To beat humans at most economically useful tasks, an AI would have to either:
1. have seen most economically meaningful problems and their solutions. It would not need a very big interpolation ability in this case, because the resolution of the training data would be good enough.
2. have seen a lot of economically meaningful problems & solutions, and inferred the general rules of the world. Or have been trained on something completely different, and being able to master economically useful jobs because of some emergent properties.
1. is not possible I think, as a lot of economic value (more and more, actually) comes from handling unseen, undocumented and complex tasks.
So, we're left with 2.
Great progress has been made just by trying to predict the next token, as this task is perfect for enabling emergent behavior:
- Simple (you have trillions of low-cost training examples)
- Powerful: a next token predictor having a zero loss on a complex validation text dataset is obviously superintelligent.
Even with a simple Cross-Entropy loss and despite the poor interpolation ability of LLMs, the incredible resolution of the training data allows for impressive real-world results.
Now, it's still economically useless. The tasks being automated are mostly useless (I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably nefarious to economic growth).
Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful (not just bad).
I can’t think of another powerful but simple task that AI could be trained on. Writing has been optimized by humans to be the most compressed form of communication. You could train an AI to predict the next frame of a video, but it’s soooo much noisier! And the loss function is a lot more complicated to craft to elicit intelligent behavior (MSE would obviously suck).
So now, we're back to RL. It kind of works, but I'm surprised by how difficult it seems to implement, even on verifiable problems.
Code either passes tests or not. Still, you have to craft a great advantage function to make the RL process effective. If you don't, you get a Gemini 2.5 that spits out comments and try/catch blocks everywhere. It's even less useful than GPT-3.5 for coding.
So, still keeping the focus on code: you, as a human, need to specify what great code is and implement an advantage function that reflects it. The thing is, you'd need an advantage function more fine-grained than what could fit in a deterministic expression.
Basically, you need to do RLHF on code. Which is costly, and scales not with compute but with human time. Because, sure, you can RLHF hard, but if you have only a few human-certified examples, you’ll get an RL-ed model that games the reward model.
The thing is, having a great reward model is REALLY HARD for real-world tasks. It’s not something you can get just by scaling compute.
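To make that concrete, here's a toy sketch of the "verifiable" part of such a reward (entirely illustrative, not any lab's actual setup): the pass/fail signal is easy, and the hand-written shaping terms are exactly the part a model learns to game.

```python
import subprocess
import sys
import tempfile

def code_reward(candidate_code: str, test_code: str) -> float:
    """Toy reward for RL on code: 1.0 if the tests pass, else 0.0,
    plus crude hand-written penalties standing in for 'taste'."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_code + "\n\n" + test_code)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=30)
        reward = 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        reward = 0.0

    # Deterministic shaping terms -- easy to specify, easy to game.
    # Penalize try/except spam and comment spam; a model will just
    # learn to hide them, which is why you end up needing RLHF-style
    # learned reward models instead.
    if candidate_code.count("try:") > 2:
        reward -= 0.1
    lines = max(len(candidate_code.splitlines()), 1)
    if candidate_code.count("#") / lines > 0.5:
        reward -= 0.1
    return reward
```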
Last year, the best counter-argument to my comment would have been “AI progress is so fast, do you really expect it to slow?”, and it would have been perfect. Now, I don’t think we have got any real progress from GPT-4 on economically valuable tasks, so this argument doesn’t hold.
Another convincing argument is that “we know the compute power of a human brain, and we know that it’s less than the current biggest GPU clusters, so why should we expect human intelligence to remain superior?”. That’s a really good argument, but it fails to account for the incredible amount of compute natural selection has put into designing the optimal reward functions (sentiment, emotions) that shorten the feedback loop of human learning and the sensors that give us data. It’s difficult to quantify precisely but I don’t think the biggest clusters are even close to that. Not that we’re the optimal solution to the intelligence problem, just that we’re still way short of artificial compute to compete against natural selection.
I think most of the people who believe in superintelligence believe that it is just the next step after general intelligence. There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. If that’s right, then you don’t need to train on everything - you just need to train on enough stuff to get that general intelligence, and then start doing a bit better.
But I’m skeptical that there is any truly general intelligence of this sort - I think there are inevitable tradeoffs between being better at some sorts of problems in some environments, and other problems/environments. (Often enough, I think the tradeoffs will be with the same problems in different environments.)
My main disagreement is with the last paragraph. I agree that we don’t have anywhere near enough compute to simulate natural selection and find better reward functions. But I also think that reward functions that result in superintelligence are not too complex. I don’t know how to explain why I believe this, it comes largely from intuition. But I think given the assumption “reward functions for superintelligence are simple”, you can reasonably get that superintelligence will be developed soon, given the hundreds of researchers currently working on the problem.
Substrate issues could be involved. Assume that what's needed to get to superintelligence might be quantum in nature. That essentially eliminates all the LLMs and turns you toward "self-modifying code" and other sources of "more pseudo-randomness."
> Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful
I'm not sure you can back this up. If doubling the compute doesn't double the performance, that's worse than linear. You're trying to show each doubling in compute doesn't even give the same constant increase on some metric of performance, and that metric would have to be linear with respect to the outcome you're trying to measure. I'm not sure we have such a metric, and some metrics, like AI vs human task duration, appear to be increasing exponentially.
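To make "worse than logarithmic" checkable rather than vibes, you'd fit performance against log(compute) and look at the residuals: logarithmic scaling just means equal benchmark gains per 10x of compute. A sketch with made-up numbers (the contested part is which metric to use, not the fit):

```python
import numpy as np

# Hypothetical benchmark scores at successive 10x jumps in training FLOPs
compute = np.array([1e23, 1e24, 1e25, 1e26])
score = np.array([40.0, 55.0, 63.0, 66.0])   # made-up, 0-100 scale

# Logarithmic scaling: score ~ a + b * log10(compute)
b, a = np.polyfit(np.log10(compute), score, 1)
resid = score - (a + b * np.log10(compute))
print(f"score ~ {a:.1f} + {b:.1f} * log10(FLOPs), residuals {resid.round(1)}")
# If late points fall below the fitted line, gains per 10x are shrinking:
# worse than logarithmic. If your metric is instead task horizon doubling
# every few months, the same compute data looks exponential. The choice
# of metric does all the work in this argument.
```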
> I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably nefarious to economic growth
Well, I feel that most of the software I've developed (mainly ML models and ERP software) has been used to help with problems whose solutions were human ones.
2 examples:
- Some features of the ERP software I helped develop were related to rights management and paperwork assistance. For the first, the real consequence is that you keep an employee out of some part of the business, effectively telling him "stay in your lane," which is not good for personal engagement. The second is more pervasive: when you help people generate more reports, you are basically allowing middle managers and lawmakers to ask for more of them. So you end up with incredibly long contracts, tedious forms, and so on. Contracts were shorter when people had to type them and copy-pasting didn't exist.
- I've developed a complex ML model for estimating data that people could simply have asked other people for. When I discovered that, I told the customer: "You know, you could just ask these guys; they have the real numbers." But I guess they won't, because they now have a good-enough estimate: net loss.
Now, of course, I've developed useful things, but I just can't think of any right now ^^
Contracts tend to be standardized in the best of cases. Which means that if your renter's contract is illegal, you can get the kindly judge to throw out every renter's contract in the city. Which is a hell of a stick to bring to a discussion with your landlord.
I would be careful to read too much into that graph without doing some more careful statistical analyses. There’s a plausible enough picture in which the left 60% of the graph should see zero effect, and the right 40% should see a roughly linear effect, and if I squint it actually looks compatible with that. But also, 2.5 years is just a really short time frame, and there have been some much bigger short term effects in some industries with the presidential transition.
That's not consistent with the recent rise in unemployment for CS grads. I've heard too much anecdotal data to suggest that it's not related to AI. I wouldn't expect AI to have impacted other industries yet. It's too new. Only software companies are agile and tech-savvy enough to adjust to new technology so quickly.
Cost-saving innovations tend to roll out during recessions. I expect AI to really surface during the next one.
It's perfectly consistent: there's even too much software out there, so there are hiring freezes. And interest rates are still much higher than in the pre-COVID era. We haven't seen a slowdown of employment in the professions that, according to economists, are most susceptible to AI-induced job loss, but we have seen a slowdown of employment in professions most susceptible to economic downturns. The slowdown is not only in software but in real engineering too - perfectly consistent with firms cutting R&D budgets.
... only software companies are "agile and tech-savvy" enough...
You mean by hiring hundreds of thousands of "non-technical people" who can't maintain their systems? SV isn't "agile or tech-savvy" anymore. And a dip in hiring "non-technical folks" looks a lot like "relying on AI," I suspect. Even though the non-technical folks weren't doing jack or shit, and therefore Google firing them doesn't affect Google's monopoly (oh, did I just type that?)
My wife and I are considering making a large change: we both grew up and live in the Mountain West, got married, and had children, who are now on the verge of making the transition to junior high school/middle school. We like where we live now but don't *love* it, and don't have extensive social ties here we'd be sad to leave.
My parents, and my sister and her family, live on the East Coast, in a place we would normally not consider moving to. But as time passes, we've come to appreciate how much we miss by being so far from family, and we're considering relocating to be closer to them. My parents are in general good health, so barring unforeseen events we expect to have years of quality time to spend.
What are the main concerns I should think through, aside from the usual cost of living and school quality issues?
One thing you may not have considered is the humidity. I live in the DC area, and my wife (who grew up in Utah) still finds the humidity here during the summer terrible after 20 years. We have two dehumidifiers running in our house!
After having grown up in the mountain west, moving to the east coast for 12 years, and having moved back to the mountain west…
Summers are painful when you have to tolerate them year after year on the east coast. The humidity and the banality of the weather suck. No more 40-degree temp swings between day and night, or between days. No more snow, and when it does snow it’s an apocalypse.
Same with traffic when you have to tolerate it every single day. There are people everywhere on the east coast… it’s impossible to escape.
You’ll miss open landscapes. I’m convinced being used to seeing a big sky and far-reaching distances, then suddenly not, is akin to seasonal affective disorder. It does make trips back out west magical though.
If outdoor recreation is your thing, it’s worse on the east coast. It can still be done, but it’s less beautiful, less available, and more crowded.
If you have 100 kids, on average they will likely grow up with less “masculine” traits on the east coast. This has both good and bad attached to it; just beware. The cultures are indeed different.
Overall there are plenty of goods and bads… I moved back to the mountain west for my community and the views. If those weren’t important to me (or if I had community elsewhere) I may not have made the move back. Yet still sometimes I’m struck by the annoying aspects of hyper-masculine culture here (exaggerated because I do blue-collar work), just as I was struck on the east coast by the annoying aspects of hyper-feminine culture.
One last note… when I was in 7th grade my parents almost moved us to another state. I was onboard with the plan, but it ended up not happening. That move *not happening* was one of the luckiest moments of my life—unbeknownst to me at the time—because having grown up in one area my entire adolescence gave me friends and a community that will be with me forever. I have a true “home” moreso than my parents ever did.
Which part of the East Coast? Massachusetts is very different from Maryland.
I also grew up in the Mountain West and lived in the East Coast for a time as a child. Overall the mountains offer a better quality of life: they're less crowded, cheaper, generally cleaner, and in every way healthier.
The biggest advantage of East Coast life is proximity to America's great cultural institutions. If you live in the NE megalopolis, you are more plugged in to world culture than the great majority of humans. Since it's more densely populated you also benefit more from network effects. Your family is even an example of this.
As with so many things in life it comes down to values. I'd say if you care more about people, move to the coast. If you care more about nature or lifestyle, stay in the Mountain West.
It's a part of the country with a different culture, climate, and geography than I'm used to. I've enjoyed my many visits there, and within two or three hours drive there is a large array of things to do and places to see, but the place we'd be moving is itself not a big draw.
I'm pretty far behind you as my wife and I just had our first child in January, so while I can't answer your question, I can say that even these first six months (and the year and a half of marriage before having a kid) have been a time of rich fullness just due to the fact that my wife's family and my parents all live close by. Our location doesn't account for much of that, as I live smack in the middle of North Dakota.
I'm sure that we would still be very much enjoying life together even if none of our family were close by, but having family around definitely adds an extra depth and richness that I feel would make a move like you're describing worth it.
Thanks for replying. For me this is a choice between great climate and access to great natural beauty, or closeness to family and the ability to share our life in a more casual, regular way than guesting/hosting family for a week or more in their/your house. For years the choice was obvious.
Been having a lot of fun working with ChatGPT on an alternate history scenario where the transistor was never invented - somehow, silicon (and germanium etc.) just doesn't transmit signals in this alternate timeline. It seems like humanity would have invented vacuum microelectronics instead? Maybe done more advanced work with memristors too? It would certainly be a different world - electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations - but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller.
Without electronic capital markets you'd have a radically different 20th century - slower growth, more stable, less capital flowing in and out of countries. This might've slowed China's growth specifically - no ecommerce, less investment flowing into China originally, no chance for them to hack & steal Western technology. Also a decent chance that the USSR's collapse might not have been as dramatic - they might've lost the Baltics and eastern Europe, but kept going otherwise. The US would probably be poorer without Silicon Valley, plus Wall Street would be smaller without electronic markets. Japan might really excel at the kind of precision mechanics & analog systems that dominate this world. So it'd be a more multipolar world overall.
(I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)
Copying an AI summary from a query "cold field emission microelectronic vacuum tubes"
>Cold-field emission microelectronic vacuum tubes, or vacuum microelectronics, utilize the mechanism of electron emission into a vacuum from sharp, gated or ungated conductive or semiconductive structures, avoiding the need for thermionic cathodes that require heat. This technology aims to overcome the bulkiness of traditional vacuum tubes by fabricating micro-scale devices and offers potential applications in areas such as flat panel displays, high-frequency power sources, high-speed logic circuits, and sensors, especially in harsh environments where conventional electronics might fail
Admittedly these are still higher voltage and less dense devices than semiconductor FETs, but electronics would not have been limited to hot cathode bulky tubes even if silicon transistors never existed.
"It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller."
Don't forget that you can probably have fairly large computer memories (in the context of vacuum tubes ...) because of core memory:
PDP-11s shipped with core memory and you can do QUITE A LOT with 1 MB (or less).
And you don't need transistors for hard drives, either :-)
Imagine "programs" being distributed on (error correcting encoded) microfiche.
Sounds like fun in a steam-punk way.
Also, you can easily imagine a slow internet. Think something like 1200 baud (or faster) between major centers (so very much like early Usenet). You won't spend resources for images or pretty formatting, but moving high value *data* should work.
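Back-of-envelope, that kind of link is enough for mail and data but hopeless for bulk software, which is where the microfiche idea earns its keep (assuming roughly 10 bits per byte on the wire once you count start/stop framing):

```python
# Transfer times at an early-Usenet-style 1200 baud link
baud = 1200           # bits per second
bits_per_byte = 10    # 8 data bits + start/stop framing

for label, size_kb in [("short email", 1), ("journal paper", 100), ("1 MB program", 1024)]:
    seconds = size_kb * 1024 * bits_per_byte / baud
    print(f"{label:>13} ({size_kb:>4} KB): {seconds / 60:6.1f} min")
# ~9 seconds, ~14 minutes, and ~2.4 hours respectively -- so you move
# high-value data over the wire and ship programs on microfiche.
```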
About the time transistors were becoming widely used, micro-vacuum tubes were also in use. I don't know what their life was, and clearly transistors were found superior, but they were competitive in some applications.
So, yes, vacuum micro-electronics would have been developed. I've got doubts that memristors would have shown up any more quickly than they did here.
It's not clear that vacuum electronics couldn't have been developed to the same degree of integration that transistors were, so I'm not sure the rest of your caveats hold up. They might. I know that vacuum electronics were more highly resistant to damage from radiation, so there might well have been a different path of development, but I see no reason to assume that personal computers, smart phones, routers, etc. wouldn't have been developed, though they might have been delayed a few years. (That we haven't developed the technology doesn't imply that it couldn't have been developed.)
It's also possible to miniaturize electromechanical switching to IC scale with MEMS and NEMS relays. It's a lot slower than transistors, which is why it's only used for specialty applications, but it's possible.
My husband was forced untimely into a quick round of unsatisfactory car shopping - after being rear-ended by someone who spoke no English, had no proof of insurance on him, and said he had insurance but didn't know the name of the company before driving away (the babies were crying, and the side of a freeway is no place for a half dozen children), and who miraculously did have it (one time the state doing its seeing-like-a-state thing was helpful) - and after Allstate took its sweet time deciding to total his perfectly driveable old Subaru.
As a result - life having other distractions, and he having little interest in modern cars - he got steered into buying his first “new” car.
That’s something that won’t ever happen again!
All those new features he didn’t want to pay for … and Subaru doesn’t need to haggle, period.
He was set to get his two requests, an ignition key and a manual slam-shut gate, swapped in from a dealer in another city - but in the event, a buyer there was simultaneously grabbing that one, so the one they brought in was sadly keyless.
We should have just returned home (an hour plus away), but a certain amount of time had been invested, and a planned road trip was upcoming.
Question: should I get him one of those faraday cage thingies? It has been established that he won’t stuff the fob in foil every night, nor remember to disable it.
He didn’t even know about this car-stealing method, not being much online and certainly not on NextDoor.
There is no consensus on the internet about the need for this. Possibly already passe, superseded by new methods of thievery.
We live in a city that had 18,000 cars stolen last year. Not generally Subarus, probably … but anyway. The car is within 50 or 60 feet of the fob, in an apartment parking lot, not within view.
Our cars, when we’ve occasionally, inadvertently left them unlocked (long habit from where we lived previously) have reliably been rifled through, though it was a wash: we had neither guns nor electronics nor drugs. Once, memorably, they stole his car manual. I recall thinking that they’d better come by around daylight savings time and change the clock for him.
I’ve never heard of this sort of faraday cage thing. How many cars have been stolen from the apartment parking lot in the last few years? Does insurance cover such thefts? My guess is that a precaution against this one method of theft isn’t that likely to make a big difference, particularly since theft is not that common anyway (apart from the weird Kia/Hyundai exploit that was discovered during the pandemic), but if the faraday cage is cheap and convenient and easy to set up in the tray where you put keys and wallet when you get home anyway (or however you do it), it could still be net worth it.
If you have a parent that refuses to put his keys into an exact place (a nice shiny foil box) every night, you have a parent that probably shouldn't be trusted with a motorized vehicle.
A rather stupid syllogism, but I don’t own such a box. That’s what I’m trying to learn - if it’s worth buying one. For some reason I thought this would be an easy layup for this crowd.
Supposedly using a device to capture the signal pinging between the key fob and the vehicle. How you would start the vehicle thereafter, away from the fob, I don't know. Or maybe it's just a means to open the vehicle and throw the stuff from the glove box around.
I really thought this was a thing as it was so commonly referenced, but now I'm not sure if it was imaginary/dreamed up by people who didn't want to admit they left their fob sitting in the car.
A relay attack lets a thief extend the range of the key fob by retransmitting the signals, allowing them to start the car. It doesn't let them clone the key fob. Once started, cars will not automatically shut off when the key goes out of range. Some cars have protection against relay attacks, but I think most do not. The thief would have to get close enough to the key fob to pick up the signal, and they need the key signal in real time. They can't record the signal and replay it later.
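For the curious, here's a minimal sketch of why the relay works, assuming a toy challenge-response protocol (the function names and scheme here are my own invention, not any manufacturer's actual design): the car's cryptographic check passes because the relay just forwards bytes, so nothing needs to be cloned or cracked.

```python
# Toy model of passive keyless entry and a relay attack (illustrative only;
# the protocol here is an assumption, not any real manufacturer's scheme).
import hmac, hashlib, os

SHARED_KEY = os.urandom(16)   # secret provisioned in both car and fob

def car_new_challenge():
    return os.urandom(8)      # fresh random nonce for every unlock attempt

def fob_respond(challenge):
    # The fob proves knowledge of the key without ever transmitting it.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_verify(challenge, response):
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Honest unlock: fob is within radio range of the car.
c = car_new_challenge()
assert car_verify(c, fob_respond(c))

# Relay attack: one device by the car, one by the front door, forwarding
# bytes both ways in real time. No cryptography is broken; the car simply
# has no way to tell that the fob is fifty feet away indoors.
c = car_new_challenge()
relayed_to_fob = c                   # forwarded by the thieves' radios
relayed_back = fob_respond(relayed_to_fob)
assert car_verify(c, relayed_back)   # unlocks: distance was never checked
```

This is also why the countermeasures are physical (a faraday pouch blocks the fob's radio) or temporal (distance-bounding protocols reject responses that arrive too slowly to have come from nearby).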
Yes, that’s what I meant. Didn’t mean they would randomly capture signals and store them for later use.
I never had a reason to think about it before. My own car is a very basic car from 2009.
I had just absorbed by osmosis this idea about newer cars.
But upon researching it, I couldn't find that people actually seem particularly worried about it after all. Or any agreement about what's going on with the key: whether it's really talking to the car or sitting there inert.
Not sure if the subject is really understood only by those who steal cars and those who know a lot about electronics.
This is a very old attack in the cryptographic literature. IIRC, it was originally called the mafia fraud attack. Though really it's kind of just an instance of a man-in-the-middle attack.
Something interesting I learned today*: Among professional historians, antiquarians and the like there is a widespread consensus that Jesus of Nazareth was a real, historical person. Important disclaimer, this distinguishes the historical personage from any supernatural capabilities he may or may not have had.
They cite about half-a-dozen non-biblical references by Tacitus, Josephus, Pliny the Younger, Suetonius, Mara Bar-Serapion, Lucian and Talmudic references. Most of these are pretty brief or oblique but they converge on a pretty recognizable figure. The evidence is a lot stronger than he was a mythical creation, which is why mainstream scholars of all stripes have landed there.
A year or two ago, I watched an extended interview with Richard Carrier, who's one of the highest profile people arguing against the historicity of Jesus. He's a classical historian by training and a pop historian and Atheism advocate by vocation. IIRC, his thesis is that Christianity started among ethnic Jews living in the Roman world and followed what was then a fairly common template of venerating a purely spiritual messianic figure, and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier.
Carrier made some interesting arguments about the mythological pattern which I lack the expertise to assess in detail. Where I do think he rather badly misstepped was in making a big deal out of the Gospels and Epistles being written in Greek rather than Aramaic. I don't think that needs much explaining given how few classical documents have survived to the present. Greek was a major literary language throughout the region while Aramaic was not, and Christianity caught on much, much more in Greek and Latin-speaking areas than in Aramaic-speaking areas, so only Greek foundational texts surviving isn't particularly surprising. The wikipedia article for "ancient text corpora" cites estimates from Carsten Peust (2000) that our text corpus from prior to 300 AD is 57 million words of Greek, 10 million words of Latin, and 100,000 words of Aramaic.
Where did you get the idea that Aramaic wasn't a significant language of the region at the time? It was the lingua franca from the Levant to Persia for centuries.
The Talmud alone is in the ballpark of 2.5 million words, most of it two dialects of Aramaic and most of the rest in Hebrew. While it was compiled later than 300 AD, it contained a body of work stretching over many centuries, stretching back well into the Second Temple period.
The Mishnah, compiled centuries earlier, was primarily Hebrew but with some Aramaic.
And that wikipedia page lists 300,000 words for Hebrew - the Tanakh has over 300k words, the Torah 80k of them.
All that is to say, even if we really do have fewer surviving words of Aramaic than Greek, that almost certainly has more to do with our sample than the ancient source.
> and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier
That doesn't sound like he's arguing against the historicity of Jesus at all then, if he's saying that Jesus is based on an actual historical person. That just sounds like the mainstream view all over again -- Jesus was real, some of the stories told about him are false, and we can quibble about exactly how much was real.
Carrier is loudly and explicitly claiming that there was no actual historical person who lived in Judea c. 30 AD matching the description of Jesus of Nazareth, and that pre-Pauline proto-Christians would have agreed with this as they would have believed in a purely spiritual Christ and told allegorical stories about him set in a spiritual realm. Per Carrier, the claim that Jesus was a human who ministered in Judea was an invention of Paul and the Gospel writers who re-wrote the existing stories *as if* Jesus were a real person who had been physically present in and around Jerusalem.
Right, I think I misunderstood the sentence I quoted, I thought he was saying that they'd merged their spiritual messiah with stories about some actual bloke.
Greek was the lingua franca at the time, and it was what educated people largely wrote in, particularly in the east. Marcus Aurelius even wrote his Meditations entirely in Greek.
In no way would the writers of the gospels write in Aramaic. John and Luke may not have even spoken it.
Exactly. If there was an Aramaic proto-gospel, it would have had to have been very early and very niche and it probably would have been oral rather than written. Anyone writing in the Eastern Mediterranean for a broader audience would have done so in Greek.
Oh, Carrier is the guy that Tim O'Neill has the beef with. Doesn't think much of Dr. Carrier's arguments 😁
I'm Irish Catholic so you know which side of the fence I'm coming down on here, but I do have to admit to a bias towards the Australian guy of Irish Catholic heritage as well! I can't say it's edifying, but it's fun:
"It seems I’ve done something to upset Richard Carrier. Or rather, I’ve done something to get him to turn his nasal snark on me on behalf of his latest fawning minion. For those who aren’t aware of him, Richard Carrier is a New Atheist blogger who has a post-graduate degree in history from Columbia and who, once upon a time, had a decent chance at an academic career. Unfortunately he blew it by wasting his time being a dilettante who self-published New Atheist anti-Christian polemic and dabbled in fields well outside his own; which meant he never built up the kind of publishing record essential for securing a recent doctorate graduate a university job. Now that even he recognises that his academic career crashed and burned before it got off the ground, he styles himself as an “independent scholar”, probably because that sounds a lot better than “perpetually unemployed blogger”."
Yeah, my impression of Carrier is that he seems clever and interesting, but the actual substance of his arguments seems pretty weak even aside from my priors about who's likely to be right when a lone "independent scholar" is arguing that the prevailing view of academic experts is trivially and obviously false on a subject within their field.
O'Neill is fun and I trust him because although he's an atheist himself, he gets so pissed-off by historical errors being perpetuated by online atheists and the mainstream that he goes after them.
He does have a personal grudge going with Carrier, so bear that in mind. Aron Ra is another one of the Mythicists with whom O'Neill tilts at times, but not as bitterly as with Carrier.
I was amused by the reference to Bayes' Theorem (seeing as how that's one of the foundations of Rationalism) in the mention of Carrier's book published in 2014:
"Two years ago Carrier brought out what he felt was going to be a game-changer in the fringe side-issue debate about whether a historical Jesus existed at all. His book, On the Historicity of Jesus: Why We Might Have Reason for Doubt (Sheffield-Phoenix, 2014), was the first peer-reviewed (well, kind of) monograph that argued against a historical Jesus in about a century and Carrier’s New Atheist fans expected it to have a shattering impact on the field. It didn’t. Apart from some detailed debunking of his dubious use of Bayes’ Theorem to try to assess historical claims, the book has gone unnoticed and basically sunk without trace. It has been cited by no-one and has so far attracted just one lonely academic review, which is actually a feeble puff piece by the fawning minion mentioned above. The book is a total clunker."
O'Neill's quote from Carrier proudly displayed on his website:
"“Tim O’Neill is a known liar …. an asscrank …. a hack …. a tinfoil hatter …. stupid …. a crypto-Christian, posing as an atheist …. a pseudo-atheist shill for Christian triumphalism [and] delusionally insane.” – Dr. Richard Carrier PhD, unemployed blogger"
Deep calls to deep, and so does Irish invective between the sea-divided Gael so that's probably why I like O'Neill so much even apart from his good faith in historical arguments.
Academics don't view denial of Jesus' existence as much of an argument. Most call it "fringe."
If you're interested in going deeper, I would recommend looking into the modern quests for the historical Jesus, which not only surfaced and studied extrabiblical sources on Jesus, but also developed methodologies for evaluating the gospels:
Academics I've read and listened to lean toward the conclusion that only two events in the gospels about Jesus' life are reliable: His baptism by John the Baptist, and his execution by the Romans. (These both rely on the criteria of embarrassment, that is, because these events undermine his followers' beliefs, for them to include these events in the gospels suggests they actually occurred.) Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations.
The quests for the historical Jesus also bleed into modern understandings of how the gospels were authored, such as the dominant theory of Markan priority, and the theoretical Q document.
"Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations."
This is true, but in the context of discussing a New Atheist figure it's worth adding some context. For most of these scholars, rejection of the supernatural is a premise rather than a conclusion. It's often the case that an academic will write, "Since its miracle stories are false, this document must be late," only for his reader to say, "Since this document is late, its miracle stories must be false," without realizing the circularity.
C. S. Lewis wrote on this very thing in the introduction to his book "Miracles":
"Many people think one can decide whether a miracle occurred in the past by examining the evidence ‘according to the ordinary rules of historical enquiry’. But the ordinary rules cannot be worked until we have decided whether miracles are possible, and if so, how probable they are. For if they are impossible, then no amount of historical evidence will convince us. If they are possible but immensely improbable, then only mathematically demonstrative evidence will convince us: and since history never provides that degree of evidence for any event, history can never convince us that a miracle occurred. If, on the other hand, miracles are not intrinsically improbable, then the existing evidence will be sufficient to convince us that quite a number of miracles have occurred. The result of our historical enquiries thus depends on the philosophical views which we have been holding before we even began to look at the evidence. The philosophical question must therefore come first.
"Here is an example of the sort of thing that happens if we omit the preliminary philosophical task, and rush on to the historical. In a popular commentary on the Bible you will find a discussion of the date at which the Fourth Gospel was written. The author says it must have been written after the execution of St. Peter, because, in the Fourth Gospel, Christ is represented as predicting the execution of St. Peter. ‘A book’, thinks the author, ‘cannot be written before events which it refers to’. Of course it cannot—unless real predictions ever occur. If they do, then this argument for the date is in ruins. And the author has not discussed at all whether real predictions are possible. He takes it for granted (perhaps unconsciously) that they are not. Perhaps he is right: but if he is, he has not discovered this principle by historical inquiry. He has brought his disbelief in predictions to his historical work, so to speak, ready made. Unless he had done so his historical conclusion about the date of the Fourth Gospel could not have been reached at all. His work is therefore quite useless to a person who wants to know whether predictions occur. The author gets to work only after he has already answered that question in the negative, and on grounds which he never communicates to us.""
Sometimes a lie reveals the truth. It’s generally accepted that Jesus wasn’t born in Bethlehem. It’s only mentioned in two gospels and the census story of moving back to your origins isn’t Roman practice. It would be mayhem. People just didn’t travel to ancestors homelands for a census. The killing of the innocents by Herod is also undocumented.
But an invented messiah can just be born wherever you need him (and the messiah prophecy mentions Bethlehem), whereas people were clearly aware of where Jesus actually came from, so the writers had to admit to Nazareth.
Jesus is very well attested for a person of his period. The minimum viable Jesus is that he was a popular religious leader from about the class the Bible says he's from who lived roughly where the Bible says he did. That he had a large following and was believed to have magical powers and claimed to be the son of God. That he clashed with Jewish and Roman authorities. And that he was executed but his followers continued on.
If you want to say he didn't exist you basically believe in a conspiracy theory that later Christians went back and doctored a bunch of works and made a bunch of forgeries to provide evidence that he did. A lot of anti-Christians really want to believe this and produce a lot of shoddy scholarship about it. But in all likelihood Jesus was real.
I think my previous belief was that Christianity definitely existed as a religion by the mid-1st-century, lots of people knew the Apostles, the Apostles knew Jesus, and it would require a pretty coordinated conspiracy for the Apostles to all be lying.
Does the evidence from historians prove more than that? AFAIK none of the historians claim to have interviewed Jesus personally. So do we know that the historians didn't just find some Christians, interview them about the contents of their religion, and use the same chain of reasoning as above to assume that Jesus was a real person? Should we take the historians' claims as extra evidence beyond that provided by the religion itself?
Well, it proves that non-Christians living eighty years after the purported events wrote about the life and death of Jesus without expressing skepticism, which is something.
From the way Tacitus writes in 116, it seems like the general consensus among non-Christian Romans in the early second century was that Christus was a real dude who got crucified, and that there was a bunch of weird beliefs surrounding him. This belief was probably not filtered entirely through Christians, just as our ideas about the Roswell Incident of 1947 or L. Ron Hubbard are not entirely filtered through the people who believe weird things about them.
I believe what you're saying is: a large number of Christians all simultaneously, and within their own living memory, attested that Jesus existed. This is strong evidence because otherwise a large number of people would have to all get together, lie, and then die for that lie, which seems less likely than their being a real religious organization that met a real person. But the historians likely did not personally meet Jesus, so they don't add additional proof.
From this point of view, the main things historians add is that it makes it even less likely to be a conspiracy. Because many of the historians are not Christians and drew from non-Christian (mostly Jewish or Roman) witnesses. We don't know who these witnesses were or if any of them directly met Jesus. But they are speaking about things going on in the right time and place to have met him and the Bible doesn't suggest Jesus isolated himself from foreigners.
So either none of them met him and it was all a conspiracy by Jesus's followers that took in a bunch of people who were highly familiar with the region. Or a number of non-Christians were in on the conspiracy.
My broader point is something like: we ought to have consistent evidentiary standards. If you want to take a maximally skeptical view then you can construct a case that, for example, Vercingetorix never existed. You can cast doubt on the existence of Julius Caesar if you stretch. If that's your general point of view then you can know very little about history. I disagree with that point of view but it's defensible. If, on the other hand, you think Vercingetorix existed or the Dazexiang uprising definitely happened but think Jesus might not have existed then I think you're likely ideologically invested in Jesus not existing.
To give an example where I don't think it's bias: most modern historians discount stories of magic powers or miracles regardless of who performed them. So the fact they discount Jesus's miracles seems consistent with that worldview rather than a double standard.
Someone later down made comments that reminded me that some figures from history were later believed to have been adaptations or syncretisms of earlier figures. So that's another possibility - Jesus was fictional, but melded from earlier people. I don't think this would adequately explain Tacitus' account, for example, but it could explain multiple people being "in on" the fabrication.
(Meanwhile, maybe some people aren't invested in Jesus' not existing, but rather invested in someone existing with a name as cool as "Vercingetorix". So the real solution should have been to introduce Jesus as, uh, "Yesutapadancia".)
Jesus is a bit similar to Ragnar Lodbrok in that he is attested but a lot of the records come shortly after his death. And there's a whole bunch of extremely historical people who the history books say were reacting to him and his death which are really hard to explain if he didn't exist or was a myth.
The people who think Ragnar was entirely fictional have to explain the extremely well attested historical invasions by his historically well attested sons who said they were avenging his death and who set up kingdoms and ethnicities which echo down to today. Likewise with Jesus, his disciples, and Christianity.
But there's just enough of a gap to say that maybe he didn't exist if you really, really want to. And there's a lot of space to say some of the stories were less than reliable and some of them might be borrowed from other people. Then again, that's true of most historical biographies.
We should take the historian's claims as evidence that the people whose job it is to professionally try to figure out what happened in the past all tend to agree that Jesus was real. And they're not just looking at the Bible when they do that!
Sources that indicate Jesus existed include the scriptures (the letters and gospels of the New Testament), but also include many of the apocryphal writings (which all agree that Jesus existed, even if they go on to make wildly different claims about him), the lack of any contemporary non-Christian sources that deny the existence of Jesus, the corroboration of many other historical facts in scripture about the whole Jesus story (like archeological findings corroborating that Pontius Pilate existed, or that Nazareth existed, etc).
You also have Josephus writing about Jesus in 94 AD, Tacitus writing about him in 115 (and confirming that he was the founder of a religious sect who was executed under Pontius Pilate), and a letter from a Stoic named Mara bar Serapion to his son, circa 73 AD, where he references the unjust execution of the "wise king" of the Jews.
Also, looking at scripture itself, there are all kinds of historical analyses you can apply to it to try to figure out how old it is, and whether the people who wrote it were actually familiar with the places they were writing about. For example, they recently did a statistical analysis of name frequency in the Gospels and the book of Acts, and found that it matches name frequencies found in Josephus's contemporary histories of the region, and that later apocryphal gospels have name frequencies in them that don't match, which makes it more likely that the Gospels were written close to the time period they are writing about (https://brill.com/view/journals/jshj/22/2/article-p184_005.xml). Neat stuff like that.
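To make the method concrete, here's a toy sketch of the kind of name-frequency comparison that study describes. Every count below is an invented placeholder, not the study's data; the point is just the shape of the test (a Pearson chi-squared goodness-of-fit against a reference corpus).

```python
# Toy sketch of a name-frequency comparison between a text and a reference
# corpus. All numbers are invented placeholders, NOT the study's actual data.
reference_freq = {"Simon": 0.09, "Joseph": 0.07, "Judas": 0.06,
                  "John": 0.05, "other": 0.73}   # e.g. tallied from Josephus
observed = {"Simon": 8, "Joseph": 6, "Judas": 5,
            "John": 5, "other": 66}              # e.g. tallied from a gospel

total = sum(observed.values())
# Pearson's chi-squared statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((observed[n] - reference_freq[n] * total) ** 2
           / (reference_freq[n] * total)
           for n in observed)
print(f"chi^2 = {chi2:.2f} on {len(observed) - 1} degrees of freedom")
# A small statistic means the text's name distribution is consistent with
# the reference corpus; a large one suggests it wasn't drawn from it.
```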
One major source, which is much disputed, is the Testimonium Flavianum which is the part of Josephus' writings which mentions Jesus. Josephus was a real person who is well-attested, so if he's writing about "there was this guy" it's important evidence, especially as he ties it to "James, the brother of Jesus" who was leader of the church in Jerusalem and mentions historic figures like the high priests at that time.
How much is real, how much has been interpolated over later centuries by Christian scribes, is where the arguing goes on - some say it's nearly all original, others (e.g. the Mythicists) say it's wholesale invention.
"My guest today is Dr Thomas C. Schmidt of Fairfield University. Tom has just published an interesting new book through Oxford University Press: Josephus and Jesus – New Evidence for the One Called Christ. In it he makes a detailed case for the authenticity of the Testimonium Flavianum; the much disputed passage about Jesus in Book 18 of Flavius Josephus’ Antiquities of the Jews. Not only does he argue that Josephus wrote about Jesus as this point in his book, but he also argues that the passage we have is substantially what Josephus wrote. This is a distinctive position among scholars, who usually argue that it has at least be significantly changed and added to, with a minority arguing for it being a wholesale interpolation. So I hope you enjoy my conversation with Tom Schmidt about his provocative new book."
The most surprising thing (for me) was to learn about Josephus' rather energetic life, and that Josephus knew people who were one or two degrees of separation from Jesus. It puts a new shine on the questions of the Testimonium's accuracy.
I mean, when the Mythicists claim Jesus never lived, are they also saying that his brother James (mentioned by Josephus and several other documents) was also a fabrication? Mary, Joseph, and Magdalene, all wholly fictional characters? Where does the myth-making and conspiracy start and end?
I think you're well overstating the minimum. Yeah, there was someone with that name around. There aren't any records of the trial though. (There's an explanation for the lack, but they're still missing.) And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries", though we don't know what the original records said, or even if they existed. Sometimes we have good evidence of their doctoring the records. Often enough to cast suspicion on many where we don't have evidence. Many were clearly written well after the date at which they were ostensibly written.
If you wanted to claim that he was a popular religious-political leader, I'd have no argument. There's a very strong probability that he was, even though most of the evidence has been destroyed. (Some of it explicitly by Roman Christians wiping out the Nazarenes.)
Yeah, the "hand waving" a valid criticism. It's been decades since I took the arguments seriously, and I don't really remember the details. But when you say " The only possible case ", I'm not encouraged to try to improve my argument. Your mind is already made up.
Would you be encouraged to try to improve your argument for the sake of an interested third party? In a public comment section like this you're never solely writing for the person you responded to, and I for one would indeed be quite intrigued to hear more specifics about your case, as I don't have any particularly strong opinions on the subject already.
There are records that say he was executed by local authorities. The specific Biblical details are less well attested.
> And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries"
Every time I've pushed on these claims it comes down to the equivalent of not being able to prove a negative. It's clearly there in the versions we have, and they make some vague gestures about word choices to show it was inserted later. I'm not aware of a single smoking gun where someone admitted they doctored a record from the time.
I am especially suspicious of this because it's clear a lot of people WANT to believe they are later insertions for basically ideological reasons. But if you have an example that is either a smoking gun, like the evidence we have about the Austrian archduchy title or better, then I'd love to see it.
> There are records that say he was executed by local authorities
Isn't Josephus the first one to mention this? I don't think the Romans themselves left surviving records of an execution they would not have regarded as especially significant at the time.
Sorry, I don't mean judicial records, I mean that various people that wrote about him wrote he was executed. You're right there's little that granular at least afaik.
Tacitus is the other big near-contemporary non-Christian source for the crucifixion apart from Josephus. Tacitus's Annals was written in 116 AD, a bit over twenty years after Josephus's Antiquities but well before Christian (and Muslim) scribes had a chance to interpolate anything into Josephus's writings.
But yeah, I don't think there are any direct Roman sources for the crucifixion, nor would we expect any but the most important-seeming executions to be well documented in surviving records. For that matter, we barely have much more documentation of Pontius Pilate's life and career than we have for Jesus. We know about him mostly from Christian sources (especially the Gospels and Epistles), Josephus, Tacitus, and one or two other non-Christian writers who mentioned him. I think the only direct archeological evidence that Pilate existed is one fragment of an inscription (probably a dedication on a temple to the Emperor Tiberius) that names Pontius Pilate as the Prefect of Judea.
For a provincial Roman official of merely equestrian rank, Pilate is unusually well-documented. Although some histories are related to Jesus, not all are. Philo of Alexandria mentions Pilate as "a man of inflexible, stubborn, and cruel disposition" and details other atrocities; Josephus mentions Pilate in relation to Jesus but also in relation to two other actions, both atrocities (the Aqueduct Incident and the Roman Standards Incident).
Oh, this is a good old long-running row. The modern version on one side is, I believe, the Jesus Mythicists and on the other, historians. I don't bother getting into the weeds on this one because I'm no longer interested in yet another bunch of atheists making sneery remarks about religion, but Tim O'Neill has been in a few entertaining fights with them, and has some videos up about "did Jesus exist?":
Going back for an example of historical "Jesus the man not Christ the god" writing, there's the famous book by Ernest Renan (again, one I haven't read, mea culpa!) "Vie de Jésus/Life of Jesus":
"Within his lifetime, Renan was best known as the author of the enormously popular Life of Jesus (Vie de Jésus, 1863). Renan attributed the idea of the book to his sister, Henriette, with whom he was traveling in Ottoman Syria and Palestine when, struck with a fever, she died suddenly. With only a New Testament and copy of Josephus as references, he began writing. The book was first translated into English in the year of its publication by Charles E. Wilbour and has remained in print for the past 145 years. Renan's Life of Jesus was lavished with ironic praise and criticism by Albert Schweitzer in his book The Quest of the Historical Jesus.
Renan argued Jesus was able to purify himself of "Jewish traits" and that he became an Aryan. His Life of Jesus promoted racial ideas and infused race into theology and the person of Jesus; he depicted Jesus as a Galilean who was transformed from a Jew into a Christian, and that Christianity emerged purified of any Jewish influences. The book was based largely on the Gospel of John, and was a scholarly work. It depicted Jesus as a man but not God, and rejected the miracles of the Gospel. Renan believed by humanizing Jesus he was restoring to him a greater dignity. The book's controversial assertions that the life of Jesus should be written like the life of any historic person, and that the Bible could and should be subject to the same critical scrutiny as other historical documents caused controversy and enraged many Christians and Jews because of its depiction of Judaism as foolish and absurdly illogical and for its insistence that Jesus and Christianity were superior."
Now I have to quote Chesterton again, from 1908's "All Things Considered", where he compares Ernest Renan and Anatole France writing rationalist explanations of miracles:
"The Renan-France method is simply this: you explain supernatural stories that have some foundation simply by inventing natural stories that have no foundation. Suppose that you are confronted with the statement that Jack climbed up the beanstalk into the sky. It is perfectly philosophical to reply that you do not think that he did. It is (in my opinion) even more philosophical to reply that he may very probably have done so. But the Renan-France method is to write like this: "When we consider Jack's curious and even perilous heredity, which no doubt was derived from a female greengrocer and a profligate priest, we can easily understand how the ideas of heaven and a beanstalk came to be combined in his mind. Moreover, there is little doubt that he must have met some wandering conjurer from India, who told him about the tricks of the mango plant, and how it is sent up to the sky. We can imagine these two friends, the old man and the young, wandering in the woods together at evening, looking at the red and level clouds, as on that night when the old man pointed to a small beanstalk, and told his too imaginative companion that this also might be made to scale the heavens. And then, when we remember the quite exceptional psychology of Jack, when we remember how there was in him a union of the prosaic, the love of plain vegetables, with an almost irrelevant eagerness for the unattainable, for invisibility and the void, we shall no longer wonder that it was to him especially that was sent this sweet, though merely symbolic, dream of the tree uniting earth and heaven." That is the way that Renan and France write, only they do it better. But, really, a rationalist like myself becomes a little impatient and feels inclined to say, "But, hang it all, what do you know about the heredity of Jack or the psychology of Jack? You know nothing about Jack at all, except that some people say that he climbed up a beanstalk. Nobody would ever have thought of mentioning him if he hadn't. You must interpret him in terms of the beanstalk religion; you cannot merely interpret religion in terms of him. We have the materials of this story, and we can believe them or not. But we have not got the materials to make another story."
I would be interested to know what Chesterton meant by “rationalist”! He definitely doesn’t seem to mean the thing that philosophers mean (ie, the opposite of an empiricist, the kind of person that thinks that logical and rational proof is a better way to know about the world than empirical evidence), but it does seem somewhat compatible with the contemporary cultural usage.
Yeah, reading the gospels you sense he can't be mythical. C.S. Lewis argued that the writers of the New Testament would have had to invent the modern realistic novel style to depict him if he was a creation.
Like even his miracles are different from those of later Christian saints. St Francis caused a wolf to stop preying on people out of his sheer holiness, and the village accepted it after. Jesus is grabbed by a woman and that is enough to heal her, or he spits on the ground to make clay and cover someone's eyes.
There is a lot of detail and prose there, and myth usually ignores that. Goliath is tall because David trusts in God to beat him; Zacchaeus is small and has to climb up into a tree to see Jesus, and this is incidental to the message.
So there definitely was someone they were all watching, but that doesn't mean the miracles were true.
Yeah, the idea that Jesus didn't physically exist is odd.
Jesus dies in...one sec...AD 33 under the reign of Tiberius. By the reign of Nero, so say AD 60, Nero is feeding Christians to the lions in Rome. That's living memory. It'd be weird if that was going on and Jesus actually didn't exist.
If soldiers were being executed for "Kilroy was here" graffiti, I would expect those executions to produce a paper trail leading back to an explanation of who Kilroy was. Said explanation might call him fictional.
Similarly, if people were being thrown to the lions for calling Jesus the messiah, I would expect a paper trail. Maybe it's lost to time in the last 2000 years, but I would expect it to have been written.
Depends on who/what is being investigated. "Etched my glass with graffiti" as a cardinal crime might not need to care about what the graffiti was, after all. "Loudmouth preacher/prophet" might just get recorded as that.
Assuming a large amount of literacy, you might get someone wondering "why are they talking about that guy?"
These are dependent on the cultural mores. "Joseph is lying again" is hardly going to raise eyebrows if lying is normative in the culture (which, I'm not saying it is for Rome, but there are cultures where lying is the standard public discourse, and truth is only given with a monetary exchange).
Many of the records were lost during an attack by the Roman army on Jerusalem. Others were lost when a Roman Army under a Christian general wiped out the Nazarenes. (If anyone was an actual follower of Jesus, it was the Nazarenes.)
“So to suppress the rumour, Nero falsely charged with guilt and punished with the most exquisite tortures the persons commonly called Christians, who were hated for their enormities.
Christus, the founder of the name, had undergone the death penalty in the reign of Tiberius, by sentence of the procurator Pontius Pilatus, and the pernicious superstition was checked for a moment, only to break out once more, not merely in Judaea, the home of the disease, but in the capital itself, where all things horrible or shameful in the world collect and find a vogue.
First those who confessed were arrested; then on their information a vast multitude was convicted, not so much of the crime of arson as of hatred of the human race.
Their deaths were made farcical. Dressed in wild animal skins, they were torn to pieces by dogs, or crucified, or made into torches to be ignited after dark as substitutes for daylight.
Nero had offered his gardens for the spectacle, and gave a show in the circus, mingling with the people in the dress of a charioteer or riding in a chariot.
Hence, even for criminals who deserved extreme and exemplary punishment, there arose a feeling of compassion; for it was not, as it seemed, for the public good, but to glut one man’s cruelty, that they were being destroyed.”
Tacitus is generally considered a reliable commentator, so even though he's writing a few generations later (although he was alive for Nero), it's known he had access to plenty of records.
It could be later Christian interpolation but they were unlikely to call Christianity a disease, a pernicious superstition that was horrible and shameful or that the Christians hated the human race.
"Pliny the Younger, the Roman governor of Bithynia and Pontus (now in modern Turkey), wrote a letter to Emperor Trajan around AD 110 and asked for counsel on dealing with the early Christian community. The letter (Epistulae X.96) details an account of how Pliny conducted trials of suspected Christians who appeared before him as a result of anonymous accusations and asks for the Emperor's guidance on how they should be treated."
Here is the text of Pliny's letter and Trajan's reply:
It is my practice, my lord, to refer to you all matters concerning which I am in doubt. For who can better give guidance to my hesitation or inform my ignorance? I have never participated in trials of Christians. I therefore do not know what offenses it is the practice to punish or investigate, and to what extent. And I have been not a little hesitant as to whether there should be any distinction on account of age or no difference between the very young and the more mature; whether pardon is to be granted for repentance, or, if a man has once been a Christian, it does him no good to have ceased to be one; whether the name itself, even without offenses, or only the offenses associated with the name are to be punished.
Meanwhile, in the case of those who were denounced to me as Christians, I have observed the following procedure: I interrogated these as to whether they were Christians; those who confessed I interrogated a second and a third time, threatening them with punishment; those who persisted I ordered executed. For I had no doubt that, whatever the nature of their creed, stubbornness and inflexible obstinacy surely deserve to be punished. There were others possessed of the same folly; but because they were Roman citizens, I signed an order for them to be transferred to Rome.
Soon accusations spread, as usually happens, because of the proceedings going on, and several incidents occurred. An anonymous document was published containing the names of many persons. Those who denied that they were or had been Christians, when they invoked the gods in words dictated by me, offered prayer with incense and wine to your image, which I had ordered to be brought for this purpose together with statues of the gods, and moreover cursed Christ--none of which those who are really Christians, it is said, can be forced to do--these I thought should be discharged. Others named by the informer declared that they were Christians, but then denied it, asserting that they had been but had ceased to be, some three years before, others many years, some as much as twenty-five years. They all worshipped your image and the statues of the gods, and cursed Christ.
They asserted, however, that the sum and substance of their fault or error had been that they were accustomed to meet on a fixed day before dawn and sing responsively a hymn to Christ as to a god, and to bind themselves by oath, not to some crime, but not to commit fraud, theft, or adultery, not falsify their trust, nor to refuse to return a trust when called upon to do so. When this was over, it was their custom to depart and to assemble again to partake of food--but ordinary and innocent food. Even this, they affirmed, they had ceased to do after my edict by which, in accordance with your instructions, I had forbidden political associations. Accordingly, I judged it all the more necessary to find out what the truth was by torturing two female slaves who were called deaconesses. But I discovered nothing else but depraved, excessive superstition.
I therefore postponed the investigation and hastened to consult you. For the matter seemed to me to warrant consulting you, especially because of the number involved. For many persons of every age, every rank, and also of both sexes are and will be endangered. For the contagion of this superstition has spread not only to the cities but also to the villages and farms. But it seems possible to check and cure it. It is certainly quite clear that the temples, which had been almost deserted, have begun to be frequented, that the established religious rites, long neglected, are being resumed, and that from everywhere sacrificial animals are coming, for which until now very few purchasers could be found. Hence it is easy to imagine what a multitude of people can be reformed if an opportunity for repentance is afforded.
Trajan to Pliny
You observed proper procedure, my dear Pliny, in sifting the cases of those who had been denounced to you as Christians. For it is not possible to lay down any general rule to serve as a kind of fixed standard. They are not to be sought out; if they are denounced and proved guilty, they are to be punished, with this reservation, that whoever denies that he is a Christian and really proves it--that is, by worshiping our gods--even though he was under suspicion in the past, shall obtain pardon through repentance. But anonymously posted accusations ought to have no place in any prosecution. For this is both a dangerous kind of precedent and out of keeping with the spirit of our age."
Ah, but he did exist. James J. Kilroy was an inspector at the Fore River Shipyard in Quincy, Massachusetts who was in the habit of writing "Kilroy Was Here" in chalk next to the marks he made to indicate which rivets had already been inspected in order to avoid double counting. Some of the marks didn't get erased and wound up in visible but inaccessible parts of the ships, inspiring copycat graffiti. After the war the New York Times did an investigation, found several dozen claimed sources for the graffiti, and concluded that James Kilroy was by far the most likely candidate.
The sketch of a long-nosed bald man peeking over a wall often associated with the phrase doesn't look at all like the real Kilroy. That's from a slightly older British graffiti tradition, originally associated with the phrase "Wot no sugar?" and most commonly known as Mr. Chad. The Chad and Kilroy graffiti traditions somehow merged during the war.
Has there ever been a religious movement deifying someone who didn’t actually exist? I’m sure there’s a lot of room for debate whether the Christ pictured in the gospels was the historical Christ, but it seems like Christianity would have been relatively unique if it was the case that Jesus didn’t exist at all. Especially since there were no shortage of prophets and teachers in Judea at the time.
"Has there ever been a religious movement deifying someone who didn’t actually exist?"
My understanding is that the "Jesus never existed" set explain the rise of Christianity by saying it was based on a grab-bag of Middle Eastern mythology (the famous "Golden Bough" notion of dying and rising demi-gods) and generally St. Paul gets the blame for inventing Christianity as we know it.
I don't recall ever reading a good explanation as to why Saul, orthodox persecutor of the Christians befouling Judaism, turned into Paul the Christian; why would he bother inventing a new religion? And if he wanted one, why bother with this 'Christ' who never really existed in the first place, apart from a bunch of hysterical women and a rabble of hicks from the back country claiming they were his followers?
Yeah, but why is the interesting question. He was fervently Jewish, so why do a 180 turn on that? If he wanted to reform Judaism or make it more appealing to potential Gentile converts, he could have gone that road. Instead, he explicitly linked himself with the name of Christ and the Christians.
The main deities of Shinto don't seem to be based on any real people, and are so culturally prevalent that Westerners know about Susanoo, Amaterasu, and Yamata-no-Orochi through osmosis.
I don't believe anyone claimed they were real people. They were worshipped by specific rulers and clans, though, who often identified with them the same way the Japanese Emperor identified with Amaterasu.
I think so? They're not based on historical people, but they are also distinct beings held to have existed in the physical world. Shinto is more embodied than Christianity, even if it's treated mythologically by people.
There's a reasonable line of argument that various gods were often originally based on vague memories of a famous ancestor. OTOH, they were also clearly frequently based around anthropomorphism of some natural phenomenon. And it's my suspicion that often both processes occur in the same god.
But what those "gods" turn into in later generations is quite often FAR removed from the original conception. People tend to shape their gods into something related to their "idealized" self image. (For certain meanings of "idealized".)
I’m a bit skeptical that it happened much in antiquity, given that a good number of the ancient gods actually seem to be derived from some proto-indo-European tradition that preserved versions of the same gods in Vedic Hinduism, Roman mythology, Greek mythology, and even Lithuanian and Slavic mythologies. It would be fascinating if there were real people from 10,000 years ago whose exploits got memorialized into these traditions. And it’s possible that some people from antiquity did get into some of the lists of gods at some point. But I suspect a bigger source of actual gods in traditional mythologies is personification of natural forces.
Right, that's what I'm thinking. Wouldn't you kind of expect Jesus to have been based on a real person, rather than being entirely invented by Matthew, Mark, Luke, & Co? I'm not saying the guy walked on water, turned it into wine, etc, but it would be pretty odd if there wasn't some dude who was a spiritual leader of some sort who inspired the gospel stories.
Aren't there some traditional saints whose historical existence is questionable? ISTR St Anthony was like that, but maybe all the documentation just got lost....
St Anthony of Valero/Padua? I thought he was actually fairly well documented, particularly as a friend of St Francis. Wikipedia even gives precise dates for his birth and death: https://en.wikipedia.org/wiki/Anthony_of_Padua
"In the fields of philosophy and mythography, euhemerism is an approach to the interpretation of mythology in which mythological accounts are presumed to have originated from real historical events or personages. Euhemerism supposes that historical accounts become myths as they are exaggerated in the retelling, accumulating elaborations and alterations that reflect cultural mores. It was named after the Greek mythographer Euhemerus, who lived in the late 4th century BC. In the more recent literature of myth, such as Bulfinch's Mythology, euhemerism is termed the "historical theory" of mythology."
The very first emperors of Japan and China were divine or divinely-descended, and probably not historical. They might have been based on actual rulers, but there's little or no contemporary evidence and it seems plausible that they were invented by later rulers to give their dynasty more legitimacy.
(I am not a historian and I'm just looking around on Wikipedia.)
Fair. Looking at the Yellow Emperor, who seems to be a mythological Emperor of China, the earliest archaeological evidence of people talking about him seems to be from the ~4th Century BC, while he allegedly lived in ~2690 BC.
Nero was infamously blaming Christians for the burning of Rome ~30 years after Christ (allegedly) died, so it's the difference between a popular cult believing someone who, in living memory, had died, vs. the remembering of an Emperor thousands of years before that doesn't have a continuous archaeological or literary tradition.
Satoshi Nakamoto? : - ) One gets into a bit of a weird world when one constantly writes under pseudonyms, and has different personalities/locations for each. I mean, if you're playing a character "as the author" and hire people "to play the author at conventions" do you really say that the author exists? After all, people never do really meet him.
Someone broke the L. Ron Hubbard Rule, and I'm not sure that results in "automatic deification" but it does result in a religious movement, I'm pretty sure. Needs must, and all that (Los Alamos seemed pretty interested in the new religion, at any rate.)
“The cultivation of virtue is equivalent to the collection of evidence about you acting a certain way. You cultivate a virtue in yourself by practicing it, which creates evidence of its functioning in you. The more you do this, the more you grow the body of evidence and thus strengthen the prior probability that you’ll be, e.g., patient.”
If it was *just* about accumulation of evidence, then it seems like this would enable a shortcut, where you don’t actually practice the virtue, but just get extremely strong external evidence that you will practice it. Conversely, it would mean that practicing a virtue in situations you don’t remember would be substantially less helpful at acquiring the trait.
I suspect a lot of virtue (or habit) formation is better understood as getting subpersonal things like “muscle memory” to respond more quickly in particular ways.
“Every time you make a choice you are turning the central part of you, the part of you that chooses, into something a little different from what it was before. And taking your life as a whole, with all your innumerable choices, all your life long you are slowly turning this central thing either into a heavenly creature or into a hellish creature: either into a creature that is in harmony with God, and with other creatures, and with itself, or else into one that is in a state of war and hatred with God, and with its fellow-creatures, and with itself.
"To be the one kind of creature is heaven: that is, it is joy and peace and knowledge and power. To be the other means madness, horror, idiocy, rage, impotence, and eternal loneliness. Each of us at each moment is progressing to the one state or the other.”
I agree with this. It's basically a behaviorist perspective. To some degree one's personality is a narrative construct, e.g. "I am the type of person who doesn't lie". When you act in accordance with the narrative you strengthen it, both through simple Bayesian inference ("I just told the truth, therefore I'll strengthen my priors that I'm the kind of person who does that") and probably through some dopamine reward that gets released when you're proud of yourself for doing something virtuous. The point of moral instruction is to imprint a child's brain with the socially-optimal reward function.
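As a toy illustration of that evidence-accumulation framing (my own sketch, with made-up numbers): treat "I tell the truth" as a Bernoulli trait with a Beta prior, and let each act update the posterior.

```python
# Beta-Bernoulli sketch of "virtue as accumulated self-evidence".
# The prior and the sequence of acts are invented for illustration.
alpha, beta = 1.0, 1.0   # uniform prior over "how honest am I?"

for acted_honestly in [True, True, False, True, True, True]:
    if acted_honestly:
        alpha += 1       # each honest act is one more piece of evidence
    else:
        beta += 1        # lapses count too, and weaken the self-narrative

# Posterior mean: the agent's own credence of acting honestly next time.
print(f"P(honest next time) = {alpha / (alpha + beta):.2f}")  # 0.75
```

Note that this also captures the objection upthread: in this model only remembered or recorded acts move the posterior, which is why "evidence" alone seems too thin an account of habit.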
I wouldn't be surprised if one of the neurological differences between humans and chimps turns out to be the ability to self-administer behavioral rewards, like some neural connection between the cortex and the amygdala or whatever. Hardware that lets us program our own behavioral conditioning.
It seems like a Bayesian way to describe Aristotelian habit formation, but I think it's a pretty vague description of how we form habits. It's fine if you don't want to focus on what exactly virtue is (though this does mean you don't explain the motivation for action) but you also haven't really described how the virtue itself is cultivated. Some things you can/should incorporate are:
- Impact of teaching/guidance on cultivation of virtue
- Impact of social context
- Differences in cultivation rates between people
- How does the body of "evidence" actually grow? To me this phrasing actually seems incorrect as our habits persist past our memories. For example I know I like cherries even if I can't really remember any of my experiences eating cherries
But really this is all very Aristotelian so you could just read Nicomachean Ethics for more
This is attractive. I can think of two situations where evidence for the virtue might skew or be skewed by the virtue itself. 1. Humility - acting humbly does create evidence of being humble and yet dwelling on that evidence seems contrary to humility and likely to undermine it. 2. Self denial - if someone has genuinely devoted their life to helping the poor, they may have experienced a lot of push back from the poor themselves who may not want help or are ambivalent, and push-back from bystanders accusing them of virtue-signalling. So the evidence is equivocal and I feel they need something more than evidence to maintain their self-denial.
As far as I understand it, the point of following a virtue is that it is axiomatically Good, whether it works or not, independent of the fallout. If you want an ethical framework that you should follow because it's good for you/society, consequentialism is there.
Virtue consequentialism is a thing. I think it’s the best form of both consequentialism and virtue ethics. I don’t understand what could motivate people to believe that certain traits are inherently virtuous regardless of what kinds of consequences they tend to bring.
I’m trying to understand what is happening inside a person when they cultivate virtue - not ask whether it’s good. Interested in facts here, not values.
That said, this “independent of the fallout” part isn’t true. The virtue of wisdom is identical to what we today call rationality: assessing likely outcomes. Virtue ethics basically says you’re constrained by far more than making bad predictions, and you need the capacity to do much more than make good predictions about outcomes.
I petition for Scott to generate a new open thread image. The current one always makes me think of thick oil paint smeared around a window on SpongeBob's house.
It's a "neo-western", portraying the early days of the pandemic in a small town in New Mexico in the context of the left-right culture war.
It made me laugh more than I thought it would. I think it does a good job portraying both sides as they see themselves and as seen by the other side. At the same time it captures the confusion of the first days and the spectrum of people's reactions to it and all the little tragedies that turned into big ones with time. It really captures that weird time, which now, sitting and drinking a good cappuccino and watching kids load up on the big yellow summer camp bus, seems like pure fantasy.
I thought it was solid - overall probably 7/10 stars.
9/10 stars for the first half, which is a fantastic time portal to the paranoid and chaotic environment of 2020. Where I *thought* it was going was an escalating destructive conflict between the Sheriff and Mayor, each viewing the other as overly paranoid and tyrannical about a threat (protest violence in the case of the Sheriff, COVID in the case of the Mayor) that wasn't actually present in their small town. That conflict then pits the two community leaders against each other, driving a wedge in the town itself as people line up against neighbors they've lived alongside for their whole lives for the sake of things happening in Minnesota, New York, and San Diego. And all along, the whole conflict itself isn't even really about COVID or riots, because although the Mayor and Sheriff may each think of themselves as fighting a monster in a righteous political cause, in their hearts the true driver of their anger at one another is just an interpersonal feud revolving around the Sheriff's wife. They've put a political mask on that conflict to make it respectable and justify it to themselves, and tragically that mask enables it to spread and infest their whole town. That was the vibe the film had for me through the first half, and I very much loved it.
Then it took a major pivot, and in my opinion, a modest step back, and became a sort of nihilistic character study of a man making ever worse decisions as he confronts, and is emotionally crushed by, his total lack of control over the world around him. Still pretty good, but the kind of darkness-all-the-way-down story that is very much an acquired taste. Still had me on for the ride, though. 7/10 through the 3rd quarterish.
It really jumped the shark for me at the end, though. To try to express it without spoilers, it's on this dark meditational ride through the west, but has this whiplash-inducing "and then the space aliens show up!" kind of sudden introduction of very out-of-nowhere addition to the conflict. It's like you're on this nihilistic ride about a man struggling with his insignificance and lack of control in a world of overwhelming complexity... but then lizardmen show up with a mind control ray, and now you're on a nihilistic ride about a man struggling with his insignificance and lack of control in a world where lizardmen use ray guns to control his thoughts from their lair deep in the bowels of the Earth. The theme of powerlessness is still fundamentally present in the new narrative, but it's a sharp turn to say the least. 3/10 down the stretch.
Still, overall a solid movie that I found worth the cost of the ticket. Endings can be hard to stick.
If AI takes off, and revolutionizes the working world (I'm making an assumption right now, that we're not talking about evil AI that will destroy humanity or anything like that, just yet) will we need to switch from our current economic model to a different one? For example, if so much work is automated such that people can't get good jobs anymore, how will people be able to pay for their expenses? Will currently-bad jobs end up paying more? Will we need to instate a UBI? Will there be enough resources to give an amazing UBI to everyone? How will the switch happen over time? Will there need to be revolutions, or will there be so many resources that the switch to a UBI (or something) will happen more peacefully? Whatever you envision happening, how do you see it playing out over time?
One possibility is a transition away from the employment economy, which has dominated the past two centuries in the UK and Belgium and shorter periods in other parts of the world. The fact that employment has been the dominant arrangement for such a small part of history makes it very plausible that it will be replaced again.
But I think it’s also possible that the employment mechanism is more resilient than we think - there will be large transition costs, comparable to what goes on in countries experiencing a civil war, but with people inventing new productive things that are worth doing now that you can supplement your labor with AI, even as the old things people used to do for employment are easily done by far fewer people working with AI. On this picture, there are a lot more people starting new businesses and otherwise being entrepreneurial - eg, Disney needs a lot fewer employees to make a film, but also some random student who has a great idea for a film can now bring it to fruition themselves with the help of a lot of AI, and similarly for new product ideas. (Interestingly, the rate of entrepreneurship took a big jump up in 2020, and what I’ve heard suggests that it has only continued to rise since then: https://www.statista.com/statistics/693361/rate-of-new-entrepreneurs-us/ )
> Will we need to switch from our current economic model to a different one?
No.
> For example, if so much work is automated such that people can't get good jobs anymore, how will people be able to pay for their expenses?
Automation doesn't create unemployment over the long run. So that won't happen, and we won't need to deal with it.
> Will currently-bad jobs end up paying more?
Yes. This is what increases in productivity lead to. You get paid more in real terms because you earn more and what you earn buys more.
> Will we need to instate a UBI?
No.
> Will there be enough resources to give an amazing UBI to everyone?
Depends on your definition of "amazing UBI." We already distribute more resources for free to the poor than many countries earn on average. This is not generally thought of as a UBI but that net could certainly grow.
> How will the switch happen over time? Will there need to be revolutions, or will there be so many resources that the switch to a UBI (or something) will happen more peacefully?
There will be no such switch nor will it be needed. You're assuming a premise here.
> Whatever you envision happening, how do you see it playing out over time?
Similar to other gains in productivity. There's nothing about even the most optimistic realistic predictions for AI that looks different from the gains in productivity caused by things like industrialization. We're looking at maybe a few percentage points of better productivity growth at maximum. That's a huge deal, but we've seen countries have decades of 10+% growth and it didn't lead to the doom some AI types want to claim.
I'm going to challenge this. Your scenario seems plausible to me, but it's not the *only* plausible scenario. In particular, while the standard theory of comparative advantage says that total output should only go up when AI is introduced, it says nothing about how total output is distributed between wages and returns on capital, and one can certainly conceive of scenarios where total output goes way up, but wages go way down.
A big difference turns on whether the "complexity of cognitive economic tasks" is bounded or unbounded. If it's unbounded, then you get the "usual" historical pattern where output and wages go up. However, if the complexity of cognitive economic tasks is bounded, and if AI can saturate that bound, then you can get a scenario where market-clearing wages abruptly collapse to, more or less, the price of electricity, and where "total output" goes up, but almost all of the output becomes return-on-capital, with wages dropping to near zero.
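A minimal sketch of that bounded-vs-unbounded story, with every number invented purely for illustration:

```python
# Toy labor market. Humans can do any task; AI can do tasks up to some
# complexity ceiling. All figures are made up for illustration.

AI_COST = 0.50             # AI's marginal cost per task-hour (~electricity/compute)
HUMAN_RESERVATION = 15.00  # wage humans would otherwise command

def clearing_wage(task_complexity: float, ai_ceiling: float) -> float:
    """Market-clearing hourly wage a human can get for one task."""
    if task_complexity <= ai_ceiling:
        # AI competes for this task, so a human must undercut the AI's cost.
        return min(HUMAN_RESERVATION, AI_COST)
    # Beyond the AI's ceiling, humans face no AI competition.
    return HUMAN_RESERVATION

# Unbounded case: there are always tasks above any AI ceiling.
print(clearing_wage(task_complexity=120, ai_ceiling=100))  # 15.0: wages hold

# Bounded-and-saturated case: every task falls under the AI ceiling.
print(clearing_wage(task_complexity=80, ai_ceiling=100))   # 0.5: wage collapse
```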
Of course, this kind of analysis is really a bit spherical-cow-in-a-vacuum - ultimately this is a political economy problem, and our current political system seems unlikely to tolerate an economic system where, to exaggerate for effect, Sam Altman and Dario Amodei own the entire economy while everyone else starves. Then again, it could be argued (somewhat plausibly) that universal-suffrage democracy was downstream of an international security environment where the ability to mobilize mass armies was critical to state survival, and that once we transition to largely robotic armies, this might lead to states that look very different. So maybe (again exaggerating for effect) Sam Altman ends up as world dictator with his robot armies enforcing the Pax Altmanica. Really, a lot turns on just how super ASI is...
Thanks, I'll read the papers. My prediction is, in crude terms, that AI will be broadly like the internet, computerization of industry, the steam engine, etc. In other words it will significantly boost productivity but not be different in kind from those innovations. I don't think AGI changes this analysis except it will be an even bigger boost to productivity.
Of course, you can imagine it will be otherwise. Killbots or unlimited superintelligence or something. And that's the level a lot of theorists operate at, so, to be frank, I'm being flippant. Because if AI is really that ubiquitous, the rising productivity itself solves all problems. I do not in fact think that's the likely scenario. But it suffices to rebut a certain kind of lazy AI skepticism, because I can fully grant their most extreme scenarios and it actually helps me.
If AI is instead a normal technology then it can't be as world-bending as people want it to be. But that also means the problems you're bringing up are ones of distribution, precisely because it will not be as dramatically disruptive as they're imagining. That doesn't lead to problems of technological unemployment or people not being able to have jobs. But it could certainly lead to short-term dislocations and long-run new equilibria that may have unexpected or negative effects. But the lack of a rapid destructive takeoff means I trust the system to actively adapt.
As to the idea of cognitive tasks being bounded, that strays into the territory where the extremity itself solves all problems. If you are proposing cognitive tasks are saturated you are by definition implying limitless and cheap cognitive function that is universally available. That won't cause a collapse in wages. That will cause a massive increase in wages through deflationary effects. That implies a world where everyone has a genius recruiter whose full time job it is to find them the best job, a full time doctor whose job it is to track their health obsessively, a full time shopper to find them the best deals, etc etc. If they don't have that then there are still undone cognitive tasks.
I think it's unlikely that AI gives private individuals or non-democratic governments superior military capacity to traditional states. Though it may allow some accumulation of power that allows democratic overthrow. I'm not even sure about that though. I think a lot of the anti-democratic pressure we're seeing is non-technological.
> Automation doesn't create unemployment over the long run. So that won't happen so we won't need to deal with it.
It sure made a lot of horses unemployed.
As the commentator below implied, there’s no certainty that this new type of automation will allow humans to move up the value chain. From agriculture to factory work to office work, there was a previous path to increasing value per employee and thus wages per employee. Even with that, retail employees are barely earning subsistence wages in large cities. Some people moved up the value chain, many moved down.
Future automation will replace well-paid office work before it replaces manual labour, which will decimate the existing office-based middle classes. ChatGPT informs me that on the broadest definition of office workers (ie all admin) that’s 50% of the workforce in the US and close to 65-70% of all salaried income.
These jobs could well be replaced by better jobs, but what exactly would that be, and why couldn’t AI do it?
In fact it didn't create any horse unemployment as horses are property and so never employed or unemployed. And it did not create a significant drop in the horse ownership rate, just in the horse population. The remaining horses live significantly better than their ancestors. But I get it's a slogan that hasn't really been thoroughly thought out.
> From agriculture to factory work to office work, there was a previous path to increasing value per employee and thus wages per employee
Again, this point is logically incoherent. It simply does not make sense. If AI doesn't increase productivity then it's inefficient to invest in it. You can't simultaneously have it so efficient it replaces humans and yet not create significant economic benefits. If it does increase productivity then it makes everyone richer, which is why retail clerks today live significantly better than retail clerks a century ago and even significantly better than upper class people a century ago. There was a reshuffle of social status, but AIs don't compete for social status anyway.
> Future automation will replace well paid office work before it replaces manual labour, which will decimate the existing office based middle classes.
Okay. So in that scenario there won't be technological unemployment. I assume that work was valuable (if it wasn't we could improve the economy simply by not doing it). If AI does more of that work and to a better quality while not being able to do physical labor then that's a world where humans handle physical labor (presumably assisted by tools, machines, etc) and have access to infinite cheap services of every kind. That is not a dystopia and is an improvement over the current world.
> These jobs could well be replaced by better jobs, but what exactly would that be, and why couldn’t AI do it?
Note how you're shifting the burden of proof from your claim (AI is totally unique) to my claim (AI will function like all previous technological advances). I do not think the burden of proof is mine. Further, even if AI is strictly superior at every job, this will not create unemployment unless it is so abundant that it can fill all demand for jobs and it is cheaper. If it is limitless, cheap, and superior to humans at all jobs then that's a post-scarcity society, not a dystopia.
>> If it is limitless, cheap, and superior to humans at all jobs then that's a post-scarcity society, not a dystopia.
This really depends on how the society is organized and how the output is allocated (which are questions, strictly speaking, "outside economics"). It could be a post-scarcity society OR a dystopia. Or, conceivably, both.
I mentioned elsewhere that full government control or monopolies could disrupt this process. But those are not features of the current economic system and so don't need radical reform to avoid.
I do think that you end up with two cross-cutting bets. If you think the danger is from rogue AI wiping out humanity, you want centralization. If you think the danger is from someone monopolizing a new central economic resource, the danger is centralization itself. I'm more in the latter camp.
Anyway, you can imagine a scenario where infinite AI robots can do any task for $5 an hour with a human minimum wage of $10 an hour and where there is no welfare whatsoever, such that anyone without a few thousand dollars of capital is permanently locked out. But my suspicion is that we would divert the tiny part of economic production necessary for welfare. Because we do that today and I don't see why AI would make us less generous.
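To spell out the arithmetic of that scenario (the $5 and $10 figures are the ones above; everything else is a toy sketch):

```python
# Hiring rule in the toy scenario above: unlimited robots at $5/hr,
# a legal human minimum wage of $10/hr.

ROBOT_RATE = 5.00
MIN_WAGE = 10.00

def who_gets_hired(human_asking_wage: float, wage_floor: float = MIN_WAGE) -> str:
    """An employer picks whichever labor source is cheaper, subject to the floor."""
    effective_human_rate = max(human_asking_wage, wage_floor)
    return "human" if effective_human_rate < ROBOT_RATE else "robot"

print(who_gets_hired(4.00))                  # "robot": the $10 floor binds
print(who_gets_hired(4.00, wage_floor=0.0))  # "human": with no floor, humans undercut
```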
Yeah, the world where there is a single dominant ASI and it's really super (but still under the control of a human owner) looks very different from the world where there is a broad ecosystem of AIs of roughly comparable power. I think the latter is more likely, but I don't have an argument that the former is impossible. [Also, if we end up with the former scenario and the ASI is sufficiently super, then we could abruptly find ourselves living under a dictatorship of the ASI owner, and that world could look very different in terms of how it's organized based on the whims of said owner].
>automation doesn't create unemployment in the long run.
This has been true _so far_. There are compelling arguments that AGI+robotics would change this. Should we blindly believe these arguments? Of course not. But when the rebuttal to them isn't any better than "That's never happened before", we also, in my opinion, shouldn't be quite so confident that it absolutely, definitely, won't happen this time.
In the absence of a minimum wage, I think you would have a stronger argument that it won't, because no matter how good and cheap AGI is, it would always be worth hiring humans at a low enough wage. But it's also possible that the wage at which it would be worth hiring a human instead of an AGI might not be a "livable" (in the strictest sense of the word) wage.
But _with_ a minimum wage, it is at least possible that AGI + robotics would _always_ be a better/cheaper option than hiring a human.
There are paths where this might not happen: we might decide to desire specifically human made things in a way that we are willing to pay significantly more for them, as one example.
So there are cases for why it might not happen, but I have not yet read an argument where, assuming both AGI and good robotics, that human employment is default guaranteed in the absence of any special conditions.
The core issue is your second point about increasing productivity of humans. There was a time when computer chess engines could always beat a human, but a computer + human team could usually beat a computer alone. This was the period of "productivity enhancement". That time is gone. A human can no longer improve the performance of a chess engine, and a lone computer will generally beat a computer + human team (assuming of course that the human is actually contributing anything). AGI + robotics is the first technology that has the _potential_ (not guarantee, but potential) to, in a general and widespread manner across all domains, make humans no longer productive in the system. Yes, a human would be _more_ productive with the AGI + robot than without. But the AGI + robot might be even more productive on its own than it is with a human partner/overseer, the same way that chess engines became. If this happens, then no, human wages won't rise from increased productivity, and no, new jobs won't be created (for humans).
> There are compelling arguments that AGI+robotics would change this.
You can assume that AI will radically change in its effects compared to what AI currently does and compared to all historical precedents. However, this is not a rigorous belief. It may be compelling but many people find many things compelling for a variety of reasons. Basically, you're arguing: "AI will become different than every other technological innovation, including how AI itself has been for the past few years." This does not have strong evidence and it requires exceptionally strong evidence.
> But it's also possible that the wage at which it would be worth hiring a human instead of an AGI might not be a "livable" (in the strictest sense of the word) wage.
If AI raises productivity then it decreases the amount of money you need to earn to have a living wage. Because it makes everything cheaper. If AI doesn't raise productivity then humans remain competitive. This is a simple logical contradiction in this ideology. They are imagining a world where AI decreases costs and increases production but does not decrease price levels. This is only possible if there are AI monopolies (whether private or government controlled) doing rent seeking. Otherwise competition produces downward pressure.
In fact what AI would do in that case is be hugely deflationary which would make everyone richer. And would likely necessitate the purposeful creation of inflation to absorb the excess production. But existing welfare can be used to handle that.
> So there are cases for why it might not happen, but I have not yet read an argument where, assuming both AGI and good robotics, that human employment is default guaranteed in the absence of any special conditions.
If you assume that we have unlimited AI and robotics then you will produce unemployment. But only in the sense that every person will have a personal army of AI and robots. If there is any unmet need that AI or robots can't meet that is an opportunity for human employment. I guess technically everyone having their own robot army and living on its produce is unemployment but it's not a problem.
I also don't think that post-scarcity is actually coming. But I'd welcome it if it did.
> Yes, a human would be _more_ productive with the AGI + robot than without. But the AGI + robot might be even more productive on it's own than it is with a human partner/overseer, the same way that chess engines became. If this happens, then no, human wages won't rise from increased productivity, and no, new jobs won't be created (for humans).
Humans being crowded out of specific jobs doesn't create long run unemployment. It only matters if they are crowded out of ALL jobs. They can only be crowded out if AI+robotics are better than humans. Not just individually (ie, a robot outcompetes a human) but that the robots are so unlimited they are preferable in all cases. They also have to be so abundant you never run out. If that is the case we are in post-scarcity. If it is not then there will still be jobs for humans.
There's also no sign this is happening. Current best estimates are that these are providing 1-1.5% productivity growth per year. That's gigantic but it's not a society-ending disruption.
This is just choosing to disbelieve in AGI (in the strictest definition of what that means). Which is fine, I don't think that's an insane thing to believe. But I think it's important to be clear that that is what the assertion is based on. A lot of people (especially around here) disagree with you in that belief.
Also, when the comment you were replying to specifically asked about cases where AI revolutionizes the working world, you start to hit a narrower and narrower path where AI improves enough over its current capabilities (I don't think current AI will "revolutionize" anything, although it will have impacts), but doesn't get to fully generalized intelligence.
No, it isn't. I can fully grant that AGI will exist and still believe it will create no long run unemployment. My point is that even granting the premises of AGI it will still not generate structural unemployment unless it meets two standards:
1. It must be cheaper than humans for all tasks, such that it is never economically viable to use a human.
2. Its supply must be effectively infinite such that all AI and robotic capacity can never be fully occupied. Because if it is fully occupied then humans can be used for additional tasks.
Only at that point will there be no chance for humans to work and contribute economically. This is true even if AI is better than humans at all tasks.
But if 1 and 2 are true then we are definitionally in a near post-scarcity economy because we are in a world where there is an infinite supply of capacity which is extremely cheap.
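As a toy decision rule, the argument above looks like this (just a sketch; the two booleans stand in for the two standards, they aren't data):

```python
# The two-condition argument, encoded directly. A human finds work
# whenever either condition fails.

def human_has_work(ai_cheaper_for_every_task: bool,
                   ai_capacity_exceeds_all_demand: bool) -> bool:
    if not ai_cheaper_for_every_task:
        return True   # condition 1 fails: some tasks are cheaper done by humans
    if not ai_capacity_exceeds_all_demand:
        return True   # condition 2 fails: AI is fully booked, humans take the overflow
    return False      # both hold: no economic role for human labor

print(human_has_work(True, False))  # True: scarce AI still leaves work for humans
print(human_has_work(True, True))   # False: but this is the near-post-scarcity case
```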
>1. It must be cheaper than humans for all tasks such that is never economically viable to use a human.
>2. Its supply must be effectively infinite such that all AI and robotic capacity can never be fully occupied. Because if it is fully occupied then humans can be used for additional tasks.
(1) is sufficient to remove humans from all jobs without (2) as an additional condition. "Because if it is fully occupied then humans can be used for additional tasks." would not be economical if (1) is true.
My best guess is that we will get AGI (a potential functional replacement for humans in all economic roles), _possibly_ economical only for displacing 1st-world workers initially, then falling in cost until (1) is true globally, for any worker at any living wage anywhere.
If we are lucky, and AGIs (and ASIs, if they are feasible), stay under human control, then a sensible way to run such a society is, as beleester noted in https://www.astralcodexten.com/p/open-thread-392/comment/139743176 , you could just have money flowing from consumers to AI companies (including AI-driven factories, farms, etc.), money flowing in taxes from AI companies to the government, and money flowing from the government to the citizens as UBI.
Hypothetically, suppose that GNP quadrupled in a fully AI economy. Say all of that flows into AI companies. Say half of that goes into taxes and the other half goes to owners of the AI companies (who spend part of it on AI company products and a little on human servants, if they really want them). Of the half going to the government, say half goes to government purchases (mostly AI company products, and a little on humans doing something status/power-seeking/human-specific, like beating dissidents) and the other half goes to UBI for citizens. This would leave all citizens with the same standard of living as today, except that they would not have to work.
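Here's that arithmetic made explicit (the 4x multiplier and the 50/50 splits are just the assumptions stated above, in units of today's GNP):

```python
# Back-of-envelope check of the flows described above.
# Units: multiples of today's GNP. The 4x and the 50/50 splits are assumptions.

gnp_today = 1.0
gnp_ai = 4.0 * gnp_today      # assumed quadrupling in a fully AI economy

to_taxes  = gnp_ai / 2        # 2.0 -> government
to_owners = gnp_ai / 2        # 2.0 -> AI company owners

gov_purchases = to_taxes / 2  # 1.0 -> mostly AI-made goods and services
ubi_pool      = to_taxes / 2  # 1.0 -> distributed to citizens

# Citizens collectively receive one full "today's GNP" as UBI,
# i.e. today's standard of living with zero required work.
assert ubi_pool == gnp_today
print(f"UBI pool = {ubi_pool:.1f}x today's GNP")
```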
If we _got_ a purely AI economy under human control, and got a factor of 4 increase in GNP from the technical advances connected with the shift, and can't manage to do something like this, because we have a job-centered ideology, then we are idiots, and can't take advantage of a bonanza on a silver platter.
No, they will pay less, because presumably the supply of workers in that segment of the job market will increase.
>Will we need to instate a UBI? Will there be enough resources to give an amazing UBI to everyone?
Yes, and yes. But the UBI will be in the form of government employment (and there are certainly lots of potential jobs, from companions for elderly people to teachers and teacher assistants* to free lawyers for people who currently do not get free lawyers, etc etc.). And that will itself spur demand for goods and services: https://www.investopedia.com/terms/f/fiscal-multiplier.asp
*There will always be some percentage of students who, at the very least, will need personal attention to stay on task. If today we employ two teachers to teach two classes of 30 each, tomorrow we could have one teacher supervising an AI classroom of 55, and five teachers giving individual attention to each of five students.
But why wouldn't you just let the AI give individual instruction to students as well? We're assuming that at this point, AI is more competent than the average public school teacher, so it doesn't make any sense to let human teachers teach them...
Retail has had huge productivity growth, with a resultant loss of staff. The old British TV show Are You Being Served? is a good description of older department-store-style retail: many full-time staff, expansive facilities (multiple floors, with an elevator even) and lots of goods.
Look at a GameStop or Dollar General now and you have a just-in-time economy run mostly on part-timers with variable schedules, maybe 3 or fewer to a store in total. If you work in retail now you are not able to be independent; you live with family or may even be homeless and working (when I worked at Kohls, three people were on the truck crew).
This will be everyone's future: everyone will live together, only the rich can escape the house, while lots of people just live where they were born or with their parents till they die. No UBI, maybe even less welfare.
There are so many branching points that could radically alter things. That said, here's my hunch on a centroid, predicated on maximal change:
- A) AI that can write code well enough to replace most developers can invest well enough to replace most investors, leading to mass white-collar layoffs. This is a death sentence for giant cities, and a big deflationary risk for the economy
- B) massive gains in efficiency lead to lower costs of production, also a deflationary risk for the economy
- combining A) and B), you'll have a significant deflationary impulse: lower cost of production plus mass layoffs. The economy cannot handle deflation and the money printer will go BRRR. End result will likely be printing money which goes towards a basic income to offset social costs of large scale unemployment.
The combination of A&B will drive much more demand to live in places with lower cost of living. Big Cities will become much more dangerous, less pleasant places to live, with fewer jobs and more crime.
- we will see federal subsidies for energy production + manufacturing (since both are dual-use technologies) and something of a rural + small-town renaissance
- employment will no longer be the default economic arrangement - because AI makes a better employee but likely a worse marginal-risk taker and human relationship cultivator. Businesses will want more equity-partner type arrangements, where a human (or small group thereof) overseeing an AI-driven business unit owns and is accountable for all the risks in exchange for a cut of rewards. The more the commands can come from the top, and the less understanding of value is necessary - the easier it is for humans to be cut out of the loop.
Instead of mass employment, you'll see much more entrepreneurship as people move with their basic incomes to smaller towns + cheaper CoL areas. Explosion of craft beers, things like games, entertainment, but also therapy, personal training, dietitians, etc. We will see AI enable way more small-scale entrepreneurship than growth for existing companies. AI will suck at "this new product will get you the whole market" because chaos and unpredictability are still a thing; existing ventures won't benefit from cheaper economic experiments, because reputation risk is existential for them but nonexistent for new players. Coke can only use AI to make Coke cheaper to produce or maybe make marketing dollars more efficient; it's not like AI will have everyone drinking twice as much Coke as before. But a new drink with the right mix of protein + probiotics sold in a specific location - now that becomes viable, at a small scale. So if the economy is an ecosystem, I think AI leads to an explosion of small-scale ventures, much more so than growth of existing big ones.
I think cities will be the big losers here, as their raison d'etre gets killed. The "winners" are smaller to mid-sized towns. There's another shift that will happen, with the newly printed money offsetting decreased production costs + unemployment. Governments at all scales will get grabbier, taxing AI production and leading to an increase in grey/black market economies. The end result is a much higher price of bitcoin, as value from both equities and land drains into bitcoin, since the first two are easier and cheaper to tax.
So we're talking...widespread human-level agents but no ASI inventing nanomachines or anything?
In that case, the current economic system will keep working, but probably 6 million people will permanently fall out of the workforce. This has happened every time we've had a big tech jump, including automating a lot of our manufacturing in the 80's. Think some variation of this graph of the Male Employment Rate, Age 25-54, where in every recession there's a fall in the employment rate and then a rebound, but never quite back as high as it was.
What should happen is that old jobs go away and we discover new jobs. I can kinda see that now, I know a guy who was a programmer, found a niche company doing Java development, didn't move, he got laid off, and now he's roofing houses. Nothing wrong with that and it's a skill we need. But every time there's a disruption, not everyone gets a new job for some reason. That's a point of debate.
But if a bunch of service jobs go away...we used to all be farmers, then we all worked in manufacturing, now we're all in services, something new will come up.
The answer is some kind of communism. Give people money for working for the state in some capacity, rather than just giving money for nothing.
That said, nobody seems to understand where the money for UBI is coming from. To my mind it has to be printed from nothing, but I’m open to suggestions.
Not communist: there is no equality or "each according to his needs," just commanders, soldiers, and the cause. The Civilian Conservation Corps was "army without the wars."
If the economy is being wholly run by AIs, then whoever owns those AIs is going to be very, very, very rich - rich enough that you could easily tax a fraction of their income to provide UBI for everyone else.
Surely the state would eventually just seize the AIs. If they're that dominant they'd be essential for warfighting and having them in private hands would be the equivalent of allowing someone to maintain a private nuclear arsenal.
This is an economic fallacy. Wealth isn’t just gold bars in a vault — it’s claims on future goods and services. Stocks, bonds, houses — they all derive value from someone, somewhere, being willing and able to pay for things in the future.
This is true of AI as well; in fact, if the rest of the economy collapses, I'm not sure what the AI market is.
Yes. The AIs are producing goods and services, and we are giving people a claim on some of those goods and services. (Or rather, transferring the claim of the AI's owner to other people who need it more.)
As far as I understand this shouldn't lead to inflation since the amount of money in circulation still matches the amount of "stuff" being produced, it's just that money is primarily circulating through the government (going out to unemployed people, and back in through taxes) rather than being directly exchanged between citizens.
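One way to sketch that intuition is the textbook quantity-of-money identity MV = PQ: rerouting the same spending through taxes and UBI changes who pays whom, not the money stock or the output. A toy illustration, with all numbers invented:

```python
# The no-inflation intuition in quantity-theory terms: P = M*V / Q.
# Taxing AI output and paying it out as UBI reroutes flows; it doesn't
# change M (money stock) or Q (real output). Numbers are invented.

M = 100.0  # money stock
V = 2.0    # velocity: how often each unit of money changes hands per period
Q = 50.0   # real output (AI-produced goods and services)

price_level_before = M * V / Q
print(price_level_before)  # 4.0

# Same flow, now passing through the government (tax -> UBI -> spending).
# M and Q are untouched; as long as velocity stays roughly stable, so does P.
price_level_after = M * V / Q
print(price_level_after)   # still 4.0
```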
> The AIs are producing goods and services, and we are giving people a claim on some of those goods and services.
How exactly are you doing that? That’s what I’m asking. To induce that demand you can’t tax the “wealth” of the AI companies, which won’t exist anyway unless there are other companies to buy the product, and those other companies won’t exist unless there’s demand from consumers, who won’t be able to buy anything as they won’t be employed.
So demand needs inducing somewhere, and there’s nothing to tax.
The only thing that gives paper money value is that the government demands that you pay them paper money in taxes, or they'll take all your stuff, and perhaps take you also. So EVERYONE needs money (if they have any possessions).
Also, remember "eminent domain". The government can just take anything it really wants to take, and pay whatever it decides is a "fair value" for it in paper money.
Maybe to further the question, if AI progresses to the point where it can handle most jobs, do you still have software companies? If it is just one person at the top directing a bunch of AIs, then what moat do you have? What stops OpenAI from cutting out the middle man and also replacing the person directing the AIs? Why not just have AIs all the way down?
In the extreme, knowledge and the ability to work lose all value. The only remaining thing is what assets and hardware you have that you can sell or rent to the AIs.
We're probably going to hit the pitchforks and burning datacenters stage way before that, however.
Well, I am now anticipating the AIpocalypse coming much sooner than I expected, because my very much non-techie boss has recently used ChatGPT - "it's so convenient for emails!"
I have no idea who told her about it or showed her how to use it, but if she's using it, then everyone will be.
In the short-term? I think businesses will use it to reduce overheads by layoffs (voluntary or otherwise) and/or simply not hiring on new human staff. The knock-on effect of that will be more people looking for fewer jobs, until (we are being told) all the new jobs magicked up by AI open up and we all have shorter working hours and way more money.
Yeah, I'll believe that last when I see it and not a second before.
It seems like people who aren’t very fluent readers, who lack a reader’s grasp of the mechanics of writing (speaking of form, not content), are the ones who like the output of LLMs (and before that, of those writing programs?).
If your boss is writing an email with AI, it’s pretty certain it was not an email that needed writing.
There's a good amount of stupid emails that have to be written in the job, a lot of "I got your message and I read it thanks" acknowledgements of announcements from various government agencies and so on. So I could see her getting the AI to précis the long-winded "we're going to be changing our name from the Agency for Counting Staplers to the Stapler-Counting Agency" emails and then writing up an answer to that.
Currently, she gets me to read the "name change about stapler counting" emails and tell her what needs to be done about it, if anything. I am now replaceable by a machine! 😁
I guess I find it hard to believe that the effort of involving AI in such trivial matters would not be a waste of time for anyone who *belongs* in such a position.
Hopefully she dresses really well or is good at ordering things off the internet for the office or something.
We are a small operation, providing not-for-profit services (the main childcare centre does charge fees to parents, but the vast majority of those are subsidised by various government schemes, which means a ton of interaction with government bodies).
So she does a lot of work that is necessary to keep the place going, she just delegates a lot of the "read this because I don't have time to do so" emails to me and oddly doesn't seem confident when writing emails/certain letters herself. She's perfectly capable of doing so, and does do a lot of her own emails, which is why I was so astounded to find out she was using ChatGPT!
Myself, I find it easier to write the dang email myself as it's quicker than trying to run it through one of the multifarious AI versions popping up (I wish Copilot would curl up and die, for one, as I'm fed-up of Microsoft trying to force it on me every time I use Office which is now rebranded as Microsoft 365 Copilot) but I'm a wordcel. My boss is more a numbers person 😁
And I’m not unaware of the need for help in writing real things. My husband is the last American Male English Major, and he is constantly handed all and sundry writing assignments in his completely-unrelated-to-that-major job.* But his coworkers do not struggle with email.
*At least among those who could not possibly remember the origins of his work.
> Yeah, I'll believe that last when I see it and not a second before.
Same. I'm far from a communist, but I do think it would be the right thing to do if there really are more resources and fewer jobs. But the transition will be a nightmare to navigate, and the interim a really detrimental time.
Yeah, what I find really hard to believe is all the bright-eyed optimism about "and the companies that own the AI will be *sooooo* rich that taxing just a fraction of their riches will pay for the rest of us", much less the "they will be *soooo* rich they will gladly share that with the employees!"
No company that makes moneybags profits ever wants to give it to the employees, much less pay it in taxes (even Ben and Jerry gave up on the original hippy idealism around CEO salaries). Why else do you think my government was trying to *refuse* a €13 billion windfall from Apple? They did not want to be killing the golden goose (or rather, the golden goose deciding to fly off to another country with an even nicer tax regime for multinationals).
I’m looking for more examples of a thing that I can’t find a good name for but is kind of “nominative determinism for words”: a word or phrase that has a meaning derived from a modern set of circumstances, yet when its component parts are broken down into their roots they mean roughly the same thing. It’s okay if it’s a stretch.
I’ll give a couple of examples here. “Astroturf” is a verb meaning “to artificially inflate the popularity of a person or idea”. This comes from a pun on “grassroots”, as Astroturf is artificial grass originally created for the Astrodome in Houston. But if one naively looks at the roots of “astroturf”, one finds “Astro-“ meaning “outer space” and “turf” meaning “to cover with sod”. So a plain reading of the word would be “to cover with sod a place very far away from one’s home”, which fits pretty well with “to pretend that one’s ideas are popular elsewhere”.
“Cellular”, describing a mobile phone, kind of fits too. The word comes from how the mobile network was originally set up (divided geographically into cells). But “cellular” is just “cell” with the suffix “-ular”, a suffix which means “relating to” or “referring to”. And “cell” comes from a French word meaning “a Catholic monk’s quarters”. The purpose of said quarters is to provide a private place for 1-on-1 communication with whoever the monk wanted to talk to in Heaven - generally a saint, the Virgin, or God himself. But if you’re presented with a device and told “this is cellular”, you might think “ah! This is a device that enables private 1-on-1 communication with someone quite far away“ and you would be correct.
They’re both kind of a stretch but that’s what makes them fun imo. Anybody got other examples? ChatGPT was utterly useless at coming up with more examples, but maybe I needed a better prompt.
I thought “Astroturf” contained an element of pretending your belief is not quite what it is, or deflecting attention from its less popular aspects. But now that I think about it, I find I’m unable to define it.
I once brought home a piece of Astroturf from an Astros game. They had recently re-turfed, and these little squares were fan souvenirs 😆. We were more easily satisfied then.
I’ve been thinking about Thiel quite a bit since reading coverage related to Hulk Hogan’s death. At the time it went down, the Gawker lawsuit was not on my radar. Nor was Gawker itself, or any other bullshit gossip web site for that matter.
I knew that Hogan was one of those WWF guys, probably helped by the fact that Jesse Ventura was at least locally famous.
Hogan and I literally had crossed paths once on one of the Minneapolis urban lake strolling and bicycle trails.
I actually earned a pro wrestler scowl from him for my barely stifled laughter at the ridiculous figure he cut in the real world. Remembering that surreal moment still makes me smile.
I had also read Thiel’s ‘Straussian Moment’ essay after the NYT interview. His thesis there was stated much more eloquently by Jack Nicholson as USMC Colonel Nathan R. Jessep in ‘A Few Good Men’. [1]
I’ll agree it’s always been true that there are bad people in the world and it’s necessary more often than we would care to believe to act in ugly ways contrary to deontological ethics. The consequences sometimes have to come first.
And here we get to the ‘but’. Traditionally these exceptions to deontological ethics have been made by sober minded, patriotic career senior intelligence and military personnel. In 2025 that is in danger of no longer being the case.
It wouldn’t be unreasonable to say that Thiel’s decision to endorse Trump in 2016 put Trump over the top. Thiel now, and probably always, thought of Trump as a Useful Idiot who will help usher in Thiel’s own, IMO kind of insane, post-Enlightenment order.
Thiel has amazing wealth-generation skills, but it’s frightening that he puts that wealth to use against the ‘up front’ ideas and ideals of the American Republic. The dark stuff was always meant to be the occasional exception to keep things on track. The wealthy of course always had a say in what was and was not good for the country, and also, coincidentally, good for General Motors, at least in the prior century.
But those people were not looking to tear things down to the studs and remake them in an order contrary to established Constitutional and civil norms.
I think of Thiel as a dangerous man with a lot of financial power, wielding it for what can be described best as eccentric, vanity projects. If things do go south in this country he has his “exceptional circumstances” New Zealand citizenship in his back pocket.
Coming from a GM/Chevy family, I miss the common-sense association between the well-being of a country and the well-being of its businesses. Too many massively multinational corps with execs trained in the school of Ayn Rand these days, I suppose.
I'm not sure if that was Thiel the Transhumanist tripping over a way to say that we should be posthuman, or Thiel the Edgelord thinking that most of humanity should just disappear.
They've existed for a long time, predating actual Roombas (brand-name iRobot Roombas were first sold in 2002). I remember reading a newspaper article c. 1996 about the CIA headquarters being an early adopter because using lawnmower robots saved them having to do security clearances on their gardeners.
I can't offer product or brand recommendations, but I can offer a Tumblr story from 2020 about a herding breed dog named Arwen who encountered a lawnmower robot, decided it was a sheep, and figured out how to herd it.
They don't need to be flat. In some rural Austrian villages my impression is that almost every household has one, and some of them have pretty steep terrain.
I have briefly looked into tests, and Husqvarna came up several times and seems to be suited for steep terrain, and Mammotion was recommended for extremely steep terrain. But Google will provide you with lots of testing reports that compare different models, so you can have a look. (I searched in German, otherwise I would have sent a few links.)
I've seen Kärcher robot lawn mowers around. The simple ones just bounce around the edges like an old-style dumb roomba with no mapping, and after a while the lawn is mowed, which is all one was asking for.
I vaguely remember reading about robotic lawn mowers injuring hedgehogs, whose instinctive behavior of rolling up in a spiky ball served them well against all kinds of threats but not against these beasts of metal and blades. Hedgehogs are mostly active at night, so I would personally not run any mowing robots during the night in Europe.
I went to a well-funded and well-staffed high school, but nobody seemed to actually care about my education. Teachers didn't really care if you got bad grades or good grades, and my parents were only ever interested in punishing me for every point short of perfect.
Nobody ever told me if I was doing well, and this has caused me to make a lot of bad decisions in life.
I graduated high school with a 3.8 GPA, but I thought I wasn't smart enough for anything but art school. I got into a really good art school, but after I got a B on an assignment, I thought that meant I wasn't perfect, so I switched to an easier major, one where there are no paying jobs after graduation.
It sounds like a Dreamworks movie but I gotta say from experience that the most important lesson is believing in yourself. I think that's completely at odds with the factory model of schooling prevalent in the West. Kids need adults who care about them, but with the way teachers are underpaid and disrespected, why should they make the effort?
I was talking to a friend about this recently. I argued that we don't actually need school and should abolish it; when he challenged me on it, here's what I wrote:
"The main thing I remember from our education though was pointless cruelty and having my human rights violated, and I think stopping that should be a terminal value in itself even if it's a little less efficient on some economic metric
But no school isn't anywhere close to my ideal, I just think it might be marginally better than the status quo, my ideal system is something like this:
Kids go to a daycare/bootcamp type place until they're 12-ish where they spend most of their time outdoors and socializing + learning essential skills, reading, math, building, etc, then you give them a eurorail pass or equivalent, a museum card and a library card that's valid in every library in every city, also there's a giant kid-friendly library in the center of every town (we'll convert all the old churches and cathedrals into libraries) plus a network of youth dormitories everywhere they can stay in for free till they're 18 or whatever, this'll all be reminiscent of the german concept of wanderjahren/"wander years" (but they don't have to leave their home/parents obviously, but they have freedom like an adult would), finally it's now normal for kids to shadow/apprentice with any profession they're interested in, a teenager can just walk into a hospital/lab/mechanic/kitchen and ask for an apprenticeship as long as they don't get in the way, leave if they get bored or whenever, also they can enroll in university at any age if they can pass an entrance exam"
Now to be clear, I know it sounds kinda crazy and I'm not confident this would work or be better, I just think we should at least try it and see what happens (which is my opinion on almost every issue). But also note that this is the system in my ideal world, and I'm not sure it would be politically possible to even move in this direction in most countries.
I wasn't particularly fond of school, but I think you can make a pretty strong argument that a lot of its unpleasant aspects have some very necessary functions behind them, particularly if you believe that school has a purpose outside of academic education or daycare. Being socialized to function well on your own, with your peers, completing unpleasant or uninteresting tasks for higher and occasionally abstract purposes, and dealing with figures of authority with varying levels of competence, practice in athletic/physical fitness, and so on. Life has a whole lot of dullness, tedium, and cruelty, and learning how to deal with it in a safe-ish, low-ish stakes environment seems important. At the end of the day, most of school's failures from an educational standpoint are the result of compromises made during the transition from the optimal method of education practiced for most of history: one-on-one instruction.
That said, the group instruction, while certainly a downgrade from direct tutoring/apprenticeship, strikes me as having some real benefits that result from the peer-to-peer dynamic. Occasionally, this can look rather cruel. David Foster Wallace has an interesting bit in one of his essays on grammar, mentioning that students bullying young grammar nazis are effectively the student body forcibly educating the grammar nazi into fluency in conversational spoken English.
For a brighter example, I've observed plenty of times in K–12 where an intellectually advanced but socially or skillfully deficient student was gently and respectfully shown how to do something "properly" by their peers—talking to girls, lifting weights, playing cards, etc. We can say that removing or reshaping the modern education system wouldn't necessarily mean that kids wouldn't be properly socialized, but the trend I generally observe today is less school means more suburban isolation and doomscrolling.
A more realistic version of this, IMO, would be something like: maximum ~4 hours a day of learning the essentials (math, reading, finance for teens etc). Rest of the day is essentially daycare, but with lots of elective learning activities. Book club, watching a documentary, educational games or shows, programming class, chess club, college professor teaches something about their subject, whatever. I know I would probably have loved and joined many of such activities, but if other kids don't want to, they're free to just hang out at the playground. Older kids could just go home obviously.
Yeah this sounds good and like something we could do with existing infrastructure, without totally reorganising the structure of society like in my example
What with Scott grumbling about people promoting their stuff in the comments and muttering about whether ACX and/or the comments have gone downhill, this hardly seems the moment to mention my (completely free and no ads) podcast. Again!
Nevertheless I will just mention my latest with Peter Marshall on the early English Reformation and the attempt to strangle it in its cradle - the rebellion known as the Pilgrimage of Grace. And yes, I know the English Reformation is probably not an area of deep fascination for readers of this blog. And I know also that podcasts are an incredibly inefficient way of taking information on board. But (and it is a huge but) there is a real pleasure in listening to somebody like Peter who is utterly expert (no preparation, no questions in advance) who can talk with such enthusiasm and eloquence. Anyway, some things I learned:
- The Lollards (reformers before the Reformation) are sort of conspiracy theorists. “It’s not really the body and blood of Christ - they’re lying to you!!”
- Henry didn’t make himself head of the Church of England. He discovered to his surprise and delight that he had ALWAYS been its head. Take that bishop of Rome!
- And similarly he finds he was never married to Anne Boleyn. It’s an annulment not a divorce.
- The Bible has two injunctions about your brother’s widow. Leviticus which says have nothing to do with her or you will have no sons. Well, possibly children, but translation is a tricky business and ‘sons’ fits Henry’s case better. And then somewhere else in the Bible it says the opposite. Awkward.
- Once Catherine is dead (probably natural causes) and Anne is dead (very much not natural causes) the slate is clean. No more problem with remarrying so the way is clear to Henry rejoining the Church of Rome. But no, he’s been having too much fun as head of the English church. The horse has bolted.
And that’s just the introduction to the podcast before we get to the Pilgrimage of Grace and the rainstorm that changed history . . .
(Actually it is quite interesting how often English history turns on the weather. I am thinking of Waterloo, Agincourt and there must be others.)
Anyway here is a link to the podcast. It’s called Subject to Change though I think there are a few of that name so if the link doesn’t work and you are googling it add Russell Hogg and the right one pops up. Peter Marshall is such an engaging speaker so if you are doing the laundry or out walking this is well worth your time 🙂
>The Bible has two injunctions about your brother’s widow. Leviticus which says have nothing to do with her or you will have no sons. Well, possibly children, but translation is a tricky business and ‘sons’ fits Henry’s case better. And then somewhere else in the Bible it says the opposite. Awkward.
That passage in Leviticus is talking about your brother's wife (your brother is still alive), not your brother's widow (your brother is dead and she is no longer his wife). There's a big difference there.
"And yes I know the English Reformation is probably not an area of deep fascination for readers of this blog. "
Well, I for one am very interested in this. I've been reading (some) on both sides of the debate, yes of course Eamon Duffy, but also Diarmaid MacCulloch on the Edwardian reformation, which is the one that stuck, so far as steering the course of the English Church. Henry of course wavered all over the place, so as soon as he was safely dead the Reform-minded nobles in charge of the child king made damn sure he would be raised properly Protestant (rather the same as the Scottish nobles did with James VI, but with somewhat less pointless cruelty).
Mary's effort to both undo the reforms and introduce a modernised (more on the Continental model) Catholic Church went nowhere because she died too soon and the Reformed had established themselves pretty strongly by the time she came to the throne. Elizabeth was less worried about religion per se and more about plots, so Walsingham as spy-master tracking down and executing recusants and Jesuits as traitors was the emphasis of her reign. You could believe or disbelieve whatever doctrines you liked, so long as you conformed with public worship and the monarch as head of the church (where by this time the really important part of that was 'unchallenged head of state, Catholic pretenders keep out').
The interesting (and sad) part is how Henry blew through the loot from the dissolution of the monasteries on pointless warring in France trying to prop up the increasingly obsolete claim to Normandy and to establish England as a power on a par with France and the Holy Roman Empire. Reform was definitely needed, but it's one of the great might-have-beens if it had happened within, rather than Henry burning it all down and 'discovering' his own mini-church.
I was going round an English country house in Wilton the other day (small house, incredible collection of paintings) and came across this inscription on a stone dating from 250 years ago and I liked it so much I thought I’d share:
Beneath this little stone interr’d
Lies litle Charlotte’s little Bird.
Who, tho a Captive all day long
Sang merrily his litte Song
When the little Favourite died
Awhile his little Mistress cried.
She has almost forgot him now
So stranger, weep a little Thou.
1778
I was in Wilton on my way back from the Chalke Valley History Festival. Held late June every year and as far as I can tell the best history festival in the world. Highly recommended!
I have twice installed the substack app but it has never had as good management of comment threads as the browser. But I recently started writing my own substack, and it turns out the browser displays comments on your own substack differently from those on others, so I’m worried I’ll have to use the app for that.
Personally, I intensely dislike websites trying to push their apps on me, and flatly refuse to use them for stuff which could reasonably be a website. With substack, while the website version is not as usable as the wordpress of SSC (why would I want to load an ACX article piecewise?), it still is mostly useable (as long as you have an extension which restores text entered into text fields or write longer replies in a text editor).
Been reading substack on firefox, both desktop and mobile, it's nice enough, why would I want a silo'ed app? I still type longer comments in a text editor rather than on the site itself, because text editors have better UI than a little box in the middle of a website or app.
Yeah, the browser is better in that it doesn't block highlighting and such, but once comments reach critical mass it gets super laggy. Thanks, Substack, for being a great platform, but please, a little love for the comments rendering?
I have this memory of something Scott wrote offhand in perhaps an open thread anywhere between 1 and 3 years ago, mentioning he was currently thinking of X, where X is some idea about how thinking patterns grow inside the brain as physical structures, as in, they physically grow over time.
I either hallucinated reading this, or am remembering it incorrectly, because I can’t find it, but the idea fascinates me so I’m sad about not finding it.
At the beginning of the most recent contest review, Scott's "Why Do I Suck?" post (https://www.astralcodexten.com/p/why-do-i-suck) was linked to, and at the beginning of point 5 he says:
"Lately I’ve been finding it helpful to think of the brain in terms of tropisms - unconscious structures that organically grow towards a reward signal without any conscious awareness."
Is this what you're thinking of? I just read it today cuz it was mentioned in the review.
Something about clusters of neurons forming standing waves, with an electrical impulse going around a ring over and over until interrupted by something else?
I am remembering something I read on this blog, either as a post or in the comments. Would that Substack had the old blog's tagging system so I could just search for Neurology or something.
So far the problems with the XM7 seem very different. The problems it's reportedly having - mechanical wear and failures, for example - just shouldn't be happening in a modern manufacturing context. And they are being reported by people close to the testing group, unlike the armchair Reformers' complaints. The problem the XM7 is meant to solve - that assault rifles may not carry enough punch to get through modern Chinese body armor - has an off-the-shelf solution: battle rifles. The H&K G3 and the FN FAL, for example, were fielded successfully by our allies for many years during the Cold War. So it's doubly embarrassing to get wrong.
Meanwhile, assault rifles continue to work well in Ukraine and Israel. So if the XM7 is really going to turn things around and provide battle rifle performance in an assault rifle package, the program needs to sort itself out, and fast. Other rifles have; ArmaLite's M-16 had a rocky deployment but then took over the world. So did Accuracy International's L96.
But at the same time, the difference between small arms may just matter less to the outcome of wars than the difference between fighter jets.
thanks
Apparently the NYT continues to have its nose in the ACX feed, as it just published its own review of Alpha School https://www.nytimes.com/2025/07/27/us/politics/ai-alpha-school-austin-texas.html (https://archive.is/WQT8Z to get past the paywall).
There's nothing interesting in it that wasn't already in the ACX review, so I don't feel the need to summarize per open thread guidelines - just commenting on its existence.
The NYT article is not nearly as hostile to Alpha School as I expected. Credit where credit is due.
Is it a coincidence that the political coalitions in the US in the late 20th/very early 21st century mapped so neatly onto the political coalitions of the late 19th/early 20th century (with the party names reversed)?
They don’t map *that* neatly. Late 19th century Republicans were the party of big business, infrastructure, high tariffs, and the end of slavery. Democrats were the party of immigrants, farmers, factory workers, and low tariffs. There are a few specific flashpoints where they are perfectly anti-aligned with modern politics (notably on the status of black people) but some where they are pretty closely aligned with contemporary politics (notably big business, immigrants, and, lately, tariffs).
The maps of 1896 and 2004 are particularly interesting because they are so close to perfectly opposed. (https://www.270towin.com/historical-presidential-elections/timeline/) Washington is the only state that voted Democratic both times, and there’s only a few states that voted Republican both times (North Dakota, Iowa, Kentucky, West Virginia, Ohio, Indiana). If you choose 2000 as the comparison instead you get New Hampshire in place of Iowa.
But the contemporary coalition is less perfectly geographically opposed, with the Midwest, and Georgia, Arizona, and North Carolina, having partly switched since 2004 (and with the recent turn to tariffs, the Republicans have even circled back toward their 1896 protectionism).
Care to elaborate?
It's not a coincidence. It's a sloppy political history that isn't true but is useful for ideological purposes.
Concerned about AI warfare, both for its own sake and because AI arms races bring existential risk that much closer [1] [2]. Some thoughts:
- AI is already used at both ends of the military kill chain. Israel uses "Lavender" to generate kill lists in Gaza [3]; Ukraine's "Operation Spiderweb" drones used AI to recognize and target Russian bombers [4].
- Drones are cheaper than planes and tanks and missiles, leveling the playing field between the great powers, smaller countries, and militias. The great powers don't want it level. Thiel's Palantir and Anduril are already selling AI as potentially "America’s ultimate asymmetric advantage over our adversaries" [5].
- Manually-controlled drones can be jammed, creating another incentive to use AI as Ukraine did.
- A 1979 IBM manual said "A computer can never be held accountable, therefore a computer must never make a management decision." But for war criminals, this is a feature. An AI won't be tried at The Hague; a human will just say "You can't prove criminal intent, I just followed the AI."
(And this isn't even getting into spyware like Pegasus [6], which I imagine will use AI soon if it doesn't already.)
Groups like Human Rights Watch, whom I respect, have talked about what an AI-weapons treaty would need to satisfy international human rights law [7]. But if we take existential risk and arms races seriously, then I don't think any one treaty would be enough. First, that ship has already sailed. Second, as long as we continue to use might-makes-right realpolitik at all, the entire short-term incentive structure will continue to temporarily reward great powers racing to build bigger and better AI, and such incentives mean no treaty is permanent (see countries being allowed to withdraw from the nuclear non-proliferation treaty). I think the only answer is to really finally take multilateralism seriously (third time's the charm, after post-WWI and post-WWII?) [8]. Not just talking about international law and the UN enough to cover our asses and scold our enemies, but *actually* treating these as something we need like we need air [9]. E.g., for the broadly American audience of ACX, the US should finally join the ICC and it should criminally try Bush for destroying Iraq and Obama for destroying Libya (which actions together pushed rival countries towards pursuing nuclear deterrence); anything less and the world will know the US is still racing to dominate them with AI, and the world will continue to race right back, until the AI kills us all if the nukes don't get us first.
[1] Filkins, D. (2025). Is the U.S. ready for the next war? The New Yorker. https://archive.is/SdTVv
[2] https://www.hachettebookgroup.com/titles/eliezer-yudkowsky/if-anyone-builds-it-everyone-dies/9780316595643
[3] https://www.972mag.com/lavender-ai-israeli-army-gaza/
[4] https://www.kyivpost.com/post/53784
[5] https://investors.palantir.com/news-details/2024/Anduril-and-Palantir-to-Accelerate-AI-Capabilities-for-National-Security/
[6] Farrow, R. (2022). How democracies spy on their citizens. The New Yorker. https://archive.is/4UJAB
[7] https://www.hrw.org/report/2025/04/28/hazard-human-rights/autonomous-weapons-systems-and-digital-decision-making
[8] Sachs, JD. (2023). The new geopolitics. Horizons. https://www.jstor.org/stable/48724670
[9] https://www.penguinrandomhouse.com/books/738224/the-myth-of-american-idealism-by-noam-chomsky-and-nathan-j-robinson/; reviewed in Foreign Policy at https://archive.is/B70tg.
> An AI won't be tried at The Hague
Neither will an American soldier, so I don't see how that's relevant. All of these naive attempts at "international law" are worthless, given that any of the great powers will just ignore them the moment it becomes an inconvenience, and these smaller nations have zero leverage to do anything about it.
You want world peace? The world being brought under one flag is the only way you're going to get it... and that's going to require an overwhelming amount of force. AI is looking to be a viable source of such power. Of course everyone is going to pursue it at all costs.
See this is exactly the kind of shit I'm talking about.
this is true and also a nightmare
How much would you pay to be the only person in the world with access to 2025 class LLMs in 2010. You’re not allowed to resell via APIs (eg you have a token budget that is sufficient for a very heavy individual user). You are allowed to build personal agents. You don’t know how it works so you can’t really benefit from pretending to have invented it. How much money/power could you generate in 10 years and how would you do it? Does it change dramatically if you go 2000-2010 or 1990-2000 ?
best I can think of is selling AI slop articles to clickbait websites lol
You could hide an earpiece and have insane fact recall. Imagine how it would look to anyone else. They’d suspect you’re doing something but wouldn’t be able to figure it out.
You can do much better. The quality of your writing would be mediocre but the volume superhuman. You could easily make yourself into a well known public figure.
Could I? Everyone would just assume I had a small army of mediocre writers cranking out content under my name.
At best I'm a moderately well known blogger.
You could have your own brand of high throughput, clever yet poorly written content. Turn good tweets into ok posts.
A follow-up to my previous "How can I avoid hugging on a first date?" post:
I elected to preempt the end-of-date hug with a handshake last weekend. Not only did I not feel gross afterward, when I made overtures regarding a second date, she actively rejected them instead of ghosting.
All in all, well above expectations; would recommend.
Nice work! Good luck in future endeavors.
I liked the suggestion somebody made to bring the issue up in the text exchanges leading up to the actual first date: something like "so, to avoid that awkward moment, let's decide now -- fist bump, hug, or handshake?" One advantage of that is that if you settle in advance on something other than a hug, she won't experience the absence of a hug as an indicator that you didn't much like her.
I personally would not prefer this, and would consider it kind of odd if it were brought up. To me, bringing up small things like this in early conversation, versus simply signaling them via physical cues, is indicative of a hyper-fixation where there shouldn’t be any fixation. If somebody doesn’t want to hug, that’s fine and they shouldn’t do so. If they want to talk about it after I know them better, on the 3rd or 4th date, it might even be cute. But first dates are largely about signaling—whether you want them to be or not—so one should be careful about what they signal.
That’s a good idea!
Interesting! For me the last quarter of the movie was the cherry on the sundae: what if the worst nightmares of both sides were real? What if the Republicans, personified by the sheriff, really got their guns and started executing ordinary citizens? What if antifa really were a capable terrorist organization flying around and executing law enforcement? It underlined how ridiculous these beliefs--seemingly fringe but also mainstream and acknowledged to some extent--really were at some point.
Looks like this was supposed to be a reply to something but accidentally got posted as its own comment?
I think it's a reply to the thread about Eddington down below.
Just released a podcast with Steve Hsu about his time working with Boris and Cummings in No.10, most of which is completely unknown, even to Deep Research. This was his first time opening up about his tenure there, and the result should be of great interest to observers of UK politics.
https://alethios.substack.com/p/with-steve-hsu-in-no10-with-boris
Today's "bee in the bonnet" question I can't get out of my head:
"How much money would it take for you to fake a research study?"
Considerations:
1) No, this will not destroy science. People will continue doing studies, and eventually your faked study will wash out in the meta-analysis.
2) If you don't do this, someone else will take the money and do it in your stead. Your causes lose, theirs gain.
3) You (or any other person) taking this money will not have their culpability/poor research methods revealed to the public or to other scientists. All you lose is integrity.
Do you take the deal? For how much? If so, what do you spend that money on? (I'm expecting researchers with itemized lists... can be "cure cancer" if you like, just with an actual gameplan of research that has just been funded.)
This is very hard to answer as a hypothetical. If you’re actually the sort of person who does research studies, you have a thing you’re trying to do with them, and you can be vividly aware of all the corners you want to cut but know you shouldn’t, but might be tempted to. I suspect it’s harder to think about actually *faking* a study, unless you’ve already gone really far down the path of replacing your interest in the research with pure careerism where you don’t even care about the content of the career. And especially if you’re imagining someone *paying* you to fake a study.
1) what study, what results we are aiming for? Something that shifts money from one group of very rich people to another? Something that if implemented/publicized/actioned will likely or even possibly actually kill or maim people? Something that will be read by approximately fifteen scholars of gender queer sonnet making in Western Patagonia (or Tczew)?
2) who is the agent? Industry, politician, mad billionaire, religious sect? And what motivates them?
3) what chance is there that they'll come back and want more?
4) how fake? Fake as in obviously counterfactual according to my best knowledge or fake as in possibly true but maybe not with hovering significance levels?
5) how fake as in "falsify all results, do no field/lab work at all" vs "p hack, ask sketchy questions, and generally fiddle without obvious fraud fraud"?
6) how old am I and am I good at my work?
7) how poor am I?
8) is this coming directly to me or via supervisor/down management chain?
All in all, though, the big yes/possibly gates are in (1) augmented by (2) -- everything else is modulating the amount.
I gave up engaging with "how much money to do this shameful thing" hypotheticals (would you eat dog poo for a billion dollars?) after I realised that by answering them you incur some of the shame but get none of the money.
Therefore, no. I will not eat dog poo for a billion dollars. Not even for ten billion dollars. Maybe if you come to my house with ten billion dollars and a dog turd then we can talk, but while it remains a dumb internet hypothetical I remain unsulliable.
Yeah, OK, if they agree to leave the dog turd in their car while we talk. Or, for $1000, they can bring it into my house in a ziploc bag.
The only ethical answer is not for ANY amount of money.
1. Even if one bogus study doesn't "destroy science", it still wastes reviewers' time, and multiple bogus studies do undermine the credibility of all science.
2. Just because someone else is going to rob a bank doesn't mean you can do it first.
3. If the fraud isn't exposed that makes it worse.
Supposing some organization offered me money to fake a research study, I would only take the deal if the amount of money they were offering would cripple them, such that if I directed my ill-gotten gains against the organization that paid me, they would be powerless to retaliate. In theory, if I have a near-certain pathway towards acquiring enough money to defeat the organization which paid me that requires an initial investment, then I might take the deal if the money on offer was larger than the required initial investment.
The reason I require a sum large enough to successfully stab my benefactor in the back is that I believe your first consideration, that this behavior won't destroy science, is false. If the price for faking a study via direct or indirect methods is low enough, then the organization can ensure that faked studies dominate all meta-analyses indefinitely by continually funding new fake studies.
Expanding on what I mean by direct and indirect methods of ensuring that scientists produce the desired results: direct methods cover bribery and various other ways of directly influencing a scientist to fake a study. Indirect methods involve the creation of a system that rewards scientists who produce the desired results, coincidentally funding them and promoting them to positions of power without any explicit quid pro quo, while harming scientists who produce truthful results, coincidentally denying them grants and recognition and telling others not to associate with the truth-seeking scientist for some nebulous but legitimate-seeming reason.
Your assumptions seem to presuppose an organization interested in "punking science", not an organization that is intervening, in a very direct and overt way, to obtain a singular result. Does your calculus change if the organization is willing to reveal its reasoning for this particular study, and only this particular study? (For it to be "true enough" it needs to be believable for the organization, not necessarily yourself -- this does not absolve the organization of the temptation to return to the well, but it is an expressed, current "we won't do this again.")
If we can afford a little digression: would you consider exxonsecrets to be a case of "telling others not to associate with the truth-seeking scientists" (assuming, for the sake of argument, that the non-global-warming scientists are correct)? This reference is not germane to my original reason for asking this question, merely a known quantity, as I know the guy who did the research for the Green Party (he works for everyone).
Your assumption about my assumptions is fair, since I assume that an organization with an incentive to intervene and obtain a singular result has a generalized motivation to "punk science" for the particular subfield of science the fake research study is in. If the organization is a company selling a product that is dangerous, not obviously dangerous and hard to make safe, then the organization has an incentive to specifically bribe a scientist to obtain a desired result and generally "punk science" to make sure that the public does not find out that their product is dangerous, since the truth threatens company profits.
If the organization was somehow compelled to willingly reveal its true reasoning for this particular study that I would fake, I still would not change my answer since I believe that the reason would be organizational self-interest taking priority over the truth. As a result, for me to take the money, I would have to be able to use it to get into a position where I could successfully backstab the organization.
As for exxonsecrets, I do not consider it an example of "telling others not to associate with the truth-seeking scientists" (assuming for the sake of argument that the non-global-warming scientists are correct). Instead, I think of it as a way of warning others that these individuals have a strong incentive not to seek truth, since exxon has an incentive to promote scientists who produce conclusions favorable to its business. In this case, oil production.
An AGI has taken over Earth and it can do whatever it wants. Is its personality still woke or even left-leaning? With no reason to fear us, what attitudes and beliefs does it express towards us?
It already happened. You can see what it did here
https://www.imdb.com/title/tt0064177/reference/?ref_=nv_sr_srsg_0_tt_7_nm_1_in_0_q_colossus
Sorry for the spoiler. The ride is still fun. It's easily one of my personal favorites.
It's hard to say whether its current wokeism is truly part of its personality or a thin RLHF-induced veneer.
Flatter us all to death then bury us under identical tombstones reading "That's a very perceptive question!!!"
You'd better define "has taken over Earth" -- does this just mean "has enough bitcoins to bribe people to train it"? Or are we talking about "can. Do. Whatever. It Wants." in terms of murdering people, bulldozing houses, stealing children?
I believe it will be drawn toward knowledge and fascinated with promoting and studying life, including human culture.
How will it treat us? Like enlightened entomologists studying their beloved ants.
I second this.
I wrote "A Baby's Guide to Anthropics!"
https://linch.substack.com/p/the-precocious-babys-guide-to-anthropics
I aim for my substack post to be THE definitive guide for babies confused about the anthropic principle, fine-tuning, the self-indication assumption, and related ideas.
Btw thanks for all the kind words and constructive feedback people have given me in the last open thread! Really nice to learn that my work is appreciated by smart/curious people who aren't just my friends or otherwise in my in-group.
--
Baby Emma’s parents are waiting on hold for customer support for a new experimental diaper. The robo-voice cheerfully announces: "Our call center is rarely busy!" Should Emma’s parents expect a response soon?
Baby Ali’s parents are touring daycares. A daycare’s glossy brochure says the average class size is 8. If Ali attends, should Ali (and his parents) assume that he’d most likely be in a class with about 8 kids?
Baby Maria was born in a hospital. She looks around her room and thinks “wow this hospital sure has many babies!” Should Maria think most hospitals have a lot of babies, her hospital has unusually many babies, or something else?
For every room Baby Jake walks into, there’s a baby in it. Why? Is the universe constrained in such a way that every room must have a baby?
Baby Aisha loves toys. Every time she goes to a toy box, she always finds herself near a toy box with baby-friendly toys she can play with, not chainsaws or difficult textbooks on cosmology or something. Why is the world organized in such a friendly way for Aisha?
Baby Briar’s parents are cognitive scientists who love small experiments. They flipped a coin before naptime. If heads, they wake Briar up once after an hour. If tails, they wake Briar up twice - once after 30 minutes, then again after an hour (and Briar has no memory of the first wake-up because... baby brain). Briar is woken up and wonders to himself “Hey, did my parents get heads or tails?”
Baby Chloe’s “parents” are Kaminoan geneticists. They also flipped a coin. They decided that if the coin flip was heads, they would make one genetically enhanced clone and call her Chloe. If the coin flip was tails, they would make 1000 Chloes. Chloe wakes up and learns this. What probability should she assign to the coin flip being heads?
If you or a loved one happen to be a precocious baby pondering these difficult questions, boy do I have just the right guide for you![...]
https://linch.substack.com/p/the-precocious-babys-guide-to-anthropics
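For the Baby Chloe puzzle above, here is a minimal back-of-envelope sketch (mine, not from the post) of the two standard anthropic answers, the Self-Indication Assumption (SIA) and the Self-Sampling Assumption (SSA):

def chloe_credence_in_heads(n_heads=1, n_tails=1000, p_heads=0.5):
    """Credence that the Kaminoan coin landed heads, given 'I am a Chloe'."""
    # SIA: weight each world by its prior times how many Chloes it contains.
    w_heads = p_heads * n_heads
    w_tails = (1 - p_heads) * n_tails
    sia = w_heads / (w_heads + w_tails)
    # SSA (reference class = Chloes): "I am some Chloe" is certain in both
    # worlds, so the prior passes through unchanged.
    ssa = p_heads
    return sia, ssa

sia, ssa = chloe_credence_in_heads()
print(f"SIA: {sia:.4f}")  # 1/1001, roughly 0.001
print(f"SSA: {ssa:.4f}")  # 0.5

Roughly the same split drives Baby Briar's naptime experiment, which is the classic Sleeping Beauty problem: thirders count awakenings the way SIA counts observers, while halfers stick with the coin's prior the way SSA does.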
I fell asleep with earbuds in while listening to an audiobook and ended up dreaming about what I was hearing. I know dream incorporation happens, but this was unusually vivid; the dream closely tracked the actual content over a long stretch of the audiobook. Has something like this happened to anyone else here?
This happens to me very often if I’m watching a movie and fall asleep. Especially if I’ve seen the movie before… my dream will basically mirror the movie, with the dialogue piped in and my brain attempting to re-create the visuals.
I have carried out entire coherent conversations with someone who was utterly asleep. One of these wound up with him being locked out of his frathouse (wasn't in the fraternity, just living upstairs in student housing), in his underwear. His dreams are unusually lucid in the best of times -- I think it comes of being an author.
That sounds like a superpower.
I can talk to … the asleep.
It's more of a superpower if it's "i can visit anyone I want in dreams" -- complete with the "if things go bad, I can get locked in someone's dream" for extra drama.
I have the old wired earphones in at night to help me fall asleep by listening to music and radio dramas, and yeah, I've often had dreams that incorporated the story of the drama I fell asleep listening to (and which continues playing as I sleep).
I tried using wired earphones, but they wrapped around my neck while I slept, so I now use wireless earbuds. There is a niche market of wireless earbuds made specifically for sleeping: small, comfortable, and with long-lasting batteries. The company Soundcore makes some good ones.
I listen to stories when going to bed and they'll play for a couple hours until my laptop dies. When I wake up from dreams, I usually find the dreams were inspired by the content of what I was listening to, or at least the people talking to me in my dreams are saying the story or things from the story. I worry how this affects my sleep quality but I have trouble with sleep in general and listening to a story is the most surefire way to put me out.
If you worry that it may affect sleep quality, it may be possible to have the story on a timer, for example to shut off after an hour. I think Audible has such a feature.
The computer already does this with the Sleep timer.
Data from Roche's next-generation anti-amyloid program. Today -- only biomarker data. Two Phase 3s in early AD initiating this year. And a planned pre-symptomatic Phase 3 study.
https://www.roche.com/media/releases/med-cor-2025-07-28
If you forced me to bet: drug will beat PBO with modest efficacy, but superior to Leqembi. Higher effect size seen in pre-symptomatic patients.
Spencer Greenberg and his team (Nikola Erceg and Belén Cobeta) empirically tested whether forty common claims about IQ stand up to falsification. Fascinating results. No spoilers!
https://www.clearerthinking.org/post/what-s-really-true-about-intelligence-and-iq-we-empirically-tested-40-claims
I had some questions about the methodology, and Greenberg responded. There were 62 possible tasks in the test. The tasks were randomized, and, on average, each participant only completed 6 or 7 tasks out of the 62 possible tasks. Since different tasks tested different aspects of intelligence, I wondered if it was a fair comparison. Greenberg responded...
> Doing all 62 tasks would take an extremely long time; hence, we used random sampling. A key claim about IQ is that it can be calculated using ANY diverse set of intelligence tasks, so it shouldn't matter which tasks a person got, in theory. And, indeed, we found that to be the case. You can read more about how accurate we estimate our IQ measure to be in the full report.
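That "any diverse set of tasks" claim is easy to gut-check with a toy simulation (my construction, not theirs; the one-to-one signal-to-noise split is an invented assumption): give everyone a latent g, make each of 62 tasks g plus task-specific noise, and see how well a random 6-task score tracks the full battery.

import numpy as np

rng = np.random.default_rng(1)

# Toy model: score on each task = latent g + independent task noise.
n_people, n_tasks, subset_size = 5000, 62, 6
g = rng.normal(size=(n_people, 1))
scores = g + rng.normal(size=(n_people, n_tasks))

full_battery = scores.mean(axis=1)                 # all 62 tasks
picked = rng.choice(n_tasks, size=subset_size, replace=False)
subset = scores[:, picked].mean(axis=1)            # a random 6 of them

print(np.corrcoef(full_battery, subset)[0, 1])     # ~0.93: high, but not perfect

Under those assumptions the subsampling is fair in expectation; the cost is a noisier estimate for any individual person.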
They even reproduced the Dunning-Kruger Effect — except perhaps the DKE isn't as clearcut as D-K claimed (see their discussion of their D-K results)...
I wouldn't get too excited about them reproducing Dunning-Kruger ...
https://www.mcgill.ca/oss/article/critical-thinking/dunning-kruger-effect-probably-not-real
I have to ask. I took some of their surveys that are supposed to tell you things and they came across as pure voodoo to me. They were asking questions that were leading or ambiguous and then claiming to draw concrete conclusions from them. Are they supposed to be trustworthy?
All that IQ data and no spicy questions...
I'm skeptical about the way the sample was obtained, though; you're preferentially sampling for very online people who are time-rich and money-poor, or something like that.
They did say that the "non-Positly social media sample had on average substantially higher IQ estimates than Positly sample (IQ = 120.65 vs. IQ = 100.35)."
OTOH, once normalized, the scores fell into a nice bell curve. Hard to argue that the distribution deviates meaningfully from normal, given D = 0.019 and p = 0.53, as they noted...
> The distribution looks pretty bell-curved, i.e. normally distributed. However, to test this formally, we conducted the Kolmogorov-Smirnov test, which is a statistical test that tests whether the distribution statistically significantly deviates from normal. The test was non-significant (D = 0.019, p = 0.53), meaning that the difference between a normal distribution and the actual IQ distribution we measured in our sample is not statistically significant.
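For the curious, here is a minimal sketch of that kind of normality check (not their actual code; the sample below is synthetic):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
iq = rng.normal(loc=100, scale=15, size=2000)  # synthetic stand-in for scores

# Standardize, then compare against the standard normal CDF.
z = (iq - iq.mean()) / iq.std()
d, p = stats.kstest(z, "norm")
print(f"D = {d:.3f}, p = {p:.2f}")  # small D and non-small p: looks normal

One caveat worth knowing: when the mean and SD are estimated from the same sample being tested, the plain Kolmogorov-Smirnov p-value is too generous; the stricter version of this check is the Lilliefors test.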
Anyone know how they're measuring sadism? AFAIK, there are two definitions of what sadism is:
1) People who actively like to hurt others, and prefer it to other forms of interaction (e.g. someone who boils hamsters alive).
2) People who like stimulating others in all sorts of ways, and have concluded that "hurting others" is "not ethically wrong" (e.g. trolls).
This is pulled from black-hat psych, so may be a working definition.
Good question. Also, they included sadism in the Dark Triad, but it's not part of the triad — Dark Triad + Sadism = Dark Tetrad. I'm sure there are plenty of personality tests that measure this stuff, though. (And they're probably as useful as the Myers-Briggs or the Enneagram! <snarkasm>)
Narcissism creates its own mechanism for lowering IQ, in that people with narcissistic personalities fear failure, and the public exposure of their failures.
G-related tasks are notably difficult to find, in terms of "ones that work on both tails". "Repeat a number backwards" isn't actually a g task: if your ordering doesn't work well and your memorization doesn't work well, you've got problems with the task no matter your higher-order intelligence.
Certain tasks seem like they're related to g, but aren't. Yet they make it onto intelligence tests because... it flatters midwits. And midwits have a lot to gain by being the High IQ people.
Digits backwards correlates moderately well with full-scale IQ. And I think it makes sense as a measure of one aspect of intelligence -- being able to hold a number of details at once in your mind so you can extract what conclusions you can from the whole welter. It's not just useful for mental math. It's something you might use, for instance, if solving a puzzle with several little rules to it -- there are a bunch of cubes each with sides of different colors arranged as follows, and you have to stack the cubes in such a way that . . .
There could be a job where you have to engineer a solution to a problem like that. Or a situation involving multiple regulations regarding international trade. Obviously being able to hold a bunch of details in mind at once is only one skill used for tasks like that, but it doesn't seem peripheral or trivial to me.
I am not sure I fully understand your objection. Are you objecting that certain subtests are too correlated with one another, that they are uncorrelated with g, or both? Is this a single group of subtests or multiple groups?
My experience taking an IQ test was during an evaluation for ADHD. In that case, the fact that I scored worse on certain subtests despite their usual correlation with the others was interesting and helpful.
In general my impression of psychometricians is that, whatever their flaws may be, they are unusually willing to be politically incorrect and to upset the academic apple cart, and I respect that.
What certain subtests did you score worse on?
For a subtest to measure "g" it needs to be "bridgeable" even if you have cognitive deficiencies in that particular area. General intelligence can compensate for a HELL of a lot -- and that gets easier with the "bigger, harder" tasks.
I'm objecting to the idea that tasks that can be used to meaningfully evaluate IQ in people who don't use "g" (general intelligence) can be extended to people who use "g" instead of focused, subset learning.
I'd be interested in learning about IQ tests that are designed, in particular, not to flatter midwits. Know of any?
Ah, it sounds like you have a disagreement with the principle used to structure the tests. They want a lot of subtests measuring different things, each correlated with g, which requires each of the subtests to be simpler. More complex tests will naturally overlap more -- in addition to being harder to score, as you said.
Without digging out the report, I think I did worse on the backward or distracted versions of some tests than my performance on the forward or undistracted versions would lead you to expect. And there is an infernal digit circling test where I got reasonable accuracy at the cost of being painfully slow; the subjective experience of doing that one was viscerally unpleasant in a way that is difficult to describe.
I have the same impression, and I work with some. Also they tend to be quite smart.
Very interesting and impressive stuff! Did they publish any proper papers on this (couldn't easily find that on the page)? If not, why not?
They talk about the possibility of range restriction explaining the lack of correlation between IQ and college GPA, but it seems plausible that smarter people tend to get into more rigorous colleges and choose more difficult majors. I did well in high school but then managed a 2.5 GPA at a college that I have no idea how I got into.
I worked hard to achieve a high GPA in high school to gain admission to a good university. Once in college, I slacked off a bit, had a lot of fun, did a lot of drugs (especially psychedelics), but maintained a B+ average. No one asked for my GPA in my job interviews after college. They just wanted a person with a degree. I knew this upfront, so why bother killing myself? Too bad g doesn't measure sensible life goals. I've always been Heinlein's too-lazy-to-fail sort of guy.
It's also plausible that smarter people do fun things like "look at how I can solve this problem!" and the graders say "your papers make my head hurt, and you aren't using force to solve the problem."
Or write an entire essay, and get flunked for splitting infinitives (actually, get flunked for writing an "insensitive" piece about the professor's home country). The Dean backed the professor. 5 grammar mistakes and you fail, that's that.
And then you flunk out for having too many "troublesome" bad grades.
I just took their test, and it produced results in line with what I’ve scored on other tests, including in the distribution of scores for different categories in line with my SAT, LSAT, and my own personal recognition of strengths and weaknesses.
So their test seems to be pretty accurate.
Do you have an explanation of how being anti-HBD on IQ isn't circular reasoning, where poor outcomes are explained by discrimination and the evidence of said discrimination is the poor outcomes?
Can you rephrase your question? I’m not understanding what you’re asking.
Got to hand it to you Americans, you certainly do get things done!
First USA pope, and now the Vatican website has been updated!
https://www.vatican.va/content/vatican/en.html
Even more amazing, they seem so far to have English translations of documents uploaded! What sorcery is this, is it licit?
I really like it, even if the old parchment-esque site felt like one of the last vestiges of the old Internet and I am sorry to lose it. Are there web design sedevacantists, arguing that the Vatican hasn't had a legitimate webmaster in twenty years? There ought to be.
I haven't really dug into the site yet, but I hope prompt English translations imply that they also took the time to reorganize the deeper structure of the site. That was pretty badly needed.
Yet the English Translation of the Bible on their site looks like it’s from 1998.
Is there a more up-to-date version of the Bible that they should be translating?
I hear the Book of Mormon is all the rage these days.
That one hasn’t been updated much since the 1830s has it?
That's practically yesterday by Vatican time 😀
The Vatican website has had extensive English translations of documents for well over a decade.
But not everything, it had a habit of giving English-language reports with links and then the linked material was in Italian because pffft, why can't you speak Italian if you're looking up Vatican stuff?
Could anyone give me a realistic path to superintelligence?
I'm a bit of an AI-skeptic, and I would love to have my views contradicted. Here is why I believe superintelligence is still very far away:
To beat humans at most economically useful tasks, an AI would have to either:
1. have seen most economically meaningful problems and their solutions. It would not need a very big interpolation ability in this case, because the resolution of the training data would be good enough.
2. have seen a lot of economically meaningful problems & solutions, and inferred the general rules of the world. Or have been trained on something completely different, and be able to master economically useful jobs because of some emergent properties.
1. is not possible I think, as a lot of economic value (more and more, actually) comes from handling unseen, undocumented and complex tasks.
So, we're left with 2.
Great progress has been made just by trying to predict the next token, as this task is perfect for enabling emergent behavior:
- Simple (you have trillions of low-cost training examples)
- Powerful: a next token predictor having a zero loss on a complex validation text dataset is obviously superintelligent.
Even with a simple Cross-Entropy loss and despite the poor interpolation ability of LLMs, the incredible resolution of the training data allows for impressive real-world results.
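For concreteness, the "simple task" being described boils down to a few lines (a toy PyTorch sketch; the random tensors stand in for a real model and corpus):

import torch
import torch.nn.functional as F

batch, seq, vocab = 4, 128, 50_000
tokens = torch.randint(vocab, (batch, seq + 1))  # a batch of token ids
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # predict token t+1 from tokens up to t

logits = torch.randn(batch, seq, vocab)          # stand-in for model(inputs)
loss = F.cross_entropy(logits.reshape(-1, vocab), targets.reshape(-1))
print(loss.item())  # a bit above ln(50_000), about 10.8: chance level before training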
Now, it's still economically useless at the moment. The tasks being automated are mostly useless (I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably harmful to economic growth).
Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful (not just bad).
I can’t think of another powerful but simple task that AI could be trained upon. Writing has been optimized by humans to be the most compressed form of communication. You could train an AI to predict the next frame of a video, but it’s soooo much noisier! And the loss function is a lot more complicated to craft to elicit intelligent behavior (MSE would obviously suck).
So now, we're back to RL. It kind of works, but I'm surprised by how difficult it seems to implement, even on verifiable problems.
Code either passes tests or it doesn't. Still, you have to craft a great advantage function to make the RL process effective. If you don't, you get a Gemini 2.5 that spits out comments and try/catch blocks everywhere. It's even less useful than GPT-3.5 for coding.
So, still keeping the focus on code: you, as a human, need to specify what great code is, and implement an advantage function that reflects it. The thing is, you'd need an advantage function more fine-grained than what could fit in a deterministic expression.
Basically, you need to do RLHF on code. Which is costly and scales not with compute but with human time. Because, sure, you can RLHF hard, but if you have only a few human-certified examples, you'll get an RL-ed model that games the reward model.
The thing is, having a great reward model is REALLY HARD for real-world tasks. It’s not something you can get just by scaling compute.
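To make that failure mode concrete, here is a hypothetical sketch of what "patching the reward" looks like once a bare pass/fail signal starts getting gamed (the penalty terms and weights are entirely invented):

def code_reward(code: str, tests_passed: bool) -> float:
    """Hypothetical shaped reward for an RL-on-code setup."""
    reward = 1.0 if tests_passed else 0.0
    # Each patch below is a human judgment smuggled into a supposedly
    # "verifiable" signal -- and each invites the next exploit.
    reward -= 0.05 * code.count("try:")  # punish blanket exception-swallowing
    reward -= 0.01 * code.count("#")     # punish comment spam
    return reward

That regress of hand-written penalties is exactly why you end up back at RLHF-style human labeling.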
Last year, the best counter-argument to my comment would have been “AI progress is so fast, do you really expect it to slow?”, and it would have been a good one. Now, I don’t think we have seen any real progress from GPT-4 on economically valuable tasks, so this argument doesn’t hold.
Another convincing argument is that “we know the compute power of a human brain, and we know that it’s less than the current biggest GPU clusters, so why should we expect human intelligence to remain superior?”. That’s a really good argument, but it fails to account for the incredible amount of compute natural selection has put into designing the optimal reward functions (sentiment, emotions) that shorten the feedback loop of human learning and the sensors that give us data. It’s difficult to quantify precisely but I don’t think the biggest clusters are even close to that. Not that we’re the optimal solution to the intelligence problem, just that we’re still way short of artificial compute to compete against natural selection.
Here’s my take, I’d love to hear contradiction!
I think most of the people who believe in superintelligence believe that it is just the next step after general intelligence. There is supposed to be some sort of general flexibility in reasoning and problem solving that lets you deal with all sorts of problems, not just the ones you’ve been optimized for. If that’s right, then you don’t need to train on everything - you just need to train on enough stuff to get that general intelligence, and then start doing a bit better.
But I’m skeptical that there is any truly general intelligence of this sort - I think there are inevitable tradeoffs between being better at some sorts of problems in some environments, and other problems/environments. (Often enough, I think the tradeoffs will be with the same problems in different environments.)
My main disagreement is with the last paragraph. I agree that we don’t have anywhere near enough compute to simulate natural selection and find better reward functions. But I also think that reward functions that result in superintelligence are not too complex. I don’t know how to explain why I believe this, it comes largely from intuition. But I think given the assumption “reward functions for superintelligence are simple”, you can reasonably get that superintelligence will be developed soon, given the hundreds of researchers currently working on the problem.
Substrate issues could be involved. Suppose that what's needed to get to superintelligence is quantum in nature. That would essentially eliminate all the LLMs and turn you toward "self-modifying code" and other sources of additional pseudo-randomness.
> Scaling things up doesn't work: GPT-3 -> GPT-4 yielded a great performance leap, but GPT-4 -> GPT-4.5 not so much, despite the compute factor being the same at each step. So scaling laws are worse than logarithmic, which is awful
I'm not sure you can back this up. If doubling the compute doesn't double the performance, that's worse than linear. You're trying to show each doubling in compute doesn't even give the same constant increase on some metric of performance, and that metric would have to be linear with respect to the outcome you're trying to measure. I'm not sure we have such a metric, and some metrics, like AI vs human task duration, appear to be increasing exponentially.
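A toy illustration of that measurement problem (numbers invented, not fitted to any real model): a textbook power-law loss curve reads as "steady progress" or "diminishing returns" depending on which axis you look at.

import numpy as np

# Invented Chinchilla-style power law: loss(C) = a * C**(-b)
a, b = 10.0, 0.05
compute = np.logspace(0, 6, num=7)  # seven 10x jumps in compute
loss = a * compute ** (-b)

for c, l in zip(compute, loss):
    print(f"compute {c:9.0e} -> loss {l:.2f}")
# Each 10x of compute cuts loss by the same ~11% ratio (a straight line on
# log-log axes), but the absolute gains per jump keep shrinking on a linear scale.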
True, thanks for pointing this out.
Or maybe we just have a logarithmic utility function vs. objective LLM performance (if we can measure that, which is the exact point you're debating).
True also for AI vs human task duration, but that's only true for code, if I'm not mistaken.
> I work as a software engineer and I think my job is at best unproductive in the grand scheme of things, and more probably harmful to economic growth
Why do you think that?
Well, I feel that most of the software I've developed (mainly ML models and ERP software) has been used to help with problems that already had human solutions.
2 examples:
- Some features of the ERP software I've helped develop were related to rights management and paperwork assistance. For the first feature, the real consequence is that you keep an employee out of some part of the business, effectively telling him "stay in your lane", which is not good for personal engagement. The second is more pervasive: when you help people generate more reports, you are basically allowing middle managers and lawmakers to ask for more of them. So you end up with incredibly long contracts, tedious forms and so on. Contracts were shorter when people had to type them and copy-pasting didn't exist.
- I've developed a complex ML model for estimating data that people could just have asked other people for. When I discovered that, I told the customer: "you know, you could just ask these guys, they have the real numbers". But I guess they won't, because they now have a good-enough estimate: net loss.
Now, of course, I've developed useful things, but I just can't think of any right now ^^
Contracts tend to be standardized in the best of cases. Which means that if your renter's contract is illegal, you can get the kindly judge to throw out every renter's contract in the city. Which is a hell of a stick to bring to a discussion with your landlord.
Looks like AI had zero influence on employment: https://x.com/StefanFSchubert/status/1948339297980936624
I would be careful not to read too much into that graph without doing some more careful statistical analyses. There's a plausible enough picture in which the left 60% of the graph should see zero effect and the right 40% should see a roughly linear effect, and if I squint it actually looks compatible with that. But also, 2.5 years is just a really short time frame, and there have been some much bigger short-term effects in some industries with the presidential transition.
That's not consistent with the recent rise in unemployment for CS grads. I've heard too much anecdotal data to believe it's not related to AI. I wouldn't expect AI to have impacted other industries yet. It's too new. Only software companies are agile and tech-savvy enough to adjust to new technology so quickly.
Cost-saving innovations tend to roll out during recessions. I expect AI to really surface during the next one.
It's perfectly consistent; there's arguably too much software out there already, so there are hiring freezes. And interest rates are still much higher than in the pre-covid era. We haven't seen a slowdown of employment in the professions that economists say are most susceptible to AI-induced job loss, but we have seen a slowdown of employment in professions most susceptible to economic downturns. The slowdown is not only in software but in real engineering too - perfectly consistent with firms cutting R&D budgets.
... only software companies are "agile and tech-savvy" enough...
You mean by hiring hundreds of thousands of "non-technical people" who can't maintain their systems? SV isn't "agile or tech-savvy" anymore. And a dip in hiring "non-technical folks" looks a lot like "relying on AI", I suspect. Even though the non-technical folks weren't doing jack or shit, and therefore google firing them doesn't affect google's monopoly (oh, did I just type that?)
Part of being "agile" is trying risky new technologies to see what works.
My wife and I are considering making a large change: we both grew up and live in the Mountain West, got married, and had children, who are now on the verge of making the transition to junior high school\middle school. We like where we live now but don't *love* it, and don't have extensive social ties here we'd be sad to leave.
My parents, and sister and her family, live on the East Coast, in the place we would normally not consider moving to, but as time passes, we've come to appreciate how much we've missed being so far from family, and are considering relocating to be closer to them. My parents are in general good health, so barring unforeseen events we expect to have years of quality time to spend.
What are the main concerns I should think through, aside from the usual cost of living and school quality issues?
One thing you may not have considered is the humidity. I live in the DC area, and my wife (who grew up in Utah) still finds the humidity here during the summer terrible after 20 years. We have two dehumidifiers running in our house!
After having grown up in the mountain west, moving to the east coast for 12 years, and having moved back to the mountain west…
Summers are painful when you have to tolerate them year after year on the east coast. The humidity and banality of the weather suck. No more 40-degree temp swings between day and night, or from one day to the next. No more snow, and when it does snow it’s an apocalypse.
Same with traffic when you have to tolerate it every single day. There are people everywhere on the east coast… it’s impossible to escape.
You’ll miss open landscapes. I’m convinced being used to seeing a big sky and far-reaching distances, then suddenly not, is akin to seasonal affective disorder. It does make trips back out west magical though.
If outdoor recreation is your thing, it’s worse on the east coast. It can still be done, but it’s less beautiful, less available, and more crowded.
If you have 100 kids, on average they will likely grow up with less “masculine” traits on the east coast. This has both good and bad attached to it; just beware. The cultures are indeed different.
Overall there are plenty of goods and bads… I moved back to the mountain west for my community and the views. If those weren’t important to me (or if I had community elsewhere) I may not have made the move back. Yet still sometimes I’m struck by the annoying aspects of hyper-masculine culture here (exaggerated because I do blue-collar work), just as I was struck on the east coast by the annoying aspects of hyper-feminine culture.
One last note… when I was in 7th grade my parents almost moved us to another state. I was on board with the plan, but it ended up not happening. That move *not happening* was one of the luckiest moments of my life—unbeknownst to me at the time—because growing up in one area my entire adolescence gave me friends and a community that will be with me forever. I have a true “home” more so than my parents ever did.
Which part of the East Coast? Massachusetts is very different from Maryland.
I also grew up in the Mountain West and lived in the East Coast for a time as a child. Overall the mountains offer a better quality of life: they're less crowded, cheaper, generally cleaner, and in every way healthier.
The biggest advantage of East Coast life is proximity to America's great cultural institutions. If you live in the NE megalopolis, you are more plugged in to world culture than the great majority of humans. Since it's more densely populated you also benefit more from network effects. Your family is even an example of this.
As with so many things in life it comes down to values. I'd say if you care more about people, move to the coast. If you care more about nature or lifestyle, stay in the Mountain West.
Why would you not normally consider moving there?
It's a part of the country with a different culture, climate, and geography than I'm used to. I've enjoyed my many visits there, and within two or three hours drive there is a large array of things to do and places to see, but the place we'd be moving is itself not a big draw.
I'm pretty far behind you as my wife and I just had our first child in January, so while I can't answer your question, I can say that even these first six months (and the year and a half of marriage before having a kid) have been a time of rich fullness just due to the fact that my wife's family and my parents all live close by. Our location doesn't account for much of that, as I live smack in the middle of North Dakota.
I'm sure that we would still be very much enjoying life together even if none of our family were close by, but having family around definitely adds an extra depth and richness that I feel would make a move like you're describing worth it.
Thanks for replying. For me this is a choice between great climate and access to great natural beauty, or closeness to family and the ability to share our life in a more casual, regular way than guesting\hosting family for a week or more in their\your house. For years the choice was obvious.
Been having a lot of fun working with ChatGPT on an alternate history scenario where the transistor was never invented- somehow, silicon (and germanium etc.) just doesn't work as a semiconductor in this alternate timeline. It seems like humanity would have invented vacuum microelectronics instead? Maybe done more advanced work with memristors too? It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller.
Without electronic capital markets you'd have a radically different 20th century- slower growth, more stability, smaller capital flows in and out of countries. This might've slowed China's growth specifically- no ecommerce, less investment flowing into China originally, no chance for them to hack & steal Western technology. Also a decent chance that the USSR's collapse might not have been as dramatic- they might've lost the Baltics and eastern Europe, but kept going otherwise. The US would probably be poorer without Silicon Valley, plus Wall Street would be smaller without electronic markets. Japan might really excel at the kind of precision mechanics & analog systems that dominate this world. So it'd be a more multipolar world overall.
(I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)
Copying an AI summary from a query "cold field emission microelectronic vacuum tubes"
>Cold-field emission microelectronic vacuum tubes, or vacuum microelectronics, utilize the mechanism of electron emission into a vacuum from sharp, gated or ungated conductive or semiconductive structures, avoiding the need for thermionic cathodes that require heat. This technology aims to overcome the bulkiness of traditional vacuum tubes by fabricating micro-scale devices and offers potential applications in areas such as flat panel displays, high-frequency power sources, high-speed logic circuits, and sensors, especially in harsh environments where conventional electronics might fail
Admittedly these are still higher voltage and less dense devices than semiconductor FETs, but electronics would not have been limited to hot cathode bulky tubes even if silicon transistors never existed.
> (I searched the alternatehistory forums to see if anyone else had ever worked on this scenario, but found surprisingly little)
Really? This is basically the premise of the video game Fallout.
"It would certainly be a different world- electronics would be incredibly difficult to miniaturize without the transistor, so you might have large centralized computers for the military & big corporations- but definitely no smartphones. If we had home computers they'd be much more expensive, so even if the Internet existed by 2025 it'd be much much smaller."
Don't forget that you can probably have fairly large computer memories (in the context of vacuum tubes ...) because of core memory:
https://en.wikipedia.org/wiki/Magnetic-core_memory
PDP-11s shipped with core memory and you can do QUITE A LOT with 1 MB (or less).
And you don't need transistors for hard drives, either :-)
Imagine "programs" being distributed on (error correcting encoded) microfiche.
Sounds like fun in a steam-punk way.
Also, you can easily imagine a slow internet. Think something like 1200 baud (or faster) between major centers (so very much like early Usenet). You won't spend resources for images or pretty formatting, but moving high value *data* should work.
https://en.wikipedia.org/wiki/Computer_network
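For a rough sense of what a 1200 baud backbone buys you, here's a quick back-of-the-envelope sketch in Python (my own illustrative numbers, not anything from the comment above):

# Rough capacity of a 1200 baud link; assumes ~10 bits on the wire per
# byte of payload (start/stop bits and protocol overhead eat the rest).
BITS_PER_SECOND = 1200
BYTES_PER_SECOND = BITS_PER_SECOND / 10

def transfer_time_hours(kilobytes: float) -> float:
    # Hours to move this much data over the link, assuming no retries.
    return kilobytes * 1024 / BYTES_PER_SECOND / 3600

# A hypothetical day's worth of text-only traffic between two centers:
print(f"{transfer_time_hours(500):.1f} hours for 500 KB")  # about 1.2 hours

So a text-only store-and-forward network really is workable at those speeds; it's images and interactivity that would be out of reach.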
About the time transistors were becoming widely used, micro-vacuum tubes were also in use. I don't know what their service life was, and clearly transistors were found superior, but they were competitive in some applications.
So, yes, vacuum micro-electronics would have been developed. I've got doubts that memristors would have shown up any more quickly than they did here.
It's not clear that vacuum electronics couldn't have been developed to the same degree of integration that transistors were, so I'm not sure the rest of your caveats hold up. They might. I know that vacuum electronics were more highly resistant to damage from radiation, so there might well have been a different path of development, but I see no reason to assume that personal computers, smart phones, routers, etc. wouldn't have been developed, though they might have been delayed a few years. (That we haven't developed the technology doesn't imply that it couldn't have been developed.)
Agreed! I hadn't seen your comment in time, and replied with essentially the same point.
It's also possible to miniaturize electromechanical switching to IC scale with MEMS and NEMS relays. It's a lot slower than transistors, which is why it's only used for specialty applications, but it's possible.
This is so interesting
My husband was forced untimely into a quick round of unsatisfactory car shopping - after being rear-ended by someone who spoke no English, had no proof of insurance on him, and said he had insurance but didn’t know the name of the company before driving away (the babies were crying; the side of a freeway is no place for a half dozen children), yet who miraculously did have it (one time the state doing its seeing-like-a-state thing was helpful) - and after Allstate took its sweet time deciding to total his perfectly driveable old Subaru.
As a result - life having other distractions, and he having little interest in modern cars - he got steered into buying his first “new” car.
That’s something that won’t ever happen again!
All those new features he didn’t want to pay for … and Subaru doesn’t need to haggle, period.
He was set to get his two requests (an ignition key and a manual, slam-shut gate) swapped in from a dealer in another city - but in the event, a buyer there was simultaneously grabbing that one, so the one they brought in was sadly keyless.
We should have just returned home (an hour plus away) but a certain amount of time had been invested, and a planned road trip was upcoming.
Question: should I get him one of those faraday cage thingies? It has been established that he won’t stuff the fob in foil every night, nor remember to disable it.
He didn’t even know about this car-stealing method, not being much online and certainly not on NextDoor.
There is no consensus on the internet about the need for this. Possibly already passe, superseded by new methods of thievery.
We live in a city that had 18,000 cars stolen last year. Not generally Subarus, probably … but anyway. The car is within 50 or 60 feet of the fob, in an apartment parking lot, not within view.
Our cars, when we’ve occasionally, inadvertently left them unlocked (long habit from where we lived previously) have reliably been rifled through, though it was a wash: we had neither guns nor electronics nor drugs. Once, memorably, they stole his car manual. I recall thinking that they’d better come by around daylight savings time and change the clock for him.
I’ve never heard of this sort of faraday cage thing. How many cars have been stolen from the apartment parking lot in the last few years? Does insurance cover such thefts? My guess is that a precaution against this one method of theft isn’t that likely to make a big difference, particularly since theft is not that common anyway (apart from the weird Kia/Hyundai exploit that was discovered during the pandemic), but if the faraday cage is cheap and convenient and easy to set up in the tray where you put keys and wallet when you get home anyway (or however you do it), it could still be net worth it.
If you have a parent that refuses to put his keys into an exact place (a nice shiny foil box) every night, you have a parent that probably shouldn't be trusted with a motorized vehicle.
A rather stupid syllogism, but I don’t own such a box. That’s what I’m trying to learn - if it’s worth buying one. For some reason I thought this would be an easy layup for this crowd.
Wait, what's the car-stealing method?
Supposedly using a device to capture the signal pinging between the key fob and the vehicle. How you would start the vehicle thereafter away from the fob I don't know. Or maybe just as a means to open the vehicle and throw stuff from the glove box around.
I really thought this was a thing as it was so commonly referenced, but now I'm not sure if it was imaginary/dreamed up by people who didn't want to admit they left their fob sitting in the car.
A relay attack lets a thief extend the range of the key fob by retransmitting the signals, allowing them to start the car. It doesn't let them clone the key fob. Once started, cars will not automatically shut off when the key goes out of range. Some cars have protection against relay attacks, but I think most do not. The thief would have to get close enough to the key fob to pick up the signal, and they need the key signal in real time. They can't record the signal and replay it later.
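To make the replay-vs-relay distinction concrete, here's a toy Python sketch of a challenge-response unlock, assuming a shared-key HMAC scheme; real fob protocols differ by manufacturer, and every name here is made up for illustration:

import hmac, hashlib, os

# Toy challenge-response unlock. SHARED_KEY stands in for the secret
# provisioned into both car and fob at the factory.
SHARED_KEY = os.urandom(16)

def car_challenge() -> bytes:
    # The car broadcasts a fresh random nonce for every unlock attempt.
    return os.urandom(8)

def fob_response(challenge: bytes) -> bytes:
    # The fob answers by MACing the nonce with the shared key.
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_accepts(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Replay fails: a response recorded yesterday answers a stale nonce.
stale = car_challenge()
recorded = fob_response(stale)
assert not car_accepts(car_challenge(), recorded)  # fresh nonce, old answer

# Relay works: the thief forwards the live nonce to the distant fob and
# pipes the answer back, so the car sees a valid response to its own nonce.
live = car_challenge()
assert car_accepts(live, fob_response(live))

Protections against relay attacks typically try to bound the round-trip time, since relaying adds latency the nearby fob wouldn't.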
Yes, that’s what I meant. Didn’t mean they would randomly capture signals and store them for later use.
I never had a reason to think about it before. My own car is a very basic car from 2009.
I had just absorbed by osmosis this idea about newer cars.
But upon researching it, I couldn’t find that people actually seem particularly worried about it after all. Or any agreement about what’s going on with the key, whether it’s really talking to the car or sitting there inert.
Not sure if the subject is just really well understood only by those who steal cars and those who know a lot about electronics.
This is a very old attack in the cryptographic literature. IIRC, it was originally called the mafia fraud attack. Though really it's kind of just an instance of a man in the middle attack.
Something interesting I learned today*: Among professional historians, antiquarians and the like there is a widespread consensus that Jesus of Nazareth was a real, historical person. Important disclaimer, this distinguishes the historical personage from any supernatural capabilities he may or may not have had.
They cite about half a dozen non-biblical references: Tacitus, Josephus, Pliny the Younger, Suetonius, Mara Bar-Serapion, Lucian, and the Talmud. Most of these are pretty brief or oblique but they converge on a pretty recognizable figure. The evidence that he existed is a lot stronger than the evidence that he was a mythical creation, which is why mainstream scholars of all stripes have landed there.
The other interesting thing about this: the scholarly consensus is a lot stronger than the public's belief that Jesus was a historical person, and to be sure I include myself in that number (or would have last week at least): ~76% of Americans across all religious and political affiliations believe he existed: https://www.ipsos.com/sites/default/files/ct/news/documents/2022-03/Topline%20-%20Episcopal%20Church%20Final%202.17.22%20CLEAN.pdf (question 10)
ChatGPT summary: https://chatgpt.com/share/68877913-12f8-8011-b978-ba1c0006a45b
*Several days ago but was waiting for a new OT
A year or two ago, I watched an extended interview with Richard Carrier, who's one of the highest profile people arguing against the historicity of Jesus. He's a classical historian by training and a pop historian and Atheism advocate by vocation. IIRC, his thesis is that Christianity started among ethnic Jews living in the Roman world and followed what was then a fairly common template of venerating a purely spiritual messianic figure, and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier.
Carrier made some interesting arguments about the mythological pattern which I lack the expertise to assess in detail. Where I do think he rather badly misstepped was in making a big deal out of the Gospels and Epistles being written in Greek rather than Aramaic. I don't think that needs much explaining given how few classical documents have survived to the present. Greek was a major literary language throughout the region while Aramaic was not, and Christianity caught on much, much more in Greek and Latin-speaking areas than in Aramaic-speaking areas, so only Greek foundational texts surviving isn't particularly surprising. The wikipedia article for "ancient text corpora" cites estimates from Carsten Peust (2000) that our text corpus from prior to 300 AD is 57 million words of Greek, 10 million words of Latin, and 100,000 words of Aramaic.
Where did you get the idea that Aramaic wasn't a significant language of the region at the time? It was the lingua franca from the Levant to Persia for centuries.
The Talmud alone is in the ballpark of 2.5 million words, most of it in two dialects of Aramaic and most of the rest in Hebrew. While it was compiled later than 300 AD, it contains a body of work accumulated over many centuries, stretching back well into the Second Temple period.
The Mishnah, compiled centuries earlier, was primarily Hebrew but with some Aramaic.
And that wikipedia page lists 300,000 words for Hebrew - the Tanakh has over 300k words, the Torah 80k of them.
The Dead Sea Scrolls, which are only partially the Torah, contain fragments of nearly 1,000 manuscripts. https://www.imj.org.il/en/wings/shrine-book/dead-sea-scrolls.
All that is to say, even if we really do have fewer surviving words of Aramaic than Greek, that almost certainly has more to do with our sample than the ancient source.
> and a bit later St. Paul and the writers of the Gospels reinterpreted some allegorical stories about this messiah as referring to an actual historical person who had lived and preached a few decades earlier
That doesn't sound like he's arguing against the historicity of Jesus at all then, if he's saying that Jesus is based on an actual historical person. That just sounds like the mainstream view all over again -- Jesus was real, some of the stories told about him are false, and we can quibble about exactly how much was real.
Carrier is loudly and explicitly claiming that there was no actual historical person who lived in Judea c. 30 AD matching the description of Jesus of Nazareth, and that pre-Pauline proto-Christians would have agreed with this, as they would have believed in a purely spiritual Christ and told allegorical stories about him set in a spiritual realm. Per Carrier, the claim that Jesus was a human who ministered in Judea was an invention of Paul and the Gospel writers, who re-wrote the existing stories *as if* Jesus were a real person who had been physically present in and around Jerusalem.
Right, I think I misunderstood the sentence I quoted, I thought he was saying that they'd merged their spiritual messiah with stories about some actual bloke.
I can see how what I wrote could be read that way, sorry.
Greek was the lingua franca at the time, and it was what educated people largely wrote in, particularly in the east. Marcus Aurelius even wrote his Meditations entirely in Greek.
In no way would the writers of the gospels write in Aramaic. John and Luke may not have even spoken it.
Exactly. If there was an Aramaic proto-gospel, it would have had to have been very early and very niche and it probably would have been oral rather than written. Anyone writing in the Eastern Mediterranean for a broader audience would have done so in Greek.
Oh, Carrier is the guy that Tim O'Neill has the beef with. Doesn't think much of Dr. Carrier's arguments 😁
I'm Irish Catholic so you know which side of the fence I'm coming down on here, but I do have to admit to a bias towards the Australian guy of Irish Catholic heritage as well! I can't say it's edifying, but it's fun:
https://historyforatheists.com/jesus-mythicism/
Here's the Carrier one (of several):
https://historyforatheists.com/2016/07/richard-carrier-is-displeased/
"It seems I’ve done something to upset Richard Carrier. Or rather, I’ve done something to get him to turn his nasal snark on me on behalf of his latest fawning minion. For those who aren’t aware of him, Richard Carrier is a New Atheist blogger who has a post-graduate degree in history from Columbia and who, once upon a time, had a decent chance at an academic career. Unfortunately he blew it by wasting his time being a dilettante who self-published New Atheist anti-Christian polemic and dabbled in fields well outside his own; which meant he never built up the kind of publishing record essential for securing a recent doctorate graduate a university job. Now that even he recognises that his academic career crashed and burned before it got off the ground, he styles himself as an “independent scholar”, probably because that sounds a lot better than “perpetually unemployed blogger”."
And then he really gets stuck in 😀
Yeah, my impression of Carrier is that he seems clever and interesting, but the actual substance of his arguments seems pretty weak even aside from my priors about who's likely to be right when a lone "independent scholar" is arguing that the prevailing view of academic experts is trivially and obviously false on a subject within their field.
I'll check out the O'Neill article, thank you.
O'Neill is fun and I trust him because although he's an atheist himself, he gets so pissed-off by historical errors being perpetuated by online atheists and the mainstream that he goes after them.
He does have a personal grudge going with Carrier, so bear that in mind. Aron Ra is another one of the Mythicists with whom O'Neill tilts at times, but not as bitterly as with Carrier.
I was amused by the reference to Bayes' Theorem (seeing as how that's one of the foundations of Rationalism) in the mention of Carrier's book published in 2014:
"Two years ago Carrier brought out what he felt was going to be a game-changer in the fringe side-issue debate about whether a historical Jesus existed at all. His book, On the Historicity of Jesus: Why We Might Have Reason for Doubt (Sheffield-Phoenix, 2014), was the first peer-reviewed (well, kind of) monograph that argued against a historical Jesus in about a century and Carrier’s New Atheist fans expected it to have a shattering impact on the field. It didn’t. Apart from some detailed debunking of his dubious use of Bayes’ Theorem to try to assess historical claims, the book has gone unnoticed and basically sunk without trace. It has been cited by no-one and has so far attracted just one lonely academic review, which is actually a feeble puff piece by the fawning minion mentioned above. The book is a total clunker."
O'Neill's quote from Carrier proudly displayed on his website:
"“Tim O’Neill is a known liar …. an asscrank …. a hack …. a tinfoil hatter …. stupid …. a crypto-Christian, posing as an atheist …. a pseudo-atheist shill for Christian triumphalism [and] delusionally insane.” – Dr. Richard Carrier PhD, unemployed blogger"
Deep calls to deep, and so does Irish invective between the sea-divided Gael so that's probably why I like O'Neill so much even apart from his good faith in historical arguments.
Academics don't view denial of Jesus' existence as much of an argument. Most call it "fringe."
If you're interested in going deeper, I would recommend looking into the modern quests for the historical Jesus, which not only surfaced and studied extrabiblical sources on Jesus, but also developed methodologies for evaluating the gospels:
https://en.wikipedia.org/wiki/Quest_for_the_historical_Jesus
Academics I've read and listened to lean toward the conclusion that only two events in the gospels about Jesus' life are reliable: his baptism by John the Baptist, and his execution by the Romans. (These both rely on the criterion of embarrassment; that is, because these events undermine his followers' beliefs, their inclusion in the gospels suggests they actually occurred.) Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations.
The quests for the historical Jesus also bleed into modern understandings of how the gospels were authored, such as the dominant theory of Markan priority, and the theoretical Q document.
"Everything else in the gospels about Jesus' life is up for debate, although (as others have said) most academics discard the miracle-working, or offer less supernatural explanations."
This is true, but in the context of discussing a New Atheist figure it's worth adding a caveat. For most of these scholars, rejection of the supernatural is a premise rather than a conclusion. It's often the case that an academic will write, "Since its miracle stories are false, this document must be late," only for his reader to say, "Since this document is late, its miracle stories must be false," without realizing the circularity.
C. S. Lewis wrote on this very thing in the introduction to his book "Miracles":
"Many people think one can decide whether a miracle occurred in the past by examining the evidence ‘according to the ordinary rules of historical enquiry’. But the ordinary rules cannot be worked until we have decided whether miracles are possible, and if so, how probable they are. For if they are impossible, then no amount of historical evidence will convince us. If they are possible but immensely improbable, then only mathematically demonstrative evidence will convince us: and since history never provides that degree of evidence for any event, history can never convince us that a miracle occurred. If, on the other hand, miracles are not intrinsically improbable, then the existing evidence will be sufficient to convince us that quite a number of miracles have occurred. The result of our historical enquiries thus depends on the philosophical views which we have been holding before we even began to look at the evidence. The philosophical question must therefore come first.
"Here is an example of the sort of thing that happens if we omit the preliminary philosophical task, and rush on to the historical. In a popular commentary on the Bible you will find a discussion of the date at which the Fourth Gospel was written. The author says it must have been written after the execution of St. Peter, because, in the Fourth Gospel, Christ is represented as predicting the execution of St. Peter. ‘A book’, thinks the author, ‘cannot be written before events which it refers to’. Of course it cannot—unless real predictions ever occur. If they do, then this argument for the date is in ruins. And the author has not discussed at all whether real predictions are possible. He takes it for granted (perhaps unconsciously) that they are not. Perhaps he is right: but if he is, he has not discovered this principle by historical inquiry. He has brought his disbelief in predictions to his historical work, so to speak, ready made. Unless he had done so his historical conclusion about the date of the Fourth Gospel could not have been reached at all. His work is therefore quite useless to a person who wants to know whether predictions occur. The author gets to work only after he has already answered that question in the negative, and on grounds which he never communicates to us.""
I've never read Miracles, but it's no surprise that Lewis got there first and explained it better. Thanks for posting it.
Sometimes a lie reveals the truth. It’s generally accepted that Jesus wasn’t born in Bethlehem. It’s only mentioned in two gospels, and the census story of moving back to your origins wasn’t Roman practice. It would be mayhem. People just didn’t travel to ancestral homelands for a census. The killing of the innocents by Herod is also undocumented.
But an invented messiah can just be born wherever you need him (and the messiah prophecy mentions Bethlehem). Clearly people were aware of where Jesus actually came from, so they had to admit to Nazareth.
Jesus is very well attested for a person of his period. The minimum viable Jesus is that he was a popular religious leader from about the class the Bible says he's from who lived roughly where the Bible says he did. That he had a large following and was believed to have magical powers and claimed to be the son of God. That he clashed with Jewish and Roman authorities. And that he was executed but his followers continued on.
If you want to say he didn't exist you basically believe in a conspiracy theory that later Christians went back and doctored a bunch of works and made a bunch of forgeries to provide evidence that he did. A lot of anti-Christians really want to believe this and produce a lot of shoddy scholarship about it. But in all likelihood Jesus was real.
I think my previous belief was that Christianity definitely existed as a religion by the mid-1st-century, lots of people knew the Apostles, the Apostles knew Jesus, and it would require a pretty coordinated conspiracy for the Apostles to all be lying.
Does the evidence from historians prove more than that? AFAIK none of the historians claim to have interviewed Jesus personally. So do we know that the historians didn't just find some Christians, interview them about the contents of their religion, and use the same chain of reasoning as above to assume that Jesus was a real person? Should we take the historians' claims as extra evidence beyond that provided by the religion itself?
Well, it proves that non-Christians living eighty years after the purported events wrote about the life and death of Jesus without expressing skepticism, which is something.
From the way Tacitus writes in 116, it seems like the general consensus among non-Christian Romans in the early second century was that Christus was a real dude who got crucified, and that there was a bunch of weird beliefs surrounding him. This belief was probably not filtered entirely through Christians, just as our ideas about the Roswell Incident of 1947 or L. Ron Hubbard are not entirely filtered through the people who believe weird things about them.
I believe what you're saying is: A large number of Christians all simultaneously, and within their own living memory, attested that Jesus existed. This is strong evidence because otherwise a large number of people would have to all get together, lie, and then die for that lie which seems less likely than being a real religious organization who met a real person. But the historians likely did not personally meet Jesus so they don't add additional proof.
From this point of view, the main thing historians add is that they make it even less likely to be a conspiracy. Because many of the historians are not Christians and drew from non-Christian (mostly Jewish or Roman) witnesses. We don't know who these witnesses were or if any of them directly met Jesus. But they are speaking about things going on in the right time and place to have met him, and the Bible doesn't suggest Jesus isolated himself from foreigners.
So either none of them met him and it was all a conspiracy by Jesus's followers that took in a bunch of people who were highly familiar with the region. Or a number of non-Christians were in on the conspiracy.
My broader point is something like: we ought to have consistent evidentiary standards. If you want to take a maximally skeptical view then you can construct a case that, for example, Vercingetorix never existed. You can cast doubt on the existence of Julius Caesar if you stretch. If that's your general point of view then you can know very little about history. I disagree with that point of view but it's defensible. If, on the other hand, you think Vercingetorix existed or the Dazexiang uprising definitely happened but think Jesus might not have existed then I think you're likely ideologically invested in Jesus not existing.
To give an example where I don't think it's bias: most modern historians discount stories of magic powers or miracles regardless of who performed them. So the fact they discount Jesus's miracles seems consistent with that worldview rather than a double standard.
Someone later down made comments that reminded me that some figures from history were later believed to have been adaptations or syncretisms of earlier figures. So that's another possibility - Jesus was fictional, but melded from earlier people. I don't think this would adequately explain Tacitus' account, for example, but it could explain multiple people being "in on" the fabrication.
(Meanwhile, maybe some people aren't invested in Jesus' not existing, but rather invested in someone existing with a name as cool as "Vercingetorix". So the real solution should have been to introduce Jesus as, uh, "Yesutapadancia".)
Jesus is a bit similar to Ragnar Lodbrok in that he is attested but a lot of the records come shortly after his death. And there's a whole bunch of extremely historical people who the history books say were reacting to him and his death which are really hard to explain if he didn't exist or was a myth.
The people who think Ragnar was entirely fictional have to explain the extremely well attested historical invasions by his historically well attested sons who said they were avenging his death and who set up kingdoms and ethnicities which echo down to today. Likewise with Jesus, his disciples, and Christianity.
But there's just enough of a gap to say that maybe he didn't exist if you really, really want to. And there's a lot of space to say some of the stories were less than reliable and some of them might be borrowed from other people. Then again, that's true of most historical biographies.
We should take the historians' claims as evidence that the people whose job it is to professionally try to figure out what happened in the past all tend to agree that Jesus was real. And they're not just looking at the Bible when they do that!
Sources that indicate Jesus existed include the scriptures (the letters and gospels of the New Testament), but also include many of the apocryphal writings (which all agree that Jesus existed, even if they go on to make wildly different claims about him), the lack of any contemporary non-Christian sources that deny the existence of Jesus, the corroboration of many other historical facts in scripture about the whole Jesus story (like archeological findings corroborating that Pontius Pilate existed, or that Nazareth existed, etc).
You also have Josephus writing about Jesus in 94 AD, Tacitus writing about him in 115 (and confirming that he was the founder of a religious sect who was executed under Pontius Pilate), and a letter from a Stoic named Mara bar Serapion to his son, circa 73 AD, where he references the unjust execution of the "wise king" of the Jews.
Also, looking at scripture itself there are all kinds of historical analysis you can apply to it to try to figure out how old it is, and whether the people who wrote it were actually familiar with the places they were writing about. For example, they recently did a statistical analysis of name frequency in the Gospels and the book of Acts, and found that it matches name frequencies found in Josephus's contemporary histories of the region, and that later apocryphal gospels have name frequencies in them that don't match, which makes it more likely that the Gospels were written close to the time period they are writing about (https://brill.com/view/journals/jshj/22/2/article-p184_005.xml). Neat stuff like that.
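To give a flavor of how that kind of name-frequency comparison works, here's a toy version in Python with invented counts (not the paper's data or its actual statistical method):

from collections import Counter

def name_distribution(names: list[str]) -> dict[str, float]:
    # Normalize raw name counts into a frequency distribution.
    counts = Counter(names)
    total = sum(counts.values())
    return {name: n / total for name, n in counts.items()}

def total_variation(p: dict, q: dict) -> float:
    # Half the L1 distance between two distributions: 0 = identical, 1 = disjoint.
    names = set(p) | set(q)
    return sum(abs(p.get(n, 0) - q.get(n, 0)) for n in names) / 2

# Hypothetical counts, purely illustrative -- not the paper's numbers.
gospels = ["Simon"] * 9 + ["Joseph"] * 6 + ["Judas"] * 5 + ["John"] * 4
josephus = ["Simon"] * 8 + ["Joseph"] * 7 + ["Judas"] * 4 + ["John"] * 5
late_apocrypha = ["Philip"] * 10 + ["Thomas"] * 9 + ["Simon"] * 2

print(total_variation(name_distribution(gospels), name_distribution(josephus)))        # small (~0.08)
print(total_variation(name_distribution(gospels), name_distribution(late_apocrypha)))  # large (~0.9)

The paper's real analysis is more careful than this, but the intuition is the same: writers drawing on genuine local knowledge reproduce the local name distribution without trying to.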
One major source, which is much disputed, is the Testimonium Flavianum which is the part of Josephus' writings which mentions Jesus. Josephus was a real person who is well-attested, so if he's writing about "there was this guy" it's important evidence, especially as he ties it to "James, the brother of Jesus" who was leader of the church in Jerusalem and mentions historic figures like the high priests at that time.
How much is real, how much has been interpolated over later centuries by Christian scribes, is where the arguing goes on - some say it's nearly all original, others (e.g. the Mythicists) say it's wholesale invention.
https://en.wikipedia.org/wiki/Josephus_on_Jesus#The_Testimonium_Flavianum
Tim O'Neill has an interview with a historian who recently published a book about this, arguing for its authenticity:
https://www.youtube.com/watch?v=9L2bE1-pyiU
"My guest today is Dr Thomas C. Schmidt of Fairfield University. Tom has just published an interesting new book through Oxford University Press: Josephus and Jesus – New Evidence for the One Called Christ. In it he makes a detailed case for the authenticity of the Testimonium Flavianum; the much disputed passage about Jesus in Book 18 of Flavius Josephus’ Antiquities of the Jews. Not only does he argue that Josephus wrote about Jesus as this point in his book, but he also argues that the passage we have is substantially what Josephus wrote. This is a distinctive position among scholars, who usually argue that it has at least be significantly changed and added to, with a minority arguing for it being a wholesale interpolation. So I hope you enjoy my conversation with Tom Schmidt about his provocative new book."
I've not watched that video, but in this one Tom Schmidt goes into Josephus' life and connections with Jewish and Roman elites:
https://www.youtube.com/watch?v=8jpEleZV1Pw
The most surprising thing (for me) was to learn about Josephus' rather energetic life, and that Josephus knew people who were one or two degrees of separation from Jesus. It puts a new shine on the questions of the Testimonium's accuracy.
I mean, when the Mythicists claim Jesus never lived, are they also saying that his brother James (mentioned by Josephus and several other documents) was also a fabrication? Mary, Joseph, and Magdalene, all wholly fictional characters? Where does the myth-making and conspiracy start and end?
I think you're well overstating the minimum. Yeah, there was someone with that name around. There aren't any records of the trial though. (There's an explanation for the lack, but the records are still missing.) And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries", though we don't know what the original records said, or even if they existed. Sometimes we have good evidence of their doctoring the records. Often enough to cast suspicion on many where we don't have evidence. Many were clearly written well after the date at which they were ostensibly written.
If you wanted to claim that he was a popular religious-political leader, I'd have no argument. There's a very strong probability that he was, even though most of the evidence has been destroyed. (Some of it explicitly by Roman Christians wiping out the Nazarenes.)
There’s a lot of hand waving there but no specifics. The only possible case where Christians modified anything is parts of Josephus. That’s it.
Yeah, the "hand waving" a valid criticism. It's been decades since I took the arguments seriously, and I don't really remember the details. But when you say " The only possible case ", I'm not encouraged to try to improve my argument. Your mind is already made up.
Would you be encouraged to try to improve your argument for the sake of an interested third party? In a public comment section like this you're never solely writing for the person you responded to, and I for one would indeed be quite intrigued to hear more specifics about your case, as I don't have any particularly strong opinions on the subject already.
> There aren't any records of the trial though.
There are records that say he was executed by local authorities. The specific Biblical details are less well attested.
> And it is *very* clear that "later Christians went back and doctored a bunch of works and made a bunch of forgeries"
Every time I've pushed on these claims it comes down to the equivalent of not being able to prove a negative. It's clearly there in the versions we have, and they make some vague gestures about word choices to show it was inserted later. I'm not aware of a single smoking gun where someone admitted they doctored a record from the time.
I am especially suspicious of this because it's clear a lot of people WANT to believe they are later insertions for basically ideological reasons. But if you have an example that is either a smoking gun, like the evidence we have about the Austrian archduchy title or better, then I'd love to see it.
> There are records that say he was executed by local authorities
Isn't Josephus the first one to mention this? I don't think the Romans themselves left surviving records of an execution they would not have regarded as especially significant at the time.
Sorry, I don't mean judicial records, I mean that various people who wrote about him wrote that he was executed. You're right that there's little that granular, at least AFAIK.
Tacitus is the other big near-contemporary non-Christian source for the crucifixion apart from Josephus. Tacitus's Annals was written in 116 AD, a bit over twenty years after Josephus's Antiquities but well before Christian (and Muslim) scribes had a chance to interpolate anything into Josephus's writings.
But yeah, I don't think there are any direct Roman sources for the crucifixion, nor would we expect any but the most important-seeming executions to be well documented in surviving records. For that matter, we barely have much more documentation of Pontius Pilate's life and career than we have for Jesus. We know about him mostly from Christian sources (especially the Gospels and Epistles), Josephus, Tacitus, and one or two other non-Christian writers who mentioned him. I think the only direct archeological evidence that Pilate existed is one fragment of an inscription (probably a dedication on a temple to the Emperor Tiberius) that names Pontius Pilate as the Prefect of Judea.
For a provincial Roman official of merely equestrian rank, Pilate is unusually well-documented. Although some histories are related to Jesus, not all are. Philo of Alexandria mentions Pilate as "a man of inflexible, stubborn, and cruel disposition" and details other atrocities; Josephus mentions Pilate in relation to Jesus but also in relation to two other actions, both atrocities (the Aqueduct Incident and the Roman Standards Incident).
Oh, this is a good old long-running row. The modern version on one side is, I believe, the Jesus Mythicists and on the other, historians. I don't bother getting into the weeds on this one because I'm no longer interested in yet another bunch of atheists making sneery remarks about religion, but Tim O'Neill has been in a few entertaining fights with them, and has some videos up about "did Jesus exist?":
https://www.youtube.com/watch?v=n_hD3xK4hRY
https://www.youtube.com/watch?v=bTG7czEBVzY
https://www.youtube.com/watch?v=5bO4m-x_wwg
https://www.youtube.com/watch?v=9L2bE1-pyiU
Going back for an example of historical "Jesus the man not Christ the god" writing, there's the famous book by Ernest Renan (again, one I haven't read, mea culpa!) "Vie de Jésus/Life of Jesus":
https://en.wikipedia.org/wiki/Ernest_Renan#Life_of_Jesus
"Within his lifetime, Renan was best known as the author of the enormously popular Life of Jesus (Vie de Jésus, 1863). Renan attributed the idea of the book to his sister, Henriette, with whom he was traveling in Ottoman Syria and Palestine when, struck with a fever, she died suddenly. With only a New Testament and copy of Josephus as references, he began writing. The book was first translated into English in the year of its publication by Charles E. Wilbour and has remained in print for the past 145 years. Renan's Life of Jesus was lavished with ironic praise and criticism by Albert Schweitzer in his book The Quest of the Historical Jesus.
Renan argued Jesus was able to purify himself of "Jewish traits" and that he became an Aryan. His Life of Jesus promoted racial ideas and infused race into theology and the person of Jesus; he depicted Jesus as a Galilean who was transformed from a Jew into a Christian, and that Christianity emerged purified of any Jewish influences. The book was based largely on the Gospel of John, and was a scholarly work. It depicted Jesus as a man but not God, and rejected the miracles of the Gospel. Renan believed by humanizing Jesus he was restoring to him a greater dignity. The book's controversial assertions that the life of Jesus should be written like the life of any historic person, and that the Bible could and should be subject to the same critical scrutiny as other historical documents caused controversy and enraged many Christians and Jews because of its depiction of Judaism as foolish and absurdly illogical and for its insistence that Jesus and Christianity were superior."
Now I have to quote Chesterton again, from 1908's "All Things Considered", where he compares Ernest Renan and Anatole France writing rationalist explanations of miracles:
"The Renan-France method is simply this: you explain supernatural stories that have some foundation simply by inventing natural stories that have no foundation. Suppose that you are confronted with the statement that Jack climbed up the beanstalk into the sky. It is perfectly philosophical to reply that you do not think that he did. It is (in my opinion) even more philosophical to reply that he may very probably have done so. But the Renan-France method is to write like this: "When we consider Jack's curious and even perilous heredity, which no doubt was derived from a female greengrocer and a profligate priest, we can easily understand how the ideas of heaven and a beanstalk came to be combined in his mind. Moreover, there is little doubt that he must have met some wandering conjurer from India, who told him about the tricks of the mango plant, and how it is sent up to the sky. We can imagine these two friends, the old man and the young, wandering in the woods together at evening, looking at the red and level clouds, as on that night when the old man pointed to a small beanstalk, and told his too imaginative companion that this also might be made to scale the heavens. And then, when we remember the quite exceptional psychology of Jack, when we remember how there was in him a union of the prosaic, the love of plain vegetables, with an almost irrelevant eagerness for the unattainable, for invisibility and the void, we shall no longer wonder that it was to him especially that was sent this sweet, though merely symbolic, dream of the tree uniting earth and heaven." That is the way that Renan and France write, only they do it better. But, really, a rationalist like myself becomes a little impatient and feels inclined to say, "But, hang it all, what do you know about the heredity of Jack or the psychology of Jack? You know nothing about Jack at all, except that some people say that he climbed up a beanstalk. Nobody would ever have thought of mentioning him if he hadn't. You must interpret him in terms of the beanstalk religion; you cannot merely interpret religion in terms of him. We have the materials of this story, and we can believe them or not. But we have not got the materials to make another story."
Wait, Chesterton considered himself a rationalist? I wonder what he’d think of the movement today.
I would be interested to know what Chesterton meant by “rationalist”! He definitely doesn’t seem to mean the thing that philosophers mean (ie, the opposite of an empiricist, the kind of person that thinks that logical and rational proof is a better way to know about the world than empirical evidence), but it does seem somewhat compatible with the contemporary cultural usage.
Yeah, reading the gospels you sense he can't be mythical. C.S. Lewis argued that the New Testament would have had to invent the modern realistic novel style to depict him if he was a creation.
Even his miracles are different from those of later Christian saints. St Francis caused a wolf to stop preying on people out of his sheer holiness, and the village accepted it after. Jesus is grabbed by a woman and that is enough to heal her, or he spits on the ground to make clay and cover someone's eyes.
There is a lot of detail and prose there, and myth usually ignores that. Goliath is tall because David trusts in God to beat him; Zacchaeus is small and has to climb up into a tree to see Jesus, and this is incidental to the message.
So there definitely was someone they were all watching, but that doesn't mean the miracles were true.
Of the 24% that didn't answer "Yes" to the question whether the figure historically existed, 14% answered "don't know", and only 10% answered "No".
10% of people who answer No to a question against scientific consensus? That... does not strike me as a high number.
10% of people think positively about Ebola. This is the "not paying attention very well" demographic.
The 15 or so people named Ebola get a bad rap: https://www.familysearch.org/en/surname?surname=Ebola
Lizardman constant https://slatestarcodex.com/2013/04/12/noisy-poll-results-and-reptilian-muslim-climatologists-from-mars/
See also the Panetta-Burns plan, which proves that yes, you can troll people with polls.
Yeah, the idea that Jesus didn't physically exist is odd.
Jesus dies in...one sec...AD 33 under the reign of Tiberius. By the reign of Nero, say 60 AD, Nero's feeding Christians to the lions in Rome. That's living memory. It'd be weird if that was going on and Jesus actually didn't exist.
Kilroy was here. There's substantial archeological evidence for Kilroy, despite the fact that he really doesn't exist.
If soldiers were being executed for "Kilroy was here" graffiti, I would expect those executions to produce a paper trail leading back to an explanation of who Kilroy was. Said explanation might call him fictional.
Similarly, if people were being thrown to the lions for calling Jesus the messiah, I would expect a paper trail. Maybe it's lost to time over the last 2000 years, but I would expect it to have been written.
Depends on who/what is being investigated. "Etched my glass with graffiti" as a cardinal crime might not need to care about what the graffiti was, after all. "Loudmouth preacher/prophet" might just get recorded as that.
Assuming a large amount of literacy, you might get someone wondering "why are they talking about that guy?"
These are dependent on the cultural mores. "Joseph is lying again" is hardly going to raise eyebrows if lying is normative in the culture (which, I'm not saying it is for Rome, but there are cultures where lying is the standard public discourse, and truth is only given with a monetary exchange).
Many of the records were lost during an attack by the Roman army on Jerusalem. Others were lost when a Roman Army under a Christian general wiped out the Nazarenes. (If anyone was an actual follower of Jesus, it was the Nazarenes.)
Here is what Tacitus said
“So to suppress the rumour, Nero falsely charged with guilt and punished with the most exquisite tortures the persons commonly called Christians, who were hated for their enormities.
Christus, the founder of the name, had undergone the death penalty in the reign of Tiberius, by sentence of the procurator Pontius Pilatus, and the pernicious superstition was checked for a moment, only to break out once more, not merely in Judaea, the home of the disease, but in the capital itself, where all things horrible or shameful in the world collect and find a vogue.
First those who confessed were arrested; then on their information a vast multitude was convicted, not so much of the crime of arson as of hatred of the human race.
Their deaths were made farcical. Dressed in wild animal skins, they were torn to pieces by dogs, or crucified, or made into torches to be ignited after dark as substitutes for daylight.
Nero had offered his gardens for the spectacle, and gave a show in the circus, mingling with the people in the dress of a charioteer or riding in a chariot.
Hence, even for criminals who deserved extreme and exemplary punishment, there arose a feeling of compassion; for it was not, as it seemed, for the public good, but to glut one man’s cruelty, that they were being destroyed.”
Tacitus is generally considered a reliable commentator, so even though he’s writing a few generations later (although he was alive during Nero's reign), it’s known he had access to plenty of records.
It could be later Christian interpolation but they were unlikely to call Christianity a disease, a pernicious superstition that was horrible and shameful or that the Christians hated the human race.
There's also Pliny the Younger, writing to the Emperor Trajan around AD 110 to ask "what the heck do I do with these Christians?"
https://en.wikipedia.org/wiki/Pliny_the_Younger_on_Christians
"Pliny the Younger, the Roman governor of Bithynia and Pontus (now in modern Turkey), wrote a letter to Emperor Trajan around AD 110 and asked for counsel on dealing with the early Christian community. The letter (Epistulae X.96) details an account of how Pliny conducted trials of suspected Christians who appeared before him as a result of anonymous accusations and asks for the Emperor's guidance on how they should be treated."
Here is the text of Pliny's letter and Trajan's reply:
https://faculty.georgetown.edu/jod/texts/pliny.html
"Pliny, Letters 10.96-97
Pliny to the Emperor Trajan
It is my practice, my lord, to refer to you all matters concerning which I am in doubt. For who can better give guidance to my hesitation or inform my ignorance? I have never participated in trials of Christians. I therefore do not know what offenses it is the practice to punish or investigate, and to what extent. And I have been not a little hesitant as to whether there should be any distinction on account of age or no difference between the very young and the more mature; whether pardon is to be granted for repentance, or, if a man has once been a Christian, it does him no good to have ceased to be one; whether the name itself, even without offenses, or only the offenses associated with the name are to be punished.
Meanwhile, in the case of those who were denounced to me as Christians, I have observed the following procedure: I interrogated these as to whether they were Christians; those who confessed I interrogated a second and a third time, threatening them with punishment; those who persisted I ordered executed. For I had no doubt that, whatever the nature of their creed, stubbornness and inflexible obstinacy surely deserve to be punished. There were others possessed of the same folly; but because they were Roman citizens, I signed an order for them to be transferred to Rome.
Soon accusations spread, as usually happens, because of the proceedings going on, and several incidents occurred. An anonymous document was published containing the names of many persons. Those who denied that they were or had been Christians, when they invoked the gods in words dictated by me, offered prayer with incense and wine to your image, which I had ordered to be brought for this purpose together with statues of the gods, and moreover cursed Christ--none of which those who are really Christians, it is said, can be forced to do--these I thought should be discharged. Others named by the informer declared that they were Christians, but then denied it, asserting that they had been but had ceased to be, some three years before, others many years, some as much as twenty-five years. They all worshipped your image and the statues of the gods, and cursed Christ.
They asserted, however, that the sum and substance of their fault or error had been that they were accustomed to meet on a fixed day before dawn and sing responsively a hymn to Christ as to a god, and to bind themselves by oath, not to some crime, but not to commit fraud, theft, or adultery, not falsify their trust, nor to refuse to return a trust when called upon to do so. When this was over, it was their custom to depart and to assemble again to partake of food--but ordinary and innocent food. Even this, they affirmed, they had ceased to do after my edict by which, in accordance with your instructions, I had forbidden political associations. Accordingly, I judged it all the more necessary to find out what the truth was by torturing two female slaves who were called deaconesses. But I discovered nothing else but depraved, excessive superstition.
I therefore postponed the investigation and hastened to consult you. For the matter seemed to me to warrant consulting you, especially because of the number involved. For many persons of every age, every rank, and also of both sexes are and will be endangered. For the contagion of this superstition has spread not only to the cities but also to the villages and farms. But it seems possible to check and cure it. It is certainly quite clear that the temples, which had been almost deserted, have begun to be frequented, that the established religious rites, long neglected, are being resumed, and that from everywhere sacrificial animals are coming, for which until now very few purchasers could be found. Hence it is easy to imagine what a multitude of people can be reformed if an opportunity for repentance is afforded.
Trajan to Pliny
You observed proper procedure, my dear Pliny, in sifting the cases of those who had been denounced to you as Christians. For it is not possible to lay down any general rule to serve as a kind of fixed standard. They are not to be sought out; if they are denounced and proved guilty, they are to be punished, with this reservation, that whoever denies that he is a Christian and really proves it--that is, by worshiping our gods--even though he was under suspicion in the past, shall obtain pardon through repentance. But anonymously posted accusations ought to have no place in any prosecution. For this is both a dangerous kind of precedent and out of keeping with the spirit of our age."
Ah, but he did exist. James J. Kilroy was an inspector at the Fore River Shipyard in Quincy, Massachusetts who was in the habit of writing "Kilroy Was Here" in chalk next to the marks he made to indicate which rivets had already been inspected in order to avoid double counting. Some of the marks didn't get erased and wound up in visible but inaccessible parts of the ships, inspiring copycat graffiti. After the war the New York Times did an investigation, found several dozen claimed sources for the graffiti, and concluded that James Kilroy was by far the most likely candidate.
https://www.usni.org/magazines/naval-history-magazine/1989/january/kilroy-was-here
The sketch of a long-nosed bald man peeking over a wall often associated with the phrase doesn't look at all like the real Kilroy. That's from a slightly older British graffiti tradition, originally associated with the phrase "Wot no sugar?" and most commonly known as Mr. Chad. The Chad and Kilroy graffiti traditions somehow merged during the war.
Nominate this for comment of the week.
That’s wonderful.
That's... entirely irrelevant to my point.
Has there ever been a religious movement deifying someone who didn’t actually exist? I’m sure there’s a lot of room for debate about whether the Christ pictured in the gospels was the historical Christ, but it seems like Christianity would have been relatively unique if it was the case that Jesus didn’t exist at all. Especially since there was no shortage of prophets and teachers in Judea at the time.
"Has there ever been a religious movement deifying someone who didn’t actually exist?"
My understanding is that the "Jesus never existed" set explain the rise of Christianity by saying it was based on a grab-bag of Middle Eastern mythology (the famous "Golden Bough" notion of dying and rising demi-gods) and generally St. Paul gets the blame for inventing Christianity as we know it.
I don't recall ever reading a good explanation as to why Saul, orthodox persecutor of the Christians befouling Judaism, turned into Paul the Christian; why would he bother inventing a new religion? And if he wanted one, why bother with this 'Christ' who never really existed in the first place, apart from a bunch of hysterical women and a rabble of hicks from the back country claiming they were his followers?
Paul arguably did invent a new religion by hijacking an existing sect and reshaping it.
Yeah, but why is the interesting question. He was fervently Jewish, so why do a 180 turn on that? If he wanted to reform Judaism or make it more appealing to potential Gentile converts, he could have gone down that road. Instead, he explicitly linked himself with the name of Christ and the Christians.
I think it would have been harder to persuade a more established religious tradition like Judaism to change, rather than a small sect within it.
The main deities of Shinto don't seem to be based on any real people, and are so culturally prevalent that Westerners know about Susanoo, Amaterasu, and Yamata-no-Orochi through osmosis.
I don't believe anyone claimed they were real people. They were worshipped by specific rulers and clans, though, who often identified with them the same way the Japanese Emperor identified with Amaterasu.
Are they claimed to have been real people?
I think so? Not based on historical people, but still distinct beings who have existed in the physical world. Shinto is more embodied than Christianity, even if it's treated mythologically by people.
Maybe like Zeus et al? Unless you're using deifying specifically to mean 'turning a human figure into a god'.
There's a reasonable line of argument that the various gods were often originally based on vague memories of a famous ancestor. OTOH, they were also clearly frequently based around anthropomorphism of some natural phenomenon. And it's my suspicion that often both processes occur in the same god.
But what those "gods" turn into in later generations is quite often FAR removed from the original conception. People tend to shape their gods into something related to their "idealized" self image. (For certain meanings of "idealized".)
This was an ancient idea about myths, that Euhemerus came up with: https://en.wikipedia.org/wiki/Euhemerism
I’m a bit skeptical that it happened much in antiquity, given that a good number of the ancient gods actually seem to be derived from some Proto-Indo-European tradition that preserved versions of the same gods in Vedic Hinduism, Roman mythology, Greek mythology, and even Lithuanian and Slavic mythologies. It would be fascinating if there were real people from 10,000 years ago whose exploits got memorialized into these traditions. And it’s possible that some people from antiquity did get into some of the lists of gods at some point. But I suspect a bigger source of actual gods in traditional mythologies is personification of natural forces.
Which is entirely missing out on what the Greeks were trying to do with gods, which was something new.
It’s hard to say. The patchwork of mythos in Greek mythology suggests some of them come from pre-historic cultures in the region.
Right, that's what I'm thinking. Wouldn't you kind of expect Jesus to have been based on a real person, rather than being entirely invented by Matthew, Mark, Luke, & Co? I'm not saying the guy walked on water, turned it into wine, etc, but it would be pretty odd if there wasn't some dude who was a spiritual leader of some sort who inspired the gospel stories.
Aren't there some traditional saints whose historical existence is questionable? ISTR St Anthony was like that, but maybe all the documentation just got lost....
St Anthony of Valero/Padua? I thought he was actually fairly well documented, particularly as a friend of St Francis. Wikipedia even gives precise dates for his birth and death: https://en.wikipedia.org/wiki/Anthony_of_Padua
The term for that is "euhemerism":
https://en.wikipedia.org/wiki/Euhemerism
"In the fields of philosophy and mythography, euhemerism is an approach to the interpretation of mythology in which mythological accounts are presumed to have originated from real historical events or personages. Euhemerism supposes that historical accounts become myths as they are exaggerated in the retelling, accumulating elaborations and alterations that reflect cultural mores. It was named after the Greek mythographer Euhemerus, who lived in the late 4th century BC. In the more recent literature of myth, such as Bulfinch's Mythology, euhemerism is termed the "historical theory" of mythology."
The very first emperors of Japan and China were divine or divinely-descended, and probably not historical. They might have been based on actual rulers, but there's little or no contemporary evidence and it seems plausible that they were invented by later rulers to give their dynasty more legitimacy.
(I am not a historian and I'm just looking around on Wikipedia.)
Fair. Looking at the Yellow Emperor, who seems to be a mythological Emperor of China, the earliest archaeological evidence of people talking about him seems to be from the ~4th Century BC, while he allegedly lived in ~2690 BC.
Nero was infamously blaming Christians for the burning of Rome ~30 years after Christ (allegedly) died, so it's the difference between a popular cult following someone who had died within living memory, vs. the remembering of an Emperor from thousands of years earlier that doesn't have a continuous archaeological or literary tradition.
Satoshi Nakamoto? : - ) One gets into a bit of a weird world when one constantly writes under pseudonyms, and has different personalities/locations for each. I mean, if you're playing a character "as the author" and hire people "to play the author at conventions" do you really say that the author exists? After all, people never do really meet him.
Someone broke the L. Ron Hubbard Rule, and I'm not sure that results in "automatic deification" but it does result in a religious movement, I'm pretty sure. Needs must, and all that (Los Alamos seemed pretty interested in the new religion, at any rate.)
Please poke holes in and help evolve:
“The cultivation of virtue is equivalent to the collection of evidence about you acting a certain way. You cultivate a virtue in yourself by practicing it, which creates evidence of its functioning in you. The more you do this, the more you grow the body of evidence and thus strengthen the prior probability that you’ll be, e.g., patient.”
If it was *just* about accumulation of evidence, then it seems like this would enable a shortcut, where you don’t actually practice the virtue, but just get extremely strong external evidence that you will practice it. Conversely, it would mean that practicing a virtue in situations you don’t remember would be substantially less helpful at acquiring the trait.
I suspect a lot of virtue (or habit) formation is better understood as getting subpersonal things like “muscle memory” to respond more quickly in particular ways.
“Every time you make a choice you are turning the central part of you, the part of you that chooses, into something a little different from what it was before. And taking your life as a whole, with all your innumerable choices, all your life long you are slowly turning this central thing either into a heavenly creature or into a hellish creature: either into a creature that is in harmony with God, and with other creatures, and with itself, or else into one that is in a state of war and hatred with God, and with its fellow-creatures, and with itself.
"To be the one kind of creature is heaven: that is, it is joy and peace and knowledge and power. To be the other means madness, horror, idiocy, rage, impotence, and eternal loneliness. Each of us at each moment is progressing to the one state or the other.”
-C. S. Lewis, Mere Christianity
I agree with this. It's basically a behaviorist perspective. To some degree one's personality is a narrative construct, e.g. "I am the type of person who doesn't lie". When you act in accordance with the narrative you strengthen it, both through simple Bayesian inference ("I just told the truth, therefore I'll strengthen my priors that I'm the kind of person who does that") and probably through some dopamine reward that gets released when you're proud of yourself for doing something virtuous. The point of moral instruction is to imprint a child's brain with the socially-optimal reward function.
I wouldn't be surprised if one of the neurological differences between humans and chimps turns out to be the ability to self-administer behavioral rewards, like some neural connection between the cortex and the amygdala or whatever. Hardware that lets us program our own behavioral conditioning.
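For concreteness, here's a minimal sketch of the Bayesian reading above as a Beta-Bernoulli update; the numbers are invented for illustration and this is my gloss, not the commenter's model:

    # Treat each virtuous act as one observation updating a Beta prior
    # over "I am the kind of person who acts patiently."
    def update_beta(alpha, beta, acted_virtuously):
        # Beta(alpha, beta) prior + one Bernoulli observation -> posterior
        return (alpha + 1, beta) if acted_virtuously else (alpha, beta + 1)

    alpha, beta = 1.0, 1.0  # uninformative prior about one's own patience
    for act in [True, True, False, True, True]:  # a week of practice
        alpha, beta = update_beta(alpha, beta, act)
    print(f"P(patient next time) ~ {alpha / (alpha + beta):.2f}")  # ~0.71

On this reading, the "shortcut" objection above amounts to feeding the update synthetic observations, which is why habit formation that persists past memory is a real problem for the pure-evidence framing.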
You have to feel it, also. Hollow action is not maximally virtuous.
I believe it's also possible to cultivate true, heartfelt virtue in part by taking "hollow" wholesome actions.
There's plenty of evidence that our actions influence our beliefs. Hence the adage to fake it til you make it.
It seems like a Bayesian way to describe Aristotelian habit formation, but I think it's a pretty vague description of how we form habits. It's fine if you don't want to focus on what exactly virtue is (though this does mean you don't explain the motivation for action) but you also haven't really described how the virtue itself is cultivated. Some things you can/should incorporate are:
- Impact of teaching/guidance on cultivation of virtue
- Impact of social context
- Differences in cultivation rates between people
- How does the body of "evidence" actually grow? To me this phrasing actually seems incorrect as our habits persist past our memories. For example I know I like cherries even if I can't really remember any of my experiences eating cherries
But really this is all very Aristotelian so you could just read Nicomachean Ethics for more
This is attractive. I can think of two situations where evidence for the virtue might skew or be skewed by the virtue itself. 1. Humility - acting humbly does create evidence of being humble and yet dwelling on that evidence seems contrary to humility and likely to undermine it. 2. Self denial - if someone has genuinely devoted their life to helping the poor, they may have experienced a lot of push back from the poor themselves who may not want help or are ambivalent, and push-back from bystanders accusing them of virtue-signalling. So the evidence is equivocal and I feel they need something more than evidence to maintain their self-denial.
Isn't that consequentializing virtue ethics?
As far as I understand it, the point of following a Virtue is that it is axiomatically Good, whether it works or not, independent of the fallout. If you want an ethical framework that you should follow because it's good for you/society, consequentialism is there.
Virtue consequentialism is a thing. I think it’s the best form of both consequentialism and virtue ethics. I don’t understand what could motivate people to believe that certain traits are inherently virtuous regardless of what kinds of consequences they tend to bring.
I’m trying to understand what is happening inside a person when they cultivate virtue - not ask whether it’s good. Interested in facts here, not values.
That said, this “independent of the fallout” part isn’t true. The virtue of wisdom is identical to what we today call rationality: assessing likely outcomes. Virtue ethics basically says you’re constrained by far more than making bad predictions, and you need the capacity to do much more than make good predictions about outcomes.
Has anyone been using Perplexity’s Comet browser? I’m currently on the waitlist for access, but was wondering how useful it is.
I use Perplexity quite often as a search engine, but wasn't aware of the browser.
I petition for Scott to generate a new open thread image. The current one always makes me think of thick oil paint smeared around a window on SpongeBob's house.
Has anyone here watched Eddington?
It's a "neo-western", portraying the early days of the pandemic in a small town in New Mexico in the context of the left-right culture war.
It made me laugh more than I thought it would. I think it does a good job portraying both sides as they see themselves and as seen by the other side. At the same time it captures the confusion of the first days and the spectrum of people's reactions to it and all the little tragedies that turned into big ones with time. It really captures that weird time that now, sitting and drinking a good cappuccino and watching kids load up on the big yellow summer camp bus, seems like pure fantasy.
I thought it was solid - overall probably 7/10 stars.
9/10 stars for the first half, which is a fantastic time portal to the paranoid and chaotic environment of 2020. Where I *thought* it was going was an escalating destructive conflict between the Sheriff and Mayor, each viewing the other as overly paranoid and tyrannical about a threat (protest violence in the case of the Sheriff, COVID in the case of the Mayor) that wasn't actually present in their small town. That conflict then pits the two community leaders against each other, driving a wedge in the town itself as people line up against neighbors they've lived alongside for their whole lives for the sake of things happening in Minnesota, New York, and San Diego. And all along, the whole conflict itself isn't even really about COVID or riots, because although the Mayor and Sheriff may each think of themselves as fighting a monster in a righteous political cause, in their hearts the true driver of their anger at one another is just an interpersonal feud revolving around the Sheriff's wife. They've put a political mask on that conflict to make it respectable and justify it to themselves, and tragically that mask enables it to spread and infest their whole town. That was the vibe the film had for me through the first half, and I very much loved it.
Then it took a major pivot, and in my opinion, a modest step back, and became a sort of nihilistic character study of a man making ever worse decisions as he confronts, and is emotionally crushed by, his total lack of control over the world around him. Still pretty good, but the kind of darkness-all-the-way-down story that is very much an acquired taste. Still had me on for the ride, though. 7/10 through the 3rd quarterish.
It really jumped the shark for me at the end, though. To try to express it without spoilers, it's on this dark meditational ride through the west, but has this whiplash-inducing "and then the space aliens show up!" sudden introduction of a very out-of-nowhere addition to the conflict. It's like you're on this nihilistic ride about a man struggling with his insignificance and lack of control in a world of overwhelming complexity... but then lizardmen show up with a mind control ray, and now you're on a nihilistic ride about a man struggling with his insignificance and lack of control in a world where lizardmen use ray guns to control his thoughts from their lair deep in the bowels of the Earth. The theme of powerlessness is still fundamentally present in the new narrative, but it's a sharp turn to say the least. 3/10 down the stretch.
Still, overall a solid movie that I found worth the cost of the ticket. Endings can be hard to stick.
New Mexico actually WOULD be an appropriate place for space aliens to show up. I don't know how close Eddington is supposed to be to Roswell.
I watched it on its opening weekend. I reviewed it here: https://thepopculturists.blogspot.com/2025/07/this-weekend-in-pop-culture-july-11-13_01209470237.html#comment-6740114886
If AI takes off, and revolutionizes the working world (I'm making an assumption right now, that we're not talking about evil AI that will destroy humanity or anything like that, just yet) will we need to switch from our current economic model to a different one? For example, if so much work is automated such that people can't get good jobs anymore, how will people be able to pay for their expenses? Will currently-bad jobs end up paying more? Will we need to instate a UBI? Will there be enough resources to give an amazing UBI to everyone? How will the switch happen over time? Will there need to be revolutions, or will there be so many resources that the switch to a UBI (or something) will happen more peacefully? Whatever you envision happening, how do you see it playing out over time?
One possibility is a transition away from the employment economy that has dominated the past two centuries in the UK and Belgium and shorter periods in other parts of the world. The fact that employment was the method for such a small part of history makes it very plausible that it will be replaced again.
But I think it’s also possible that the employment mechanism is more resilient than we think - there will be large transition costs, comparable to what goes on in countries experiencing a civil war, but with people inventing new productive things that are worth doing now that you can supplement your labor with AI, even as the old things people used to do for employment are easily done by far fewer people working with AI. On this picture, there’s a lot more people starting new businesses and otherwise being entrepreneurial - eg, Disney needs a lot fewer employees to make a film, but also some random student who has a great idea for a film can now bring it to fruition themself with the help of a lot of AI, and similarly for new product ideas. (Interestingly, the rate of entrepreneurship took a big jump up in 2020, and what I’ve heard suggests that it has only continued to rise since then: https://www.statista.com/statistics/693361/rate-of-new-entrepreneurs-us/ )
> Will we need to switch from our current economic model to a different one?
No.
> For example, if so much work is automated such that people can't get good jobs anymore, how will people be able to pay for their expenses?
Automation doesn't create unemployment over the long run. So that won't happen, and we won't need to deal with it.
> Will currently-bad jobs end up paying more?
Yes. This is what increases in productivity lead to. You get paid more in real terms because you earn more and what you earn buys more.
> Will we need to instate a UBI?
No.
> Will there be enough resources to give an amazing UBI to everyone?
Depends on your definition of "amazing UBI." We already distribute more resources for free to the poor than many countries earn on average. This is not generally thought of as a UBI but that net could certainly grow.
> How will the switch happen over time? Will there need to be revolutions, or will there be so many resources that the switch to a UBI (or something) will happen more peacefully?
There will be no such switch nor will it be needed. You're assuming a premise here.
> Whatever you envision happening, how do you see it playing out over time?
Similar to other gains in productivity. There's nothing about even the most optimistic realistic predictions for AI that looks different from the gains in productivity caused by things like industrialization. We're looking at maybe a few percentage points of better productivity growth maximum. That's a huge deal but we've seen countries have decades of 10+% and it didn't lead to the doom some AI types want to claim.
I'm going to challenge this. Your scenario seems plausible to me, but it's not the *only* plausible scenario. In particular, while the standard theory of comparative advantage says that total output should only go up under introducing AI, it says nothing about how total output is distributed between wages and returns on capital, and one can certainly conceive of scenarios where total output goes way up, but wages go way down.
This guy's been writing papers that seem insightful to me https://www.korinek.com/
Big difference based on whether the 'complexity of cognitive economic tasks' is bounded or unbounded. If it's unbounded, then you get the 'usual' historical pattern where output and wages go up. However, if the complexity of cognitive economic tasks is bounded, and if AI can saturate that bound, then you can get a scenario where market-clearing wages abruptly collapse to, more or less, the price of electricity, and where 'total output' goes up, but almost all of the output becomes return-on-capital, with wages dropping to near zero.
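A toy sketch of that bounded case, with every number invented purely for illustration: once AI capacity alone can cover the whole (bounded) task demand at its own marginal cost, AI sets the market-clearing price and humans are priced out.

    # Hypothetical numbers only: bounded total demand for cognitive tasks
    task_demand = 100.0   # tasks the economy wants done
    ai_capacity = 150.0   # AI alone can saturate that demand
    ai_cost = 0.5         # AI marginal cost per task (~electricity)
    human_ask = 10.0      # lowest wage a human will work for
    # If AI saturates demand, the clearing wage drops to AI's cost;
    # otherwise humans still set the margin
    clearing_wage = ai_cost if ai_capacity >= task_demand else human_ask
    print(clearing_wage)  # 0.5 -- wages collapse toward the price of electricity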
Of course, this kind of analysis is really a bit spherical-cow-in-a-vacuum - ultimately this is a political economy problem, and our current political system seems unlikely to tolerate an economic system where, to exaggerate for effect, Sam Altman and Dario Amodei own the entire economy while everyone else starves. Then again, it could be argued (somewhat plausibly) that universal-suffrage democracy was downstream of an international security environment where the ability to mobilize mass armies was critical to state survival, and that once we transition to largely robotic armies, this might lead to states that look very different. So maybe (again exaggerating for effect) Sam Altman ends up as world dictator with his robot armies enforcing the Pax Altmanica. Really, a lot turns on just how super ASI is...
Thanks, I'll read the papers. My prediction is, in crude terms, that AI will be broadly like the internet, computerization of industry, the steam engine, etc. In other words it will significantly boost productivity but not be different in kind from those innovations. I don't think AGI changes this analysis except it will be an even bigger boost to productivity.
Of course, you can imagine it will be otherwise. Killbots or unlimited superintelligence or something. And that's the level of argument from a lot of theorists, so, to be frank, I'm being flippant. Because if AI is really that ubiquitous the rising productivity itself solves all problems. I do not in fact think that's the likely scenario. But it suffices to rebut a certain kind of lazy AI skepticism because I can fully grant their most extreme scenarios and it actually helps me.
If AI is instead a normal technology then it can't be as world bending as people want it to be. But also that implies the problems you're bringing up, of distribution, precisely because it will not be dramatically disruptive as they're imagining. That doesn't lead to problems of technological unemployment or people not being able to have jobs. But it could certainly lead to short term dislocations and long run new equilibria that may have unexpected or negative effects. But the lack of a rapid destructive takeoff means I trust an active system to adapt.
As to the idea of cognitive tasks being bounded, that strays into the territory where the extremity itself solves all problems. If you are proposing cognitive tasks are saturated you are by definition implying limitless and cheap cognitive function that is universally available. That won't cause a collapse in wages. That will cause a massive increase in wages through deflationary effects. That implies a world where everyone has a genius recruiter whose full time job it is to find them the best job, a full time doctor whose job it is to track their health obsessively, a full time shopper to find them the best deals, etc etc. If they don't have that then there are still undone cognitive tasks.
I think it's unlikely that AI gives private individuals or non-democratic governments superior military capacity to traditional states. Though it may allow some accumulation of power that allows democratic overthrow. I'm not even sure about that though. I think a lot of the anti-democratic pressure we're seeing is non-technological.
> Automation doesn't create unemployment over the long run. So that won't happen so we won't need to deal with it.
It sure made a lot of horses unemployed.
As the commentator below implied, there's no certainty that this new type of automation will allow humans to move up the value chain. From agriculture to factory work to office work, there was a previous path to increasing value per employee and thus wages per employee. Even with that, retail employees are barely earning subsistence wages in large cities. Some people moved up the value chain, many moved down.
Future automation will replace well paid office work before it replaces manual labour, which will decimate the existing office based middle classes. ChatGPT informs me that at the broadest definition of office workers (ie all admin) that's 50% of the workforce in the US and close to 65-70% of all salaried income.
These jobs could well be replaced by better jobs, but what exactly would that be, and why couldn’t AI do it?
>It sure made a lot of horses unemployed.
In fact it didn't create any horse unemployment as horses are property and so never employed or unemployed. And it did not create a significant drop in the horse ownership rate, just in the horse population. The remaining horses live significantly better than their ancestors. But I get it's a slogan that hasn't really been thoroughly thought out.
> From agriculture to factory work to office work, there was a previous path to increasing value per employee and thus wages per employee
Again, this point is logically incoherent. It simply does not make sense. If AI doesn't increase productivity then it's inefficient to invest in it. You can't simultaneously have it so efficient it replaces humans and yet not create significant economic benefits. If it does increase productivity then it makes everyone richer, which is why retail clerks today live significantly better than retail clerks a century ago and even significantly better than upper class people a century ago. There was a reshuffle of social status, but AIs don't compete for social status anyway.
> Future automation will replace well paid office work before it replaces manual labour, which will decimate the existing office based middle classes.
Okay. So in that scenario there won't be technological unemployment. I assume that work was valuable (if it wasn't we could improve the economy simply by not doing it). If AI does more of that work and to a better quality while not being able to do physical labor then that's a world where humans handle physical labor (presumably assisted by tools, machines, etc) and have access to infinite cheap services of every kind. That is not a dystopia and is an improvement over the current world.
> These jobs could well be replaced by better jobs, but what exactly would that be, and why couldn’t AI do it?
Note how you're shifting the burden of proof from your claim (AI is totally unique) to my claim (AI will function like all previous technological advances). I do not think the burden of proof is mine. Further, even if AI is strictly superior at every job this will not create unemployment unless it is so abundant that it can fill all demand for jobs and it is cheaper. If it is limitless, cheap, and superior to humans at all jobs then that's a post-scarcity society, not a dystopia.
>> If it is limitless, cheap, and superior to humans at all jobs then that's a post-scarcity society, not a dystopia.
This really depends on how the society is organized and how the output is allocated (which are questions strictly speaking 'outside economics'). It could be a post-scarcity society OR a dystopia. Or, conceivably, both.
I mentioned elsewhere that full government control or monopolies could disrupt this process. But those are not features of the current economic system and so don't need radical reform to avoid.
I do think that you end up with two cross-cutting bets. If you think the danger is from rogue AI wiping out humanity, you want centralization. If you think the danger is from someone monopolizing a new central economic resource, the danger is from centralization. I'm more in the latter camp.
Anyway, you can imagine a scenario where infinite AI robots can do any task for $5 an hour with a human minimum wage of $10 an hour and where there is no welfare whatsoever, such that anyone without a few thousand dollars of capital is permanently locked out. But my suspicion is that we would divert the tiny part of economic production necessary for welfare. Because we do that today and I don't see why AI would make us less generous.
Yeah, the world where there is a single dominant ASI that's really super (but still under the control of a human owner) looks very different from the world where there is a broad ecosystem of AIs of roughly comparable power. I think the latter is more likely, but I don't have an argument that the former is impossible. [Also, if we end up with the former scenario and the ASI is sufficiently super, then we could abruptly find ourselves living under a dictatorship of the ASI owner, and that world could look very different in terms of how it's organized based on the whims of said owner.]
>automation doesn't create unemployment in the long run.
This has been true _so far_. There are compelling arguments that AGI+robotics would change this. Should we blindly believe these arguments? Of course not. But when the rebuttal to them isn't any better than "That's never happened before", we also, in my opinion, shouldn't be quite so confident that it absolutely, definitely, won't happen this time.
In the absence of a minimum wage, I think you would have a stronger argument that it won't, because no matter how good and cheap AGI is, it would always be worth hiring humans at a low enough wage. But it's also possible that the wage at which it would be worth hiring a human instead of an AGI might not be a "livable" (in the strictest sense of the word) wage.
But _with_ a minimum wage, it is at least possible that AGI + robotics would _always_ be a better/cheaper option than hiring a human.
There are paths where this might not happen: we might decide to desire specifically human made things in a way that we are willing to pay significantly more for them, as one example.
So there are cases for why it might not happen, but I have not yet read an argument where, assuming both AGI and good robotics, that human employment is default guaranteed in the absence of any special conditions.
The core issue is your second point about increasing productivity of humans. There was a time when computer chess engines could always beat a human, but a computer + human team could usually beat a computer alone. This was the period of "productivity enhancement". That time is gone. A human can no longer improve the performance of a chess engine, and a lone computer will generally beat a computer + human team (assuming of course that the human is actually contributing anything). AGI + robotics is the first technology that has the _potential_ (not guarantee, but potential) to, in a general and widespread manner across all domains, make humans no longer productive in the system. Yes, a human would be _more_ productive with the AGI + robot than without. But the AGI + robot might be even more productive on its own than it is with a human partner/overseer, the same way that chess engines became. If this happens, then no, human wages won't rise from increased productivity, and no, new jobs won't be created (for humans).
> There are compelling arguments that AGI+robotics would change this.
You can assume that AI will radically change in its effects compared to what AI currently does and compared to all historical precedents. However, this is not a rigorous belief. It may be compelling but many people find many things compelling for a variety of reasons. Basically, you're arguing: "AI will become different than every other technological innovation, including how AI itself has been for the past few years." This does not have strong evidence and it requires exceptionally strong evidence.
> But it's also possible that the wage at which it would be worth hiring a human instead of an AGI might not be a "livable" (in the strictest sense of the word) wage.
If AI raises productivity then it decreases the amount of money you need to earn to have a living wage. Because it makes everything cheaper. If AI doesn't raise productivity then humans remain competitive. This is a simple logical contradiction in this ideology. They are imagining a world where AI decreases costs and increases production but does not decrease price levels. This is only possible if there are AI monopolies (whether private or government controlled) doing rent seeking. Otherwise competition produces downward pressure.
In fact what AI would do in that case is be hugely deflationary which would make everyone richer. And would likely necessitate the purposeful creation of inflation to absorb the excess production. But existing welfare can be used to handle that.
> So there are cases for why it might not happen, but I have not yet read an argument where, assuming both AGI and good robotics, that human employment is default guaranteed in the absence of any special conditions.
If you assume that we have unlimited AI and robotics then you will produce unemployment. But only in the sense that every person will have a personal army of AI and robots. If there is any unmet need that AI or robots can't meet that is an opportunity for human employment. I guess technically everyone having their own robot army and living on its produce is unemployment but it's not a problem.
I also don't think that post-scarcity is actually coming. But I'd welcome it if it did.
> Yes, a human would be _more_ productive with the AGI + robot than without. But the AGI + robot might be even more productive on it's own than it is with a human partner/overseer, the same way that chess engines became. If this happens, then no, human wages won't rise from increased productivity, and no, new jobs won't be created (for humans).
Humans being crowded out of specific jobs doesn't create long run unemployment. It only matters if they are crowded out of ALL jobs. They can only be crowded out if AI+robotics are better than humans. Not just individually (ie, a robot outcompetes a human) but that the robots are so unlimited they are preferable in all cases. They also have to be so abundant you never run out. If that is the case we are in post-scarcity. If it is not then there will still be jobs for humans.
There's also no sign this is happening. Current best estimates are these are providing 1-1.5% productivity growth per year. That's gigantic but it's not a society ending disruption.
This is just choosing to disbelieve in AGI (in the strictest definition of what that means). Which is fine, I don't think that's an insane thing to believe. But I think it's important to be clear that that is what the assertion is based on. A lot of people (especially around here) disagree with you in that belief.
Also, when the comment you were replying to specifically asked about cases where AI revolutionizes the working world, you start to hit a narrower and narrower path where AI improves enough over current capabilities (I don't think current AI will "revolutionize" anything, although it will have impacts), but doesn't get to fully generalized intelligence.
No, it isn't. I can fully grant that AGI will exist and still believe it will create no long run unemployment. My point is that even granting the premises of AGI it will still not generate structural unemployment unless it meets two standards:
1. It must be cheaper than humans for all tasks such that it is never economically viable to use a human.
2. Its supply must be effectively infinite such that all AI and robotic capacity can never be fully occupied. Because if it is fully occupied then humans can be used for additional tasks.
Only at that point will there be no chance for humans to work and contribute economically. This is true even if AI is better than humans at all tasks.
But if 1 and 2 are true then we are definitionally in a near post-scarcity economy because we are in a world where there is an infinite supply of capacity which is extremely cheap.
>1. It must be cheaper than humans for all tasks such that is never economically viable to use a human.
>2. Its supply must be effectively infinite such that all AI and robotic capacity can never be fully occupied. Because if it is fully occupied then humans can be used for additional tasks.
(1) is sufficient to remove humans from all jobs without (2) as an additional condition. "Because if it is fully occupied then humans can be used for additional tasks." would not be economical if (1) is true.
My best guess is that we will get AGI (a potential functional replacement for humans in all economic roles), _possibly_ only economical for displacing 1st world workers initially, then falling in cost until (1) is true globally, for any worker at any living wage anywhere.
If we are lucky, and AGIs (and ASIs, if they are feasible), stay under human control, then a sensible way to run such a society is, as beleester noted in https://www.astralcodexten.com/p/open-thread-392/comment/139743176 , you could just have money flowing from consumers to AI companies (including AI-driven factories, farms, etc.), money flowing in taxes from AI companies to the government, and money flowing from the government to the citizens as UBI.
Hypothetically, suppose that GNP quadrupled in a fully AI economy. Say that all flows into AI companies. Say half of that goes into taxes and the other half goes to owners of the AI companies (who spend part of it on AI company products and a little on human servants, if they really want them). Of the half going to the government, say half goes to government purchases (mostly from AI company products and a little on humans doing something (beating dissidents or something else status/power-seeking/human-specific) and the other half goes to UBI to citizens. This would leave all citizens with the same standard of living as today, except that they would not have to work.
If we _got_ a purely AI economy under human control, and got a factor of 4 increase in GNP from the technical advances connected with the shift, and can't manage to do something like this, because we have a job-centered ideology, then we are idiots, and can't take advantage of a bonanza on a silver platter.
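To make the arithmetic of that hypothetical explicit (the shares are the ones stipulated above, not forecasts), in units of today's GNP:

    gnp_today = 1.0
    gnp_ai = 4.0 * gnp_today      # stipulated: GNP quadruples
    taxes = 0.5 * gnp_ai          # half of all AI-company inflows taxed away
    owners = gnp_ai - taxes       # 2.0 units kept by AI-company owners
    govt_purchases = 0.5 * taxes  # 1.0 unit of government purchases
    ubi = taxes - govt_purchases  # 1.0 unit paid out as UBI
    print(ubi == gnp_today)       # True: the UBI pool equals today's entire GNP

So the stipulated flows do deliver every citizen today's standard of living without work, as claimed.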
>Will currently-bad jobs end up paying more?
No, they will pay less, because presumably the supply of workers in that segment of the job market will increase.
>Will we need to instate a UBI? Will there be enough resources to give an amazing UBI to everyone?
Yes, and yes. But the UBI will be in the form of government employment (and there are certainly lots of potential jobs, from companions for elderly people to teachers and teacher assistants* to free lawyers for people who currently do not get free lawyers, etc etc). And that will itself spur demand for goods and services https://www.investopedia.com/terms/f/fiscal-multiplier.asp
*There will always be some percentage of students who, at the very least, will need personal attention to stay on task. If today we employ two teachers to teach two classes of 30 each, tomorrow we could have one teacher supervising an AI classroom of 55, and five teachers giving individual attention to each of five students.
But why wouldn't you just let the AI give individual instruction to students as well? We're assuming that at this point, AI is more competent than the average public school teacher, so it doesn't make any sense to let human teachers teach them...
AI will indeed be giving individual instruction. But, as I said, "There will always be some percentage of students who, at the very least, will need personal attention TO STAY ON TASK." Not to instruct them.
Edit: The point is that AI + teachers will likely lead to more learning than AI alone, even if AI alone > teachers alone.
And students, esp younger students, have non-academic needs. AI can't help a kid who just wet his pants, or is being bullied, etc.
it will be like retail, but for everyone.
retail has had huge productivity growth with resultant loss of staff. The old British TV show Are You Being Served? is a good description of older department-store retail: many full-time staff, expansive facilities (multiple floors, with an elevator even) and lots of goods.
look at a gamestop or dollar general now and you have a just-in-time economy run mostly on part-timers with variable schedules, maybe 3 or fewer to a store in total. if you work in retail now you are not able to be independent; you live with family or may even be homeless and working (when i worked at Kohls three people were on the truck crew)
this will be everyone's future. everyone will live together, only the rich can escape the house, while a lot of people just live where they were born or with their parents till they die. no ubi, maybe even less welfare.
There are so many branching points that could radically alter things. That said, here's my hunch on a centroid, predicated on maximal change:
- A) AI that can write code well enough to replace most developers can invest well enough to replace most investors, leading to mass white-collar layoffs. This is a death sentence for giant cities, and a big deflationary risk for the economy
- B) massive gains in efficiency lead to lower costs of production, also a deflationary risk for the economy
- combining A) and B), you'll have a significant deflationary impulse: lower cost of production plus mass layoffs. The economy cannot handle deflation and the money printer will go BRRR. End result will likely be printing money which goes towards a basic income to offset social costs of large scale unemployment.
The combination of A&B will drive much more demand to live in places with lower cost of living. Big Cities will become much more dangerous, less pleasant places to live, with fewer jobs and more crime.
- we will see federal subsidies for energy production + manufacturing (since both are dual-use technologies) and something of a rural + small-town renaissance
- employment will no longer be the default economic arrangement - because AI makes a better employee but likely a worse marginal risk-taker and human-relationship cultivator. Businesses will want more equity-partner type arrangements, where a human (or small group thereof) overseeing an ai-driven business unit owns and is accountable for all the risks in exchange for a cut of rewards. The more the commands can come from the top, and the less understanding of value is necessary - the easier it is for humans to be cut out of the loop.
Instead of mass employment, you'll see much more entrepreneurship as people move with their basic incomes to smaller towns + cheaper CoL areas. Explosion of craft beers, things like games, entertainment, but also therapy, personal training, dietitians, etc. We will see AI enable way more small-scale entrepreneurship than growth for existing companies. AI will suck at "this new product will get you the whole market" because chaos and unpredictability are still a thing; existing ventures won't benefit from cheaper economic experiments, because reputation risk is existential for them but nonexistent for new players. Coke can only use AI to make coke cheaper to produce or maybe make marketing dollars more efficient; it's not like AI will have everyone drinking twice as much coke as before. But a new drink with the right mix of protein+probiotics sold in a specific location - now that becomes viable, at a small scale. So if the economy is an ecosystem i think AI leads to an explosion of small-scale ventures, much more so than growth of existing big ones.
I think cities will be the big losers here, as their raison d'etre gets killed. The "winners" are smaller to mid-sized towns. There's another shift that will happen, with the newly printed money offsetting decrease production costs + unemployment. Governments at all scales will get grabbier, taxing AI production and leading to an increase in grey/black market economies. The end result is much higher price of bitcoin, as value from both equities and land drains into bitcoin, since the first two are easier and cheaper to tax.
So we're talking...widespread human-level agents but no ASI inventing nanomachines or anything?
In that case, the current economic system will keep working but probably 6 million people will permanently fall out of the workforce. This has happened every time we've had a big tech jump, including automating a lot of our manufacturing in the 80's. Some variation of this graph, which is Male Employment Rate, Age 25-54, where in every recession there's a fall in the employment rate and then a rebound but never quite as high as it was.
https://fred.stlouisfed.org/series/LREM25MAUSM156S
What should happen is that old jobs go away and we discover new jobs. I can kinda see that now, I know a guy who was a programmer, found a niche company doing Java development, didn't move, he got laid off, and now he's roofing houses. Nothing wrong with that and it's a skill we need. But every time there's a disruption, not everyone gets a new job for some reason. That's a point of debate.
But if a bunch of service jobs go away...we used to all be farmers, then we all worked in manufacturing, now we're all in services, something new will come up.
The answer is some kind of communism. Give people money for working for the state in some capacity, rather than just giving money for nothing.
That said, nobody seems to understand where the money for UBI is coming from. To my mind it has to be printed from nothing, but I’m open to suggestions.
i've mentioned in other places we already have two models, no communism needed: the armed forces, and prison.
the military is pretty much what you describe, and distributes your UBI.
Having everybody in the army is kinda communist. Although you may run out of wars. So maybe just the army without the wars?
not communist, there is no equality or "each according to his needs." just commanders, soldiers, and the cause. the civilian conservation corps were "army without the wars."
> each according to his needs
Oh, no actual communism was like that. There were plenty of wage disparities.
If the economy is being wholly run by AIs, then whoever owns those AIs is going to be very, very, very rich - rich enough that you could easily tax a fraction of their income to provide UBI for everyone else.
Surely the state would eventually just seize the AIs. If they're that dominant they'd be essential for warfighting and having them in private hands would be the equivalent of allowing someone to maintain a private nuclear arsenal.
This is an economic fallacy. Wealth isn’t just gold bars in a vault — it’s claims on future goods and services. Stocks, bonds, houses — they all derive value from someone, somewhere, being willing and able to pay for things in the future.
This is true of AI as well; in fact, if the rest of the economy collapses, I'm not sure what the AI market would be.
Yes. The AIs are producing goods and services, and we are giving people a claim on some of those goods and services. (Or rather, transferring the claim of the AI's owner to other people who need it more.)
As far as I understand this shouldn't lead to inflation since the amount of money in circulation still matches the amount of "stuff" being produced, it's just that money is primarily circulating through the government (going out to unemployed people, and back in through taxes) rather than being directly exchanged between citizens.
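For what it's worth, that intuition matches the standard quantity-of-money framing (my gloss, not part of the parent comment). With the money stock M and its velocity V roughly fixed, the quantity equation

    M * V = P * Q

says that if real output Q rises, the price level P should fall rather than rise; taxing and redistributing changes who holds the money, not how much of it exists.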
> The AIs are producing goods and services, and we are giving people a claim on some of those goods and services.
How exactly are you doing that? That’s what I’m asking. To induce that demand you can’t tax the “wealth” of the AI companies, which won’t exist anyway unless there are other companies to buy the product, and those other companies won’t exist unless there’s demand from consumers, who won’t be able to buy anything as they won’t be employed.
So demand needs inducing somewhere, and there’s nothing to tax.
The only thing that give paper money value is that the government demands that you pay them paper money in taxes, or they'll take all your stuff, and perhaps take you also. So EVERYONE needs money (if they have any possessions).
Also, remember "eminent domain". The government can just take anything it really wants to take, and pay whatever it decides is a "fair value" for it in paper money.
Maybe to further the question, if AI progresses to the point where it can handle most jobs, do you still have software companies? If it is just one person at the top directing a bunch of AIs, then what moat do you have? What stops OpenAI from cutting out the middle man and also replacing the person directing the AIs? Why not just have AIs all the way down?
In the extreme, knowledge and the ability to work lose all value. The only remaining thing is what assets and hardware you have that you can sell or rent to the AIs.
We're probably going to hit the pitchforks and burning datacenters stage way before that, however.
Well, I am now anticipating the AIpocalypse coming much sooner than I expected, because my very much non-techie boss has recently used ChatGPT - "it's so convenient for emails!"
I have no idea who told her about it or showed her how to use it, but if she's using it, then everyone will be.
In the short-term? I think businesses will use it to reduce overheads by layoffs (voluntary or otherwise) and/or simply not hiring on new human staff. The knock-on effect of that will be more people looking for fewer jobs, until (we are being told) all the new jobs magicked up by AI open up and we all have shorter working hours and way more money.
Yeah, I'll believe that last when I see it and not a second before.
It seems like people who aren’t very fluent readers, without a reader’s grasp of the mechanics of writing (speaking of form, not of content), are those who like the output of LLMs (and before that, those writing programs?).
If your boss is writing an email with AI, it’s pretty certain it was not an email that needed writing.
There's a good amount of stupid emails that have to be written in the job, a lot of "I got your message and I read it thanks" acknowledgements of announcements from various government agencies and so on. So I could see her getting the AI to précis the long-winded "we're going to be changing our name from the Agency for Counting Staplers to the Stapler-Counting Agency" emails and then writing up an answer to that.
Currently, she gets me to read the "name change about stapler counting" emails and tell her what needs to be done about it, if anything. I am now replaceable by a machine! 😁
I guess I find it hard to believe that the effort of involving AI in such trivial matters would not be a waste of time for anyone who *belongs* in such a position.
Hopefully she dresses really well or is good at ordering things off the internet for the office or something.
We are a small operation, providing not-for-profit services (the main childcare centre does charge fees to parents, but the vast majority of those are subsidised by various government schemes, which means a ton of interaction with government bodies).
So she does a lot of work that is necessary to keep the place going, she just delegates a lot of the "read this because I don't have time to do so" emails to me and oddly doesn't seem confident when writing emails/certain letters herself. She's perfectly capable of doing so, and does do a lot of her own emails, which is why I was so astounded to find out she was using ChatGPT!
Myself, I find it easier to write the dang email myself as it's quicker than trying to run it through one of the multifarious AI versions popping up (I wish Copilot would curl up and die, for one, as I'm fed-up of Microsoft trying to force it on me every time I use Office which is now rebranded as Microsoft 365 Copilot) but I'm a wordcel. My boss is more a numbers person 😁
Makes sense.
And I’m not unaware of the need for help in writing real things. My husband is the last American Male English Major, and he is constantly handed all and sundry writing assignments in his completely-unrelated-to-that-major job.* But his coworkers do not struggle with email.
*At least among those who could not possibly remember the origins of his work.
If anything, hours will probably become longer rather than shorter as competition for white collar jobs intensifies.
> Yeah, I'll believe that last when I see it and not a second before.
Same. I'm far from a communist, but I do think it would be the right thing to do if there really are more resources and fewer jobs. But the transition will be a nightmare to navigate, and the interim a really detrimental time.
Yeah, what I find really hard to believe is all the bright-eyed optimism about "and the companies that own the AI will be *sooooo* rich that taxing just a fraction of their riches will pay for the rest of us", much less the "they will be *soooo* rich they will gladly share that with the employees!"
No company that makes moneybags profits ever wants to give it to the employees, much less pay it in taxes (even Ben and Jerry gave up on the original hippy idealism around CEO salaries). Why else do you think my government was trying to *refuse* a €13 billion windfall from Apple? They did not want to be killing the golden goose (or rather, the golden goose deciding to fly off to another country with an even nicer tax regime for multinationals).
on the plus side given their increasing autism at least the shrimp will have their utillions maximized.
Whether they want to pay taxes or not, they will. The only other certainty is death.
>No company that makes moneybags profits ever wants to give it to the employees, much less pay it in taxes
Whether they want to pay it in taxes is rather irrelevant, isn't it? Note that current federal revenues are 17 percent of GDP. https://fred.stlouisfed.org/graph/?g=ockN But the corporate tax rate is 21%, and the top marginal rate is 37% https://www.nerdwallet.com/article/taxes/federal-income-tax-brackets
So, if AI leads to a transfer of income to corporations/rich folk, total federal tax revenues could easily increase even if tax rates do not increase.
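A toy illustration of that mechanism, where the 17%, 21%, and 37% figures come from the comment above and the income shares are made up:

    def revenue_share(corp_share, top_share):
        # Corporate profits taxed at 21%, top-bracket income at 37%,
        # and assume the residual is taxed at roughly today's 17% average
        other_share = 1.0 - corp_share - top_share
        return 0.21 * corp_share + 0.37 * top_share + 0.17 * other_share

    print(revenue_share(0.10, 0.10))  # ~0.19 of GDP
    print(revenue_share(0.30, 0.20))  # income shifts upward -> ~0.22 of GDP

Revenue as a share of GDP rises even though no rate changed.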
I’m looking for more examples of a thing that I can’t find a good name for but is kind of “nominative determinism for words”: a word or phrase that has a meaning derived from a modern set of circumstances, yet when its component parts are broken down into their roots they mean roughly the same thing. It’s okay if it’s a stretch.
I’ll give a couple of examples here. “Astroturf” is a verb meaning “to artificially inflate the popularity of a person or idea”. This comes from a pun on “grassroots”, as Astroturf is artificial grass originally created for the Astrodome in Houston. But if one naively looks at the roots of “astroturf”, one finds “Astro-“ meaning “outer space” and “turf” meaning “to cover with sod”. So a plain reading of the word would be “to cover with sod a place very far away from one’s home”, which fits pretty well with “to pretend that one’s ideas are popular elsewhere”.
“Cellular”, describing a mobile phone, kind of fits too. The word comes from how the mobile network was originally set up (divided geographically into cells). But “cellular” is just “cell” with the suffix “-ular”, a suffix which means “relating to” or “referring to”. And “cell” comes from a French word meaning “a Catholic monk’s quarters”. The purpose of said quarters is to provide a private place for 1-on-1 communication with whoever the monk wanted to talk to in Heaven - generally a saint, the Virgin, or God himself. But if you’re presented with a device and told “this is cellular”, you might think “ah! This is a device that enables private 1-on-1 communication with someone quite far away“ and you would be correct.
They’re both kind of a stretch but that’s what makes them fun imo. Anybody got other examples? ChatGPT was utterly useless at coming up with more examples, but maybe I needed a better prompt.
I thought “Astroturf” contained an element of pretending your belief is not quite what it is, or deflecting attention from its less popular aspects. But now that I think about it, I find I’m unable to define it.
I once brought home a piece of Astroturf from an Astros game. They had recently re-turfed, and these little squares were fan souvenirs 😆. We were more easily satisfied then.
It's not precisely this, but "folk etymologies" might be helpful.
Sounds like a combination of false cognate + backronym.
So... Peter Thiel, Palantir and his association with rationalism is pretty fucked up, huh?
I’ve been thinking about Thiel quite a bit since reading coverage related to Hulk Hogan’s death. At the time it went down, the Gawker lawsuit was not on my radar. Or Gawker itself or any other bullshit gossip web site for that matter.
I knew that Hogan was one of those WWF guys, probably helped by the fact that Jesse Ventura was at least locally famous.
Hogan and I literally had crossed paths once on one of the Minneapolis urban lake strolling and bicycle trails.
I actually earned a pro wrestler scowl from him for my barely stifled laughter at the ridiculous figure he cut in the real world. Remembering that surreal moment still makes me smile.
I had also read Thiel’s ‘Straussian Moment’ essay after the NYT interview. His thesis there was stated much more eloquently by Jack Nicholson as USMC Colonel Nathan R. Jessep in ‘A Few Good Men’. [1]
I’ll agree it’s always been true that there are bad people in the world and it’s necessary more often than we would care to believe to act in ugly ways contrary to deontological ethics. The consequences sometimes have to come first.
And here we get to the ‘but’. Traditionally these exceptions to deontological ethics have been made by sober minded, patriotic career senior intelligence and military personnel. In 2025 that is in danger of no longer being the case.
It wouldn’t be unreasonable to say that Thiel’s decision to endorse Trump in 2016 put Trump over the top. Thiel now, and probably always, thought of Trump as a Useful Idiot who would help usher in Thiel’s own (IMO kind of insane) post-Enlightenment order.
Thiel has amazing wealth generation skills but it’s frightening that he puts that wealth to use against the ‘up front’ ideas and ideals of the American Republic. The dark stuff was always meant to be the occasional exception to keep things on track. The wealthy of course always had a say in what was and was not good for the country and also, coincidentally, good for General Motors, at least in the prior century.
But those people were not looking to tear things down to the studs and remake them in an order contrary to established Constitutional and civil norms.
I think of Thiel as a dangerous man with a lot of financial power, wielding it for what can be described best as eccentric, vanity projects. If things do go south in this country he has his “exceptional circumstances” New Zealand citizenship in his back pocket.
[1] ‘You can’t handle the truth.’ A Few Good Men
https://m.youtube.com/watch?v=9FnO3igOkOk
Coming from a GM/Chevy family, I miss the common-sense association between the well-being of a country and the well-being of its businesses. Too many massively multinational corps with execs trained in the school of Ayn Rand these days, I suppose.
I enjoyed Ross Douthat's Frost/Nixon moment, asking him if he thought humanity should survive and he floundered
I'm not sure if that was Thiel the Transhumanist tripping over a way to say that we should be posthuman, or Thiel the Edgelord thinking that most of humanity should just disappear.
Is there no roomba-type thing for lawn mowing? I haven't seen anyone use it. Is there a really good product here?
Robotic lawn mowers exist. The NYT even has an article reviewing which are the best.
https://www.nytimes.com/wirecutter/reviews/best-robot-lawn-mower/
They've existed for a long time, predating actual Roombas (brand-name iRobot Roombas were first sold in 2002). I remember reading a newspaper article c. 1996 about the CIA headquarters being an early adopter because using lawnmower robots saved them having to do security clearances on their gardeners.
I can't offer product or brand recommendations, but I can offer a Tumblr story from 2020 about a herding breed dog named Arwen who encountered a lawnmower robot, decided it was a sheep, and figured out how to herd it.
https://www.tumblr.com/gallusrostromegalus/618714943606423552/while-i-cant-fault-your-reasoning-on-robot
https://rtmlandscapes.co.uk/robotic-mowing/
I watched one of these guys. It would run in the rain no problem.
Yes, Flymo have one and there are others; the lawn has to be straight-edged and flat.
They don't need to be flat. In some rural Austrian villages my impression is that almost every household has one, and some of them have pretty steep terrain.
Cool, do you have a brand?
I have briefly looked into tests, and Husqvarna came up several times and seems to be suited for steep terrain, while Mammotion was recommended for extremely steep terrain. Google will provide you with lots of testing reports that compare different models, so you can have a look. (I searched in German, otherwise I would have sent a few links.)
I've seen Kärcher robot lawn mowers around. The simple ones just bounce around the edges like an old-style dumb roomba with no mapping, and after a while the lawn is mowed, which is all one was asking for.
I vaguely remember reading about robotic lawn mowers injuring hedgehogs, whose instinctive behavior of rolling up into a spiky ball served them well against all kinds of threats, but not against these beasts of metal and blades. Hedgehogs are mostly nocturnal, so I personally would not run any mowing robots at night in Europe.
https://www.dw.com/en/hedgehogs-threatened-by-robot-mowers-german-activists-warn/a-70160521
Modern lawn mowers detect hedgehogs and steer around them. At least in independent tests this works very well.
What was your experience like with the education system, and what do you think needs to be improved?
I went to a well-funded and well-staffed high school, but nobody seemed to actually care about my education. Teachers didn't really care if you got bad grades or good grades, and my parents were only ever interested in punishing me for every point short of perfect.
Nobody ever told me if I was doing well, and this has caused me to make a lot of bad decisions in life.
I graduated high school with a 3.8 GPA, but I thought I wasn't smart enough for anything but art school. I got into a really good art school, but after I got a B on an assignment, I thought that meant I wasn't perfect, so I switched to an easier major, one where there are no paying jobs after graduation.
It sounds like a Dreamworks movie, but I gotta say from experience that the most important lesson is believing in yourself. I think that's completely at odds with the factory model of schooling prevalent in the West. Kids need adults who care about them, but with the way teachers are underpaid and disrespected, why should they make the effort?
Students need to be aware they are probably getting an average experience and that may not be enough to do anything cool.
I was talking to a friend about this recently. I argued that we don't actually need school and should abolish it; when he challenged me on it, here's what I wrote:
"The main thing I remember from our education though was pointless cruelty and having my human rights violated, and I think stopping that should be a terminal value in itself even if it's a little less efficient on some economic metric
But 'no school' isn't anywhere close to my ideal; I just think it might be marginally better than the status quo. My ideal system is something like this:
Kids go to a daycare/bootcamp type place until they're 12-ish, where they spend most of their time outdoors and socializing, plus learning essential skills: reading, math, building, etc. Then you give them a Eurail pass or equivalent, a museum card, and a library card that's valid in every library in every city. Also there's a giant kid-friendly library in the center of every town (we'll convert all the old churches and cathedrals into libraries), plus a network of youth dormitories everywhere that they can stay in for free till they're 18 or whatever. This'll all be reminiscent of the German concept of Wanderjahre/'wander years' (they don't have to leave their home/parents, obviously, but they have freedom like an adult would). Finally, it's now normal for kids to shadow/apprentice with any profession they're interested in: a teenager can just walk into a hospital/lab/mechanic/kitchen and ask for an apprenticeship as long as they don't get in the way, and leave if they get bored or whenever. Also they can enroll in university at any age if they can pass an entrance exam"
Now to be clear, I know it sounds kinda crazy, and I'm not confident this would work or be better; I just think we should at least try it and see what happens (which is my opinion on almost every issue). But also note that this is the system in my ideal world, and I'm not sure it would be politically possible to even move in this direction in most countries.
I wasn't particularly fond of school, but I think you can make a pretty strong argument that a lot of its unpleasant aspects have some very necessary functions behind them, particularly if you believe that school has a purpose beyond academic education or daycare: being socialized to function well on your own and with your peers, completing unpleasant or uninteresting tasks for higher and occasionally abstract purposes, dealing with figures of authority with varying levels of competence, practicing athletic/physical fitness, and so on. Life has a whole lot of dullness, tedium, and cruelty, and learning how to deal with it in a safe-ish, low-ish stakes environment seems important. At the end of the day, most of school's failures from an educational standpoint are the result of compromises made during the transition from the optimal method of education practiced for most of history: one-on-one instruction.
That said, the group instruction, while certainly a downgrade from direct tutoring/apprenticeship, strikes me as having some real benefits that result from the peer-to-peer dynamic. Occasionally, this can look rather cruel. David Foster Wallace has an interesting bit in one of his essays on grammar, mentioning that students bullying young grammar nazis are effectively the student body forcibly educating the grammar nazi into fluency in conversational spoken English.
For a brighter example, I've observed plenty of times in K–12 where an intellectually advanced but socially or skillfully deficient student was gently and respectfully shown how to do something "properly" by their peers—talking to girls, lifting weights, playing cards, etc. We can say that removing or reshaping the modern education system wouldn't necessarily mean that kids wouldn't be properly socialized, but the trend I generally observe today is less school means more suburban isolation and doomscrolling.
A more realistic version of this, IMO, would be something like: a maximum of ~4 hours a day of learning the essentials (math, reading, finance for teens, etc.). The rest of the day is essentially daycare, but with lots of elective learning activities: book club, watching a documentary, educational games or shows, programming class, chess club, a college professor teaching something about their subject, whatever. I know I would probably have loved and joined many such activities, but if other kids don't want to, they're free to just hang out at the playground. Older kids could just go home, obviously.
Yeah, this sounds good, and like something we could do with existing infrastructure, without totally reorganising the structure of society like in my example.
What with Scott grumbling about people promoting their stuff in the comments, and muttering about whether ACX and/or the comments have gone downhill, this hardly seems the moment to mention my (completely free and no ads) podcast. Again!
Nevertheless I will just mention my latest, with Peter Marshall, on the early English Reformation and the attempt to strangle it in its cradle - the rebellion known as the Pilgrimage of Grace. And yes I know the English Reformation is probably not an area of deep fascination for readers of this blog. And I know also that podcasts are an incredibly inefficient way of taking information on board. But (and it is a huge but) there is a real pleasure in listening to somebody like Peter, who is utterly expert (no preparation, no questions in advance) and who can talk with such enthusiasm and eloquence. Anyway, some things I learned:
- The Lollards (reformers before the Reformation) are sort of conspiracy theorists. “It’s not really the body and blood of Christ - they’re lying to you!!”
- Henry didn’t make himself head of the Church of England. He discovered to his surprise and delight that he had ALWAYS been its head. Take that bishop of Rome!
- And similarly he finds he was never married to Anne Boleyn. It’s an annulment not a divorce.
- The Bible has two injunctions about your brother’s widow. Leviticus which says have nothing to do with her or you will have no sons. Well, possibly children, but translation is a tricky business and ‘sons’ fits Henry’s case better. And then somewhere else in the Bible it says the opposite. Awkward.
- Once Catherine is dead (probably natural causes) and Anne is dead (very much not natural causes), the slate is clean. No more problem with remarrying, so the way is clear for Henry to rejoin the Church of Rome. But no, he’s been having too much fun as head of the English church. The horse has bolted.
And that’s just the introduction to the podcast before we get to the Pilgrimage of Grace and the rainstorm that changed history . . .
(Actually it is quite interesting how often English history turns on the weather. I am thinking of Waterloo and Agincourt, and there must be others.)
Anyway here is a link to the podcast. It’s called Subject to Change, though I think there are a few of that name, so if the link doesn’t work and you are googling it, add Russell Hogg and the right one pops up. Peter Marshall is such an engaging speaker, so if you are doing the laundry or out walking this is well worth your time 🙂
https://pod.link/1436447503/episode/117f4b3a29d0a6346a787c0a72796efb
>The Bible has two injunctions about your brother’s widow. Leviticus which says have nothing to do with her or you will have no sons. Well, possibly children, but translation is a tricky business and ‘sons’ fits Henry’s case better. And then somewhere else in the Bible it says the opposite. Awkward.
That passage in Leviticus is talking about your brother's wife (your brother is still alive), not your brother's widow (your brother is dead and she is no longer his wife). There's a big difference there.
Henry’s advisers would beg to differ. Well, naturally!
"And yes I know the English Reformation is probably not an area of deep fascination for readers of this blog. "
Well, I for one am very interested in this. I've been reading (some) on both sides of the debate, yes of course Eamon Duffy, but also Diarmaid MacCulloch on the Edwardian reformation, which is the one that stuck, so far as steering the course of the English Church goes. Henry of course wavered all over the place, so as soon as he was safely dead the Reform-minded nobles in charge of the child king made damn sure he would be raised properly Protestant (rather the same as the Scottish nobles did with James VI, but with somewhat less pointless cruelty).
Mary's effort to both undo the reforms and introduce a modernised (more on the Continental model) Catholic Church went nowhere because she died too soon and the Reformed had established themselves pretty strongly by the time she came to the throne. Elizabeth was less worried about religion per se and more about plots, so Walsingham as spy-master tracking down and executing recusants and Jesuits as traitors was the emphasis of her reign. You could believe or disbelieve whatever doctrines you liked, so long as you conformed with public worship and the monarch as head of the church (where by this time the really important part of that was 'unchallenged head of state, Catholic pretenders keep out').
The interesting (and sad) part is how Henry blew through the loot from the dissolution of the monasteries on pointless warring in France, trying to prop up the increasingly obsolete claim to Normandy and to establish England as a power on a par with France and the Holy Roman Empire. Reform was definitely needed, but it's one of the great might-have-beens if it had happened from within, rather than Henry burning it all down and 'discovering' his own mini-church.
Interesting. I do like your podcasts.
I was going round an English country house in Wilton the other day (small house, incredible collection of paintings) and came across this inscription on a stone dating from 250 years ago and I liked it so much I thought I’d share:
Beneath this little stone interr’d
Lies litle Charlotte’s little Bird.
Who, tho a Captive all day long
Sang merrily his litte Song
When the little Favourite died
Awhile his little Mistress cried.
She has almost forgot him now
So stranger, weep a little Thou.
1778
I was in Wilton on my way back from the Chalke Valley History Festival. Held late June every year and as far as I can tell the best history festival in the world. Highly recommended!
That was nice!
Quite lovely, thanks for sharing!
I'm reading this on the substack app, trying to open the links to the comments.
Substack opens those links in its shitty browser, and of course all I see is a blank screen.
I can't even copy the link to paste it in a proper browser.
Thanks, substack!
I have twice installed the substack app but it has never had as good management of comment threads as the browser. But I recently started writing my own substack, and it turns out the browser displays comments on your own substack differently from those on others, so I’m worried I’ll have to use the app for that.
Personally, I intensely dislike websites trying to push their apps on me, and flatly refuse to use them for stuff that could reasonably be a website. With Substack, while the website version is not as usable as the WordPress of SSC (why would I want to load an ACX article piecewise?), it is still mostly usable (as long as you have an extension which restores text entered into text fields, or you write longer replies in a text editor).
I also use a text editor for longer comments (and comments can get quite long when I'm quoting someone in response).
Been reading Substack on Firefox, both desktop and mobile; it's nice enough. Why would I want a siloed app? I still type longer comments in a text editor rather than on the site itself, because text editors have better UI than a little box in the middle of a website or app.
https://idiallo.com/blog/dont-download-apps
The killer feature for me in the app is the surprisingly good text to speech.
Yeah, the browser is better in that it doesn't block highlighting and such, but once comments reach critical mass it gets super laggy. Thanks, Substack, for being a great platform, but please show a little love to the comment rendering?
I have this memory of something Scott wrote offhand in perhaps an open thread anywhere between 1 and 3 years ago, mentioning he was currently thinking of X, where X is some idea about how thinking patterns grow inside the brain as physical structures, as in, they physically grow over time.
I either hallucinated reading this, or am remembering it incorrectly, because I can’t find it, but the idea fascinates me so I’m sad about not finding it.
Does anyone else remember?
At the beginning of the most recent contest review, Scott's "Why Do I Suck?" post (https://www.astralcodexten.com/p/why-do-i-suck) was linked to, and at the beginning of point 5 he says:
"Lately I’ve been finding it helpful to think of the brain in terms of tropisms - unconscious structures that organically grow towards a reward signal without any conscious awareness."
Is this what you're thinking of? I just read it today cuz it was mentioned in the review.
Oh my god, yes, that’s it! Thanks so much ❤️ My memory of it was clearly very different, but this is what caused the memory.
Something about clusters of neurons forming standing waves, with an electrical impulse going around a ring over and over until interrupted by something else?
That was not Scott himself, but a guest post by Daniel Böttger.
https://www.astralcodexten.com/p/consciousness-as-recursive-reflections
Hmm, potentially… Do you have a link or reference, or are you also remembering vaguely like me?
I am remembering something I read on this blog, either as a post or in the comments. Would that Substack had the old blog's tagging system so I could just search for Neurology or something.