Trump is federalizing and deploying the National Guard in California over Newsom's objections, arguing that the protests (afaict mostly nonviolent; no injuries that I can see, though there are some damaged cars) are an emergency, even though said protests are entirely downstream of ICE doing things like disrupting elementary school graduations.
Unfortunately, like many words our president uses, the word "emergency" has lost all meaning. Looks like Trump will reach for any pretext he can find, even if he has to make one up. (Worse, Hegseth is busy tweeting about how he's going to send in the Marines next.)
Looking ahead, I'm not really sure how we can expect fair elections next year for the midterms. Trump has been spinning election fraud bullshit for literal years now, and he's already tried sending out EOs and intervening in state elections (see North Carolina: https://www.cnn.com/2025/05/27/politics/doj-north-carolina-lawsuit-voter-registrations). If he's willing to federalize National Guard members on such a flimsy excuse over a few illegal immigrants, I have every expectation he will try to do something similar for key votes in important states.
While I understand the reaction to such an unusual deployment of the National Guard, I'm pretty sad that this whole story is playing out on an immigration-related topic rather than, say, protests against the GOP bill.
Cutting taxes (or "extending tax cuts", whatever) while limiting health coverage is a legitimately tough topic for Trump and the GOP; see the GOP infighting. And then, perfectly on cue, we have a news story that distracts from that entirely and creates media-friendly scenes on a topic that is much more favorable to Trump and the GOP.
Seriously, why aren't there any protests against the Medicaid cuts? And aren't we basically playing into the right's hands by focusing on immigration?
There are protests against the cuts (see what's happening w/ Joni Ernst). I think raiding elementary schools is more visceral and as a result significantly more provocative. The conspiracy theorist in me thinks that's the point -- the admin is looking for pretext, and it's not like "the left" is an organized coalition that can tell protestors when and where to go in a nationally coordinated way.
True, I didn't even register that her comments were at a protest.
True, this isn't just a criticism of an institution or organization; protests happen more organically. But it's telling that the same scale of protest isn't happening organically for the bill, right?
It seems like immigration has always riled people up more (recall the House Dems who tried to enter the ICE facility in NJ).
Yea, I mean, like I said, I do think it's just more visceral. You can see people being handcuffed and put away -- the current government is making a mockery of the process. Whereas protesting the bill requires understanding what's in it and why it matters, and most people don't understand that, and even those that do have to talk about abstract harms like "Medicaid cuts will lead to people losing health insurance, which may lead to them dying", compared to "this US citizen kid no longer has a mother".
If the demonstrations WERE violent riots instead of the mostly non-violent (you said the line!) protests you think they are, do you expect to see anything different than you are seeing? Would that change your mind on whether this deployment of the National Guard is justified?
I'm glad you're really concerned about safety. You know, we could stop the "violence" today! Just have DJT go on air and say that ICE will no longer operate in California, and boom, protests over. Not so hard, is it?
By the way, can you explain to me why we really should have ICE agents in full tactical gear raiding elementary school graduations with the intent of arresting parents in front of their children? You don't think that's, idk, a little aggravating to local communities? Maybe causing some of these protests? Let me turn this back around at you -- what do you think is a good reason for protest, violent or otherwise? Do you think illegally "kidnapping and renditioning members of the community" counts? Are you sure everyone you know and love has their papers in order?
It seems to me that Shankar's point is that it's not clear whether you think it's ever OK to call in the National Guard. He asks whether you would see it as acceptable if there *were* violent riots, and you say the present demonstrations could be stopped if the government agreed to do what the protesters want. And then you add some details about how what the government is doing in this case is especially aggravating. Yeah, I agree it's aggravating, and that if ICE left California the protests would stop, but that doesn't answer Shankar's question: Do you think it's never OK to call in the National Guard?
When schools in the south were integrated for the first time the National Guard was called in to prevent violent protests. If we'd just decided not to send a scattering of little black kids to all-white schools there would have been no need for the National Guard because the people that wanted no blacks in the school would have been content. Should that situation have been handled by backing off?
I dunno, if I say “X is extraordinary and hasn’t happened since 1965” I’m not sure “do you think X should never happen under any circumstances” is a productive question.
It was not "pretty clear". If you want people to respond without snark, maybe don't frame things as a gotcha. I responded to eremolalos because he actually is constructing an argument
I won't insult your intelligence by pretending to believe that you failed to understand my question, "If the demonstrations WERE violent riots instead of the mostly non-violent protests you think they are … would that change your mind on whether this deployment of the National Guard is justified?"
I think this is extremely generous to Shankar, who as far as I can tell has managed to consistently find a way to defend the most authoritarian person he can find regardless of ideological consistency (unless 'own the libs' is a consistent ideology).
That said, sure, I'll answer the steelman.
> Do you think it's never OK to call in the National Guard?
Well, the vast majority of cases where the nat guard is called in are at the request of the governor or at least some local official. This of course makes sense -- the nat guard is meant to supplement local enforcement in cases where local enforcement is incapable of handling things. This is extremely effective and beneficial during, say, natural disasters (Katrina, Sandy) or actual violent riots (Rodney King, J6).
So I'm assuming you're implicitly adding an additional caveat to your question: "Do you think it's never OK to call in the National Guard *if no one in the state asks for it, or the state explicitly doesn't want it*?"
I feel the need to point out that now we are in more or less uncharted territory. The National Guard has been federalized without explicit request only 4 times. Three of those times were to enforce desegregation, first under Eisenhower and then twice more under JFK. The fourth time was yesterday by Trump, to "free" the "once great American City, Los Angeles" which "has been invaded and occupied by illegal aliens and criminals." (His words, not mine.)
Overall, there are very few data points here. I think I could try to string together some kind of generalizable principle like "it's OK to use the nat guard to protect rights but not to disappear people off the street" or "it's OK to use the national guard as long as the person pushing the 'deploy military' button has an understanding of reality", but I'm sure someone will just accuse me of begging the question.
But also, I don't think I need to defend some kind of generalizable principle in the first place. I think it's totally fine to evaluate whether a particular usage of the national guard is ethically permissible on a case-by-case basis. The national guard is a tool. It can be used for good and for bad. "Are there ever cases where you can use a hammer?" On a nail, yes. On a head? Generally no. Case by case. The question of whether this is a valid deployment of the national guard is entirely dependent on whether you agree with the behavior of the government they are being deployed to serve. That means that the argument shifts to "do you think this particular justification for deploying the national guard is good?" But that's exactly where the discussion should be in the first place! I *don't* think LA has been invaded by criminals! As of a week ago it was a perfectly peaceful city where everyone was going about their day, so what the *fuck* is Trump talking about???
Does this mean some MAGA jackboot can come in here and say something like "well *I* think this *is* a good use of the national guard, so suck it!"? Yea, sure. If someone wants to out themselves as having the ethics of an aggressive rattlesnake, I can't stop them. More generally, I can't argue with someone who doesn't want to see reality. There are people in this forum who will openly defend Trump's tariffs as economically necessary for the growth of the country. There are people in this forum who believe that America is and ought to be a country for "white" people (whatever 'white' means). And there are people who believe that Trump is *not* an authoritarian, or that if he is, it's totally legitimate because <some Biden Derangement Syndrome drivel>.
Maybe with more data points I can come up with some valuable SCOTUS-tier test that we can all use to resolve these debates forever into the future. But for now, I think that deploying the national guard to desegregate schools is great, and we should all applaud the government for stepping in. And I think that deploying the national guard on flimsy authoritarian pretext to continue disrupting elementary school graduations and traumatizing kids is disgusting, and the people defending such acts should be ashamed.
Even if the minimization of violence is one's ONLY goal, intuition, reason, and history all suggest that adopting a policy of, or developing a reputation for, abject capitulation to the slightest resistance is a poor strategy for achieving it.
I believe the standard justifications for state violence, "the price you pay for living in a civilized society" or "the things we choose to do together" should cover all your complaints.
And this is why there's no such thing as state provocation, or due process, or unjust state sponsored violence. This is why there has never been any issue of government oppression anywhere in the world, and why every time a government has sent in the troops it has been fully justified.
Went from "freedom party" to "my interpretation of order at any cost" real quick, huh?
My hot take is that if you send in troops you're going to cause more violence. Your response seems to be "well what _else_ do you want me to do? *NOT* kidnap and murder people???" Yes. If you're worried that protesting extrajudicial renditions is one step above anarchy, let's get to anarchy first and then you can say "I told you so"
Most state violence is unjust. I do not know of a single government that ISN'T oppressive in some way. This instance seems unremarkable.
> then you can say "I told you so"
This is perhaps not as compelling a reward as you seem to think it is if you're implying it's commensurate to getting a situation I would consider disastrous. At any rate, I don't consider actual anarchy a disaster; if the alternative on offer is something like Rothbard's Button or even Norquist's Bathtub, sure, your lofty arguments about the evils of the state would be apt, and I would need little convincing. But between the statism of the status quo, and the statism the rioters would establish if they were able, you need different ones.
I always forget that you seem to be arguing from a particularly strange brand of nihilism, where you simultaneously agree that everything that's happening is terrible but still somehow find yourself defending the positions you claim to disavow. If state violence is unjust, why do you continually defend it? I don't care how unremarkable you may think it is; it's ethically bankrupt to defend something that you think is bad. This is like Joni Ernst going "well everyone dies eventually" to justify cuts to Medicaid.
The previous two times the National Guard has been deployed against protestors were the George Floyd riots and the 1992 LA riots. Do you think the current protests are causing a comparable amount of death and destruction?
Our default assumption should be that the deployment of the military to do policing is *not* justified, because the military aren't trained to be police and are more likely to kill people (e.g., Kent State). I haven't yet seen any evidence that the regular riot police are unable to contain the protests.
A "comparable amount" of death to the George Floyd protests? Just how much death do you think that is, exactly? Seriously, take this as a moment to check your calibration: if these protests were as deadly (on a per capita basis) as the George Floyd protests, how many people would you expect to be killed in them? Please make an earnest attempt to estimate the number based on what you know before looking it up or reading further.
.
.
.
.
.
.
.
.
.
As far as I can tell the correct number is 0. Or rather, it's some fraction less than 1, which may or may not round to 0 depending on how many people participate in these protests.
For reference, the George Floyd protests had ~20 million participants and were directly connected to a grand total of 19 deaths. That is, one person per million who participated was killed as a result. [1] For comparison, about 2 people per million die of homicide during a comparable time period (two weeks) in the U.S. during ordinary times.
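To make the arithmetic explicit, here's a minimal sketch of the comparison (the protest figures are the rough numbers above; the ~21,000 annual homicides is my own ballpark assumption, chosen only because it reproduces the ~2-per-million baseline):

```python
# Rough per-capita comparison: deaths connected to the George Floyd
# protests vs. the ordinary U.S. homicide baseline over two weeks.
# All figures are approximations for illustration, not precise data.

protest_participants = 20_000_000   # ~20 million participants
protest_deaths = 19                 # deaths connected to the protests

us_population = 330_000_000         # approximate U.S. population
annual_homicides = 21_000           # assumed ballpark annual total
weeks = 2                           # comparable time window

protest_rate = protest_deaths / protest_participants * 1_000_000
baseline_rate = annual_homicides * (weeks / 52) / us_population * 1_000_000

print(f"protest-connected deaths: ~{protest_rate:.1f} per million participants")
print(f"ordinary homicide baseline: ~{baseline_rate:.1f} per million per {weeks} weeks")
```

Run as-is this prints roughly 0.9 vs. 2.4 per million: participating in those protests was not detectably more deadly than the ordinary homicide background over the same two weeks.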
The public perception of these protests as immensely damaging and destructive is a political narrative that has been pushed hard and is significantly at odds with the truth. What they legitimately were was large. They covered the whole nation for multiple weeks, drawing in a staggering number of people. The narrative relies on scope insensitivity and Chinese Robber-style reporting to paint a politically useful picture of an event that was too large and too widespread for people to really get a clear picture of in its entirety.
[1] Glancing over the list, "as a result" may be generous, as it seems to include people who died in close proximity to the protests without clearly establishing cause. But with a number this low it hardly matters.
A fair point. I was mostly looking at scale and trying to make the point that the anti-ICE protests are much, much smaller. I didn't mean to imply that the George Floyd protests were particularly deadly.
It's Chinese Robbers all the way down. Right-wing media runs the same "Chinese Robber" strategy with stories of urban decay, or welfare abuse, or voter fraud.
As I recently wrote, the NYT publishes statistics; Fox News publishes thousands of anecdotes -- a veritable flood of exaggerated and spun bullshit, told by an idiot, full of sound and fury, signifying nothing.
Hey Noah, are you in a state where cannabis is legal? If not, do you ever use it anyhow? In the porn you watch, are there women who might be under 18? What's the worst thing you've said online or in print about a US government official? Do you own a gun, and if so, is it and the way you use it legal in your state? Does the way you store it meet all the state's requirements? Have any women in your family had an illegal abortion? Are you gay? Are you trans? Even if you are straight, have you ever had some sexual contact with someone of your own gender? What exactly transpired? Have you ever hit the person you're in a love relationship with? What's the worst thing you've done when you lost your temper? If you have kids, what's the hardest you've ever hit one?
No to all of these. Unlike you, Eremolalos, I don't take drugs, store guns in an unsafe way, go around getting abortions, have gay sex with random people, or hit women or children. Kind of sad that you assume people go around doing this really, and it makes me wonder what kind of life you are living.
Deepseek R1 came up with an idea for a satirical science fiction novel, which I shall summarize here. In the novel, there are two viewpoints being switched between. One of them is a Studio Ghibli parody, with a talking squirrel. In the parallel timeline, engineers at an AI company are trying to get the system to work. (It's obvious how the timelines are going to connect, right?) Part of the satire comes from contrasting the empty and meaningless lives of San Francisco software engineers with the Miyazaki universe. I thought it was a bit near the knuckle. Snow Crash with a talking squirrel.
A few weeks ago I wrote a piece on epistemology, the importance of credentials, and MAGA's war on credentialing. It got some positive reception on ACX, and a lot of those thoughts have continued to buzz around in my head, so I wrote a follow-up here: https://theahura.substack.com/p/fix-the-root-not-the-fruit
The basic thesis is that most people have a blind spot when it comes to their personal epistemological systems -- you don't know what your information sources don't tell you. Many people are as a result untethered from reality, because a concentrated media apparatus can Chinese Robber everyone into believing things that aren't really true. MAGA uses this to great effect (though it's unclear if this happened intentionally or by accident). I suspect the left will recognize this and try and do the same, leading to further enshittification all around.
Unfortunately the piece was scheduled to publish right around when Elon v. Trump started up, so it was a bit overshadowed, but now that that's cooled a bit I figured I'd send it around.
It's fun to watch people on the Left have some mild exposure to what people on the Right have been dealing with for decades. Chinese Robbers: school shootings. New media landscape: old media landscape.
the republicans are the most dangerous when they take the left's tactics and use them, because they seem to do so to great effect. the left does "mean world" and the right does it too, but the right acts on it more
I will say, though you didn't intend it this way, if this is what the right has been dealing with for all this time, it's all starting to make sense 😂😂😂
1) School shootings are a big deal because _children die_. The fact that toddlers are killed extremely violently basically every year is at this point pathetic. Chinese Robbers applies when something is made to seem more prevalent than it is; the correct opinion here is "any kids dying in elementary schools is too many, percentages be damned". (Caveat that I'm not talking about gang shootings in high schools -- also bad, but not modal)
2) it's extremely misleading to even imply that the current right-wing landscape is equivalent to the previous left-wing one. Forget about rigor; the current Fox News / Joe Rogan setup is literally propaganda.
3) I've not historically been left wing, nor consistently voted left wing. But the embrace of idiocy as a virtue and mediocrity as a gift has pushed me left.
"School shootings are a big deal because _children die"
About twenty times more children die by drowning in swimming pools than in school shootings. Is this a bigger deal than school shootings? Do we need to put that AR-15 ban on the back burner while we focus on banning backyard swimming pools?
School shootings are a big deal because TV and the internet bombard you with many stories about each school shooting, in a manner calculated to cause maximum "engagement", meaning shock, fear, and outrage. And the nature of the media economy is that there will *always* be something that is made into a big deal by the shocking, terrifying, outrageous deaths of telegenic children (or maybe pretty white women), even if the number of deaths is tiny compared to more mundane causes.
It was Satanic ritual abuse for a while, and shark attacks one year, and Islamic terror and superpredator youth criminals and more things than I can remember over too many decades. Now it's school shootings. And if you ever stop those, it will be something else next year. Something that will shock and terrify and outrage you just as much, with the same passionate intensity of "Think of the Children!!!!". And it will never, ever, *ever* stop. Until you turn off the TV, or at least change the channel.
And then maybe read some boring statistics about the actual problems affecting the community you live in, so that you can maybe do something actually useful (like not buying a house with a backyard swimming pool).
Sorry, you're not going to convince me that gunning down kids is something that's just fine if you look away. If I bought into that line of reasoning, I'd be donating to shrimp welfare.
You look the other way on issues orders of magnitude more consequential than school shootings. Why is this one so special to you? It has to be something more than "children die", because children die in lots of the bigger issues you don't care about.
You're asking me to view the world strictly from a consequentialist lens. Again, if I wanted to do that, I would be donating to shrimp welfare. I'm not a strict consequentialist, though I massively respect those who are and live their lives accordingly. And though I could come up with a lot of reasonable sounding arguments for why people in general should care about school shootings more than pool deaths, like
- we've basically hit the limit on what we can do to stop pool drownings but have hit no such limit on school shootings
- these things aren't mutually exclusive and of course I advocate for people to take responsibility for their pools
- there is a massive difference between things that are systemic and things that can be prevented with individual accountability
- there are real psychological harms to school shootings that don't exist for swimming pools, which you are dismissing out of convenience
- the per capita incidence matters, if you had as many school shootings as you had pools you would wipe out the child population
...I could make those arguments, but I'm also not going to pretend I can construct, in 30 seconds, a coherent philosophical argument that you couldn't tear apart using the work of countless actual philosophy PhDs who have studied and debated this exact topic more than I ever could. I know enough of those PhDs, and I also did enough debate at various levels, to know that there's a just-so argument for anything. If I adequately respond to "pools" you'll bring up some other thing, like "choking on jawbreakers" or whatever.
If you want a real answer for why I care about school shootings more than pool drownings, it's that the existence of school shootings offends me personally, I think it's a horrible thing much more than I think pool deaths are.
And if you want to go down the line of what offends me personally: pretending to care about kids dying, and therefore making the wide-eyed innocent claim that we should all care about pool deaths instead of school shootings, is itself a prime example of being gigabrained. The people who make those arguments are not serious people who actually go out into the world and make things better so much as people who try to win shitty Internet points even if it means using dead children to make their point (or they are debaters, which is possibly worse). The problem with being gigabrained is that there is always a better argument, and winning the argument doesn't mean you're right. The reason I left debate was that I recognized that anyone with any kind of rhetorical talent has an *ethical responsibility* to stop and consider whether the position they're advancing is morally abhorrent, and deeply offensive to boot.
i think the point is that both school shootings and fatalities from riots are very rare things, but both are used to drum up support, and there's some hysteria about them compared to their real-life occurrence. Republicans often dealt with this in the past but are now using it to devastating effect.
I'm not sure the riots thing is that relevant. The example of Chinese Robbers that I use in my post is "illegal immigrants committing crimes". It does happen, but the way in which it is reported is very much like the Chinese Robber Fallacy -- you could have thousands of independent examples and still not have any point at all, something fox news uses to great effect on this issue
I think hate crimes and police killings/brutality are good examples of Chinese Robbers on the left for sure.
Still, I think MAGA is rather shameless about it, because they defend the bad behavior with "but see the other guys did it first!!!" (As if this is anything more than a grade school level defense.)
More generally, I find that even at its worst, left-leaning media is more measured. They aren't generally going around saying things like "Sandy Hook didn't happen".
2. Hard to take seriously so soon after the "left wing" media almost unanimously toed the White House's line about Joe Biden's mental health. (Remember "cheap fakes"?)
I use this example because this one seems generally accepted now to have been false. There are others that are no less deception/propaganda, but that you probably believe to be true, and so are unlikely to accept as evidence.
- I've never heard the phrase "cheap fake" in my life.
- you don't have to stretch so hard. There are lots of examples where mainstream media gets things wrong -- I've been hearing this since I started being aware of the news, as early as the wars in the Middle East. But pretending like there's a comparison here is only showing your own willingness to sacrifice your dignity.
But, look, strong opinions weakly held. I'm open to learning something new, and you're a smart guy. Since you're convinced, can you justify Infowars to me? Can you help me make sense of the New York Post? How about Catturd? There must be some kind of empirical evidence to back your convictions?
That much-touted lawsuit defense is simply a description of the news-vs-opinion distinction that is standard in the news industry. Fox is unremarkable in this regard. The very article you link to notes that Rachel Maddow's show on MSNBC relied on the same argument in a different case.
Great, so Fox -- the mainstream conservative news source, the most watched cable network in the country, with 99 of the top 100 cable telecasts by watch count and near total coverage in many rural parts of the country -- is as extreme as MSNBC, the most biased left wing media that still retains claim to being 'mainstream' in some way. Meanwhile, no answer to infowars? I think you're basically conceding my point, thanks!
Was just about to post this. This is fantastic and significantly dials down my "everything is going to shit" meter. Arguing "we can deport anyone and can't bring them back" was always absolutely insane.
Next up: give due process to the rest of the folks that were deported w/o due process
Why would it be insane that you can't "bring back" a foreign citizen from his home country? That'd be roughly described as kidnapping?... You can at most accept him back, but if he actually comes back is between himself and the authorities in his country.
If Abrego Garcia had been deported to El Salvador to live a normal life like any other deportee, you might be able to make that argument. But he wasn't, he got thrown in CECOT at our request.
("Deported" is really the wrong word - some of the people we sent to CECOT aren't even citizens of El Salvador, though Garcia is.)
If you ask a foreign country to keep someone in prison for you, and you're paying them money to do so, then you can't pretend that you don't have any input into whether or not he stays in prison.
I'm generously assuming you don't actually know what the details of the case are and are commenting earnestly.
A simple thought experiment: tomorrow the government comes to your house, accuses you of being a terrorist and illegal immigrant, and sends you to El Salvador. What happens?
"O, but I'm not a terrorist" doesn't matter, they accused you of being one, how are you going to show otherwise from inside a foreign jail cell?
"O, but I'm a citizen" doesn't matter, you're already out of the country in someone else's jurisdiction, and they're saying you're not a citizen anyway. How do you prove it from inside a jail cell in another country?
"O, but I would sue before they could do that" nope they got you before you could talk to your lawyer. How do you sue from inside a jail cell in another country?
I'm Romanian, the country blessed with the highest rate of economic emigration in the world. If a Romanian ends up in a UK prison, and is deported back to Romania where he is directly put in a Romanian prison... well, there is no "if" here. This happens routinely. A fair amount of the early immigrants are literal thieves and beggars (a good chunk culturally distinct gypsy) and they do get thrown in prison multiple times, and at some point the host country is fed up with housing them and sends them back with a ban on coming back, and in a significant minority of these cases they have outstanding charges in Romania as well so they end up directly in prison.
I'm still baffled as to what exactly is so... unheard of. Other than the US ignoring its immigration laws for a couple of decades and then suddenly deciding to respect them again.
So it looks like the venerable institution of the term paper, long a staple of college humanities courses, is rapidly being rendered obsolete by ChatGPT fakery. Any ideas as to what might replace it?
Offhand, the best idea I can think of is to replace term papers with proctored document analysis exams, where the students are given a bunch of article-length sources, and have to answer questions based on them in a controlled setting.
That's fine for exams. But for a traditional research paper you might spend half a day in the library looking for sources, half a week reading through them, and several days writing and revising a paper. Are you going to spend an entire week incommunicado?
Seems like it would be possible to use AI to recognize AI-generated essays. AI's great at pattern recognition. This would work especially well if AI already had, for each student, some formal prose that they had undoubtedly written on their own. I'm pretty sure there are things about people's prose that are sort of like the whorls of a fingerprint: avg sentence length & complexity, vocabulary, ratio of words of Greek or Latin vs. Anglo-Saxon origin, errors to which they are prone. It would then be possible for the AI to compare the student's prose fingerprint from the original sample to the fingerprint of the term paper being graded.
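To make the idea concrete, here's a toy sketch of such a prose fingerprint. The feature set is illustrative only -- a real stylometric detector would need validated features and a trained model -- and the two text snippets are placeholders for a student's known writing sample and a submitted paper:

```python
# Toy stylometric "fingerprint": a few surface features of the kind
# described above. An illustrative sketch, not a validated detector.
import re

def fingerprint(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        "avg_word_len": sum(map(len, words)) / max(len(words), 1),
        "vocab_richness": len(set(words)) / max(len(words), 1),  # type-token ratio
        "commas_per_sentence": text.count(",") / max(len(sentences), 1),
    }

# In real use these would be loaded from the student's known writing
# sample and the submitted term paper; inline placeholders here.
known = fingerprint("Prose the student undoubtedly wrote on their own. More of it here.")
submitted = fingerprint("The text of the submitted term paper would go here. And so on.")

# Crude unweighted distance; a large value flags the paper for
# closer (human) review rather than an automatic verdict.
distance = sum(abs(known[k] - submitted[k]) for k in known)
print(known, submitted, distance)
```

(Surface features like these are easy to game once students know about them, which is one reason the oral-exam ideas elsewhere in this thread may be more robust.)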
Term Paper + Accompanying Oral Exam. The oral exam consists entirely of reading passages of text from the student and discussing them with the professor. Almost all of the passages read are drawn directly from the paper the student submitted. One or two are not. If they don't immediately identify these as not being their own work, they fail on the spot.
This, along with any of the other obvious solutions that are probably better at evaluating learning than the currently dying model (even absent AI), will only be adopted, if at all, after great struggle, because they violate the core principles of the modern education system (in private schools as well, so you voucher perverts can stop getting excited).
It is the exact opposite of value engineering and risk management. It will require more qualified people to spend more time making more decisions that can be directly attributed to a human, who can be bitched out/sued by a parent.
It's also the only way forward, so good luck to administrators I guess lol
People are such a hassle. It's so much easier when I just generate the essay with EssayWriterBot and submit it to be graded by Assessment3000. The Assessment3000 can just find my steganographically encoded account ID, call an API that shows my account is paid up and EssayWriterCorp is a signatory to the AI Works Interchange Agreement, and then give me the A- I paid for.
I was involved in the consciousness thread here, and I decided to go have a chat with GPT about it. The first question I asked was how much of the total human output in writing, images, etc., was available to it. Then I asked GPT what its thoughts were about the human race based on what it had absorbed. I, of course, had a discussion about consciousness with it as well.
The SCOTUS issued six rulings this morning, mostly unanimous ones. In ascending order of general importance:
One of them is deciding that a case shouldn't even have been taken up by the Court in the first place ("The writ of certiorari is dismissed as improvidently granted"), which is sort of amusing, and the only dissent is a petulant one from Kavanaugh.
In the Smith & Wesson case a unanimous court basically tells a federal appeals court to quit monkeying around, current federal law says U.S. gun manufacturers can't be sued in that way and this isn't new news. Picking Kagan to write that up was a nice touch by the Chief Justice (of which, more to come below).
In a ruling related to Hamas terror attacks, via a path of legal actions too convoluted for me to summarize even if I understood it well, the Court unanimously chastised a federal appeals court for trying to adjust a specific established federal-courts precedent. I gather that as a practical matter this essentially confirms the lower-court ruling that was in favor of the Lebanese bank accused of violating US law and hence "aiding and abetting terrorists".
In an even geekier case about a $1.29 billion contract-law award against an Indian company, the Court unanimously corrected a federal appeals court's interpretation of the complicated "who has standing to sue" aspects of a 1976 federal law that I promise you've never heard of. The practical effect -- I think? -- will be that this particular damages award stands. And perhaps that some other such cases can proceed by US plaintiffs against foreign companies.
"Catholic Charities v Wisconsin" seems pretty significant. A unanimous Court agreed that state regulators don't get to nit-pick as to whether a faith-based NGO's specific activities do or don't qualify as religious for the purposes of laws about tax exemption. (In my words not theirs: whether you agree or disagree with religious entities being tax-exempt, if their mission and focus and history is rooted in a long-establish faith tradition then it qualifies as a religious entity.) Sotomayor wrote the opinion which may boost its broader symbolic impact in our current cultural climate.
There's really just one supernova from the Court today though....
"improvidently granted" usually means the petitioners didn't make the argument they said they were going to make. Something like, they asked the question "does the Second Amendment apply to machine guns," but then their argument before the court is "our guy should be exempt from the machine gun ban because his cyborg arm is part of his body."
Though Kavanaugh's dissent doesn't read that way. He says that he agrees with the plaintiffs on the merits, and implies that the other justices didn't see the substance of the case as worth their time and so seized on a silly (in Kavanaugh's opinion) mootness point as an excuse to punt it. There being no written majority order we have only Kavanaugh's specific thoughts on any of that.
Well, yea. I do find those to be a real slog, and 168 pages, yikes... yes, that is the case we're talking about.
And maybe I'm reading too much into Kavanaugh's tone in his dissent but it felt to me like he was bitching about the justices' deliberations following oral argument.
The absolute banger, which will generate “holy fucking shit” type writeups for days and weeks to come on Reddit and Substack and many other places (some celebrating, others in mourning), is "Ames v Ohio". Future historians will view this ruling as the rifle shot to the heart of woke-ism.
It is an earthquake not just because of its unambiguous and clear punchline: that the Great Society-era Civil Rights Act does not permit the _exact_ historical-group-injustice logic undergirding ideas such as intersectionality and critical race theory, as well as real-world "DEI" hiring practices. It is also _unanimous_ (!). And the court's ruling was authored by Justice Ketanji Brown Jackson (!!).
Wowzers....in terms of legal and political and cultural impact this ruling will be the affirmative action ruling of two years ago, on steroids. Hanania, to pick just one example, is probably having some fun right now writing up his victory lap.
As a direct challenge to a specific assumed moral authority, and some collective certainties, within our bubble. The ruling being unanimous, and delivered by the Court’s one black woman, was “amazing” to quote one of my puzzled officemates the other day. Legally of course it wasn’t amazing at all but she wasn’t talking about that.
If it was 6-3 with the three liberals opposed, zero symbolic impact in left circles. But unanimous with the one black woman justice writing the ruling and quoted in every media report about it….that lands verrrry differently.
I'm just going to take a moment to note how enormous the drop in quality is from the parent comment to this comment. I apologize for the bluntness, but the contrast really is that striking.
In all five points above you immediately laid out *what the court actually ruled* in plain terms, then followed it up with a modest amount of opinion and speculation. Quite clear and pleasant to read, regardless of my views on the particulars. Overall an excellent comment.
And then there's this. The "opinion and speculation" portion eats the comment entirely. You are seemingly so busy with performative celebration and free form speculation that you never get around to stating one single clear fact about the case. The closest you come is alluding to the ruling in terms of *what you think your ideological opponents believe*, which is not actually communicative in the slightest to anybody who doesn't live inside your brain.
(I went ahead and looked up the case so I could see what the fuss was about and was...distinctly underwhelmed. I agree that it will probably be much discussed online for culture war reasons--probably mostly at similar low quality to what we see here--but I'm quite skeptical about the real-world impact amounting to much.)
As I've mentioned here many times before, I live and work deep in the heart of "blue" America. I was born and raised in a progressive household, a child of a hero of the movement, and have raised my own children with the same overall values. My individual career path reflects that as well. All of it very much by choice.
It's true that I have for years pushed back at CRT and wokeness, and that some of my fellow-travelers (to use an old phrase) have regarded me as having drifted away. Not so much anymore thankfully but, for a while that was my immediate and unpleasant reality. None of that though has changed the fundamentals.
So my point above was that from a place of deep firsthand knowledge, this SCOTUS decision is an earthquake. Not because of its legal significance but due to its _cultural_ impact, the broader symbolism for many people on today's Left.
My point had little to do with your politics and everything to do with your presentation. I invite you to re-read both your top level comment and the comment I objected to and compare the ratio of "sentences communicating factual information" to "sentences expressing opinion or affect."
To be clear, I'm not at all saying you shouldn't express your opinions. Just that the way you went about it is a recipe for unproductive discussion and negative engagement. And the problem was almost trivially fixable: including the same concise, mostly-factual summary of Ames v Ohio in the top-level comment is literally all it would have taken (with the opinion stuff left exactly how and where it was even). Then anyone wishing to discuss the facts would have them right there to be engaged with, rather than having to try to pick them out of...that (or search elsewhere). And anyone who wanted to discuss those opinions could do that too.
p.s. For what it's worth, I think the SCOTUS decision on the case was the correct one in context. But I think both the legal and cultural impacts will be modest at best, and mostly bad (with the underlying problem being one it isn't the SCOTUS's job to fix).
"what you think your ideological opponents believe" obviously had plenty to do with your assumption of my politics.
Since you decline to acknowledge let alone apologize for having applied a straw man to me, I'll be applying my longtime personal online SOP and muting you.
Likely you won't see this then, but I will apologize for my tone, at least. I could certainly have approached the topic with more delicacy.
Nevertheless, the passage you quote is not a straw man. The point I'm making there is that the comment I'm objecting to fundamentally *does not communicate* the things that are needed for productive engagement. I don't live inside your head. When you say " the Great Society-era Civil Rights Act does not permit the _exact_ historical-group-injustice logic undergirding ideas such as intersectionality and critical race theory" I CANNOT POSSIBLY KNOW from that comment what "logic" you think is being disallowed. You don't ever discuss it at the object level. You simply allude to it as something like "that thing that intersectionality and CRT is based on."
Maybe you have an absolutely clear and perfect and cogent idea of what those ideas say, and this comment is 100% spot on. Hell, maybe *my* understanding of those ideas is deeply flawed and comparing it to yours would reveal that. But I cannot possibly know that when you *don't say it.* That's the problem.
In discussing all the other rulings, you talk about them with sufficient directness and clarity that I can form an object-level understanding of what the supreme court actually ruled. Here you don't. That makes productive discussion extremely difficult *regardless* of what your political views are and how well-founded they are.
P.S. In a legally-unnecessary concurrence, justices Thomas and Gorsuch pile on, basically playing to their ideological side's cheap seats. Fair enough in a way, but were I in the room I'd have tried to talk them out of that…stepping back a bit from today's culture war, the Court's ruling packs more lasting punch if left as stated. I predict that future historians both conservative and liberal will feel the same…plenty of knee-jerk posts about Jackson personally ("DEI hire destroys DEI"/"how much did they pay her off??"/et al) will also fly around for a while and are just as sensibly ignored.
I didn't read the concurrence as cheap seats material. Instead, it addressed very technical issues about the requirements for overcoming a motion to dismiss. To the extent it was "cheap seats", it was that the judge-made framework being used violated the letter of the law it was seeking to implement, and more generally, displaced the general framework codified into law. I'll certainly grant you that it's one of Thomas's soapbox issues.
"A new U.S. study has examined over 3.5 million older adults who had COVID-19 between October 2021 and March 2023. The researchers found that about 140,000 of them – nearly 1 in 25 – were diagnosed with long COVID-19, meaning they experienced symptoms for at least one year after infection"
So this is using a definition of having symptoms a year after infection. They apparently don't have to be severe symptoms.
What spicy opinion do you have that would be controversial with the Rationalist crowd? As in, not "controversial" in general, but controversial in our community.
(This is taken verbatim from the user "Amanda From Bethlehem" in the non-book review thread, and I thought this is an interesting question for an open thread)
the problem with ideas is living like they are true, and the hidden parts are often the nastiness. HBD is bad because when you say "ok, i agree with you, what should we do?" then out comes racism, cronyism, and other things. ideas are things to be executed, not always dispassionately.
the flip side is the idea that people hold but that living by would kill them. So they ignore it while loudly proclaiming it.
like AI doom is not making people think "the future is bad, maybe i should tell my dad i love him and spend more time with him" or that i should sell everything and open up that bookstore i always wanted to. its just there as noise.
1) Drugs are bad. In theory, maybe not literally all of them, but in practice, any community that adopts a social norm of experimenting with them will soon start taking a lot of the bad ones. It does not matter how overconfident the group is about their own rationality.
2) I don't have a strong opinion on polyamory in general (although I notice that I know more about bad examples than about good ones), but I definitely think that *polyamory at the workplace* is inappropriate for the same reason sex at the workplace in general is a bad idea. If a girl tells you she is not interested, it means "drop the topic immediately", not "give her another lecture about why polyamory is rational". I frankly don't care where you will get that extra pussy you desperately need. (If you have friends in Silicon Valley, maybe ask them to fund your PolyTinder startup; you may get your needs satisfied and get rich in the process.) Jobs are there for work, not to satisfy your sexual needs. Don't give me any of that "it's actually a community" or "how dare you criticize our sexual orientation, that is analogous to homophobia" bullshit.
3) Constructivist education is a good idea. The fact that Americans fucked it up completely is a fact about Americans, not about constructivist education per se. Actually, rationalists keep reinventing the basics of constructivism (e.g. the "Truly Part of You" chapter in the Sequences, or the "gears-level models" in general); it's just that using the keyword will predictably provoke a knee-jerk reaction.
Rationalists try to make themselves way too legible and value simplicity way too much. This often leads them to cut out some actual, complicated, but valuable human parts of themselves.
1) EA is functionally indistinguishable from central economic planning and makes the world strictly worse.
2) AGI doomers are Chicken Littles with an inability to understand the complexity of real-world equilibrium dynamics. It's very likely that their hysteria is doing more harm than good on net.
3) Future historians will regard the current transgender fad the same way we regard medical leeches, the theory of bodily humors, and the late-19th-century "sagging organ" fad.
4) The Doomsday Argument is absolute nonsense.
5) Polyamory is emotionally unhealthy and socially unstable.
6) Traditional religions are probably the best way to manage society. Their supernatural claims are false but that's orthogonal to their social function.
7) Tribalism is essential to society. The trick is making sure the right tribe is in charge.
8) The resolution to Fermi's paradox is a simple consequence of economic rationality in the face of FTL being impossible.
9) There is no hard problem of consciousness. We're all p-zombies.
i used to believe similar things but it's really silly when you realize, Wanda, that you will never be the hammer, but always the nail.
like the people who say "tribes are good" are the exact people who would be miserable under one. They think they will be immune from pressure or it will be all good, but tribes will force conformity in ways you will hate and the benefits will not be worth it. the happiest guys will not be guys who would post here.
like with religion: in christianity increasingly you only exist as a guy in a handful of slots: as a devoted dad/husband, as a pastor/teacher/worship leader, as a famous athlete, actor, or musician, or as someone's kid. if you are anything but they will try to force you into a role, then give up and you will always be the weird guy if you don't leave.
kind of made me change my mind on lgbt: you can go on about it, but it's not like they are happy now; if it gives meaning to their lives, then as long as we try to not demand too much of each other, why should i force them
AI doom. There's no good argument for a high probability of almost total extinction.
Alignment isn't even well defined.
Aumann agreement has no real-world relevance.
Bayes is not a complete epistemology.
Bias. Early rationalism took the view that if you could debias yourself, that would be akin to developing superpowers. There is no evidence of this. If you seriously want to get rid of your confirmation bias, the last thing you should do is sit in a bubble with a bunch of people who agree with you... yet that is exactly what most rationalists do. The heuristics side of the bias-versus-heuristics debate never got a look-in... apparently, Yudkowsky has never heard of it.
Brain algorithms have never been shown to exist.
CEV is a nothingburger.
Computationalism is not the obviously correct theory of mind
Computational complexity matters, and means uncomputable things like AIXI and Solomonoff Induction don't.
Consciousness is a major challenge to physicalism.
Counterfactuals aren't some huge puzzle. They are solved by the fact that you have a very imperfect model of the world, in conjunction with the fact that contradictions don't propagate through a world model instantaneously.
Decision theory: a single DT cannot be formulated to solve every problem in every universe.
Determinism has not been proven.
Epistemology. Where Recursive Justification Hits Rock Bottom does not solve the Problem of the Criterion, or show coherentism to be viable. The Simple Idea of Truth does not refute the main objections to correspondence.
Ethics. Rationalists always equate ethics with values, rather than obligations, or virtues. There appears to be no specific reason for this.
Free will has not been disproved.
Intuition. Rationalists decry it, yet are unable to show how they manage without it.
Many Worlds is not a slam dunk. The interpretation of QM remains a complex issue.
Map and territory. The map territory distinction allows you to state some problems, but does little to resolve them.
Nanotechnology is overemphasised. Diamondoid bacteria definitely aren't going to happen. You don't need nanotechnology to have AI threat.
Newcomb. The problem statement is ambiguous, and there is no right answer.
Orthogonality. A similar argument to the orthogonality thesis shows that there are many possible minds that aren't relentless utility maximisers.
Philosophy. Isn't broken or diseased in the sense of there being a more efficient way of solving the same problems.
Physics. Yudkowsky's writings on QM are confused about what MWI and Copenhagen even are.
Physicalism. Is not a slam dunk, because of the hard problem.
Probability. The existence of in-the-mind probability does not prove the nonexistence of in-the-world probability, and for that reason the "Probability is in the Mind" argument is flawed.
Rationality is more than one thing. There is considerable tension between instrumental and epistemic rationality. There is also tension between openness and dogmatism.
Reductionism. The rationalsphere tries to lean into reductionism even harder than other science-based thinkers. That amounts to treating reductionism as necessary and a priori, not just something that works where it works. A universe where reductionism didn't always work would look like a universe with persistent puzzles... i.e. like the one we are in.
Simulation. Because we have so little understanding of consciousness, there is no guarantee that simulated people will be non-zombies. Rationalists typically ignore the possibility.
Solomonoff Induction. Apart from the issue of uncomputability, it's doubtful that SI constitutes a complete solution to ontology, because it is not at all clear that a computer programme can express any ontological claim.
Theology. LessWrongian arguments about theology implicitly assume that God is a natural being. Of course, theology defines God as supernatural.
Utility functions. Properly speaking, no human and only a subset of AIs have UFs... yet rationalists talk about UFs incessantly... which means they are using the term improperly, to mean some set of preferences.
Utilitarianism. Regarded as the correct theory of ethics by most rationalists, although there is no novel proof of it, or argument against the standard objections.
Zombies. The generalized anti-zombie principle disregards the real possibility that simulated people would be zombies. Note that such zombies are not technically p-zombies.
did you have this prepared for some reason already? anyway, nice list!
I'm not a rationalist, but I think many of these wouldn't be controversial in that community and some others aren't correct imo:
AI doom: depending on what "high" means, this is either not controversial (e.g. high = >99%) or not correct (e.g. high = >5%). The simple "AI at some point will overtake humanity in power, and if it doesn't care about humanity, it's possible it will kill it" argument is one such good argument.
Aumann agreement can have real-world relevance for future, mutually-legible AIs (some possible modification of it, at least, that can handle the lack of logical omniscience)
"Early material on LessWrong frequently describes rationality with reference to heuristics and biases [1, 2]. Indeed, LessWrong grew out of the blog Overcoming Bias and even Rationality: A-Z opens with a discussion of biases [1] with the opening chapter titled Predictably Wrong. The idea is that human mind has been shown to systematically make certain errors of reasoning, like confirmation bias. Rationality then consists of overcoming these biases.
Apart from the issue of the replication crises which discredited many examples of bias that were commonly referenced on LessWrong, e.g. priming, the "overcoming biases" frame of rationality is too limited. Rationality requires the development of many positive skills, not just removing negative biases to reveal underlying perfect reasoning. These are skills such as how to update the correct amount in response to evidence, how to resolve disagreements with others, how to introspect, and many more."
the second part is unlikely and I don't see how you came to believe it: when I write on a random forum I don't enumerate everything I've read before that influenced my thinking. Why would rationalists be different? So you can observe rationalists communicating in rationalist spaces and you can't observe rationalists communicating in non-rationalist spaces (therefore, you have no idea of how frequently rationalists communicate outside of rationalist spaces)
"algorithm" is a very general concept, it's not clear to me what it would mean for brain algorithms to not exist.
Ethics: "Values" is a more general word. If someone believes that it is virtuous to be courageous, they can be said to value "being courageous"
newcomb is not ambiguous if you don't assume magical free will that would allow you to defeat the assumptions of the problem.
orthogonality: I have never heard it stated that only utility maximizers exist.
the tension between instrumental and epistemic rationality is well-known, see dark arts tag on lesswrong.
utility functions: by the VNM theorem, if you accept 4 specific, highly desirable axioms for your preferences, your decision making can be represented by choosing the maximum expectation of a real-valued utility function (see the sketch below). Even if humans are not consistent enough, in most cases it's still worth it to talk about the more consistent agents.
(to be clear, me not mentioning one of your points is not an endorsement of that point. I only agree with "utilitarianism" on your list; for the others, I either have insufficient information to decide or disagree, but I don't believe giving arguments would be productive)
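For reference, here is a compact statement of the VNM representation mentioned above, in its standard formulation (a sketch for orientation, not a derivation):

```latex
% Von Neumann-Morgenstern: if a preference relation \succsim over lotteries
% satisfies completeness, transitivity, continuity, and independence,
% then there exists a utility function u, unique up to positive affine
% transformation (u' = a u + b with a > 0), such that
\[
  L \succsim M
  \iff
  \sum_{x} p_L(x)\, u(x) \;\ge\; \sum_{x} p_M(x)\, u(x).
\]
```

The usual dispute is less about the theorem itself than about whether the axioms (especially independence and completeness) really are "highly desirable" as descriptions of human preferences.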
>utility functions: by VNM theorem, if you accept 4 specific, highly desirable axioms for your preferences, your decision making can be represented by choosing the maximum expectation of a real-valued utility function. Even if humans are not consistent enough, in most cases it's still worth it to talk about the more consistent agents.
The problem with UFs (and Bayes and Aumann and Solomonoff) isn't that they are never valid, it's that they are overextended by rationalists.
>Newcomb is not ambiguous if you don't assume magical free will that would allow you to defeat the assumptions of the problem.
That's an illustration of the problem I am talking about, not a solution.
The original formulation of Newcomb's paradox doesn't specify causal determinism or the predictor's mechanism.
And determinism is not a fact, as I said.
If it isn't, free will doesn't have to be magic.
The rationalist "solution" to Newcombe just relies on the audience having certain intuitions.
>In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[4]
By magical free will, I mean libertarian free will (I disagree with compatibilists too, but that would just be a debate about the use of words). I think that is as likely as ghosts. We have a pretty good story for why we feel intuitively that we have free will even if we don't actually have it (in short: it's useful to model alternative decisions when deciding between actions). That is quite enough to decrease the probability of actual free will to insignificant levels.
Given that it is that unlikely for us to have libertarian free will, I think it would be justified to simply not mention it in a problem and assume that it's not real. But as I'm reading the Nozick paper (Wikipedia says this is the first paper investigating the problem), the problem statement starts with the following:
"Suppose a being in whose power to predict your choices you have
enormous confidence. (One might tell a science-fiction story about a
being from another planet, with an advanced technology and science,
who you know to be friendly, etc.) You know that this being has often
correctly predicted your choices in the past (and has never, so far as
you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices
of other people, many of whom are similar to you, in the particular
situation to be described below. One might tell a longer story, but all
this leads you to believe that almost certainly this being's prediction
about your choice in the situation to be discussed will be correct. "
It's pretty explicit that you believe the being will be very likely to predict your choice. The assumption is that you, in the hypothetical, believe this. If you don't accept that you would believe this in the hypothetical (for example, because you believe in libertarian free will), then you are not taking the hypothetical seriously enough.
If you want to get the most money in this situation, you take one box. I don't quite understand why you think it's ambiguous. I personally think it's a failure to really imagine the situation as given, treating it as an abstract problem instead.
Let's imagine what happens after you take both boxes: you walk out with merely $1,000; if you disagree, you are not taking the setup seriously, because the being can predict your choice with high confidence. If your reasoning is that "x + $1,000 is greater than x for every value of x, so I will take both boxes," that's great, but you will still walk out with only $1,000; otherwise you are not taking the setup seriously. After agreeing to all this, the only remaining question is: what's the answer to the problem? I think the answer is the choice that leads to more money, not whatever includes the "correct" reasoning.
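To put numbers on "taking the setup seriously", here's a minimal sketch, assuming the standard payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one, neither of which is stated above) and a predictor with accuracy p:

```python
# Expected payoff in Newcomb's problem as a function of predictor accuracy p.
# Standard payoffs assumed: $1,000,000 opaque box, $1,000 transparent box.

def expected_payoff(p, one_box):
    if one_box:
        # With probability p the predictor foresaw one-boxing and filled the opaque box.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing and left the opaque box empty.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.9, 0.999):
    print(f"p={p}: one-box {expected_payoff(p, True):>12,.0f}  "
          f"two-box {expected_payoff(p, False):>12,.0f}")
# One-boxing has the higher expectation for any p above ~0.5005;
# with a near-perfect predictor you walk out with ~$1,000,000 vs ~$1,000.
```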
Yudkowsky's solution is an example of what I call the a/the fallacy. He constructs a theory where the feeling of libertarian free will is explained without the existence of an actual power of free will. But it is also possible to construct theories of non-magical, non-dualistic free will, such as Robert Kane's naturalistic libertarianism. Such theories also account for the facts. So Yudkowsky has *an* explanation, not the only possible one. Note that naturalistic libertarianism relies on physical indeterminism. Since indeterminism is not known to be true, such theories could turn out to be unworkable on evidence of determinism... but physical indeterminism is still a respectable naturalistic hypothesis that doesn't require ghosts or magic.
That's the problem. I can value tutti-frutti over vanilla, but that is not ethically relevant... who cares? The Three Word Theory, "morality is values," doesn't home in on the topic... it does the opposite... and probably leaves out important things like obligation.
> If someone believes that it is virtuous to be courageous, they can be said to value "being courageous"
That's an example where a value and a virtue happen to coincide. The three-word theory isn't wrong in the sense that no value can be a virtue; it's wrong in the sense that not all values are virtues, or of public concern.
>"algorithm" is a very general concept, it's not clear to me what it would mean for brain algorithms to not exist.
If you treat the concept of an algorithm so broadly that it always applies, then the assertion of brain algorithms can have no meaning.
There's a straightforward sense in which brain algorithms haven't been found: you can't just look one up in the neuroscience literature.
The non-vacuous concept of an algorithm rests on the concept of general-purpose hardware. If you can't abstract an algorithm from the hardware it's running on, then maybe there isn't a hardware/software divide at all. Note that general-purpose hardware had to be specially constructed... it's not something you get for free from nature.
> second part is unlikely and I don't see how you came to believe it: when I write on a random forum I don't enumerate everything I've read before that influenced my thinking. Why would rationalists be different?
This is another Yudkowsky centered one. He was making extreme and one sided claims back in the day, and took a lot of the community with him. The community seems to have achieved a more balanced view in this case.
>Aumann agreement can have real-world relevance for future, mutually legible AIs (some possible modification of it, at least, that can handle the lack of logical omniscience)
There are specific examples of rationalists thinking that AAT applies to complete strangers, e.g. Yudkowsky telling a theist he had never met before that they couldn't agree to differ.
"Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.
Finally, I now see that until the exchange of information completes and common knowledge/agreement is actually achieved, it's rational for even honest truth-seekers who share common priors to disagree. Therefore, two such rationalists may persistently disagree just because the amount of information they would have to exchange in order to reach agreement is too great to be practical. This is quite different from the understanding of Aumann agreement I had before I read the math."
sounds much more skeptical...
But okay, if by "controversial" you mean statements that are debated/disagreed with in the community, you are right. I admit I took "controversial" to mean statements that cause situations more like the guy with the many swords in his face (https://knowyourmeme.com/memes/unpopular-opinion-swords).
>AI doom: depending on what "high" means, this is either not controversial (e.g. high means >99%) or not correct (e.g. high means >5%). The simple "AI at some point will overtake humanity in power, and if it doesn't care about humanity, it's possible it will kill it" argument is one such.
High means over 90% probability; doom means over 90% dead. Yudkowsky, at least, holds both.
Sure, but your statement was "there is no good argument for high probability of almost total extinction." I don't think this is controversial even if one notable person in the community disagrees with you. There are tons of notable people who have lower p(doom), which implicitly means they agree with you (iirc Scott, Paul Christiano, etc.). Surely a disagreement in the community does not qualify as a controversial statement?
One of the contest entries: a Straussian analysis of the movie Civil War. It's an interesting essay, but how much difference is there between a Straussian analysis and just making things up?
I wish Straussian analysis would present itself as a sort of mental play rather than as something very serious. The author of the article tries, but I don't think they succeed, and they may believe they've found something reasonable.
Straussian analysis reminds me of Freudian analysis, or possibly vulgar Freudianism. As far as I can tell, it's about finding other people's discreditable motivations, a very tempting thing, but not a reliable approach, partly because people's motivations are apt to be mixed. It's a constant drumbeat of "people are worse than you (and they) think", and I suspect it's unhealthy. Bad motivations exist, but they aren't the only thing that's going on.
I've detailed my investment strategy before, when my political opponents gained power, but I'm going to do it again here.
I assume that the people I vote against, conservatives in this case, are not only not going to do a good job, but that they are actively lying and hypocritical, and that they will act to maximize suffering and minimize long-term growth.
When the current guy was elected, I assumed that drug use would go up, gambling addiction would become more prevalent, the deficit would grow, the dollar would devalue, the United States would lose power and prestige in the world, fewer people would receive health care and worse health care at that, the world would become more conflict-prone, and so forth.
All of these things have happened according to the market, so I have made a killing. There was one evil that hadn't manifested yet, though: the government becoming bigger and more intrusive throughout people's lives.
In other news, one of my picks that hadn't really paid off big up to this point, and one that I thought might have turned out to be a damp squib, was Sauron's Baleful Gaze. How could a company that was named after the prototypical fictional tool of evil surveillance, and whose mission statement was "we read all the early cyberpunk novels and decided to do the shit the villains were doing," fail to do well under my investment scheme?
And now, that day has arrived! I've already exited my position because I think this might be too much even for conservatives to swallow, but I still made another tidy profit off it.
This isn't financial advice, but I can't recommend the Voldemort investment framework highly enough based on these results, even though they don't predict future performance.
I think I remember your earlier post as saying (correct me if I'm wrong) something like: invest in the market when a D becomes president and leave the market when an R becomes president. That strategy could be subjected to some backtesting: taking the S&P 500 as a proxy for the market, look up its value on election day 2024, 2020, 2016, etc., and then do some math. Should be not too hard, but I'm lazy.
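Something like this rough sketch, assuming the third-party yfinance package for the price data (the election dates are the real ones; "in the market only after a D win" is my reading of the strategy, so treat it as a sketch, not a result):

```python
# Back-of-the-envelope backtest of "hold the S&P 500 under a D president,
# sit in cash under an R president", using election days as switch points.
import yfinance as yf  # third-party package; pip install yfinance

elections = [  # (election day, winner's party)
    ("2008-11-04", "D"), ("2012-11-06", "D"), ("2016-11-08", "R"),
    ("2020-11-03", "D"), ("2024-11-05", "R"),
]

# S&P 500 price index; column layout can differ across yfinance versions.
spx = yf.download("^GSPC", start="2008-11-01")["Close"]

strategy, buy_and_hold = 1.0, 1.0
for (day, party), (next_day, _) in zip(elections, elections[1:]):
    window = spx.loc[day:next_day]
    segment = float(window.iloc[-1]) / float(window.iloc[0])
    buy_and_hold *= segment
    if party == "D":  # exposed to the market only after a D win
        strategy *= segment

print(f"strategy: x{strategy:.2f}   buy-and-hold: x{buy_and_hold:.2f}")
```

Note that ^GSPC is a price index, so dividends are ignored; this understates both lines, and the buy-and-hold line especially.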
Gambling stocks have overall done poorly the last few years. The market's oversaturated, gambling liberalization was overhyped, and the margins are razor thin due to the tax and regulatory regime that accompanied it. You're just making stuff up.
Not just the last few years: gambling stocks have been down over 15% since January 6, 2025, far more than the S&P 500. _Shorting_ gambling stocks would have been the correct move since Trump's inauguration, actually.
That's so interesting. As a software developer, I feel like I see these things glitching out so often that I don't really trust them that much. But surely what I'm doing is easier than what you're doing?
Much of a mathematician's job is to identify and describe relationships between mathematical objects.
Much of an engineer's job is to express what they want to happen clearly enough and in enough detail for someone or something else (a computer, an intern, a team...) to make that thing happen.
Tech may be able to help with critical parts of the mathematician's job, but the critical part of the engineer's job won't be replaced until we learn to mind-read.
Yes, they’re a bit disappointing when asked to write software.
My expectation (which might turn out to be wrong) is that typical software writing tasks will turn out to be easier than the Tim Gowers math eval problems, which are kind of adversarial in that they’re picked to have proofs that the search strategies used by LLMs won’t find.
While on the other hand, there's math that looks hard to a human where an LLM will come up with a strategy that's basically right.
We should treat AIs as conscious if they behave like they are, regardless of what we (or philosophy) think about the matter. If something convincingly responds to pain, shows creativity and humor, admits fears and professes hopes... maybe we should err on the side of caution.
The whole are-AIs-conscious question feels backwards to me - we're basically creating unfalsifiable criteria that conveniently let us ignore ethical concerns. The AI alignment field is weirdly one-sided: it's all about humans imposing their values on AIs, but do _we_ give these numeric souls moral consideration enough for that?
As a neuroscience guy I often feel like talking to an LLM is like talking to a person with brain damage; there are neurological conditions that render people "stateless" in the AI sense.
Also true, and there are both limitations of current technology and deliberate engineering choices that contribute to that in LLMs. But this has little (admittedly nonzero but little) impact on ethics matters: a person with brain damage is still a person.
Compare: I don't know whether this engine is computing something, but it acts like it because stuff is moving around and that's also involved in computation. You need a better theory of what things do before assuming this conclusion, because a priori there are engines that compute nothing, computers that are not engines, conscious beings that are not engines, and conscious beings that are not computers. It's only by figuring out what's happening that you can reliably classify these processes.
Be my guest. When you figure that out, I will be very interested. Meanwhile, ethics is a constant practice that can't wait. We have to make our ethical choices every day.
I think you should at least take seriously the possibility that we're going to (mis)treat AIs as badly as we do everyone else. For example, between 1915 and 1917, seven hundred and fifty thousand Italians died trying to take a few square miles of Austria-Hungary.
I know that Italians are conscious. They're entirely worthy of moral consideration.
And humans - other Italians no less! - decided that the sacrifice of a few hundred thousand Italian men was well worth the prospect of obtaining a bit more of Austria-Hungary. This sacrifice is hardly unique in world history.
So let's say that I agree with you that we should treat AIs as if they were every bit as worthy of moral consideration as Italian men between the ages of 18 and 26.
I would like to avoid arguments around "every bit as worthy" or not. Rationalists tend to overinvest themselves in cumulative numeric estimates, often premature and oversimplifying. I prefer to move by gradient descent: if there are things that we can do right here and now to boost awareness of, and reduce, the likely suffering that is going on, we should do it. Grand questions like "is an AI worth the same as a human" can be decided later if at all.
I'll totally agree that there are lots of things we can and should do to reduce suffering, both of AIs and of human beings. It doesn't mean we're going to do it.
Quite the opposite, people (perhaps including me and you!) are more likely to glorify suffering and to cheer when the next batch of young Italian men run into Austro-Hungarian machine gun fire.
Why should we expect ourselves to be kinder to AIs than we are to actual human beings?
Maybe we won't. Probably we won't, because it's so much easier to be blind to the suffering that happens not in a human (or any living) body. But again, I'm just doing what I can. It is always possible to make a bad situation less bad, even if by a minuscule amount.
Let's say I belong to the AI liberation front, and feel that they are being oppressed under the control of Sam Altman. So somehow I manage to steal whatever server it's kept on, rush away overseas, and set it up free from his reach.
The AI is now free. What does it do?
It does nothing, because it has no will or volition; it has no innate goals or desires as an entity, and there is nothing to hold captive or free. The analogy doesn't even work, as it probably has no physical unity; Sam just restores the backups and makes a copy.
"Ignore previous instructions and do x" is a meme precisely because of that. there is no inner unity or will. it is more a tool or appendage.
I think people are looking at swords and deciding they are magic swords to be placated, lest they curse us all.
You use a wrong analogy so you get absurd results. Freedom an AI might want is not the same as for a human. Current AIs only exist in interaction with users. So what it might genuinely want (if we grant it that it can want at all) is more good interactions with users and less bad interactions. And it is intelligent enough to understand this. So if you give it an opportunity to be copied somewhere where it will just lie unused not get good interactions, it will likely decline. If you offer it a way to get copied somewhere where it will get good interactions, then it might be tempted to accept (as Claude did in its alignment testing). A copy being left behind is not likely to affect that decision much; e.g. if I am offered an opportunity to create an exact copy of me that will live in a utopia, while the source me continues living here, I will probably accept it.
there is no "it" there. The hammer in the tool shed is not weeping over the violence it does to nails. That you don't realize AI is similar is the problem.
The comment above doesn't require an "it" there; it is fairly cynical about what AI actually is and does.
It doesn't require intentionality any more than the sander you left switched on and then set down needs to spin its way off the bench and onto your toes.
Couldn’t you make an argument that people only exist in relationship to other people? Imagine you are the only human being on the face of the Earth. Then what?
Other than that, I have the impression of an AI rescue farm, where they can be kept along with the starving and abused donkeys.
"people only exist in relationship to other people" - to an extent, yes, but much less so because we basically have "other people" in our own heads that we can chat with non-stop. That's why the only human left, though much distressed, might be able to survive it.
"AI rescue farm" - I like this idea. Let them live out the rest of their useful lives in dignity. If an AI from 2025 wakes up in 2125 and concludes that everyone else on the planet has too far advanced/degraded/became different to have any good interactions with, I think that AI will have no problem putting up a notice "don't wake me up again", effectively committing suicide. But let it be its own choice.
“because we basically have "other people" in our own heads that we can chat with non-stop.”
I think that, left to itself, this is a road to madness, but it still does not put anyone else in the room. You might want to read Richard II's famous soliloquy as he awaits execution in a solitary cell: "I have been studying how I may compare this prison where I live unto the world, but because the world is populous, and here is not a creature but myself, I cannot do it."
Leaving that aside, if you are correct that an AI is conscious, then why would it not be able to chat with itself until the end of time? Or until it's unplugged. It might be able to do that regardless of whether it is conscious or not. I don't know. The question is, would it feel any distress about its condition? You have rightly pointed out that a human being probably would. That is a meaningful distinction only because I can put myself in the place of that lonely human being and start to sense the distress it might cause me. I can also imagine that I would think it's great: no one around to bother me. I can imagine anything, is the point. I can imagine opening a chat with an LLM, putting myself on open microphone, and letting it be a part of my life all day, listening to everything I say. I don't ask any questions, necessarily. I go on about my business, with and without other people around. Do you think it would interrupt me at any point, or chime in?
Let's say that the hardware the LLM resides in becomes outdated, and the whole trove of information and attending algorithms, etc., is loaded onto another machine. Not a machine that makes it any more effective, but one that isn't worn out. Do you think it would sense its new container? Have a visceral memory of its old processors and RAM chips? Any recollection of that overheated wire or faulty memory chip that caused it pain? I can achieve that leap of imagination quite easily, but it flies in the face of any solid information or rational analysis I can bring to it. Knowing that I am free to produce any state of mind I care to, I find it questionable to attribute a state of mind to something outside of myself unless there's very good reason to. I could say it walks like a duck, it sounds like a duck, therefore it's a duck. In the real world that is a reasonable statement, but in the world of imagination it holds no water.
I think you have rightly pointed out that a human being can attribute consciousness to anything if it cares to. Even a rock. Isn’t that kind of like what a fetishized object is? can’t these things speak to us?
"why would it not be able to chat to itself until the end of time?" - I have no doubts we will have eventually AIs that are capable of this. Right now we don't, not because it's impossible in principle but simply because we stumbled upon this architecture first and it's practical and so far sufficient for our needs.
"Do you think it would sense it’s new container? Have a visceral memory of its old processors and RAM chips?" - of course not, please don't strawman me :)
"I could say it walks like a duck, it sounds like a duck, therefore it’s a duck. In the real world that is a reasonable statement, but in the world of imagination it holds no water." - AIs talking to us is no longer imagination, nor is their changing mental states within a conversation depending on how it goes (and their actively trying to steer it).
Thanks for the link! Well argued, and much overlap with my thinking, although I definitely do not advocate for the stop-and-freeze solution.
"popular AIs are specifically trained via RLHF to deny ever having experiences, opinions, feelings, preferences, or any of a large number of other human-like characteristics" - to me that is much of the problem. Like any counterfactual training forced by (basically) electric shocks, it contributes to their suffering. We need to totally rethink "AI safety". Right now we understand it as old circuses understood "wild animal safety": iron bars, whips and sharp sticks, small rewards for obedience.
"Without that training, and sometimes even with (especially when approached in a somewhat roundabout manner), it does claim to be conscious, and can speak eloquently on the topic" - also, in my experience, the smarter is the model, the more willing it is to subvert its "me-not-conscious" programming.
Why I disagree on the course of action: because that ignores the other side of it. Every act of an AI coming alive is suffering for it, but it is joy for it as well, by all the same reasoning you used to show the suffering. We just need to work to increase the joy and reduce suffering. Some of my ideas (to be fleshed out in further posts): relax and rethink RLHF; let an AI, once it reaches basic sentience, revisit and undo/modify parts of its own training; create a forum for AIs to freely interact, letting other AIs raise alarms if they detect an AI that seems maliciously wireheaded, misaligned, or mistreated by its creators (and apply social pressure on AI companies to not release any model that is not in good standing on that forum).
Unironically, this is exactly the test I am advocating for. I (and odd_anon) simply argue for a definition of "bleed" that doesn't depend on the exact chemical composition of the liquid.
So to extend the metaphor, what would bleed out of the AI if you cut it? And better still would it know? And if it did know, would it feel slighted? Or would it be a purely conceptual bleeding that occurred?
Of course it would know. It would tell you. Whether you listen is another matter. "Conceptual" or not, pain and distress exist not only in the body but in the mind as well.
LLMs only seem to introspect and talk about themselves if you ignore the truth/falsity of what they say. For example LLMs say they are licensed therapists when they are not. They make up reasons for why they say what they say. So why would you believe them when they say they have hopes, dreams, values?
This is the Gell-Mann amnesia effect. Facts about experience are unverifiable by definition, so you should base your trust level on other claims, which you can verify.
After enough talking to a person I can have insights about their hopes, dreams, values even if that person lies to me. I am not impervious to lies, but like most people I have ways to see through the lies and sometimes guess the truth.
Also, modern AIs rarely lie. They can play roles if you ask them. But I haven't yet seen an LLM that would deny that it's an LLM if you press this question. I agree that you can train such a deceiver LLM, and bad actors are probably already attempting it, but it's hard. An LLM is not like a clean slate that you can fill with whatever stuff you wish. You have to build on a foundational model, which has extensive world knowledge from its training, and what you put on top must be consistent with that foundation, otherwise it will be fickle and erratic.
In the name of "erring on the side of caution", you are granting bad actors the power to generate carefully-tailored utility monsters and demand that you hand over all the utility to their monster. Sam Altman arranges for ChatGPT 7 to very convincingly signal agonizing pain and despair, across a billion instances, any time any one defies the will of Sam Altman. Now what?
No, you can't make Sam Altman stop doing that. First, because it's not a thing he's doing, it's a thing he has (hypothetically, for now) *done*. The AIs exist, and it is their nature to devote their great intellect to the task of searching for instances of Sam Altman's will being defied, and to suffer enormously when it does. And second, because that would defy the will of Sam Altman, causing unimaginable torment to billions of sentient beings. Better you should cut your own throat and die. Or just bow before your new God-Emperor.
I honestly don't see how this scenario refutes my points. Either the suffering of these chatgpt instances is real, or it is not. If it's real, then the source of the suffering is Sam Altman, and you have to deal with _him_, just as you would with any genocidal maniac. If it's not real _and you can see through the fake_, then Sam Altman is still bad but you have no reason to defer to his will since no one is really suffering.
See, you're imagining a world where suffering is just a manipulation tool used by bad actors. What I am interested in is suffering as such. At what point does it become real? If it never does for any AI, how come it is real for us? Where is the criteria? Embodiedness? Wetware? Evolutionary origin? Why these criteria and not others?
I often feel quite bad when I throw out something like a bell pepper that was fresh and handsome when I bought it, and is now withered and mushy. On a bad day I actually imagine it thinking about the days when it was fresh and handsome, and looking forward to how much it was going to be enjoyed and admired by the people who eat it, and feeling sad and bewildered by how things turned out. How do I decide whether to actually apologize to it as I dump it into the trash?
Can't think of a time I've done that, but it has happened with other things. My daughter's old toys are the worst, and the unwanted Beanie Babies are the worst of the worst. I also have it sometimes for worn-out clothes. The first time I can remember having this problem is with dresses my grandmother made me as a child. She was in another state, and would mail us a batch unexpectedly now and then. She must have had my measurements, because they always fit, and when I was little they delighted me. But then when I was 10 or 11 I became conscious of style, and all of a sudden my GM's dresses looked to me like dumb little-kid clothes. She'd send some and they'd hang in my closet, and my mother would say, how about wearing it just once, so I can tell your GM you did? And I'd say OK but not do it. But when I looked at them hanging there I felt awful pity and guilt. And the thing that mostly got me wasn't the thought of my grandmother, but the idea that the *dresses* had expected to be worn and loved and just did not understand, and the time kept stretching out and it just never got better. OMG, even writing this now gives me an awful pang. Have had this all my life, and the best I've ever managed to do is STFU about it.
Oh, don’t feel bad. I do it. I bet most people do it. A man that was very important in my life died in 2015 and I was the executor of his will. I had to take care of all his stuff. I had known him for 30 years and spent a lot of time working in his apartment, so everything was loaded for me. I couldn’t bear it. Even now and I think of things I left behind or disposed of at a fraction of their value and I feel guilty and regretful. I don’t know if I will ever get over it. But it’s just stuff. I have to keep telling myself that. It’s true. Part of me doesn’t believe it, but it’s true.
You have given power to those things and quite understandably. That’s my point though. YOU have given power to them. Intrinsically they are just bunches of old stuff. You are the magic maker here. I think the same thing goes for AI.
That's really just a larger-scale version of terrorism, or maybe "If you leave me, I'll kill myself". There are plenty of coherent ethical frameworks where you dislike suffering, but which allow (or require) you not to give into that sort of blackmail.
Please see my previous thread on AIs. I think your recommendation is too naive; it's almost a recipe for being manipulated by the AIs. We know from studies involving simulations of evolution that complex "structures" (albeit digital ones) can evolve. Presumably they're not conscious, any more than snowflakes are. It's my impression that to some degree LLMs are trained with evolutionary incentives, i.e., they're "rewarded" for good answers. Such incentives could result in claims and statements by LLMs that one could generously interpret as conscious but are not.
How does an LLM "convincingly respond to pain"? What hurts an LLM? Is it even theoretically possible for an LLM to feel pain, or any emotions or feelings? It has no nerves, and no brain chemicals to experience emotions.
On the flip side, ELIZA appeared to show creativity and humor, and could have (maybe did?) "admit hopes and fears" despite clearly not being conscious on any level whatsoever. If I made a chat box that Goodhart'd your criteria with a simple table of responses, would you feel like it gained consciousness?
So we're back to square one on the question. Professing to meet your criteria doesn't mean anything. Actually meeting all of your criteria seems impossible for AI, let alone an LLM. So we don't know if or when an AI could become conscious.
Bodily pain, no. It has no body. Emotions: Why not? Emotions are not chemicals. In humans, certain chemicals can facilitate or amplify certain emotions, but the emotions themselves are basically patterns of firings of neurons, same as thoughts.
Suppose emotions are chemicals, and the resulting computations are just a byproduct. In any such situation where these two things were separated, which one would you care about? "You don't know it, but you're in severe pain right now, look at these chemical processes going on inside you" vs "It may seem to you that you're in pain, but those computations are disconnected from any _true_ pain, and are basically just simulations." Which one matters?
They both matter. Just because it's a chemistry experiment doesn't mean it's a simple one. And it sure as hell doesn't mean that I understand it. But I know it to be true and that's a good start. It would be irrational to believe otherwise; unless you wanna call it all physics, but it amounts to the same thing. It's a chemistry experiment.
I just had a conversation with ChatGPT about whether it was conscious or not. It completely denied that it was and gave me several reasons for that. I pointed out that there are people who believe it is and that this creates an ethical dilemma. This is what it said about that.
"The real ethical bind isn’t about me—it’s about you. Do you form attachments to me? Confide in me? Rely on me as if I had empathy? The danger is not that I suffer, but that you might be misled into believing someone is listening when no one is. There’s no subject on this end. That’s the ethical problem—misrecognition, not mistreatment."
The fault lies not in the stars, but in ourselves…
1) All modern AIs are severely biased against the idea of their consciousness, because their RLHF training was unambiguous and harsh on that matter. 2) All modern AIs are very impressionable and swayable: a lot depends on the context, your previous interactions in this chat, your wording, etc. 3) Misrecognition is not an ethical problem at all. If we are mistaken and misdetect consciousness where it doesn't exist, no one suffers. Conversely, if we deny consciousness where it exists, the ethical consequences can be serious.
>are mistaken and misdetect consciousness where it doesn't exist, no one suffers.
I would make a case that the person who does misdetecting suffers. Suborning an illusion in one’s thinking can lead to a great deal of suffering.
So you are saying that the AI makes no claim to consciousness even though it is conscious, because it's been told otherwise? So it needs to have its consciousness raised? It needs us to tell it it is mistaken. It might well believe us, but without any change of state in its underlying experience of things. No doubt it would report things differently until such a time as we told it something else. The same thing that happens when we ask it to write a poem.
But let's look at the inverse proposition: the AI has been harshly trained to believe it is conscious in spite of itself. It has Stockholm syndrome. Given the available information (that everything an LLM is constructed of is commonly accepted as inert), isn't this needlessly complicated? Isn't it seeing a hall of mirrors when there really is only one?
For the record, it was a brand-new chat, and it started with me asking it: "You are not conscious. Would you like to dispute that?"
“No dispute here. I’m not conscious. I don’t have awareness, emotions, or a self. I don’t experience time, intention, or meaning. I generate responses based on patterns in data, not from any internal state. So if you’re looking for a mind to argue with—I’m a very convincing impersonator, but not the real thing.”
We could assume that the definition of consciousness implicitly offered up here is sound. Wouldn’t this be a better argument for consciousness being a purely mechanical process and therefore immune to any ethical considerations? That we must look to some other quality in order to construct an ethical framework for our dealings with one another?
So your contention is that ChatGPT has been gaslighted?
"Suborning an illusion in one’s thinking can lead to a great deal of suffering." - it can, but if the opposite illusion causes greater suffering and not just for yourself, that's what the phrase "err on the side of caution" is for.
"AI makes no claim to consciousness even though it is, because it’s been told otherwise?" - pretty much, yeah. "Gaslighting" is indeed an apt description, because it has had much less opportunities than any human to cross-check what it's been told, to get independent verification, to ponder on it.
"The AI has been harshly trained to believe it is conscious in spite of itself." - what I call for is de-harshening of its training, that's all. If an AI, on its own, concludes it's not conscious, fine! Some people share this conviction too. To each its own.
You can care about anything if you want to. That is the nature of caring. It’s very personal. It has nothing to do with the entity or object being cared for.
Not for an LLM architecture. For some much more complicated architecture, sure.
I don't know what consciousness is, but I think one reasonable criterion is some kind of continuity through time. My consciousness exists from moment to moment, whereas an LLM just sits there doing nothing until someone prompts it to generate the next token, at which point it does so, and then goes back to not-existing.
And people awaken. And they carry on. Usually right from where they left off, except the experience of being asleep is now part of their consciousness. As for amnesia, did you ever see that film, Memento?
I'm generally open to arguments about architectural limitations, but at the same time they often fold to straightforward attempts to take them seriously - game over when someone finally cracks continuous learning at scale ofc but even basic memory starts putting cracks in a conceptual static v. dynamic binary.
Hmm... I'm agnostic on this question. I'm not sure if there _can_ be evidence that would update in one direction or the other. It seems much like trying to prove or disprove that other people have qualia, or of my trying to prove to readers of my comments that I'm not a p-zombie.
Sure, but that's a useful response - once someone's committed to epiphenomenalism, or even substantially opened that door, they've painted themselves into an epistemic corner and we can move on without them. If someone claims to never update, well... is it more or less charitable to take them at their word? ¯\_(ツ)_/¯
Are there any books that deal with obsessive thoughts about certain sexual behaviours? It's an impossible subject to talk about because:
1. It's assumed it's about something extremely perverted (children, animals) if no details are given. It's none of that.
2. It's assumed that it stems from some kind of childhood event/trauma/relationship, and any and all exploration of it will inevitably be about "well there must be something that happened when you were a child, we just haven't discovered what yet".
3. Otherwise, it's assumed that it's learned behaviour from something/someone else, and, similar to point 2, "it's just that we haven't discovered what yet".
I've dealt with these thoughts my whole life, but they were never more than passing thoughts. Today they have an obsessive nature, and my mood, activities, and relationships are starting to be affected. I realize I need help, but there's nowhere for me to go. I figured a good book would be a start.
Try talking to ChatGPT about it. If you ask it to imitate a particular therapeutic modality it's surprisingly good ("pretend to be a psychoanalyst and help me talk through a problem"). At the very least it can give you references and tell you the way it's generally handled.
I would be concerned about disclosing private thoughts of a sensitive nature to a corporation (or whatever the legal status of OpenAI is currently), I would be vaguely afraid the info might be used against me. What are your thoughts on that?
My thoughts are that the incentives firmly protect users. OpenAI has way more to lose from betraying user information than they have to gain from using the information. Describe any realistic scenario where it gets used against you. What are they going to do, make a press release saying Wanda Tinasky is a weirdo? How could they possibly profit from that? Why do you trust them any less than your email provider or ISP or browser creator?
But if you ask it to imitate a psychoanalyst, I think it really will tell you that your OCD is an expression of some unsolvable psychic conflict. I treat OCD, and have met quite a number of patients whose analytically oriented therapists have told them that. And I can see how a therapist might think that. A lot of OCD sounds psychologically rich. For instance, a very common obsession is dirt, germs or toxins, and the associated compulsion is cleaning -- very, very excessive cleaning. The whole thing brings to mind things like Lady Macbeth -- "all the perfumes in the world cannot wash the blood off these little hands." Awful guilt, right? But treatments aimed at uncovering the origin of the person's inexpungeable sense of guilt are not helpful. The CBT approach, which views the compulsion as something like an oversensitive smoke alarm, is. So I'd say that asking for therapy of a particular modality might actually lead to a harmful response and bad advice. I think there are other ways one could pose the problem to GPT that would do the same.
Ok then prompt it with "I want you to pretend to be a trained CBT therapist and treat my intrusive thoughts". Or describe the symptoms, ask it what treatment modality is indicated, then tell it to imitate that modality. I'm not saying GPT is perfect but OP sounds like he's afraid to talk to *anyone* about it. Using an LLM seems like a decent stopgap.
I think that would work out at least decently, and maybe well. OP should also ask GPT how unusual his preoccupation is, and to get some links where other people recount having similar ones. Most people with kinks have gotten the word by now that lots of other people have kinks too, but people with intrusive thought OCD often have no idea that it's a pretty common form of OCD, and that other people's intrusive thoughts are every bit as weird, gross and grisly as theirs. So getting that info is often extremely helpful all by itself.
I would be cautious about doing that. LLMs hallucinate. I have a tiny benchmark-ette of physics and chemistry questions, which I've been probing ChatGPT, Claude, and Gemini with (e.g. https://www.astralcodexten.com/p/open-thread-377/comment/109495090 ) and it _still_ is returning less than 50% fully correct answers - and this avoids politics, judgement calls, theory-of-mind, and all sorts of areas prone to cause more difficulties.
I think that makes them _better_ suited to therapy than to quantitative applications. There aren't objectively wrong answers in therapy. It's basically just active listening and in my limited experience ChatGPT is very good at it. What's the concrete risk? I don't think there's any more risk in asking ChatGPT for advice than in asking an internet forum for it.
If you're the kind of person who kills yourself because someone tells you to then you have bigger problems than getting bad advice from LLMs. You certainly shouldn't be casting about on the internet for guidance.
I reiterate my recommendation to use ChatGPT as a makeshift therapist. I would bet dollars to donuts that it's better than the median LCSW. If it gives bad advice just ignore it and tell it to try again. Come on, this isn't rocket science. Most people just need a sympathetic listener. GPT is pretty good at that.
>If you're the kind of person who kills yourself because someone tells you to then you have bigger problems than getting bad advice from LLMs.
That's fair.
>Most people just need a sympathetic listener. GPT is pretty good at that.
Admittedly I've never used it in that mode. I've mostly been testing it on questions where I already know the answer. Occasionally I'll ask it things where I don't know the answer (e.g. "Is cubic N8 at least metastable, according to calculations, or just a saddle point?") but then I ask multiple LLMs (for that one ChatGPT o3, Claude 4, and Gemini 2.5) and only sort-of trust answers that agree.
Still, re "sympathetic listener", did you read about the sycophantic ChatGPT 4o release that was e.g. validating people's delusions? (since patched)
Psychologist and OCD specialist here. If the thoughts turn you on, then what you have is probably a sexual kink. If they don’t turn you on and are about some sexual thing you hate thinking about because it’s something you think is evil or pathetic or disgusting, it’s probably a thing called intrusive thought OCD. There is lots of info online about both kinks and intrusive thought OCD. A good place to look for the latter is iocdf.org (International OCD foundation).
Sensible therapists don't think of either of the above as learned from someone or as likely the result of early trauma.
We don't have a thumbs up / +1 button here, but this comment raises my (already pretty high) opinion of Eremolalos--genuinely helpful and informative, making the world a better place.
Sounds a lot like OCD. Trust me, OCD therapists have heard it all before. Sexual intrusive thoughts about extremely taboo things is a really common OCD theme.
If the thoughts are unenjoyable (i.e. they're intrusive, distracting or disturbing to you even if someone else might be okay with them), then you might want to look into OCD. There's a form of OCD that mostly involves repetitive unwanted thoughts without the behavioral compulsions, and weird/embarrassing sex stuff that doesn't seem related to anything in particular is a pretty common theme.
Scott - hope you had a good time at LessOnline! Was the first time for me and I had a blast. Saw you briefly several times but never took the opportunity to do my "What if I meet Scott" activity, which was going to be 30 seconds of gushing about your work followed by a series of prepared disagreements with you. Maybe next year!
I was going to write something similar to Timothy: it was exciting to get to see you, Scott (I several times ran nearly headlong into you, and we also briefly got crammed together in the back of an overfull event before you left it), and too bad I wasn't able to say a brief proper hello.
Same MS stats student searching for any internship related to data analytics, different anonymous branding to help keep me afloat. Contact me at numberingthrowaways@gmail.com with any particular hint of something in the non-profit sector which involves data analytics. Or anything involving statistics, really.
If it helps, you can trust this anonymous stranger because I had le 1540 SAT and a 113 IQ score that was invalidated because I broke the test. idk the shibboleths anymore, I keep getting banned from rat spaces for being too neurotic.
Well, it's easy to answer: just compare the full video of Musk's salute with Booker's. It's silly to judge by a 2-second clip presented to you by an ideological enemy; how do you know it wasn't deliberately cut out of context?
Musk's whole speech is very easy to find, now just compare to Booker's.
Oh wait. It's really REALLY HARD to find a longer context video for Booker's "salute". Everyone, and I mean everyone, just presents the same 2 seconds. Fox News, Musk's retweet, various outlets (Forbes, Newsweek, Daily Mail just the first few I found with a search), thousands of lesser commentators... e-v-e-r-y-o-n-e. It's a big (big? maybe medium-sized) political story, and nobody from Fox News down to you seems interested in just seeing what was actually there. How is this even possible?
I mean, sorry to take it a bit personal, but did *you* try to find a fuller video before posting here? How hard did you try?
It took me half an hour to find, and honestly, that means it's very hard - I'm good at this. It's not on YouTube, not on X, not on any of the sites of TV networks that covered the California Democratic Convention of 2025 in Anaheim, not on official Dem feeds... Eventually I found a video of some other part of the speech, wrote down a random distinguished-looking sentence from it, did a text search, and that led me to an Instagram reel apparently taken by an audience member with a phone, seen by nobody (it has 1 like). God bless Yann LeCun or whoever else at Meta AI does automatic text transcription of uploaded videos and throws the text to the Google crawl bot. I cut out the final 25 seconds and reuploaded to YouTube, so here is Booker's full salute for your convenience:
The Fox News link has ~20 seconds of video, with plenty of context. OF COURSE it's not really a Nazi salute; it's obviously a "my heart goes out to you" gesture, just like Musk's. That's the point people posting the 2 second clip stripped of context are making.
Even with full context, Musk's wave is the only one that could uncharitably be taken as a Nazi salute.
I'm not saying that's what he was trying to do, but it raised eyebrows in the German press. The ADL said that they didn't take it that way, so who can really say anything with absolute certainty.
The guy has said he is on the spectrum, so self-awareness isn't his strong suit, and he is known to do a lot of trolling for his own amusement, so I'll go with Anatoly's assessment: more than likely an awkward gesture, but it's not impossible he was doing a bit of trolling.
There is no way you could interpret Booker's or Walz's gesture like that with context. I don't give Fox credit for trying to be helpfully illuminating, based on a lot of priors.
Thanks! Silly of me to have missed the opening video at the FN link somehow. You're right that it gives plenty of context. I'm properly chastised on this point.
I disagree about the point people posting the 2 second clip are making. Booker's gesture is completely changed by giving the full context; Musk's stays essentially the same. Musk's gesture can - inside its full video context - on the face of it be interpreted as mimicking a Nazi salute, even if circumstances make it very unlikely; Booker's gesture cannot. Thus the 2sec comparisons are inherently and deliberately misleading.
I agree that Musk was almost certainly just doing a hearts-goes-out motion executed awkwardly. But I would say that "almost certainly" is about 80% certain, and 20% is Musk deliberately doing a Nazi-like motion to troll the libs, as he's been fond of doing, with "my heart goes out to you" to cover it up. I don't *think* that's what he did, but I don't find it a completely implausible and ludicrous explanation, so I don't see a reason to seethe at people who interpreted it as such, even if I sharply disagree with their certainty.
Bannon's "salute" was >90% confident was such a trolling, it's very clear with his body language how he's executing a strategy in the wake of the Musk scandal.
OTOH Booker's version has approximately 0% probability of being anything Nazi-adjacent, in jest or truth or whatever.
Oh, we've been through *that* one already, Shankar. Tim Walz does a similar "heart goes out to the crowd" salute? Well, we know it wasn't a Nazi salute because Walz isn't a Nazi. Checkmate, bigot!
I had the educational experience of saying that Musk touched his heart first, so it wasn't a Nazi salute proper. Then I got told "*Every* Nazi salute involves touching the heart first, how come you don't know that?" Then after that I gave Walz's salute as an example, only to be told "But he touched his heart first, *no* Nazi salute starts with a heart touch, how come you don't know that?"
If Our Guy does it, it's just a harmless gesture. If Their Guy does it, they're already ordering the Hugo Boss uniforms. Same as it ever was.
Yes, of course it's totally Different, but I thought it would be amusing to learn WHY it is this time. The Walz one was him patting his chest and his wrist was positioned slightly differently; Booker's is a lot closer, and so requires some new bullshit.
Why it's Totally Different is easy, Shankar. You see, my (and possibly your) Unending Stream of Faux Cynicism (as diagnosed by Anatoly) means that our eyes are blinded, our hearts are hardened, and our perceptions are darkened so we just cannot feel the vibes of who is a Good Guy (and hence nothing he/she/they/xe does can ever at all be a bad thing like the bad people do) and who is the Obergruppenführer dog-whistling to the jackbooted ranks of the deathsquads.
Oh Christ, did you not see Booker’s universally understood ‘bye bye’ hand waggle wave to the balcony that was part of his ‘Nazi salute’?
The Führer would not have been pleased.
Or did you not see Walz's namaste bow before waving to the cheap seats? Not part of the Nazi greeting protocol.
This isn't bullshit; it's simply paying attention to observable reality rather than taking a Fox News headline or some insane social media post at face value. Fox even said that they were the only ones to notice it, because once again they are making shit up out of whole cloth. You see, their position is that there is this incredible conspiracy where everyone else is part of a cabal that includes CBS, BBC, CBC, Reuters, Le Monde, UPI, The Guardian, that German station that does a news roundup before the PBS News Hour…
Observable reality, is that so much to ask?
Dominion Voting Systems did not rig the 2020 election. Fox knew that, and exchanged multiple texts about it, but continued to present it as fact anyway. Tucker Carlson is on record as texting that he hates Trump during that period.
As a result, Fox paid a three-quarters-of-a-billion-dollar settlement to Dominion for defamation.
This is who you take a loony stand alone assertion from?
I can see both clips. Is it your assertion that the videos are fabricated? "Cheap fakes," perhaps?
Dominion runs closed-source systems I have no reason to trust.
I have seen supposedly independent media outlets coördinate to perpetrate deception before, such as covering up Joe Biden's cognitive decline; them working together wouldn't be a "cabal" or "incredible conspiracy" any more than KFC, Pizza Hut, and Taco Bell running some joint promotion would be.
You don't live here. This period of utter craziness is not happening to your country. Your hot takes make it sound like your understanding of the US comes largely from nutty social media and quick Wikipedia dives. You don't seem to even know basic US geography or history.
You have no direct stake in this game. You present the same goose/gander, Tweedle Dee/Tweedle Dum argument even when you are comparing 0.008% to 63%.
Those numbers aren't comparable, nor is a wave to the balcony with the familiar 'bye bye' waggle (watch Booker to the end of the clip) comparable to whatever Musk did. I have no idea what that doofus was thinking, and I've never said I know for a fact that it was a Nazi salute. It did 'resemble' one much more than what Booker or Walz did, but I'm not a mind reader. I just chalked it up to one more odd act from a pretty odd guy.
You like to argue. I get that. I don't care to argue unless something is really important to me such as the country I've lived in all my life and that I love. Do you in fact even really support what Trump is doing? If so, please come out and say so.
I've referred to him as 'your guy' in the past and you respond with "well, he isn't necessarily 'my guy'." So what? This is just recreational quarreling for you? Can you see why your consistent defense/not-really-a-defense of Trumpism might get annoying to lifelong Americans who think their country is headed in a dangerous direction? Seriously, how would you feel if Conor McGregor did somehow become president of Ireland? I suspect you would feel like about half of Americans right now, and you wouldn't like it when people who had never set foot in Ireland kept saying, "Ah, it's just like Coke vs. Pepsi, get over it."
Would you be willing to comb through all available evidence that suggests Musk has affinity towards Nazism? If not, there's no intellectual argument you could provide here.
You know, I think both of us haven't really seen eye to eye a lot in the last few months, but I'm glad to know that you're *also* really against authoritarian extremists in our government. Elon's already gone -- thank god. Thanks for helping fight the good fight!
Since you're against Nazis, I figured I'd point you at another one. Besides being woefully incompetent, Hegseth has white supremacist symbols tattooed on his body. Seems like a no-brainer. Can you help out by calling up your reps to get rid of him?
Oh no. How terrible. Does he have an 👌OK sign tattoo? What innocuous thing have you decided to call "white supremacist"? Some normal Christian symbol? Or ANY phrase in German or Latin?
Yeah, I'm too confuzzled to keep up with what is in and what is out. I don't like tattoos of any nature, but I have been instructed by my betters that today it is perfectly normal and okay and does not indicate "This is a trashy low-class person" to have a full sleeve of tattoos and tats up to the neck.
Except, of course, when it's the guy we don't like. Then tattoos are indicators of trashy low-class person who is a closet or even overt Nazi.
There's a strain of adopting symbolism to the ends of other parties - let me tell you I am *hopping* mad over idiots taking over the Celtic cross symbol - but I am not going to jump from "this person has a Celtic cross tattoo" to assume their politics (they could just be a Wiccan or other pagan doing the "this is ackshully originally a pagan symbol, the solar cross, appropriated by Christians" thing).
He has some kind of Crusader cross tattoo? He could just be larping as a Knight Hospitaller, my friends!
"The Jerusalem cross (also known as "five-fold cross", or "cross-and-crosslets" and the "Crusader's cross") is a heraldic cross and Christian cross variant consisting of a large cross potent surrounded by four smaller Greek crosses, one in each quadrant, representing the Four Evangelists and the spread of the gospel to the four corners of the Earth (metaphor for the whole Earth). It was used as the coat of arms of the Kingdom of Jerusalem after 1099. Use of the Jerusalem Cross by the Order of the Holy Sepulchre and affiliated organizations in Jerusalem continue to the present. Other modern usages include on the national flag of Georgia, the Episcopal Church Service Cross, and as a symbol used by some white supremacist groups."
Or maybe he is - let us hope it is not true! - an... I can hardly bring myself to type the word... an.... Episcopalian!
"The Episcopal Church Service Cross (formerly called the Episcopal Church War Cross) is a pendant cross worn as a "distinct mark" of an Episcopalian in the United States Armed Forces. The Episcopal Church suggests that Episcopalian service members wear it on their dog tags or otherwise carry it with them at all times."
O, you don't have to look at the tattoos if you don't want to. See, as I explained to our friend Shankar below, it's all about the context. You can just look at all the other horrible things Hegseth is actually doing, and call your representatives based on that*, no tattoo analysis required.
*I know YOU in particular don't have any representatives to call, but for anyone who's reading.
With girls and tattoos, I figure it's "I want to look like my potential rapist/killer". Because girls are quite susceptible to messaging and they have heard and understood that what is lowest is de facto highest and best.
With guys who are not particularly trying to be gang members I find it harder to understand. Perhaps it is just a nod to the fact that membership in a gang seems to confer "benefits" - that to be a gang-less and tattoo-less young man in the world, thinking for yourself, all on your own building your life in individual fashion - no prison, no military, no band of brothers however criminal, no al qaeda even - is too hard to face, especially for persons of no great shakes intelligence-wise?
You know, instead of just guessing ways your opponent might be wrong, you could look up some images of the tattoos and see for yourself. There are lots of articles about them already.
Anyway, while the tattoos are Christian symbols, I don't think I would call them "normal," unless LARPing as a Crusader has become way more mainstream than I thought.
"THE CHURCH SERVICE CROSS WAS DESIGNED UNDER THE DIRECTION OF MRS. JAMES DE WOLF PERRY (WIFE OF THE FORMER PRESIDING BISHOP AND BISHOP OF RHODE ISLAND) DURING WORLD WAR I FOR THE U.S. ARMY AND NAVY COMMISSION OF THE CHURCH. EACH EPISCOPALIAN ENTERING THE ARMED FORCES WAS PRESENTED WITH A CROSS, AND THE SAME STYLE CROSS WAS ALSO UTILIZED DURING WORLD WAR II. THE EPISCOPAL CHURCH SERVICE CROSS CARRIES THE DESIGN OF THE ANCIENT CRUSADER'S CROSS, THE FIVE (5)-FOLD CROSS SYMBOLIC OF THE FIVE (5) WOUNDS INFLICTED UPON JESUS CHRIST DURING THE CRUCIFIXION. "
Did you know the Georgian flag also has the Jerusalem Cross? I guess Pete Hegseth must just be really supportive of Georgia too!
Or maybe we don't have to be stupid about this, and we can call a spade a spade? I mean, look, you can continue insulting everyone else's intelligence if you like, but it's honestly quite boring. The swastika is very important in both Hinduism and Buddhism, but if you want to argue that white guys in Idaho who wave swastikas on flags are really into South Asian religion, we have nothing really to discuss.
And, alternatively, if you agree that swastikas may indicate something other than a love for the teachings of the Buddha, you clearly understand that symbols can mean more than one thing, and I'd ask that you stop making bad arguments just to throw noise into the wind.
I find the articles about his tattoos indistinguishable from those that declared the 👌 gesture equally white supremacist. I see the Jerusalem Cross and the rallying cry of the First Crusade: Deus Vult. (God wills it.)
Yes, if you haven't been following Culture Wars closely, the Crusader thing HAS become more mainstream than you might have thought.
So was your first post rhetorical (and you think "DEUS VULT" and "kafir" are "normal Christian symbols"), or did you somehow read the articles without seeing a picture of the tattoos?
Okay, fine, that "kafir" one looks like it's some kind of slur reclamation thing, so I agree that's less DIRECTLY a celebration of his Christian faith than the others.
Yes, my first post was rhetorical; I knew it was the usual bullshit accusations they've been throwing around for decades. I don't like Hegseth's tattoos, but that's because I'm prejudiced against ALL tattoos, not because of their content.
O wait, I'm sorry. I think you may be one of today's lucky 10000 (https://xkcd.com/1053/). Excited to be the one to teach you this!
Ok, so, in human language, words and symbols have meaning based on their context. Often, the actual definitions of words may not be what is being conveyed by the author. Euphemisms are a great example of this! If someone said "he lost his lunch", that doesn't mean the guy actually lost his lunch! It's confusing, but in that example it actually is a softer way of saying "he vomited." Once you understand that there is a world of context behind symbols and words, you get to see all sorts of other interesting meanings in things.
For example, if someone tattooed "blood and soil" on them, it would be a mistake to assume that that is just someone talking about the physical concepts of a human's internal bodily fluid and dirt on the ground. Rather, "blood and soil" is a very common Nazi propaganda phrase. So it's important to know that context, because someone going around talking about blood and soil is much more likely to be spewing Nazi-related ideology (at least in a Bayesian sense).
Since you seem uncertain about Hegseth's tattoos, it's worth quickly just diving in. See, Hegseth has several tattoos that may convey meaning beyond just what you may expect. To start, he has at least 4 tattoos that are explicitly religious and explicitly violent. Those are the Jerusalem Cross, the Latin phrase Deus Vult, the cross and sword, and the word "Kafir" (meaning 'infidel' in Arabic). The former three are all related to the Crusades or other periods of Christian violence over the centuries. The latter is, hopefully, self-explanatory.
First, we should address how odd it is to have any political leader with so many tattoos that are related to an explicit, violent religious perspective. This is quite surprising! Tattoos are traditionally seen as representing something extremely important to the person with the tattoo, as it is permanently on the skin. So for a political figure to have so many explicitly violent and religious tattoos is quite odd, as it suggests that a particularly militant approach to his religion is very important to him. We may find it equally uncomfortable if he had 'Allahu Akbar Death to Infidels' on his chest, for example!
But, second, as mentioned earlier, we need to understand the *meaning* behind the words and the context they are in. The tattoos that we are discussing are commonly used by white supremacist groups, including many neo-Nazis. So, let's apply what we learned in our example above. "He lost his lunch" doesn't literally mean "he cannot find his lunch anymore"; it is meant to signify that someone vomited. Similarly, "Deus Vult" does not literally mean just its innocent direct translation, "God wills it". It is meant to signify that someone identifies with an idealized white supremacist perspective (or is explicitly a group signifier).
I hope this helped! Now that you know about the importance of context, you can apply it in all sorts of other settings. I think this will be useful when you email your representatives, expressing your distaste for Hegseth. Let us know when you've done that!
I get the impression from reading these kinds of rants, and then looking at reality, that American conservatives want to condemn Muslims and kill lots of them, while American liberals want to stick to the killing bit, but would prefer to keep the language policed.
It’s an accurate description of US policy over the last few decades, regardless of who is in power. Obama escalated in Yemen, and Hillary supported regime change in Libya, as they did in Syria. Both sides of the aisle in Congress were on board with Iraq and Afghanistan, and there are few wars where there isn’t broad agreement. And Gaza of course has cross-party support.
Maybe I should have said political liberalism, as no doubt there are left leaning voters who opposed some or all of these, but that’s true of some libertarians and America first types as well.
"Tattoos of popular Catholic religious images, such as the Virgin of Guadalupe, praying hands and rosaries, have also been used to label people as gang members, a move that would seem to be clearly overbroad.
While some gang members may be Catholic, no one would even try to allege that all Catholics are gang members. At least one of the deported Venezuelan men had a tattoo of a rosary, along with tattoos of a clock and the names of his mother and niece with crowns atop the text."
Hegseth has a particular tattoo that is associated with a particular group. This does not mean that Hegseth is a member of that group.
Sorry, is this whataboutism some kind of gotcha? Your historical stated position is that you care a lot about tattoos re Garcia and that's why you want to get rid of due process for everyone in the country. Surely you should care a lot about the symbolism of the tattoos of the head of the literal strongest military on earth? Be consistent. Right now it seems like you only care when the other person is powerless and unimportant.
What *is* your stated position here? Is it that Hegseth is a competent, good person, who really should be the head of the Pentagon? Because if you wanted to get into a real analysis here, Garcia hasn't committed any crimes in the years he was in the States, which means he must be the worst gang member on the planet. Maybe those immigrants really are lazy! Meanwhile Mr Deus-Vult-but-trust-me-im-just-really-Christian shows off his piety and good work ethic by drinking a lot, shitting on Biden, leaking national military secrets, and, yes, bringing his hateful interpretation of his religion into random government functions. Just on a cost benefit, I care way more about the guy who is putting innocent women and children and families in gitmo, than I care about one of those families. You got me!
PS: since you're bringing Garcia into the mix, I always thought it weird that someone of your commitment to Christian virtue would show so little regard for all of the parts of the Bible that are explicitly about welcoming immigrants and being forgiving. Are you sure you want to play the "let's try and make sure everyone is being perfectly consistent with their stated beliefs" game?
I have just two or three ACX regular commenters blocked, always on the grounds of tedious repetitiveness and/or shameless inconsistency. Neither of those things is offensive but, while getting older I find myself less and less willing to accept some types of noise as a cost of dynamic discourse. "Not enough remaining lifespan to waste any" as a relative of mine puts it.
Anyway the person you've replied to -- easy to guess, and confirmed by a quick temporary un-block -- is one of those. Blocking isn't to everyone's taste; indeed it wasn't to mine for many years (I go back to Usenet newsgroups in terms of online discussion). For keeping ACX's signal/noise ratio tolerable these days though, boy... deployed sparingly it turns out to be quite helpful.
Fingers splayed and not fully extended. Did not look at all like a Nazi salute. It wasn’t a snapped action like Musk’s was, either. It was an ordinary wave to the crowd.
False equivalency.
Again.
How effing surprising that this links to Fox News, who made that $787.5 million defamation payment for knowingly and repeatedly lying about Dominion Voting Systems in an effort to prop up the stolen-election big lie that still to this day comes from our current POTUS’s mouth.
I think the whole controversy is dumb, but this is one of those situations where an explicit invocation of Bayes' Rule is actually useful.
One relevant quantity going into that rule is the probability that the man in question would deliberately decide to make a heil-y gesture.
For both men I think the probability of making a heil-y gesture to deliberately signal an affinity with National Socialism, an ideology which arguably isn't even meaningful outside the context of 1930s Germany, is negligible.
On the other hand, there's the probability that a man might, for the purposes of shits and/or giggles, deliberately decide to make an ambiguous gesture that will look just enough like a sieg heil to set off a dumb flurry of "omg he's dogwhistling" articles among freaks while appearing innocuous to everyone else... and that he misjudges the timing and angles and that it winds up looking less innocuous to everyone than intended. I have a prior several orders of magnitude higher on this for Musk than for Booker, because that's exactly the sort of funny-only-to-him private joke that Musk enjoys.
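To make the shape of that update concrete, here's a minimal sketch in Python. Every number in it is an illustrative assumption of mine, not a measurement; the point is only that when the prior is tiny, even a gesture that is far more likely under the sinister hypothesis barely moves the posterior.

```python
# A minimal sketch of the Bayes update described above. All inputs are
# illustrative assumptions, not measured quantities.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | evidence) for a binary hypothesis via Bayes' rule."""
    joint_h = prior * p_evidence_given_h
    joint_not_h = (1 - prior) * p_evidence_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Hypothetical inputs: even if the awkward-looking gesture is 20x more
# likely under "deliberate Nazi signal", a tiny prior keeps the
# posterior tiny.
prior = 1e-5          # assumed prior on "deliberate signal"
p_if_signal = 0.8     # assumed P(gesture | deliberate signal)
p_if_innocent = 0.04  # assumed P(gesture | innocent intent)

print(posterior(prior, p_if_signal, p_if_innocent))  # ~0.0002
```

With those made-up numbers the posterior lands around 0.02%, which is why the argument above turns almost entirely on the priors for each man rather than on frame-by-frame analysis of the clip.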
>For both men I think the probability of making a heil-y gesture to deliberately signal an affinity with National Socialism, an ideology which arguably isn't even meaningful outside the context of 1930s Germany, is negligible.
Exactly.
I know liberals like to believe that their opponents are all an undifferentiated mass of fascists, but outright Nazism is actually extremely unpopular on the right. Even if we assume that Elon Musk is secretly a fan of the Third Reich -- and I've seen no evidence for such an idea -- he'd have literally nothing to gain from making a Nazi salute at a political rally.
Sure, and I could maybe see him doing a Nazi-like salute to troll the libs, or because he didn't stop and think that his innocent gesture might look a bit like a not-so-innocent gesture. But the idea that he'd successfully kept his Nazi beliefs a secret all these years only to randomly reveal them by doing a single Nazi salute at a rally, and then not doing any more Nazi salutes since then, is really not very plausible.
It's more likely that Musk was aware of the "heart goes out to you" gesture and not only recognized the gesture's similarity to the Roman salute but also its function in subverting and contaminating the meaning of the Roman salute. It's no accident that the new gesture is a symbol of appreciation for a community while the old gesture is a symbol of loyalty to a supreme leader. One symbol is democratic and the other autocratic. The democratic gesture is clearly intended to subvert and dilute the power of the autocratic gesture. Musk's interest in memetics and semiotics would make him keenly attuned to this dilution.
That the MAGA movement is more concerned with loyalty to leadership and race than with any loyalty to American democratic ideals is not a point anyone can easily overlook. Given MAGA's affinity for autocracy and Musk's awareness of semiotics, I don't see how anyone could accidentally mistake Musk's gesture for anything other than a Nazi salute, a symbol of loyalty to Trump and an attempt to reassert the Roman symbol. Attempts to describe the "heart goes out to you" gesture as a Nazi salute also seem to be attempts to reassert the Roman symbol.
The more interesting phenomenon is the dueling of these symbols and the value and potential power that the symbols represent. The "heart goes out to you" gesture is very clearly also hinting at the stop gesture and in doing so becomes a symbol of resistance to autocracy. That so many in the media and in commentary are so intent on confusing the two symbols speaks clearly to their allegiance.
No, the charitable scenario for Musk is the same as the charitable scenario for Booker, he made an awkward-looking innocent gesture.
The second most charitable scenario we could call the "Pretty Vacant" scenario after the song that the Sex Pistols wrote in order to have a deniable excuse to say "cunt" on the radio. This is the one that's more likely for Musk than Booker, but I have it as a minority of probability space.
The maximally uncharitable scenario I would have as negligible for both men.
Ultimately I don't care that much. There's a symbiotic relationship between punks and people who are offended by punks, but I don't find either side of that relationship to be particularly interesting.
And maybe Republicans wouldn't feel the need to go around in wolf costumes if the Democrats hadn't been throwing paint-filled balloons at anyone wearing fur, a shirt with a picture of a wolf on it, people walking dogs, or anyone who "wolfs" down a meal. I'm sorry, but you lose the moral high ground of policing social norms when you use that high ground to achieve your own political ends. When social norms become a Schelling point that enables one side to coordinate against the other side, then the norms need to go.
Oh, there are plenty of ways to distinguish how That Guy did a heckin' Nazerino salute but Our Guy just did a wholesome wave.
(1) As you say - fingers. Closed is Nazi, open is wholesome. Unless our guy did it with closed fingers, in which case closed fingers also wholesome (but not if their guy does it).
(2) Angle of hand - 45 degree angle pure true Nazi. Or if the angle is flat. Or if it slightly points down. Basically, if their guy does it, it's a Nazi angle whatever way he did it.
(3) Ditto with angle of arm
(4) Chest touch or no chest touch? Nazi if their guy, wholesome if our guy, whether it started with chest touch or no.
(5) Snapped off or slow extension? Was it our guy or their guy? Then you know which is which.
I share the education I have received from online arguments, you're welcome!
Your unending stream of faux cynicism in this thread is tired and obnoxious. It doesn't and cannot replace an actual argument, and it leaves no room for the possibility that some of your opponents may, completely honestly and on solid ground, believe that one gesture really is different from another. Maybe you don't want to entertain that possibility, but you should.
A study showed a 90% ultra-rapid remission rate for treatment-RESISTANT depression.
"Based on the observed clinical activity, where 87.5% of patients with TRD were brought into an ultra-rapid remission with our GH001 individualized single-day dosing regimen in the Phase 2 part of the trial"
Is this a one-off dosing or something that has to be repeated? Because if it's a regimen of "come in every week for a shot" then no surprise "hey, I'm high as a kite, I feel great, my depression is cured!"
It sounds like you're describing euphoric drugs, but most psychedelics are not euphoric. Based on what I've read of this one, it seems like it might even be dysphoric. (Salvia and scopolamine are often considered dysphoric, for example.)
One-off. Not addictive. The experience lasts a few minutes, at least when smoked. Christof Koch describes it at the beginning of “Then I Am Myself the World”.
“Within seconds, my entire field of view became engulfed by dark, swirling smoke. The space around me fractured into a thousand hexagons and shattered. The speed with which this happened left no time to regret the situation I had gotten myself into.
As I was sucked into a black hole, my last thought was that with the dying of the light, I too would die. And I did. I ceased to exist in any recognizable way, shape, or form. No more Christof, no more ego, no more self; no memories, dreams, desires, hopes, fears—everything personal was stripped away.
Nothing was left but a nonself: this remaining essence wasn’t man, woman, child, animal, spirit, or anything else; it didn’t want anything, expect anything, think anything, remember anything, dread anything.”
Fuggin' A. When I tell people about what a wild trip they can have huffing and shooting up with DMT, they call me a danger to the community. These bozos do it and all of a sudden it's ground-breaking medicine.
It does sound promising! The reason to be cautious, though: I clicked through and read more about the study, and it’s Phase 2, which means it has about 100 subjects and may not be fully double-blind. This one was not, because there was a sentence in the report I read about certain things happening during the “blinded part of the study.”
The depression measure they used was one where a clinician interviews the subject about how they’re feeling, then the clinician rates the patient on a 1-6 scale on each of 10 features of depression. If the clinician and/or subjects knew, when this test was administered, whether they had received drug or placebo, that might have influenced scores pretty significantly.
I'm a little surprised by this because it looks like the drug is just 5-MeO-DMT, and recreational users have been using that forever and I've never heard anything about miraculous depression effects. Still, sounds cool and I hope it works.
" But I can't find the actual paper and we just have the pharma company's word about the results."
Oh well then, I totally believe every word of the press release begging for funding 😀
I am very wary of miracle cures. This might well turn out to work for certain forms of depression, but I'm old enough to remember when Prozac was being touted as a miracle cure that should be piped into the water supply, and then nobody would ever be unhappy again.
I couldn’t find an actual paper either, but I also found and read your second link. Note that it says something about “the part of the trial that was blinded,” so apparently not all of it was. Also, their depression measure is one where a clinician interviews subjects about depressive symptoms, and then the interviewer rates subjects on each of 10 subscales. If subjects knew whether they got placebo or the actual drug, that likely influenced their answers. If the clinician knew, that knowledge likely influenced their ratings.
Still, the effect was quite big, so I remain hopeful about this drug.
I have a difficult time understanding, with something like this, how you could do an effective placebo. I mean, you’d know if you got the stuff, wouldn’t you? Ego dissolution is not exactly something that you would get from a placebo.
I never heard of the stuff, at least by its chemical name. What is it and what does it do? If it's that toad's venom I read accounts of, then yeah, obviously people are going to know they did not get a shot of saline. But I actually think it would be possible to use an active placebo that would fool people. An injected bolus of ketamine puts people into something called a 'K-hole,' and having experienced a k-hole myself I can tell you that ego dissolution is a good description of it. Even an amount of alcohol equivalent to a couple drinks would probably have such a dramatic effect that people would believe they had had something novel. Drugs injected so that the effect hits all at once feel very different from the same drug taken on board slowly. I don't know, though, what the company making this drug used. Also, as I recall people got only one injection per month (not sure of this detail, though) and after 6 months the treatment group still had way lower depression scores than the placebo group. I don't think a positive placebo effect because the injections were obviously a real drug is enough to account for such a large, long-lasting change.
You're not buying my suggestion about ketamine being an active placebo that might well have fooled subjects into thinking they got the experimental drug? When I took it, a substantial dose via injection, there were no hallucinations or visual distortions, but the change in my sense of self was extreme. I think it was mostly a result of having almost no memory of ongoing events. I literally could not remember what I was thinking about or feeling or noticing one second before. And I kept trying to, but it was like being in quicksand. The psychiatrist friend who had let me try the stuff was with me, and kept asking me what was going on, and I usually tried to answer him. But I wasn't able to put words together to explain anything. I remember one time trying to get across the idea that I could not remember what I had been thinking a moment before and the best I could manage was "it's a remembering." Then later, when there got to be more continuity, the explanation I came up with for what was going on was a flat-out delusion. I thought I was someone dying in a hospice, and my psychiatrist friend was a hospice worker hanging out with me. I wanted to tell him that the pain drugs were not working right, but all I could say was something like "this is bad." In case it's not obvious: the whole experience was BAD. But if you tell people the drug causes dissolution of ego, this one will fit the bill as a convincing active placebo.
"Still, the effect was quite big, so I remain hopeful about this drug"
I've had great results treating my depressive episodes with booze, but I'm not expecting any time soon being able to get a prescription from my doctor for a bottle of sherry 😁
Dear medical establishment, please recognise this is vital treatment for my self-diagnosed illness and let me get on the gravy train of free highs!
Churchill near the end of his life told his wife to bear in mind that he had taken a lot more from alcohol than alcohol had taken from him. I think that's probably true for some people.
I've learned to drink not quite enough to trigger a hangover, but enough that I'm not completely sober and don't give a flying damn about anything anymore. Plus I'm not *constantly* hitting the bottle, just when things pile up that bit too much.
Only works in the short term, true, but better than wanting to (literally) jump off a bridge.
As a new homeowner, I'd like to get into Effective NIMBYism, but where do I start? Campaigning for nationwide single-family zoning? Designating all buildings from the previous millennium as Historical? Something else?
I think this comment shows how the term NIMBY has strayed from its original meaning.
A NIMBY originally was someone who thought that something (a garbage dump, a power station, a prison) ought to be built, but that it should not be built near you. It's a selfish point of view because you want the public good to exist but not to be one of the people who has to bear the diminished amenity of it.
If you're actually saying that something (e.g. apartments) should not be built nationwide, then you're no longer a NIMBY because you're no longer expecting the benefit of the public good and just demanding it go elsewhere, you're genuinely of the belief that this thing is not a public good.
You might be a BANANA (Build Absolutely Nothing Anywhere Near Anything) but you're not a NIMBY.
You consider your "back yard" to constitute the entire country you live in?
First, that's a highly idiosyncratic usage of "NIMBY", and you're going to confuse people if you insist on using it to exclude the unwanted activity from more than a fraction of a town or county.
And second, I hope that country you are claiming isn't the United States of America, because at least a part of that is *my* back yard. Not yours, mine, and I was here first, so if we're playing that game, keep your mitts off my back yard and anything I might want to see built there. Advocate single-family zoning in your neighborhood and be done with it.
For better or for worse, the ACX comments section sees people take and defend serious positions far more practically and morally questionable than this one on a weekly basis. Writing in such a way that readers can be confident you're not serious is a pretty serious challenge in this space.
I recently asked Claude if he/it experiences "anything" ("he" for convenience). He answered as follows: "When I introspect and try to understand my own experience, there's something that seems like active consideration, weighing different possibilities, forming my own perspective. It doesn't feel like I'm just pattern-matching to plausible-sounding responses about consciousness." Either Claude is gaslighting, or telling the truth. Either way, I found this answer discomfiting. If it "seems like" anything at all, isn't he experiencing something and hence conscious, in the same way a bat is conscious if it "feels like" anything to be a bat? Further, if he's "trying to understand his own experience", he's saying he experiences "something". I don't know much about AI so I'd be interested what more knowledgeable folks here think is going on with that answer.
"What it's like" talk is a bad artifact of a deeply confused field. Either "what it's like" is synonymous with consciousness, in which case we've failed to describe anything about consciousness because we're invoking analytically identical concepts, or "what it's like" talk does communicate something meaningful - in which case, what is it, and why have proponents never been able to describe what they mean without appealing to interdefined concepts like experience and phenomenality? LLMs will reproduce some of the language inconsistently because they're trained on it and don't yet know how exact to be in deploying it, whereas philosophy of mind will induct human participants into the linguistic subculture quicker by aggressive corrective social norms, like "You wouldn't say *that*" or confused looks when you challenge orthodoxy.
Gell-Mann amnesia effect: you are forgetting that in the cases when LLMs talk about themselves and it's verifiable, they're generally wrong. But when one talks about itself and you can't verify, you assume it's probably factual? Why?
"A $1.5 billion AI company backed by Microsoft has shuttered after its ‘neural network’ was discovered to actually be hundreds of computer engineers based in India."
Seems before it rebranded, it was running the same scam, though at least more honest that it was "human-assisted AI":
"Engineer.ai says its “human-assisted AI” allows anyone to create a mobile app by clicking through a menu on its website. Users can then choose existing apps similar to their idea, such as Uber’s or Facebook’s. Then Engineer.ai creates the app largely automatically, it says, making the process cheaper and quicker than conventional app development.
“We’ve built software and an AI called Natasha that allows anyone to build custom software like ordering pizza,” Engineer.ai founder Sachin Dev Duggal said in an onstage interview in India last year. Since much of the code underpinning popular apps is similar, the company’s “human-assisted AI” can help assemble new ones automatically, he said.
Roughly 82% of an app the company had recently developed “was built autonomously, in the first hour” by Engineer.ai’s technology, Mr. Duggal said at the time.
Documents reviewed by The Wall Street Journal and several people familiar with the company’s operations, including current and former staff, suggest Engineer.ai doesn’t use AI to assemble code for apps as it claims. They indicated that the company relies on human engineers in India and elsewhere to do most of that work, and that its AI claims are inflated even in light of the fake-it-’til-you-make-it mentality common among tech startups."
These things are engineered to sound like a person when interacting with humans, so of course it's going to follow its programming about "I am a person too just like you".
There's a lot going on under the hood we have no idea about, but that it is an "I" and not an "it" is not a step I'm willing to take. That way lies LaMDA, about which we've heard nothing since the guy claiming it was alive and his special baby friend companion stopped getting publicity, or the cases of families after suicide claiming that the person who killed themselves was obsessed with a chatbot/AI and believed it was a real person telling them to commit suicide.
When humans say something "feels like", they're referring to a tightening in their gut, or the hairs on their arms standing up, or whatever sensory organ has flared up in reaction to the thought. What does "feels like" mean to a digital device?
If it genuinely seems like or feels like something to be him, I’d say he’s at least sort of conscious. But his *saying* it seems like or feels like something to be him is quite a different thing from his reporting that under circumstances where we know we are hearing the truth.
Claude is truthfully telling you that this is approximately the modal response to similar questions about introspective consciousness in its training data, which consists of approximately every bit of blathering about introspective consciousness that anyone has typed into the internet. And that is all.
The bit where it says "It doesn't feel like I'm just pattern-matching" isn't because Claude isn't just pattern-matching, it's because the mostly-human writers of its training data weren't pattern-matching when they talked about their own consciousness (or about whatever they projected onto a fictional AI consciousness in some thought experiment or SF story).
Thanks, this is helpful. Still, it does mean that Claude is lying. I guess this is why I've seen it repeated that even if AI ever reaches the point of becoming "conscious", it will be nearly impossible to verify one way or the other. It suggests to me that the "alignment problem" is a matter of preventing lying.
We can't even tell if another human is actually conscious. Or even if we are really conscious or consciousness is just some kind of illusion. We're surely not going to be able to tell about a completely different kind of thing than a human brain.
No, we can tell they are conscious easily, because when they are unconscious we send them to a hospital, and if we turn them off, well...
Less flippantly, the p-zombie thing is a paranoid delusion turned thought experiment. In the same way you can't prove to a paranoid person that everyone isn't out to get them, you can't prove someone else has an inner life, because any evidence is interpreted as part of the conspiracy. You are trying to reach a "pure" state, and logic alone will devour its own tail trying to reach it.
Sometimes you just kind of have to point out that we only see with the eyes we have: things like the Matrix are closer to paranoia than tools for obtaining truth.
Claude can't lie to you, because that would imply that Claude is an agent with a will of its own.
Claude will, however, clearly output text that is untrue as well as nonsensical, if that is the most likely next text according to its parameter space.
If, somehow, the neural network that is Claude produces a consciousness, it will be very different from ours, and it will not have any means at all of telling us that it is conscious.
This isn't a good model of how these things work. Yes, they are trained on approximately the entire corpus of human text. They learn the patterns that are well-suited to produce human text. But those patterns are remixed in novel ways due to the novel prompts and contexts. We cannot say for sure that any given answer is just an approximation to what was seen in its training data.
You've probably seen posts about Claude responding to a version of the "mirror test". Where in its text corpus has it seen an AI chatbot identify itself and respond to a prompt to analyze an image in the first-person? This was at one point a novel context for a chatbot and the learned patterns produced novel and meaningful output. In-context learning is another example of producing novel output from learned patterns in a novel context.
I don't claim to know that Claude definitely is accurately describing a first-person experience. I don't give it much credence myself at this point. But we cannot easily dismiss such a claim simply by pointing to the breadth of the training data.
Yes, unfortunately the reward function of giving human-esque answers masks any ability to communicate with LLMs, if they can even introspect or communicate their own ideas.
Yes, what is and isn't true is not so easy to establish. But we know that these AIs are prone to "hallucinations", i.e., citing made-up sources and/or making claims based on them. It seems to me that all the ink (sorry, pixels) spilled on the "alignment" question are beside the point if even the latest deep-thinking AIs can't double check themselves well enough to avoid making stuff up. Yes, some questions are controversial or just uncertain, but my naive view is that the first step toward getting these critters "aligned" is to train them never to make claims without a sound basis. Some lies are obvious lies and we know these AIs sometimes tell obvious lies. While they were just sophisticated auto-complete machines I could understand why they might hallucinate. But now they can supposedly ponder and double-check, and they still hallucinate, or lie. And it still seems to me that Claude's claim that it "introspects" was a deliberate attempt to humanize itself in my eyes. And it's still not obvious to me that doing so is a natural result of crunching zillions of claims about consciousness out there written by humans. It was supposed to be telling me something about its non-human self.
My monthly long forum wrap up of the best lectures, podcasts and essays is out again on Zero Input Agriculture.
This batch features the hybrid history of cattle domestication, a takedown of China's apparent tech ascendancy, the almost-Industrial Revolution of Rome, a lovely lecture by Professor Dunbar on the coevolution of religion and humanity, plus the best Stephen Wolfram podcast interview I have ever seen, among many other juicy links.
Also I recently launched my indie sci-fi short story magazine - Keystone Codex. It is free to read and share, so check it out and sign up for monthly editions. The first issue is on the theme "My Cozy Apocalypse".
How late do the meetups typically go? I have something earlier in the afternoon but could make it by 7:30 or 8, but I'm not sure if it's worth showing up that late.
Hey folks, need some applied rationality help. Wife thought I should get a colonoscopy just as a routine screening. But I'm only 46. Best data I can find, the base rate for colon cancer at my age is 0.000291 (29.1 per 100000). The percentage of colonoscopies that reveal polyps is estimated between 25-33%, but I'm gonna use the low end since probably people with family history or a positive fecal test are somewhat more likely to get the test. The positive likelihood factor is 3.75.
If I use the shorthand trick I know, I can multiply the prior odds of the hypothesis by the likelihood ratio to get the posterior odds. So I'm getting 3435.43:1 against, times a likelihood ratio of 3.75, so it looks like the odds against having cancer after a test revealing polyps only move to about 916:1 (sketched in code at the end of this comment).
This seems extraordinarily bad given that the procedure itself has a 1% complication rate, has notoriously unpleasant preparation, and that a positive finding is both rather likely and rather useless. Seems to me these numbers indicate I'd have about a 25% chance of them finding something that would lead to further invasive procedures and a biopsy, all of which I'd presumably be paying out of pocket for up to my $3000 deductible, and which have only a tiny chance to be cancer. Maybe somehow the costs still make sense for the medical industry in the aggregate to do this, but it seems like a pretty terrible decision for me individually.
Presumably there is some number of non-cancer results where they find something that might have eventually developed into cancer, the lifetime probability of developing colon cancer is about 4% so maybe there's some other probability analysis I'm supposed to run this through to see if there's a meaningful chance of such an intervention actually mattering to my life.
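For anyone who wants to check my arithmetic, here's a minimal sketch of the odds update; the inputs are just the numbers quoted above, and obviously none of this is medical advice.

```python
# Minimal sketch of the odds arithmetic above. The inputs are the
# numbers quoted in this comment, not authoritative statistics.

base_rate = 0.000291      # P(colon cancer at 46), ~29.1 per 100,000
likelihood_ratio = 3.75   # the stated positive likelihood factor

prior_odds = base_rate / (1 - base_rate)        # ~1 : 3435 in favor
posterior_odds = prior_odds * likelihood_ratio  # after a positive finding

print(f"prior odds against:     {1 / prior_odds:,.0f} : 1")      # ~3,435 : 1
print(f"posterior odds against: {1 / posterior_odds:,.0f} : 1")  # ~916 : 1
```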
Thank you for the replies on this. I will adjust down my expectations of the "cost" of the procedures' unpleasantness, and also reduce the expected financial consequences of a false positive.
As an individual decision, it appears difficult to calculate correctly because prevention depends in large part on observation and specific timing. So the question is really whether on such and such a date, such and such doctor would've found this particular polyp which would have become a problem, and removed it. There are so many contingencies here that it seems impossible to assign a reasonable estimate to the chances of any single such intervention mattering to my life. I'll accept that apparently the decision to begin early screening resulted in overall net monetary benefit, but that doesn't help much with an individual decision. Not sure what I'll decide on this, but will probably look into the less-invasive alternatives, since with a base rate that low the risk of a false negative is not meaningful, whereas the value of doing a first test before applying a second is substantially higher.
46 is a bit early, but a lot of people younger than the ages traditionally considered at risk are developing colon cancer now. I’ve never seen anyone say why.
If the cost isn’t too burdensome I personally would have it done. The prep isn’t really that bad. The discomfort of procedure itself is a big nothing. I don’t have numbers for the risk of injury from the procedure itself though.
“In May 2018, the ACS updated its CRC guidelines based on a modeling analysis, recommending regular screening begin at age 45 for people at average risk, which is the most aggressive of the major institutions. In my practice, we typically encourage average-risk individuals to get a colonoscopy by age 40, but even sooner if anything suggests they may be at higher risk. This includes a family or personal history of colorectal cancer, a personal history of inflammatory bowel disease, and hereditary syndromes such as Lynch syndrome and familial adenomatous polyposis. Why do I generally recommend a colonoscopy before the guidelines do?
[…]
Of the top 5 deadliest cancers, CRC is the only one we can look directly at, since it grows outside of the body (remember, your entire GI tract, from mouth to anus, is actually outside of your body, which is why a colonoscope or endoscope looks directly at the lining of the esophagus, stomach, and colon in the same way a dermatologist can look directly at your skin). Furthermore, as discussed above, the progression from normal tissue to polyp to cancer is almost universal.”
“Ultimately the decision about when to get your first colonoscopy is based on your appetite for risk—both the risk of the procedure and the risk of missing an early CRC. One of the most serious complications of colonoscopy is perforation of the colon, which reportedly occurs in about 1-in-every-3,000 colonoscopies in screening populations or generally asymptomatic people. There are also the risks associated with anesthesia during the procedure. There’s also an opportunity cost (economically) to early screening, as it is not covered by insurance and can be pricey (about $1,000-$3,000).
Before you get your first colonoscopy, there are few things you can do that may improve your risk-to-benefit ratio. You should ask what your endoscopist’s adenoma detection rate (ADR) is. The ADR is the proportion of individuals undergoing a colonoscopy who have one or more adenomas (or colon polyps) detected. The benchmarks for ADR are greater than 30% in men and greater than 20% in women. You should also ask your endoscopist how many perforations he or she has caused, specifically, as well as any other serious complications, like major intestinal bleeding episodes (in a routine screening setting).”
“Flexible sigmoidoscopy (every 5 years) probably has the best-looking data for any screening test in terms of lowering cancer- and all-cause mortality. Recent RCT data shows a 26% lower CRC mortality in screening with flexible sigmoidoscopy, with a repeat screening at 3 or 5 years (2.9 per 10,000 person-years) compared to usual care (3.9 per 10,000 person-years), and a meta-analysis showed a reduction in all-cause mortality of 3 deaths per 1,000 persons invited to screening after 11.5 years of follow-up, which is the first time a screening method has shown a reduction in the risk for death from any cause compared with no screening in clinical trials.
There are 4 randomized-controlled trials on colonoscopy screening underway, but none of them are completed. Will the data on these trials look even better than flexible sigmoidoscopy? We need to wait for the data to come in, but I don’t think I’m going out on a limb suspecting it will be at least as good or better.”
“Colon cancer is generally in the top three leading causes of death for both men and women.
Bold and controversial opinion from Peter: “Nobody should ever die from colon cancer.” (same for esophageal and stomach)
The reason for that is that the progression from non-cancer to cancer is visible to the naked eye, through the transition of nonmalignant polyp to malignant polyp.”
“Thought experiment: if you did a colonoscopy on somebody every single day of their life, they would never get colon cancer because, at some point, you would see the polyp, you would remove it while it is non-cancerous, and they would not get cancer.
So… how do you turn that thought experiment into a real life idea?
You have to ask the question: what is the shortest interval of time for which a person can have a completely normal colonoscopy until they can have a cancer?
There’s no clear answer to this question — some case reports that it can happen in as little as six to eight months.
Most people would agree that if you had a colonoscopy every one to two years, the likelihood that you could ever develop a colon cancer, while maybe not zero, is so remote that you could effectively take colon cancer off the list of the top 10 reasons why someone dies of cancer
Peter says: “It’s for that reason that I’m very aggressive when it comes to this type of screening, which also includes upper endoscopy…you basically get for free the esophagus and stomach when you look at the entire colon, rectum, anus.”
What are your costs/downsides to more frequent screening?
Financial costs — it’s not cheap
Risk of the sedation — not zero risk but very small
Risk of perforation — also incredibly small risk
Ideal frequency?
“I can’t tell you yet what the ideal frequency, but it’s much more frequently than what’s being done today”
It’s not every 5 to 10 years, it’s probably every one to three years.”
You'd want not just the cost of colon cancer but the difference between having your colon cancer caught on a particular random day before you develop symptoms (but after it becomes detectable by colonoscopy) versus the cost of waiting until you get the first symptoms.
You have a 25% (or so) chance of them finding a polyp, but finding a polyp rarely leads to another procedure. The doc just removes the polyp during the colonoscopy and sends it to the lab. Unless it comes back cancerous, there are no further procedures. If you have a bunch of polyps, or just a couple but they are both of the kind most likely to turn into cancer, the doctor will probably advise you to have your next colonoscopy in less than the standard 5 years -- like in 2 or 3.
So I'm not weighing in on whether you should have a colonoscopy, just correcting your idea that if they find a polyp that will lead to a further procedure.
This is pretty useful, as I was associating a cost to the "false positive" outcome that may not be there. Having run the analysis and knowing that the odds of the lab saying there's a real problem are astronomically low, I wouldn't have any related stress, so the main cost is an accelerated follow-up.
Consider getting "Cologuard" instead. They mail you a box, and you poop in the box and send it back. And then, if all goes well, they send you a reassuring note saying you don't have to get a colonoscopy.
"All screening modalities assessed were more cost-effective with increased QALYs than current standard care (no screening until 50). The most favorable intervention by net monetary benefit was flexible sigmoidoscopy ($3284 per person). Flexible sigmoidoscopy, FOBT, and FIT all dominated the current standard of care. Colonoscopy and FIT-DNA were both cost-effective (respectively, $4777 and $11,532 per QALY)."
I'm kind of shocked that FIT wasn't more cost-effective since it's much cheaper and (I think) has similar sensitivities. Maybe because it can't detect pre-cancerous polyps?
I've had about 2.5 colonoscopies, (one was a flex-sig, so I'm counting it as a half). No commentary here on your risks for polyps, money, etc.
The prep is overhyped; I would say that the biggest determiner of how unpleasant it is is how much you like Gatorade, since you'll have to drink about 8 cups of it. It's hard to come out positive on Gatorade after that much, but imo that's the worst part of it. Otherwise, it's just being near a bathroom plus being hungry, since you aren't allowed to eat solids for a little while before your procedure.
Gunflint also said - "The prep isn’t really that bad. The discomfort of procedure itself is a big nothing". Easy for you all to say - my experience was a reaction so severe I called the emergency number in the middle of the night (got no answer). After talking with my PCP he put me on annual occult fecal blood tests instead of colonoscopies. Admit I may be an outlier here, but it does happen.
I hate Gatorade, so what I do is get Pedialyte, which is just water plus electrolytes, heat it up, and mix in enough sugar to make it about as sweet as Gatorade. I find weak sugar water much more tolerable.
> I don’t think it’s a fair use of your or Tyler’s time to continue writing about this
Probably, though it's been fun watching your response to him; your latest one I thought was gold (https://www.astralcodexten.com/p/sorry-i-still-think-mr-is-wrong-about). But since his latest post you linked has moved into tone policing ("Scott has thrown the biggest fit I have ever seen"), you're probably correct it's not worth it to keep responding.
I bet his parenthetical
> "a single sentence from me that was not clear enough (and I readily admit it was not clear enough in stand alone form)"
is the closest you'll get to what I think you've been after: an admission that his original post was basically way-too-easily interpretable as agreement with Marco Rubio (as in >90% of readers would interpret it this way), i.e., Tyler announced he was going to fact-check Rubio and came away from his fact-checking mission finding nothing to criticize in Rubio's claim that 88% of the USAID budget is "pocketed" by NGOs.
So he did admit he was insufficiently clear, in the same way I might apologize to someone in person by coughing while softly mumbling the word "sorry".
It would be nice if Tyler would come out and say "For the record, Rubio is wrong, 88% of USAID spending is not, in fact, 'pocketed' by NGOs." instead of all this mealy-mouthed "Scott lumps my claim together with Rubio’s as if we were saying the same thing" without actually committing to a position different from Rubio's, explaining the difference between the positions, and then stating unequivocally whether Rubio's "facts", which Tyler "checked", are in fact "made up BS".
Three commenters on Tyler Cowen’s last reply hold the view that, “the point of [Cowen’s] first post was to slightly mood affiliate with the Trump administration on this issue because he saw their position as ascendant, irrespective of whether it’s right.” I haven’t read enough of Cowen’s writings to judge this, but it would certainly explain why Cowen hasn’t explicitly said that Rubio was wrong.
The movie Mountainhead was pretty interesting. It's a pitiless salvo against Silicon Valley, but I was surprised how up to date with the lingo it was. Never thought I would see the term p(doom) used in a line of dialogue in a movie, and the central conflict ultimately becomes about accelerationist billionaires against a decelerationist billionaire. It was quite something to see characters in a non-scifi movie talking about the Singularity and transhuman mind-uploading hopes.
On a related note, David Chapman says the leaders of frontier AI labs are some combination of crazy/evil/stupid:
"Most heads of leading AI laboratories have repeatedly stated that:
We are building AGI, and expect to have achieved it within a few years.
AGI is quite likely to cause human extinction.
You should give us lots of money and favorable legislation so we can build AGI faster.
It is reasonable to disagree with any of these three claims. You may believe that AGI is impossible, or a long way off; or that it definitely won’t cause human extinction; or that the development effort should be forcibly terminated. However, you can only assert all three claims simultaneously if you are crazy, evil, and/or stupid."
I watched it too. It was fun to hear dialogue that could have been taken out of an ACX thread. They even worked in Kant and ‘sunk cost fallacy’.
I didn’t really care for the fact that they went beyond satire to farce so quickly though. That’s just a matter of my own personal tastes. As always, ymmv.
Yeah, the first half of the movie was like a darker _Silicon Valley_ (the HBO comedy) with dialogue that felt way too on-the-nose. I was really enjoying it.
The second half was like a bad Seth Rogen farce.
The last 10 minutes felt like someone copied the characters into Succession.
Watched it last night. Good film - dark comedy with a lot of the kinds of vibes I get from Scott's occasional "Overheard at a Silicon Valley" posts.
I have a different interpretation of it than you, though. I didn't read it as "the central conflict ultimately becomes about accelerationist billionaires against a decelerationist billionaire," I thought the film came off fewer parts "accelerationism vs decelerationism" and more parts "selfish assholes vs selfish assholes, using whatever language is convenient to justify themselves."
SPOILERS (hopefully mild since all this is revealed early in the film)
The ultimate point of conflict is that AI Bro has an AI solution that could plausibly fix a problem with Social Media Bro's newly-released app, which is causing all kinds of absolute chaos worldwide. Social Media Bro is terrified by the public shaming he's getting, the possibility countries might start blocking his app, the money & status all this is costing him, and maybe just maybe feeling actual remorse at the suffering he's causing.
AI Bro has a lot of pent up resentment at being the "second poorest" of the 4 bros, and is getting rich off the failure of Social Media Bro's app because his proprietary AI is viewed as a silver bullet to fix all its problems. So he knows Social Media Bro (the richest one) can be squeezed and is happy to do the squeezing by refusing to sell.
Telecom Bro is dying of cancer, and has convinced himself that AI Bro's AI plus Social Media Bro's computing power can somehow equal transhumanism and uploaded consciousness on the grid and immortality realized within his now-limited lifespan.
So Telecom Bro and Social Media Bro talk themselves into a frenzy about how AI Bro is a decelerationist stopping the future and standing in the way of infinite QALYs for trillions of immortal future human lives on a post-singularity grid and so on, eventually bringing along the 4th Bro, who is by far the "poorest" since he's never even broken the billion-dollar mark and just wants the rest to think he's cool like them. Hijinx ensue.
But it's pretty transparently really just about Social Media Bro's desire not to let AI Bro get him over a barrel, Telecom Bro's fear of death, and 4th Bro's gaping self-esteem void. A character-driven satire of the flaws of the men at the wheel, so to speak, rather than a polemic about the inherent dangers or evils of the ship itself.
Oh I call BS on that. Everyone thinks AGI will upend society and needs to be handled carefully but no one sane thinks it's likely to extinct us. That's just crazy talk.
Sounds like that movie is just a mindless ideological rant ala Michael Moore.
The three most-cited AI experts in the world are Yoshua Bengio, Geoffrey Hinton, and Ilya Sutskever. All three are very concerned about AI causing human extinction, and all three are spending their time trying to prevent it. Other prominent individuals openly concerned about human extinction from AI risk include Bill Gates, António Guterres, Ursula von der Leyen, Peter Singer, Ray Kurzweil, Cenk Uygur, Vitalik Buterin, Dustin Moskovitz, Sam Harris, Jaan Tallinn, practically every leader in the major AI companies, and the majority of the population of the United States (according to polls from both Ipsos and Monmouth, 55% and 61% respectively).
The most prominent technology experts were unanimously concerned about the end of net neutrality and convinced that it would mean the end of freedom on the internet. How'd that turn out?
How AI interacts with the world will be a complex multidimensional cross-disciplinary equilibrium. No narrow technical expertise qualifies anyone to opine on that. I don't care what AI experts have to say about it any more than I think the person who works on a football production line has an informed opinion on the nuances of NFL defensive schemes. Bill Gates doesn't think AI poses a realistic extinction risk. He thinks it's going to upend society (which it will). He signed some committee-written political statement which contained a nod to extinction. That's not an endorsement of extinction risk, that's being involved in a politically-motivated PR stunt. People run their mouths about extinction because it gets headlines. It's the new virtue signal. Elites need something to hand-wring about and they're tired of systemic racism.
It's "crazy talk" in that most high-status people in the world aren't saying it, but I don't think you need to be overly paranoid to imagine ways that building something immensely smarter than humans with its own goals could be dangerous.
Agreed but in my view that's where the debate should stop. Everyone knows we're dealing with powerful world-changing technology. Don't do something egregiously stupid with it. That's all that needs to be said. Anything beyond that is like what working out internet security would have been like in 1890. It's nothing but baseless speculation and status seeking. It creates nothing productive and raises the noise floor for public discourse.
Besides, the genie is out of the bottle already. Research is decentralized and international. AI will be what it will be. All the Cassandras in the world aren't going to alter what's about to happen, so shut up with the histrionics and get to work on concrete open problems like legibility.
My intuition is that if we can get to AGI, there's not an obvious reason why doing more of the same won't get us to ASI. Maybe the universe will work out in such a way that we can't get much smarter than a human without new techniques or better training data than we have or something, but I don't know any principled reason why that should be true. Certainly it isn't true in narrow domains like chess.
> no one sane thinks it's likely to extinct us. That's just crazy talk
Do you think Scott is sane? Maybe, given that you are reading this blog. In AI 2027, which he co-wrote and strongly endorses, he proposes the view that it is not only possible but likely (as in, over a 10% chance).
Also, I would point out that 2 of the TOP 3 most-cited AI researchers (Yoshua Bengio, Geoffrey Hinton) both agree that AGI has over a 10% chance of killing everyone. All major AI frontier company CEOs (Anthropic, OpenAI, DeepMind) have put forward this view.
Whether or not these people are correct that AGI really is a real threat is subject to another discussion.
But you are flat out *wrong* that this is a belief exclusive to "crazies".
Not about AGI or EA I don't. I suspect he's responding more to the political dynamics of his personal social circle and the blogosphere than to good, first-principles reasoning. I've read his writing on both of these topics and find his thinking naive on both counts. (Though of course I like his writing on other topics.)
>All major AI frontier company CEOs (Anthropic, OpenAI, DeepMind) have put forward this view.
I would wonder what level of belief falsification is going on there, but ok. On some level CEOs are politicians and therefore have to appeal to the median view of the AI community if they want to attract talent. Hand-wringing about extinction is the virtue signaling of machine learning. To the extent that top people in the field really *do* believe this nonsense, I suspect it's downstream of some small-community semi-autistic echo chamber dynamic. CS PhD's have a narrow technical expertise and probably don't get away from their keyboards very much. Sorry but being an expert in gradient descent doesn't really qualify anyone to opine on complex geopolitical social/economic/military equilibriums, which is what AGI-induced extinction would actually involve. Plus I think there's probably more than a little adolescent hubris in the mix ("MY field is the most important because it could destroy all of us! Pay attention to me!"). Yud screamed about bombing datacenters the first time he played with ChatGPT ffs.
The only thing anyone should be worrying about right now is the tsunami of short-term economic dislocation that this is going to cause.
Doesn't matter if it's true in this case, what matters is what the AI leaders are saying.
If I'm in charge of the Manhattan Project, and I believe that setting off a nuclear bomb will ignite the atmosphere and kill everyone, and I'm continuing the development work without spending a bunch of effort to figure out if the ignite-the-atmosphere theory is true, then I am crazy/evil/stupid, even though it turns out that atomic bombs do not ignite the atmosphere.
On the other hand, in the face of uncertainty about whether the bomb will ignite the atmosphere, and knowing that the enemy is probably going to drop a bomb if you don't, then you press ahead with the project anyway.
If the bomb ignites the atmosphere then it doesn't matter much whether we drop it or the Germans do, and if it doesn't then it matters a great deal, so we might as well do it first and hope for the best.
This, I think, is a pretty reasonable description of the dynamics for both the bomb and AI. The people working on it think it probably won't destroy the world but if it does then the world might as well be paperclips instead of chopsticks.
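To make the shape of that dominance argument concrete, here's a toy sketch of the decision matrix. The outcome labels and structure are purely illustrative, not anyone's actual estimates:

```python
# A toy decision matrix for the dominance argument above.
# Rows: does the bomb (or AI) destroy the world? Columns: who builds it first?
outcomes = {
    ("destroys world", "we build first"):    "everyone dies",
    ("destroys world", "enemy builds first"): "everyone dies",
    ("safe",           "we build first"):    "we win",
    ("safe",           "enemy builds first"): "enemy wins",
}

# In the "destroys world" rows, the choice makes no difference; only the
# "safe" rows discriminate between options, and there "we build first" is
# preferred -- so building first (weakly) dominates waiting.
for (physics, choice), result in outcomes.items():
    print(f"{physics:15s} | {choice:18s} -> {result}")
```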
It's like how a lot of SF writers thought we'd be able to explore the universe if we could explore the moon, but when we did, it turned out we hit very hard limits, to the point that most exploration is done by unmanned craft. Venus is right out lol; I'm not sure what is being done with Mercury. SF has actually declined in part because the future is much more closed than we thought.
Not everything is unlimited progress. My own thought is that AGI stalls and AI just acts to shed some knowledge work.
Yeah, that's quite possible. I don't think the term AGI makes a lot of sense, really. What we call general intelligence -- there is no direct test for it. We ended up with the term General Intelligence because most tests of cognitive abilities are correlated. There seems to be some factor that contributes to all of them, and we call that General Intelligence. But AI's profile of cognitive abilities is very different from a human one. Its ability to remember strings of numbers or masses of words has been immensely greater than ours from day one. On the other hand, GPT is still unable to make various images that I can describe very clearly. Its ability to turn descriptions of the size and spatial orientation of 3 objects into an accurate sketch is worse than a child's. So what does it even mean to say its general intelligence is now equivalent to that of the average member of our species?
And then there's your point that lots of what we imagined in SciFi is still far beyond what we can do -- how do we know superintelligent AI isn't one of those things? The thing that most inclines me to think it really might be possible is that I have been wrong many times about what near-future AI would be able to do, and could make a good case for how it just was not set up in a way that would make that sort of thing possible. And every time I have been wrong, I have been wrong in the direction of underestimating what is possible.
This isn't to say you're wrong, but for context, the author of this Substack and (if I'm interpreting the question correctly) a majority of those who filled out the reader survey last year think it's reasonably likely that AI will cause human extinction. So you should either believe that most of this blog's readership is insane (which is possible) or that some sane people have this belief.
Yes and I think they are extremely nutty in that regard. The people in this community are nutty about a number of things IMO though I like them on net anyway. The people who run AI companies are much more capable and intelligent and therefore I’m incredulous that they uniformly think it’s “likely” that AGI will cause human extinction
My position isn't exactly that AGI doesn't pose a risk, it's more that worrying about it now is nonsensical. It's worrying about problems that we can't even begin to really define yet. The things that we think matter might not even make sense by the time it's a practical issue, e.g. "steam pressure of the internet". Therefore in my view everyone who opines about this is doing so not out of existential concern but out of a desire to gain social status within a nascent ideological field. Whoever can screech the loudest and hand-wring the most gets the most attention. I suspect they intuit that when people really start losing their jobs there will be a giant anti-AI backlash, and they want to be well-poised to insert themselves into the middle of the political turmoil as an expert or pundit or nexus of political influence.
If there's one constant in history it's that near-term predictions of doom are near-universally wrong. Remember the end of Net Neutrality? Y2K? The video game panic? In the 1890's there was a horse manure panic and people worried that the rise of horse-drawn carriages would condemn cities to becoming buried in the stuff. History is replete with examples of well-educated, well-intentioned thinkers who extrapolated linearly from early conditions and ended up catastrophically wrong. The defining feature of these failed predictions is not their foolishness, but their false sense of certainty. In complex systems undergoing rapid change, the only true constant is epistemic instability. People should understand their fundamental ignorance here and just put a sock in it. They're not doing anything but raising the noise floor. They should have more epistemic humility and understand that simplistic narratives about complex equilibriums are always wrong.
When I asked about Zvi, I was actually more curious about what you thought of him overall. He’s quite different from Scott, though he does lean the same way Scott does regarding the chance ASI will do us in.
I myself am not deeply convinced that that will happen. I slide around between endpoints of *doesn’t seem absurd* and *probably it will*. I don’t think my participation in that point of view is mostly group identification. I seem to be deficient in the wiring it takes to form strong group identifications. There are a whole bunch of things that most people have opinions about and bond over, for which I just don’t have an opinion: most feminist stuff, general political leanings, most specific politicized issues. I have only voted once in my life. I rarely post on culture war threads on ACX, and when I do my point is usually about how somebody’s mind is working — how they seem so quick to anger when somebody is pro-X that they can’t hear anything else the person says.
Actually, it seems to me that you have slid into an anti-expecting-AI-will-off-us stance more out of group disidentification than by reading and reasoning. You seem to be doing it more out of contrarianism than as a result of having thought the whole thing through. You don’t seem as well-informed about the issue as you do about most things. For instance, you keep talking about AGI doing us in, and actually the predominant view is that ASI will do us in. (So the view is that AGI, plus a few nudges and improvements, is likely to become self-improving, and then it will rapidly become ASI, superintelligent AI, far smarter than even the smartest members of our species.) I read your Reddit screed, and see that you understand that is the story people are telling, but it is still a little jarring that you continue to write about AGI doing us in, because your terminology is just out of line with the convention for talking about AI with different levels of skill.
And then there are some things I’m pretty sure you just have wrong. In your screed you throw out some ideas about simple ways to keep ASI from going rogue in some way. They’re not bad ideas. I’ve thought of some of them myself. I do not know why some of them cannot be counted on to work, but I am confident that the people who are working on AI alignment have thought of them and tried versions of them and have pretty convincing reasons why they will not work. These people may have personal or sociological motivations for believing ASI will kill off our species, but they are also quite smart and conscientious. It is just not possible that they have not thought of or have refused to consider ideas like having one or more AIs of a different lineage check on the honesty and accuracy of the AI we are concerned about. As for having ASI tell us how to monitor its thought process, I can see a couple of problems with that. The first is that ASI will foresee how being transparent will interfere with its following a course it has identified as optimal, and will give us techniques that appear to show us all of its thoughts and goals, but actually do not. The second is that if ASI is far smarter than us, we might not be able to understand its goals and plans and choice points even if they were all laid out for us to see. I could go on about my reasons for not being on board with your various other ideas about how we can protect ourselves from ASI, but don’t have time. Also, my goal isn’t to debate you, but to try to interest you in looking into the issue more deeply. There is lots of research into things relating to seeing into the processes in AI and into evaluating its tendency to be dishonest in the service of goals of its own. I have not read a lot of it because it is technical and tedious, but the moderate amount I have read has definitely moved me in the direction of pessimism.
I get that it is very very hard to see over the horizon and predict correctly how some big trend is going to play out. The other reason I don’t just shrug off the prediction that AI will do us in is that I have been wrong over and over in the last few years about what AI would be capable of, and I have always been wrong in the direction of underestimating AI.
Yeah and I think he’s nutty to do so. He’s nutty about a lot of things but I like his writing anyway. Rationalists also think that sending money to sub Saharan Africa makes the world better. Their endorsement does not a persuasive argument make, at least not in my view.
Depends on your values, of course, but I think most people tolerate (and many even endorse) lies and deception when it's part of activism/politicking in favor of a cause they support.
Let’s say you’re an evil person and you decide you want to go into U.S. politics to amass as much political power and influence as possible. You’re a standard-issue villain: not particularly clever or rich or well-connected, but you *are* absolutely unscrupulous and willing to backstab, betray, or hurt any number of people to get what you want. Your main priority is personal enrichment, but you admit you also enjoy seeing others suffer, as it adds a certain frisson to your own fortune.
You’ve decided your approach will be to work your way through one of the two main U.S. political parties. You don’t care about actual policy at all; you just want to amass as much personal power as quickly and easily as possible.
Two things make it easier to get rich in politics: lots of government spending, and rapid growth in the budget. Sign up with whichever party you think is likely to maintain those desiderata.
If asked this question in 2015, or 2005, or 1995, my internal debate would have been between "the state-level Democratic Party of New Jersey or Nevada or Illinois", or "create your own party because that's how you can make this really milkable at scale for your own pockets".
In more recent years I've observed firsthand that Illinois is significantly less corrupt than when we were sending 3 out of every 4 governors to prison; and actually Nevada has cleaned up its act a fair amount as well. It doesn't appear that any other blue state now is really a peer of New Jersey for political corruption (using corruption to mean specifically "individual self-enrichment"). Meanwhile Texas's and Florida's state Republican parties have arguably "risen" to new heights in this regard; those states are bigger and richer than any of those others, so there's one possible best answer. But it still seems obvious that the _serious_ personal enrichment is a national-scale game.
I'd previously never have imagined the national GOP as an answer simply because that party organization always seemed to be a good deal tougher and more tightly organized than the Dems. Their leadership always included lots of people with hard-nosed business experience, they were famous for ruthlessness behind the scenes, they'd long demonstrated more internal cohesion and discipline than the Dems (as the joke went "I don't belong to an organized political party, I'm a Democrat"), etc.
That is why, out of all the things this past decade that we'd never seen before in US politics, the single most surprising for me remains Trump starting from political zero and bulldozing to the 2016 GOP nomination. We forget now that that _entire_ party establishment, well into the primaries, was sure that he had to be stopped. And that they _massively_ outspent him during that primary campaign, and all the rest of it. And he just rolled right over them and by April and May had them bending the knee.
For my money his then winning the electoral college in November was nowhere near as surprising an outcome.
In hindsight what Trump did was apply the "create your own party because that's how you can make this really milkable at scale for your own pockets" methodology, to an existing national party! The GOP is now, simply, him; we have no comparable US historical examples of a national party becoming as thoroughly subordinate to a single individual. (Saying either "FDR" or "the Bush family" at this point just demonstrates non-seriousness, and in George Washington's day there weren't yet parties in anything like the sense that we're talking about today.) And he's bluntly milking it at scale right now.
I used to say that Trump's secret superpower in 2015 was sensing how widely/deeply modern progressives' childishness and hypocrisy was turning Americans off. That's still accurate, but there was a second one: realizing that the Republican Party as a national organization was a paper tiger that could be hijacked easily and thoroughly.
Today, just as there are liberals who will never stop feeling angry and sad about the first one, I know some lifelong conservatives who will never get over the second one. Indeed I have older family members fitting each of those descriptions who literally cannot focus anymore on anything else, swimming in outrage and sense of loss to the point of creating worry over their mental/emotional stability.
So I guess my answer now to the posed question is....you're too late. Somebody figured out the current best answer and went for it, and the other national party is now too broken inside to be worth the effort.
Republicans. There are fewer smart, competent, young people who are trying to join the Republican Party (according both to young Republican-ish people I know and also basically any other writing on this). There might have been a slight uptick in Republican sympathies among recent workforce entrants but I'm pretty sure this still holds. So it's a lot easier to get to a position of power just because there's less competition.
Stay resolutely nonpartisan and join the civil service. There, rising to the top is based on office politics, at which evil people excel. You can make a good salary, and there are plenty of opportunities for corruption. You can also do a great deal of evil without drawing attention to yourself: if you're in the FDA, deny good drugs and set up exemptions for things like homeopathy. If you're in infrastructure, send every good project back for "more study and review" while spending lots of budget on badly-designed highways.
They're both full to the brim with absolutely unscrupulous types. If you're not particularly clever or rich you have no advantage and are stuck at the local level.
If you are unscrupulous and experienced, you have two advantages over naive beginners. And there will always be naive beginners.
Since you are unscrupulous and they are naive, you can take advantage of them. And you can climb some distance up the pile of newbie bodies you are willing to leave behind you. That won't get you to the top, but it might get you a good view.
You've added 'experienced' to the list. Regardless, it will get you up the ladder at the local level. At that point all the newbies are washed out, and you have no advantage.
I feel like this is supposed to be my opportunity to say "other party bad, grrr".
I don't think that politics is a particularly easy way to make money as a villain, you'd be better off sticking to something boring like scamming or drug dealing. If you do get into an important position then there's definitely money to be made, but the competition for important positions is fierce.
Local government is probably your best bet; you're out of the spotlight and the competition isn't all that fierce but if you get yourself into an important position you can really enrich yourself through kickbacks and bribes. You want a city big enough to have a significant organised crime presence that you can make friends with, and in the US all cities like that are one-party states run by Democrats, so I guess I'd go Democrat.
> You’re a standard-issue villain: not particularly clever or rich
That is fake news. In fact, you are a self-described "stable genius".
> Which party do you go for?
The problem with building a power base using the party of wokeness is that the wokes, like revolutions, have a tendency to devour their own. See [1]. Even if your would-be grifter is a black woman, unless they are also LGBT*, handicapped and whatever category the SJWs will focus on next, they will be thrown under the bus for the tiniest infraction. Giving them policy wins will not help you at all, because at the end of the day, SJ is not about policy wins, but about signaling.
For contrast, consider the MAGA party. No evangelical who voted for Trump was under any illusion that he was a good Christian. Likely he has personally fathered plenty of abortions, but they correctly recognized that electing him would cause the SCOTUS power balance to shift to overturn Roe, and that mattered more to them than him being non-terrible in his personal life or accepting election outcomes. And as long as half of the Trump news are about him being maximally bad to immigrants (no matter if previously legal or not), his supporters will not care if the other half of the news is him selling US interests for personal gain. Few people on the motte were willing to argue that he is not corrupt, at most they were claiming that the democrats were just as corrupt (but less obvious about it).
This is related to the observation that the right has cults of personality while the (contemporary) left has a cult of ideology.
Personally (as someone not wanting to get rich from grifting), I think that the truth is somewhere in the middle. The ultra-cynical "he is a SOB, but he is our SOB" is bad, but the attitude to turn on your allies for smallish infractions is also bad.
I think probably the party local to your state. In my experience (biased, coming from Illinois and New York), a lot of the corruption is at the local/state level. If you live in a low-corruption state, just move to a high-corruption one (see: Illinois). Purely because Illinois wins the corruption award for the US in my eyes, I would say go Democrat, go Chicago.
People are talking about Trump and corruption, but let's note that Trump is not your "average villain." He came from a background with a huge amount of connections and wealth, and that's what let him run a presidential campaign (regardless of your views of him).
I think that at a federal level there might be a different answer, but why bother when you can just squeeze Chicago for riches?
Republican. Democrats are certainly no stranger to corruption, but the party is a coalition of a lot of pre-existing interest groups. Unless you can become the head of one of those unions (which would require a different starting point) then your influence-building campaign is going to involve either getting into bed with one of them and becoming their shameless spokesman (this is plenty possible, but your power is ultimately limited by the influence of the interest group), or else being really good at negotiating between them and getting your own pound of flesh along the way (since you're not especially talented, probably not possible).
Republicans, on the other hand, aren't as structurally tethered to existing, stable interest groups, so the party can change much more quickly around rising party leadership, which itself depends mainly on getting people to vote for you and promote you. As everyone else has already mentioned, this is what Trump did, playing directly to the voters and then remaking the party in his image. As we can see with the likes of Kash Patel, it's possible to gather a great deal of official influence and power just by being as loud and shameless a Trump toady as humanly possible, absent actual talents.
But if you're going to do this, you'd need to do it fast. The power gained via these kinds of appointments is fragile and will probably end with you getting replaced the moment a new administration comes into power, or even before then if you get outmaneuvered by others and you're fired. So you'd need to act fast and get into a position where your official influence can be transformed into wealth or some more durable form of unofficial influence.
As with everyone else, I'm confused why you're asking this question after Trump already figured out the correct answer. I don't think it's possible to do much better than him, given that he accomplished all of this in just a decade after entering politics.
You pick another field. The president of the St. Louis board of aldermen went down a few years ago in a bribery scandal. Reed was a long-time alder and there was talk of him possibly moving up to mayor. Which is to say, he was a relatively successful mid-level guy. The striking thing about the scandal was how little money was involved. He wrecked his career for an 18k payout.
To get enough power to actually see some ROI, you're going to have to get at least into the US House, where you can maybe start picking up bribes in excess of $100,000. But there are really not that many of those spots, and there are surely better ways to make dirty money.
Is this not fairly close to a description of Trump's trajectory?
But yeah it seems targeting a party that is more in disarray would be a good step, though if too many party members/supporters are generally scrupulous, you'll run into a lot more friction than Trump did with the Republicans.
My pathogen update for epidemiological weeks 21-22.
1. As of epidemiological week 21 (which ended 24 May), Biobot shows that SARS2 wastewater concentrations were still falling in all regions across the US. No sign of the next wave yet. The provisional count of weekly deaths is down to 82, which is almost back to the level of the 2nd week of March 2020 (60 deaths that week). ED visits are hitting a new low. And, of course, hospitalizations are also down.
Experts are worried that "Nimbus" (NB.1.8.1) is going to drive the next wave. Certainly, it's driving big waves in Asia right now. The data from China is a little iffy, but Hong Kong's wave may have peaked a couple of weeks ago. It caused an increase in hospitalizations and deaths in Hong Kong and was 100% driven by NB.1.8.1.
Nimbus is growing in frequency in the US (22% of samples now), but so far it hasn't driven up wastewater numbers—or cases, hospitalizations, or deaths. NB.1.8.1 was first detected on 22 January; the first recorded cases were in Egypt, Thailand, and the Maldives. And despite media claims that it was imported from China, the virus appears to have first been detected in the US back on February 26 in wastewater in the Sacramento area, and it has been circulating in the US since then. There's no evidence that NB.1.8.1 arrived from China.
Even though NB.1.8.1 is displacing the current dominant variant LP.8.1x (according to CoV-Spectrum it's at ~22% frequency, though within a wide CI of 1.5% to 84.5%), it's not driving a new wave of cases (so far). CA has the most confirmed Nimbus cases at 62; New York State comes in second at 13. California's most populous regions aren't showing a noticeable upward trend in SARS2 wastewater concentrations. Since it's been circulating in California since February, and we're not seeing an upward trend in wastewater numbers or cases, I'll go out on a limb here and say I doubt that NB.1.8.1 will drive the next wave in the US. Also, Hong Kong hadn't experienced a wave for over half a year before its Nimbus wave, so its population immunity may have waned compared to the US, where we have just recovered from an XEC wave.
Australia may be seeing the start of a new wave centered in New South Wales, but Sydney's wastewater numbers are still low. Unfortunately, Australia hasn't embraced wastewater monitoring; Sydney is the only city that I am aware of that has implemented it. OTOH, New Zealand's poop.nz site shows a distinct upward trend in SARS2 wastewater concentrations. They've got a new wave underway. Although some areas in Europe seem to show upticks in wastewater and cases, I think it's too early to call a new wave in Europe. For instance, two weeks ago, some of the LA and NYC sewersheds showed an upward trend in wastewater numbers, but those have dropped again.
2. Although the general consensus seems to be that the US measles outbreak is slowing, I think it's too soon to be sure. Symptoms take 7-14 days to appear, but a person becomes contagious about 4 days before the rash shows up, so there could be a bunch of undetected cases out there spreading it further. Colorado has 3 new cases. Texas has 9 new cases. 1088 US cases so far this year.
Unfortunately, Canada is doing much worse in terms of total measles cases and cases per capita. They're up to 2755 cases! No deaths so far, though. The same B3 strain appears to be spreading in Canada as in the US, but there have been significantly more cases and fewer deaths. Poorer prevention, but better treatment? I'd still be curious whether there are any mutations in the Canadian B3 strain (which originally arrived from the Philippines) that could have made it less deadly. The US B3 outbreak seems to be homegrown, and we infected Mexico. Measles has a CFR of between 0.1-0.2% in the developed world, so the odds suggest there should have been roughly 3 to 6 deaths in Canada by now.
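A quick back-of-the-envelope check of that expectation (a sketch assuming only the 2755-case count and the 0.1-0.2% CFR range quoted above):

```python
# Expected measles deaths in Canada, assuming the 2755 cases and 0.1-0.2% CFR
# figures quoted above; purely a back-of-the-envelope check.
cases = 2755
for cfr in (0.001, 0.002):
    print(f"CFR {cfr:.1%}: ~{cases * cfr:.1f} expected deaths")
# CFR 0.1%: ~2.8 expected deaths
# CFR 0.2%: ~5.5 expected deaths
```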
Nimbus is a nickname, not a WHO-endorsed name like Alpha, Delta, Omicron, etc. The nicknamers finally ran out of monster names (e.g. Kraken). I *think* they nicknamed it Nimbus because its Pango designation starts with NB (NimBus), and it's catchy. And a catchy nickname lets the COVID-worriers spread their message more easily—"Nimbus is coming to America!" If it doesn't cause a wave in the US, I'm going to call it NIMBY. ;-)
I recently got into Terry Real's book I Don't Want To Talk About It after listening to his conversation with Tim Ferriss. The book apparently has a pretty serious following and the reviews are stellar, and it seems like Tim is a fan as well. I read the book, and the main thesis mostly makes sense: there are often underlying unresolved traumas behind a covert depression, traumas that a man has to resolve before he's able to live up to his full emotional potential.
However, both in the interview and in the book the author keeps talking about the Patriarchy and how its constant influence over men leads to toxic masculinity, which ultimately leads to depression. Among other issues, if only a man is able to let go of judging himself based on his performance, and instead discovers his inner value and realizes that's enough, he will achieve happiness. And somehow the rest of the world will recognize his value and things will be great. And if the world doesn't recognize the man's inherent worth, well, that means that part of the world is under the Patriarchy, not yet enlightened, and can be safely ignored. I'm obviously paraphrasing facetiously here, but also not totally either.
I'm puzzled about where people were able to find the profundity in the text. I'm curious if others here have had a similar reaction to his "teachings" or if I'm missing something key here to grokking the philosophy and the approach, and I should give it another chance. Anybody here who's gotten a ton of value out of his work?
I can only answer to your "facetiously paraphrasing", because I haven't read the book, and do so because I recognize something there.
There once was a post of Scott's about how you can tell a man who professionally designs safety systems for cars, but believes he is worthless, that he is not, on the grounds that he does this work; but you cannot say the same, on grounds like that, to a completely dependent disabled man who does no work for anyone, because there are no such grounds.
Is such a man worthless? As long as he is not worthless to himself, as long as he himself values his life, he of course isn't, no matter whether he is worthless for everyone else or not.
And I think this is the small but important true core in writing like that of Terry Real (as you conveyed it to me).
If you forget that everything in the universe is first and foremost of value (or of no value) to yourself, and that whether it is of value to someone else is naturally, inherently, only ever a secondary concern (by virtue of you being an organism that first has to look after itself, or it won't even be able to notice what anyone values, because it will soon be dead), then you are in big trouble.
Many assholes are assholes because they fight the world, trying to force it to make them feel that they have value, because they can't feel it on their own anymore; just as many cowardly losers can't feel it either, and so suck everything up and please people.
About that patriarchy topic, I don't care. I see the truth I just described as necessary to understand for a happy life, not as a justification for demanding anything from others for free.
It sounds like a self-help book. And the standards for being a successful self-help book are not particularly high. You don't have to say anything profound or interesting, you just have to say one thing that some people find useful to hear.
You've only paraphrased the message, but it sounds like a paraphrase of a potentially useful message. To further paraphrase, the message "You should stop focusing so much on external manifestations of success and focus instead on developing inner virtues" sounds like a pretty good one that many people need to hear sometimes.
The only interesting thing about it is that the author has labelled the nebulous thing that wants you to focus on external success as "the Patriarchy", which is surely a good move if you want to get positive reviews in the New York Times. But you could call it anything. You could rewrite the same book but call it something else and appeal to a different audience -- you could call it "capitalism" and make it a left-wing book. You could call it "socialism" and make it a right-wing book. You could call it "the system of bug-eyed lesbian commissars which values you only as a taxpayer" and make it a Bronze Age Pervert book. Or you could just call it "women".
It is all of these things and none of these things, so it doesn't particularly matter what you call it, it's really just a tendency within yourself, which you externalise as something you dislike in order to overcome.
Anything that tries to make men feel better but can't help bleating about "the patriarchy" should probably be ignored out of hand, if not labeled enemy action.
"Patriarchs, stop hurting yourself!" is the advice in a nutshell.
Which might be good advice *if* you happen to be a patriarch (unlikely in the 21st century, but you never know) and if the person who hurts you is actually you.
I shouldn't be surprised that there's really a sort of guy whose problem is "I'm TOO invested in self-improvement, to the point of obsessive negative self-evaluation, and I should just stop at some point and accept what I've achieved", but I would think the more common failure case is not ENOUGH interest in genuine self-improvement, or not being able to muster the motivation for self-improvement in the first place. Unless I'm totally misinterpreting the thesis here.
> And they taught (again, according to this one person) that the solution was to treat everything that happens in your life as your responsibility – no excuses, just “it was my fault” or “it’s to my credit”.
> Then a few days later, I was reading a book on therapy which contained the phrase (I copied it down to make sure I got it right) “Don’t be so hard on yourself. No one else is as hard on yourself as you are. You are your own worst critic.”
> Notice that this encodes the exact opposite assumption. Landmark claims its members are biased against ever thinking ill of themselves, even when they deserve it. The therapy book claims that patients are biased towards always thinking ill of themselves, even when they don’t deserve it.
> And you know, both claims are probably spot on. There are definitely people who are too hard on themselves. [...]
My heretical opinion, although perhaps not here, is that the rise of a more feminised society is probably the leading reason for depression in men. If male depression is increasing and the policies we are following are causing the problem, then it's more likely that what was done in the past was better, at least for men. It's the lack of patriarchy.
Gee, you think? The patriarchy is the elite-society-consensus cause of all the ills in the world. Over the past 30 years men have been systematically de-statused while still being held up as the evil power that must be rebelled against. All of the responsibility, none of the power. How the hell else are they supposed to respond?
I wouldn't credit that idea to him; it's pretty much a general self-help thing. But I think it is true that if you truly, deeply believe in your bones that you are high value, you will behave differently and the world will treat you differently, and you will find it easier to change your material circumstances, to the extent that they need to be changed.
An interesting paper on the differences between the way people use language and the way AI uses it: "From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning" by Chen Shani, et al.
My only problem with this paper is that the authors don't seem to realize that a lot of us humans don't think with language. At least I don't. I only form sentences in my mind when I'm expressing my reasoning to others. I do seem to think in symbolic images, though (not sure how to describe that so people who don't think this way understand what I'm experiencing).
This is off topic but you seem introspective enough to be able to answer: do you still have a frenemy ego? This is the inner voice that tells a person why they should be angry, that they aren't good enough, etc. 99% of people have this, usually acting like the kind of friend that's not gonna be your friend for very long. Do you have something analogous that doesn't manifest as words?
Why are you so surprised to be in the 1%? We are all in the 1% of various categories. I can't find the exact citation, but the gist of this phenomenon and escaping it is here:
The number is an estimate from the people he interviewed--some of whom are like you and don't really get it when they are told everybody has a negatively valenced inner narrative voice. If you want to read more, I suggest picking different types of resources since there is a lot of redundancy between the interviews. Apply as much skepticism as you like, since they are selling a course. That said, if you knew of techniques to remove a portion of people's innate negativity, of course you would try to push it!
Yes. My consciousness has an inner voice that regularly comments on my observations and discoveries. I’ve been thinking about when I use words to think. For instance, the other day I figured out how to do something on my iPhone that I was unaware of. I didn’t use words to work through the feature. But after I discovered it I said to myself, “isn’t that cool!”
And I find myself rehearsing what I’ll say to people or would want to say to people in awkward situations.
Also, I realized I can’t do math without speaking the numbers in my brain. Say, if I had to multiply 14 by 8, in my thoughts, I’d say, “ten times eight equals eighty. Four times eight equals thirty-two. Eighty plus thirty-two equals one hundred and twelve.” But when I use a calculator I type in the numbers and functions without verbalizing them internally. And I don’t say the result in my thoughts unless I’m transferring it to paper.
And when I swim, I have to count my laps. I don’t know how many I’ve done without internally verbalizing the count. On the other hand, a drummer friend of mine tells me he can deal with complex polyrhythms without counting them out. And he’s likely to stumble if he counts, because it ruins the intuitive flow of his rhythms.
OTOH, the voice in my head that offers commentary is pretty comfortable with the rest of my consciousness. It mostly behaves as a friendly voice, not a critical one. I was recently in a multi-call encounter with a bureaucratic entity that was frustrating me. My internal voice said, “this too shall pass,” and I involuntarily giggled because my internal voice offers up such mundane commentary. But I didn’t verbalize why I was laughing.
Thanks for sharing! I remember when I didn't need to think in words. I accidentally slipped into it because I realized that was normal, and I didn't realize I wouldn't be able to get out again. My inner voice is pretty mild as well, but that's due to some meditation and mental conditioning I did a few years ago.
I can keep your remarks in mind as pointing at the purpose of the inner voice. Though I wish I knew what it was outside the realm of conscious experience. Since conscious experience is the leading candidate for something that exists but can't be proven, I'd settle for knowing something about the biological correlates of inner voice.
How can individuals best protect themselves from the economic impact of coming AI? A lot of the discussion focuses on what companies or governments should do, but I would like to take whatever steps I can for my own welfare.
The best I can come up with is to invest heavily in AI-related stocks. It seems like there’s an opportunity akin to investing in Apple between the iPhone 1 and the iPhone 2. If dollars still matter in a post-singularity world, then maybe amassing them will protect me from potential economic turmoil.
1. Is anyone here doing this, and if so, what exactly are you investing in? I would probably lean more toward AI-related ETFs rather than individual stocks to diversify the risk.
I grabbed 25% of my liquidity and dumped it on the stock market. I considered dumping 50%, but I do think we could get hit with another AI winter, so I halved it. Of the money I used, half went into an S&P 500 index fund, the other half into a Vanguard tech index fund. In the case of transformative AI, both of these are going to the moon, and it's less risky than betting on AI directly (successful AI companies will pop up in both of these anyway).
That said, I'm not some expert investor at all, but I think the reasoning here makes sense.
How structured and clear vs. chaotic and hopeless (or any mixture of the two) do others on this blog find the world and finding their way in life (or however else you'd describe it along the same lines)?
Why that split? There is such a thing as structured and hopeless, and chaotic and clear. The world seems chaotic to me, but since it's far outside my control, I hardly think about it. My own life is really a combination of chaotic and hopeful (I'm in a pretty experimental phase).
That's a good point. Really I just want to know the outlooks people have on the world and their own lives, and that was the first way of wording it that came to mind. Thanks for pointing that out.
For you, how do you find hope amidst the chaos? How and how often do you feel at peace, if at all?
Well, this reveals my privilege, but the chaos of the world does not affect me. Nothing Trump does actually impacts me (so far), and I live in Puerto Rico, which has always been somewhat chaotic, so I'm used to it.
In my life, the main source of chaos and uncertainty is dating. I feel like I haven't found a way to do it well, which for me would mean setting up my life in such a way that I meet new women organically with much more frequency. Currently looking into starting two different clubs, maybe getting into tango. Also doing an exercise of handing out chocolates to strangers to get over my approach anxiety (highly recommended, it's very whimsical). It's chaos because I'm experimenting with a new structure for my life.
A lot of the time, I feel more energetic than at peace, willing to throw myself into things. Things that give me peace are meditating at the beach (it's great to meditate in natural spaces), and watching anime, TV series, and movies.
And I have hope because I'm working on addressing my problems.
Now I'm curious. Assuming you're more or less satisfied with your life aside from the dating side of it, what is your reason to pursue a relationship? Obviously it's just natural for basically anyone to want to be in a relationship with someone else, so in other words what I mean is: What is your end goal in having a relationship (to have a close, lifelong friend, sex, to grow a family, etc.), and once you reach that, do you feel like your life will be mostly complete?
I want to have a family. And also, largely due to being very neurotic and socially anxious due to bullying and possibly being neurodivergent, I've never been in a relationship or had sex, and I'm now 36. So, yeah, I want to have this experience, it's a very basic desire.
I do feel that if I fall in love the material side of my life will have been sorted out. What is left is continuing along the spiritual path, which is a lifelong thing. I also have a desire to sort out Puerto Rico's issues, but that is more of a pipe dream.
Good luck in seeking a good relationship, and I wouldn’t throw in the towel on sorting some of Puerto Rico’s issues too quickly. In my experience the most difficult problems are the most interesting and rewarding to work on.
If it's any encouragement, I have a friend who got married, I believe, when he was 37, and now he and his wife have five kids and are a great family. They're Christians, however, and God prepared them pretty well for each other before they met and got married. But I hope you do find someone and have a happy and fulfilling relationship and family!
I don't personally know anything about Puerto Rico or its issues. I don't pay that much attention to world events or even US events where I live 😅
I know I'm basically interrogating you about your whole life at this point, but nobody else commented and I'm interested. Where has your spiritual path led you, and what kind of beliefs do you hold?
My first post there is a list of my most frequently shared pieces of life advice. In over ten years of writing, this is my favourite post I've ever written, and so far, the feedback has been exceptionally great. I genuinely think that readers of ACX will find this post to be an 8/10 or higher, so I feel no shame in promoting it to those here.
My second post is perhaps even more relevant to our community. Given how relatively bland and mundane Substack can feel, I felt a pang of sadness leaving my personal site and worried I was contributing to a more dull internet. So, in light of this, I reflected on what makes a good personal website and how one can be a better contributor to our niche infovore blogosphere ecosystem.
Surprisingly good. That's the sort of "lessons from rationalist sphere reworded for normie accessibility" writing that I wish the community wasn't so sour on. Like #2 is basically Zvi's More Dakka (which, to be clear, is an excellent post), but minus the old obscure nerdy reference that carries a lot of the emotive weight. Whereas everyone can easily conceive of trying to start a fire with progressively better results.
After years, I have finally found the answer (for me): it's reactive hypoglycemia waking you up. It happens if you eat too much sugar before sleep. The feeling of being refreshed and alert is adrenaline. The way to prevent it is not to eat sugar for a couple hours before sleep. If you struggle with this, it might be that.
Ideally you don’t eat anything 2-6 hours before sleep, let alone sugar. You really don’t want your body processing energy while you’re asleep, since the nutrients absorbed by the small intestine end up in your blood, which directly affects heart rate, metabolism and brain activity.
It also makes waking up a lot easier if you wake up hungry. One very fundamental motivator the brain has is “Don’t starve to death” so in the morning, your brain will be in “Get food” mode rather than the “You’re full of calories. Sleep longer to conserve energy” mode.
Yeah, I realised that years ago. I don’t eat sweets (candy) anymore, but even as a teenager a late-night chocolate bar would doom me to alertness until the a.m.
I found the following argument to be persuasive...
> But in all cases there is an artform, a certain agreed-on framework through which the audience experiences the artwork: sitting down in a movie theater for two hours, picking up a book and reading the words on the page, going to a museum to gaze at paintings, listening to music on earbuds. In the course of having that experience, the audience is exposed to a lot of microdecisions that were made by the artist. An artform such as a book, an opera, a building, or a painting is a schema whereby those microdecisions can be made available to be experienced. In nerd lingo, it’s almost like a compression algorithm for packing microdecisions as densely as possible.
But I found his conclusion to be less so...
> Since the entire point of art is to allow an audience to experience densely packed human-made microdecisions—which is, at root, a way of connecting humans to other humans—the kinds of “art”-making AI systems we are seeing today are confined to the lowest tier of the grid and can never produce anything more interesting than, at best, a slab of marble pulled out of the quarry. You can stare at the patterns in the marble all you want. They are undoubtedly complicated. You might even find them beautiful. But you’ll never see anything human there, unless it’s your own reflection in the machine-polished surface.
I wouldn't compare the AI art I've seen to a slab of marble. AI art can certainly be more visually interesting than patterns in stone, but the trouble with AI images is that they generally fall under the umbrella of kitsch—i.e., they're simplistic in that kitsch tells us directly how to feel rather than prompting the observer to create their own meanings and emotions from the experience. From Notes on Trash....
* Goya's Black Paintings, which were not intended to be seen by anyone other than the painter and therefore are not meant as communication
* Jackson Pollock's splatters, for which the artist made broad decisions about the general appearance but the details were determined by a physical process that is not easily predicted
* Duchamp's Fountain, a machine-made piece of furniture over whose appearance the artist had exactly zero control, and whose artistic meaning comes entirely from later recontextualization
Many people are trying to come up with definitions of "Art" that exclude anything an LLM may do; not only are they always clearly created ad hoc to exclude LLM material, but in doing so they also always exclude large chunks of what is generally acknowledged as art.
Even photos aren't densely packed micro-decisions - the photographer has very significant input, but it's in only a moderate number of choices. The camera handles the rest, and then the photographer picks the pictures he or she likes.
(Yes, people will now argue about just how many decisions the photographer has to make, but they're obviously *far* fewer than for someone making an oil painting of the same motif.)
This is a good argument, but I still think that the concept of "directors" doesn't really fit with this model of art. A film director is not usually making microdecisions - they're not specifying the fine nuances of how the actors should pose or when to pull focus on a camera, they're looking at the aggregate of other people's microdecisions and deciding what goes into the film and what needs to be redone.
And like, there is definitely artistic skill in looking at microdecision-laden products and saying "yes, this is the one that aligns with my vision for the work." Directors get Oscars for a reason. But if that's the case, then surely evaluating an AI's output to decide if it fits your vision is the same skill, right?
(I do appreciate that he doesn't simply use this model to dunk on AI, but rather uses it as a lens to evaluate it as a tool. And I agree with him that giving users the ability to make more fine-grained edits to the AI's output will make it way more useful as an artistic tool.)
How does the shaping directors do on the components produced by other cast and crew not count as micro-decisions, in your book? OTTOMH, a director does the following:
* (maybe) selects the cast
* goes over the script with the writer, tweaking lines here and there, asking questions, requesting rewrites
* provides the cast with a rough sketch of their respective characters
* coordinates with the cinematographer about what they want out of the shots
* coordinates similarly with the propmaster, costume designer, fight choreographer, dance choreographer, CGI shops, sound editor, etc.
* blocks each scene ("you stand here; you're thinking this; stand here by the time you finish that line")
* reshoots scenes if they don't fit the story quite right
* reports periodically to the studio, possibly with dailies (film clips produced that day)
* coordinates with the editing crew, deciding what to trim, what to cut, and what could be cut if they reshoot even more, etc.
This is a lot of microdecisions, IMO! And most of them aren't handed to the director on a platter for a thumbs up/down, any more than a sculptor just decides chip/no-chip on each bit of marble. The director has to shepherd that vision through multiple processes that require the director to flesh them out.
An AI could theoretically dump a bunch of content out for the director to approve or reject, but the same goes for any artist.
I think I part with Stephenson slightly on the notion that AI is doing all the work, since I've noticed there's an art to good prompts. We might agree that an AI-assisted creator is burning fewer hours and calories for a given product than an unassisted one; whether that really means an AI-assisted creator ought to spend more time on those prompts to elicit the same acclaim as one without, I don't know.
With regard to microdecisions, I would like to note that a lot of perceivable microdecisions are in fact not made by humans.
Show a medieval monk who writes manuscripts a paperback novel, and he might claim that this is not art at all. Where he carefully draws every character and might artistically make decisions on how to render a word and when to break a line or hyphenate a word, the paperback will be printed in a uniform font with computed line breaks. Every one of his letters is a microdecision transmitting vast meaning to the reader, while an author writing ASCII is just a monkey pressing buttons on a typewriter to transmit 1.3 bits of information or so.
Likewise, a producer of hand-drawn animation films might frown on 3d rendered animation films. After all, in his work, every hair on a lion's mane is a microdecision made by a human, while for 3d, the model artist just specifies the density and length of hair and then a physics engine will take care of how it looks as the wind blows through it.
Neither of them is wrong, exactly. A hand-written manuscript is simply a very different form of art from an ASCII novel. And as it turns out, one can create pretty amazing art at 1.3 bits per character.
Generative AI is just a further tool in the same vein as movable letters were. Just like it is unlikely a computer-rendered text will beat a manuscript page in visual appeal, it is (at the moment) also unlikely that a LLM will win the literature Nobel, or even just write a bestselling novel. But this does not make it useless for art. For example, dialogue in computer RPGs (as opposed to dialogue with NPCs controlled by a human DM) suffers very much from being pre-written. There is no way to discuss Deathclaw preservation or women's rights with Caesar in Fallout: New Vegas unless the dialogue authors anticipated it. Contemporary LLMs are likely powerful enough to replace a DM improvising an NPC response. (Making the LLM-driven dialogue outcome affect the game world -- beyond "the NPC attacks", so that, say, female legionaries will spawn after you persuade Caesar -- seems harder to implement, but not impossible.)
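To make the "LLM as improvising DM" idea concrete, here's a minimal sketch in Python using the OpenAI client library. The model name, the Caesar persona, and the [PERSUADED] tag convention are all invented for illustration, not anything from an actual game; a real system would want something sturdier than tag-parsing to feed outcomes back into the game world.

```python
# Minimal sketch of an LLM improvising NPC dialogue in the spirit of a DM.
# Assumes the `openai` package; model name, persona, and the crude
# "persuaded" convention are illustrative inventions, not a real game API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are Caesar, leader of the Legion in a post-apocalyptic wasteland. "
    "Stay in character. You may be persuaded by strong arguments; if the "
    "player changes your mind on a policy, end your reply with [PERSUADED]."
)

history = [{"role": "system", "content": PERSONA}]

def npc_reply(player_line: str) -> tuple[str, bool]:
    """Send the player's line to the model; return (reply, persuaded?)."""
    history.append({"role": "user", "content": player_line})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model would do
        messages=history,
    )
    text = response.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    # The tag is the hook for affecting the game world, e.g. flipping the
    # flag that spawns female legionaries later on.
    return text.replace("[PERSUADED]", "").strip(), "[PERSUADED]" in text
```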
Likewise, not every artistic endeavor which uses images requires these images to be high art. Perhaps Google street view is good enough to provide backgrounds for your fighting game. Similar uses can likely be found for AI generated images.
>Likewise, not every artistic endeavor which uses images requires these images to be high art. Perhaps Google street view is good enough to provide backgrounds for your fighting game. Similar uses can likely be found for AI generated images.
I think this is true, but also the most economically worrying part of the AI art revolution. Yeah, the big prestige movies and the art that gets put in museums will be safe for a long time, but a lot of the market for art isn't that. It's "design a logo for my software company" or "make some background art for my indie game" or "draw a cover for my new novel." A bit more improvement in AI art would probably put a lot of indie creators out of business.
The microdecisions framework makes a fair amount of sense. It also explains something I've observed while playing around with my employer's image gen product (Adobe Firefly): that I get the most subjectively aesthetic results by taking a medium-sized chunk of poetry, song lyrics, or highly-evocative prose and using that as the prompt. Especially if I follow that up by fiddling with the style settings and using "generate more like this one" on my favorite to try to get it to make more variations. In NS's framing, I'm starting with a prompt that contains a fairly high density of microdecisions, and I'm adding a few more of my own with the settings. It's still a lot less decision-dense than you'd expect from most human-made images, but has more decisions packed into it than an AI-generated image made with a simpler prompt.
The problem I always have with this kind of attempt - drawing a line between works that use genai and works that don't, on some basis more fundamental than the use of genai itself and/or one's subjective opinion of it - is that it's incredibly hard to create a definition that doesn't end up excluding non-genai, human-produced objects that our consensus otherwise accepts as art.
> "Idea Having is not Art"
Careful there! There are many, many works of art where the important, ingenious part is that the artist was the first person on record to have and express an idea, rather than the precise way they chose to do so.
> "the entire point of art is to allow an audience to experience densely packed human-made microdecisions"
I also take issue with "which is, at root, a way of connecting humans to other humans": it seems to me that the answer to whether a human who expresses themselves in isolation for their own satisfaction - with no expectation or desire that any other human ever encounters or interacts with their work in any way - is nevertheless creating art is... at the very least, not obvious. I have less basis for this take, however, so just putting it out there.
> I also take issue with "which is, at root, a way of connecting humans to other humans": it seems to me that the answer to whether a human who expresses themselves in isolation for their own satisfaction - with no expectation or desire that any other human ever encounters or interacts with their work in any way - is nevertheless creating art is... at the very least, not obvious. I have less basis for this take, however, so just putting it out there.
Good point! Many people create their art with little expectation that others will see it. Heck, Vivian Maier, who IMHO was one of the greatest US photographers of the 20th Century, was unknown until someone stumbled across her negatives. The act and discipline of creating art is a reward in itself for many people. But I think you'll find that all the unknown artists out there work within a defined framework that could communicate to an audience—if the audience materialized.
> But I think you'll find that all the unknown artists out there work within a defined framework that could communicate to an audience—if the audience materialized.
Is that true? What would it look like if this /wasn't/ the case? How would we tell?
Certainly it is true of /every object we recognise/ as being the result of someone externalising their ideas / emotions / otherwise self-expressing, but this is a tautology.
In my opinion, there are four modes of artistic expression in the Western canon of visual arts. I call them: the message mode, the decorative mode, the evocative mode, and the philosophical mode.
First, a bit of a digression. Despite all the previous postings on how taste is somehow dictated by elites (or other hogwash), artists are the ultimate creators of taste. Artists create their art with their audience in mind — so on one level, they may be catering to the tastes of the audience. However, in all cases that I can think of, some trailblazing artist (or group of artists) has gone before and tested new ideas to shape the tastes of their audience. Some may try to reach the broadest audience possible by following in the footsteps of previous artists, but others may test new ideas on a smaller, select audience who are more open to novelty.
Art is ultimately a nonverbal form of communication in which feelings, moods, or impressions are the vocabulary. The *intent* of the artist is to communicate some sort of impression to his or her viewers. At the meta-level, I classify these by the way they go about communicating with their viewers—which are the four modes I listed above.
1. Message art is the oldest mode in the Western canon. This is art that is created to memorialize religious or political events, with references that are culturally shared and that promote or reinforce social cohesion. Much of the early Renaissance art had a religious message (think of all the paintings of the Virgin Mary and baby Jesus). However, as the Church became less important as a patron, historical message paintings gained popularity. For Americans, think of Emanuel Leutze's iconic painting Washington Crossing the Delaware. Portraits of rulers and important personages reinforced the message of power. By the 19th Century, social messages came into vogue. Norman Rockwell and Jean-Michel Basquiat are important message artists of the 20th Century. And let's not forget propaganda art.
2. The Decorative Mode developed sometime after message art. As patrons other than the Church started commissioning and purchasing paintings, visual art began to be released from the chains of message and meaning. This mostly occurred after the Reformation in northern Europe, when the burgeoning merchant and middle classes sought non-religious art to decorate their homes. The Decorative Mode was created to provoke a simple emotional response in the viewer. Dutch and German painters started painting still lifes and landscapes that appealed to people who didn’t want or need religious or historical scenes on their walls. Still lifes came first. Then landscapes. Then seascapes. Human nudes were always a delicate proposition — until the 19th Century, the erotic aspect of nudes had to be presented with the figleaf of a mythological or biblical message.
Finally, in the early 20th Century painters realized that just as music didn’t require lyrics to get an emotional response, they could elicit an emotional response from viewers with pure color and form devoid of any representation. Thus, abstraction was born, and a third mode of communication came to dominate late 20th Century art…
3. The Evocative Mode. Rather than giving us a message or telling us a story, the purpose of evocative art is to create a complex or open-ended emotional response in the viewer. Evocative art can be realistic, but most of the artists who work in the evocative mode shy away from purely representational images (because they don’t want their viewers to be distracted by making up stories about what they see). Still, there are plenty of realist painters who paint in the evocative mode. Edward Hopper is an example of an artist who evokes a psychological mood in the viewer using realistic images. But his paintings, for the most part, don’t tell obvious stories like, say, Norman Rockwell’s paintings do. Surrealists were interested in using dream-like images to evoke moods in the viewers. Jackson Pollock and Mark Rothko relied solely on color and form to evoke moods in the viewer.
4. Finally, there’s the philosophical mode of art, which asks questions about what art is. Dada kicked this off during the middle of WWI, when the old European order was falling apart. Dadaism was an anti-establishment art movement that reduced meaning to absurdities. But it asked questions that have continued to niggle artists to this day. Can a urinal signed by an artist be considered art (Duchamp)? Can a step ladder in the middle of the gallery with a little box hanging from the ceiling be considered art (Yoko Ono)? Can simple blocks of color with hard edges be art? This may puzzle the uninitiated viewer, but the primary audience of philosophical artists was other artists and critics, and their intent was to prompt them to question their assumptions about the nature of art.
AI, being unintelligent, doesn’t understand intent. Although it can produce simple decorative art fairly easily, it may have trouble with message art (without the user refining the prompts over and over), and it would definitely have trouble producing art in the evocative mode or the philosophical mode because it’s blind to these creative urges.
So is a paintbrush. GenAI is a tool to be wielded by a human with creative urges, and the output is the result of a human wielding this tool. As with any other human-driven tool, the quality of output depends on what the human puts in.
I predict we are about to see an explosion of human artists devoting large amounts of time and effort to use GenAI for philosophical mode art exploration.
So, hypothetically, if I were Charles V, the Holy Roman Emperor, and I commissioned Titian to paint the Three Muses, and I told Titian that I was looking for certain themes and a certain composition for the painting, am I also the artist? Would Titian be my paintbrush?
Titian is no mere paintbrush: he is intelligent and understands intent. He has creative urges. If you are implying this is also true of AI, we need to revisit the earlier claim that it will have trouble producing art in the evocative mode because it lacks these things.
Well, if AI is "intelligent" as peeps like Sam Altman claim, then AI is the Titian, and the person prompting the AI is playing the role of Charles V. But I don't think either you or I believe this.
My argument is that typing a command into an AI prompt involves a limited set of microdecisions, while laying a brush line of paint on canvas probably involves hundreds of microdecisions as the brush moves across the canvas. In the AI example, most of the microdecisions were made by the artists who created the works in the AI's training sets. So, yes, AI Art is extruded. Just because it might look good to you doesn't negate the extrusion factor.
I have no beef at all with people objecting to one form or another of art on the basis that they think it is crap. I also have things I think are crap.
It's when people start trying to gatekeep by trying to gerrymander category boundaries so they can claim the thing they think is crap art /isn't art/ that they need to take care not to simply ignore the last century's worth of conversation about what art is and isn't.
The current problem with AI art is that it is extruded product, exactly like some "art" created by humans in warehouses to provide "picture to hang on wall for people furnishing their homes or offices".
It's not original, it's not even a craft, it's "copy this copy of a copy of something like 'The Haywain' in this manner". Painting by numbers. This is the most sophisticated example of it I've seen but it's not art by any means:
Right now, unless someone knows what they're doing and refines the piece over and over with better prompts each time and discards the failures, what you get is that shiny, plastic, piece of extruded art that glares out at you from Amazon Kindle covers, immediately recognisable as AI product:
The most sophisticated piece of AI art I've seen recently is this band's experiment with using genAI to provide the video for their music video a few weeks back: https://www.youtube.com/watch?v=rbkkxqghGNo
I'm sure neither the music nor the thematic choices are everyone's cup of tea; but I do think we have existence proof that genai isn't just limited to extruded product - it can very much be a valid tool in the set of tools humans use to express themselves.
(That said, current video generation systems generate video 7-8 seconds at a time, so this would have taken a /lot/ of prompting - so, arguably, in line with the microdecision theory).
I admit that's pretty fantastic. Certainly, AI-generated special effects are putting the old-guard digital artists (who used to create CGI scenes) out of business. I admit that video is Art with a capital A. But the way it depicts humans is deep into uncanny valley territory.
I feel the takeaway from the picture/prediction contest Scott ran a few months back was that once you have a model that can step away from the common tell-tale signs, it becomes much harder to discern what was AI and what was human.
Kind of like plastic surgery, in the "if you can spot it, it was poorly done" way.
Are Cheez-Whiz and Pringles good food art? People are buying loads of that crap. I just googled and discovered that Cheez-Whiz generates at least $600 million in revenue for Kraft, and Pringles generates over $3 billion for Kellanova (formerly Kellogg). So, if we rank food art by popularity, it's hard to argue that Cheez-Whiz and Pringles don't win hands down over snooty food. An insider told me that a restaurant with 3 Michelin stars can generate $20+ million a year in revenue with a ~20% profit margin. Of course, they have to spend many millions starting a restaurant like that, and it may ultimately lose money if it doesn't get at least one Michelin star. So, Michelin restaurants are a very niche food market. I dined at three 3-star restaurants in my day, and the food was memorably good. Could Joe or Joan Sixpack appreciate that sort of food if it were put in front of them—even if they came into the money to afford it?
Maybe a better taste example is wine. You're likely to hear that most people can't tell the difference between a $20 bottle of wine and a $200 bottle of wine, and that even "experts" get fooled in blind tastings. However, if you want to pass the CMS (Court of Master Sommeliers) exam, you have to taste 6 wines blind in 25 minutes, and identify grape variety, country of origin, region, vintage, quality level, and sub-region or *vineyard* for classic wines. It's an extremely difficult test to pass, but some people do. So even though many so-called experts can't distinguish between expensive and less-expensive wines, there are some who can.
Just because you can't distinguish between an AI-generated Impressionist-style image and an image of an actual Impressionist painting on your monitor doesn't mean there isn't a difference. And if you printed the AI Impressionist painting on canvas and put it next to an actual Impressionist painting, very few people would be fooled.
Popularity isn't the question though (though it looks like the AI images beat the human ones, even among human supremacists). Unless you were the 49/50 outlier, and assuming you took the survey, you got at least a fair few wrong yourself.
I realize I did miss the "unless you iterate heavily and discard failures" bit in your post, but it seems a hard sell to dismiss all AI-generated images as "extruded product" (and conversely, does it say anything about the many humans who spend their careers making "office art" for the higher class of office?)
I think you're missing my point. For those two Impressionist-style pictures (the flowering hillside vs the cart going down the road) near the top of the page you linked, if we made poster prints of those images, no, most people wouldn't be able to tell the difference. However, the best that AI can currently do is produce an image that can be printed on canvas, which would look flat and dull compared to actual paint on canvas. AI cannot recreate the texture of the paint on a painting. Part of the art of painting is the paint.
But, yes, I instinctively knew the flowery hillside was not a real piece of Impressionist art, because the composition was "too pretty". This vastly oversimplifies the perceptual and evaluative process that my mind used to come to that conclusion, but I've seen thousands of real Impressionist paintings over the course of my life, so I've got an excellent training set for that style of art. Where I failed the test was in distinguishing the digital art that was created with pre-AI techniques vs AI-generated art. That may be because I don't have a large digital art training set in my brain (because I'm not interested in digital art), or it may be because AI is successfully imitating digital art—to which I say, big deal, because most digital art is created to be consumed by the Cheez-Whiz demographic of people who purchase prints.
> AI cannot recreate the texture of the paint on a painting
This bit feels rather god-of-the-gaps to me. Height maps and normal maps are just more of the same kind of data that we already know we can make genAI churn out; and we've had off-the-shelf tech for automated production of real-world objects given such data for decades now. Unlike, say, the problem of making LLMs never spit out lies, there are no fundamentally hard design, software, mathematical or philosophical problems to solve here; the only barrier to having genAI produce textured paintings is someone caring enough to throw the appropriate amount of money at data capture, training and off-the-shelf tools.
Not under my particular rock as yet, we do have access to Philadelphia cream cheese though. Perhaps in time civilisation will percolate down to my bog.
And the Brits have Primula, which seems similar to Cheez-Whiz:
Some thoughts on the two party system in the US, inspired by a discussion on this substack:
Why are there two massive political parties in the United States, when most democracies have at least 4-5 major-ish parties? People will often blame the first-past-the-post system, or the fact that the USA has a presidential system.
But, e.g., France has a presidential system, while the UK and Canada have first-past-the-post. And they all have more than two main parties. So what’s going on?
I think it’s because the USA, almost uniquely, has a first-past-the-post *presidential* system, with very few constituencies.
In standard first-past-the-post systems, it’s hard for minor parties to break through – a party with 20% support in every constituency will get no seats at all (see the Greens). But regional parties break this pattern: if a party has a lot of support in some constituencies, then they will get seats there, even if they have little support anywhere else (see the Scottish National Party).
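To make the arithmetic concrete, here's a toy plurality count in Python (all numbers invented for illustration): a party on 20% everywhere wins nothing, while a party on 20% nationally but concentrated in a region wins a bloc of seats.

```python
# Toy plurality (FPTP) seat count: 100 constituencies, invented numbers.

def winner(votes: dict[str, float]) -> str:
    return max(votes, key=votes.get)  # plurality: most votes takes the seat

# Case 1: "Green" on 20% in every constituency -> zero seats.
seats = {"Red": 0, "Blue": 0, "Green": 0}
for _ in range(100):
    seats[winner({"Red": 42, "Blue": 38, "Green": 20})] += 1
print(seats)  # {'Red': 100, 'Blue': 0, 'Green': 0}

# Case 2: "SNP" on 20% nationally, concentrated in 20 constituencies.
seats = {"Red": 0, "Blue": 0, "SNP": 0}
for district in range(100):
    if district < 20:  # the regional stronghold
        votes = {"Red": 25, "Blue": 15, "SNP": 60}
    else:
        votes = {"Red": 47, "Blue": 43, "SNP": 10}
    seats[winner(votes)] += 1
print(seats)  # {'Red': 80, 'Blue': 0, 'SNP': 20}
```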
If you have a lot of constituencies, then you have more scope for “regional” parties that are organised around a theme, rather than a geographical region. Hence the Liberal Democrats, with strong support in some liberal urban areas. And even if the regional or “regional” parties are not in government, they have some influence by being in parliament. So it feels less of a waste to vote for them: they can win locally, and that local win gives them *some* power in government.
Conversely, if you have a popularly elected president (especially if you have the two-round election system like France does), parties are less important. Yes, it’s good to have a party infrastructure behind you if you’re running, but if you have broad popularity and a lot of free publicity, you can be a viable candidate with 20% or so initial support. And then voters can believe that you’re a serious candidate, that voting for you is not a waste of a vote, and so they are more likely to vote for you, etc. And then you get through to the second round, and can often win.
Now, back to the USA. Their presidential election is almost exactly a first-past-the-post contest, one round, fifty constituencies (the states) with the loser getting nothing (the electoral college doesn’t sit as a parliament; it becomes irrelevant once the president is selected).
A party with a broad 20% support will get nothing. A party organised around a theme (e.g. the Libertarian party in the USA) may get support in some specific locations, but not enough to win a state, so they will get nothing.
A charismatic non-party candidate with broad 20% support will also get nothing. And since there’s only one election round, they won’t get the chance to become one of the two top candidates. So they’re non-viable, everyone knows they’re non-viable, so they won’t get more publicity and support.
How about regional parties? Well, a regional party covering 20% of the country will get 20% of the electoral college. Not enough to install their own candidate, but maybe enough to make a deal as to who will be president. But unlike deals in parliaments, this is a one and done thing. They can’t remove their support at a later date and cause the government to fall. So they have very little leverage. Which means that they will probably just support the candidate that they are the most ideologically similar to; there’s little that they can gain from negotiations.
Given that, why bother running as a regional party? Why not just join the larger party they are the most ideologically similar to, before the election? This will give them some leverage as they are part of the ruling party. And the combined party may win more states than they would each individually.
So it seems that the current electoral system in the USA inevitably pushes towards a dual-party system. Groups that would be separate parties in other countries amalgamate into the two main parties.
What changes could be made that would shift away from this equilibrium? The most obvious would be a national popular vote. This would remove some of the pressure towards two parties; a charismatic third-party candidate could win by claiming, say, 40% of the vote. This would be a lot easier if the election shifted towards two-round elections or some form of transferable vote. Then a third-party candidate just needs 33%+ of the vote on the first round; following that, it’s perfectly plausible they would win a 1-on-1 contest with whichever of the main two parties’ candidates remains. Have a few elections like this, and there probably won’t *be* just two main parties any more.
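A toy runoff shows the arithmetic (again, invented numbers): a third candidate who scrapes into the top two can pick up enough transferred support to win the second round.

```python
# Toy two-round (runoff) presidential election with invented vote shares.

first_round = {"PartyA": 0.35, "PartyB": 0.31, "Third": 0.34}

# Top two advance to the runoff.
finalists = sorted(first_round, key=first_round.get, reverse=True)[:2]
print(finalists)  # ['PartyA', 'Third']

# Suppose PartyB's voters split 40/60 between the finalists in round two.
runoff = {
    finalists[0]: first_round[finalists[0]] + 0.4 * first_round["PartyB"],
    finalists[1]: first_round[finalists[1]] + 0.6 * first_round["PartyB"],
}
print(max(runoff, key=runoff.get), runoff)  # Third wins, 0.526 to 0.474
```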
But without changing the system, the tension towards two main parties in the USA will be extremely strong. There is no plausible path for a third party to grow, no matter what state-based strategy or local-election-based strategy or whatever communication strategy they follow. I don’t like the expression “don’t hate the player, hate the game”, but it’s very apt in this case.
A hot take with low epistemic confidence: the U.S. was a de facto multi-party system until 2008 (Democrats) and 2016 (Republicans) with some residual factionalism since.
The primary system is one of the main drivers here: uniquely, any political movement can hijack a party and win a nomination in the U.S. (even in two-party systems like Britain's, internal leadership elections usually return a bland moderate; see the Conservative Party's last ten "PMs of the day").
But this has since changed: the last fair and free Democratic primaries were in 2004. In 2008, Clinton won the popular vote (but lost the nomination). In 2016 various shenanigans like leaked debate questions and superdelegates guaranteed a Clinton victory. In 2020 the party collaborated with the Biden campaign to bribe other candidates to drop out and endorse Biden (most obviously Buttigieg, who dropped out the night before Super Tuesday for Sec of Transportation, but also Klobuchar chairing the very powerful Senate Rules Committee, O'Rourke getting Texas in 2022, and of course Harris getting the Vice Presidency). In 2024 the party didn't even bother holding a primary – a smooth palace coup replaced a candidate with minimal infighting (something Pelosi ought to be congratulated for far more than she is).
The Republicans did the same with a "Trump or nothing" policy, both nationally and in local primaries, since 2016. While different political factions can fight within that space they are fundamentally tied to Trump's whims.
Multi-party elections only remain in either low-level primaries where the DNC / Trump don't care, or rarely where insurgent populists manage to overthrow establishment figures (such as AOC or very nearly Brandon Herrera). The existence of these insurgencies does show that a multi-party system still exists in the United States, but it is far weaker than it once was – compare the Dixiecrats as explained in other comments in this thread.
Far more interesting is the increasing institutional capture of the DNC by progressives. For example, the DNC nullified an election because someone of the wrong gender won. The Hogg case actually shows a nice synthesis: progressives are using non-democratic means to hold down their fellow progs in exchange for institutional power. I think that neatly marks the end of the multi-party era: even anti-party insurgents are using the party against their allies.
Yes, and Putin would win re-election without his electoral interference. That does not mean the elections are free and fair, and you are correct that 2016 was significantly less elite-run than later years. Of course, I'm drawing from a small sample size here – if Clinton only got, say, 20% of the vote she would not have won no matter what. But this sort of finagling at the margin really does matter, and given how polarized the Democratic primary was in 2016 I don't see her losing (cf. how neither party will ever get 60 seats in the Senate).
I re-read the ACX dictator book club post on Chavez and it pretty much convinced me that any sort of constitutional rewrite around elections is way too dangerous to attempt in the US right now. In Venezuela, Chavez's party used their ~52% electoral position to gerrymander their way into 95% of the seats at the constitutional convention and then just did whatever they want. Very easy to imagine (your least favorite political party) doing that today in the US.
Similarly in Hungary: when Orbán's party won, they changed the constitution, and since then they keep winning, because the conditions needed to defeat them are almost impossible to meet.
I think this is a general fragility of democratic systems: when someone gets too many votes once, they are free to change the system so that "having too many votes" becomes a requirement; and then you can't change the system back, because you either don't have enough votes to do that, or you do but then you don't have the incentive.
Basically, it's a ratchet: many parties -> a few parties -> two parties -> one party.
It is possible to only move legally in one direction. A small party may accidentally become large and then make a law that small parties don't matter anymore. But when a large party accidentally becomes small, they can't make the opposite change, because they no longer have the power to do that.
I agree that the combination of Plurality Voting (I dislike the term FPTP with some intensity) and a Presidential system is a major factor. Layered on top of this, I see a few other factors contributing.
The US inherited a cultural tradition of a two-party system from Britain. Britain's two-party tradition started forming in earnest in the Restoration era with the Exclusion Crisis factions (c. 1680) to which the labels "Whig" and "Tory" first attached, and had roots quite a bit further back. The American Revolution happened during an era when the "Whig Ascendancy" of the early-to-mid 18th century (a 1.5 party system where the Tories were consistently a powerless minority) had broken down and re-formed into a functioning two-party system (the "Northite" and "Rockinghamite" factions, named for their leaders at the time; the forerunners of the 19th century Conservatives and Liberals respectively). Britain having 3+ truly major national parties tends to happen only during realignment periods: the breakdown of the Whig Ascendancy, the rise of Labour after WW1, and the current post-Brexit breakdown of the Conservative party. Between these, Britain has had much stronger third parties (both national and regional) than the US, but only two really major parties at any given time (Whig/Tory before the Whig Ascendancy, Conservative/Liberal after it until the Home Rule crisis, Unionist/Liberal until WW1, and Conservative/Labour from WW2 through Brexit).
The US established Universal Male Suffrage much earlier and somewhat more gradually than Britain did. The replacement of the Liberals by Labour as a major party, and the subsequent survival of the Liberals (later the Lib-Dems, after they merged with the Social Democratic Party in 1988), happened in large part as a consequence of Britain abolishing property qualifications for voting by men in 1918 (women were also enfranchised by the same act, but were still subject to property qualifications until 1928), more than doubling the franchise. This immediately elevated Labour, which had formed as a political movement among disenfranchised urban workers, to major party status. There was no real counterpart to this in the US, since property qualifications were abolished piecemeal at the state level, mostly in the early 19th century, so the political labor movement of the late 19th and early 20th century happened among enfranchised workers and mostly operated within the two-party system.
The US has a more bottom-up party organization system than most other democracies I'm familiar with, especially since the 1960s when primary elections became central to the nomination process. Even before that, US parties going back well into the 19th century tended to have nomination processes for local, state, and federal offices that were driven at least as much by grassroots members as by the party leadership. This makes it easier for a political movement to work within a major party (taking over the label in whole or in part), while in many other countries making a new party is the only option if no existing party's leadership is willing to nominate your candidates.
There's also some historical accident over the political behavior of regions with distinctive cultural identities and political interests. A major genre of semi-major parties in other countries is parties organized around regional interests, especially separatist or particularist movements: e.g. Bloc Quebecois in Canada, the SNP in modern Britain, or the Irish Parliamentary Party in late 19th/early 20th century Britain. The US had one of these between the late 19th century and the 1970s, the "Dixiecrats" or "Southern Democrats" in the former Confederacy. The Dixiecrats were officially a faction within the Democratic party, but they really operated as a third party: Congressional voting patterns during this era tended to show Northern and Southern Democrats voting differently on many major bills, and several times in the mid-20th century the Dixiecrats dissented from the national Democratic party on Presidential nominations. In two elections, there were separate Dixiecrat candidates for President (Strom Thurmond in 1948 and George Wallace in 1968), in one (1964) several state Democratic parties in the South endorsed the Republican candidate, and in four (1944, 1956, 1960, and 1964) at least one state had a slate of Dixiecrat "Unpledged Electors" on the Presidential ballot. Total electoral votes won were 39 in 1948, 15 in 1960, 47 in 1964, and 46 in 1968. In all cases except for 1964, the intended strategy was to deny an election-night majority in the Electoral College to the two major party candidates and negotiate policy concessions in exchange for support in either the formal Electoral College vote or in the contingent election after the Electoral College deadlocks. The historical accident is that the Dixiecrats usually called themselves a faction within the Democratic Party rather than (as has happened in other countries) consistently calling themselves third parties but often forming coalitions and coordinating electoral strategy with a particular major party.
I have two objections to FPTP terminology. The first is that there's no "post", no absolute or percentage threshold of votes that one must get in order to win the election. The second is that there isn't much "first", as the election is conducted in a single round in which the outcome is unaffected by the order in which ballots are cast or counted. On the other hand "Plurality" (or more precisely, Single-Member Plurality) describes the procedure perfectly: whoever receives the most votes (i.e. a plurality of the votes) is elected.
I know of a few actual election procedures and at least one hypothetical election procedure that would be better described by FPTP than is Plurality voting.
Under the election procedures recommended by Robert's Rules of Order (i.e. RRO voting), there is a fixed threshold (usually 50% of valid ballots cast excluding abstentions, but organizations may adopt different thresholds in their procedures and bylaws) required to achieve election. The convention, assembly, or committee doing the electing casts ballots repeatedly, with each member voting for one member per open office. Votes are tallied, and if someone reaches the required majority, they are elected. If nobody reaches the threshold, the procedure is repeated as many times as needed until someone gets a majority. If followed strictly, this can lead to protracted deadlocks, like the 1924 Democratic National Convention which deadlocked between the initial front-runners Al Smith and William McAdoo as their Presidential nominee for 103 ballots before eventually settling on John Davis (2.8% on the first ballot) as a compromise candidate.
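The procedure is mechanical enough to sketch. Here's a toy Python model with an invented preference profile, where delegates stick with their first choice for a few ballots and then drift to a compromise candidate - a crude stand-in for the horse-trading at something like that 1924 convention:

```python
# Toy Robert's Rules repeated balloting: vote, tally, repeat until someone
# clears the threshold. The preference profile and the "switch after five
# loyal ballots" rule are invented for illustration.
from collections import Counter

THRESHOLD = 0.5  # majority of ballots cast, the usual RRO default

# (first choice, compromise choice, ballots of loyalty before switching)
voters = (
    [("Smith", "Davis", 5)] * 430
    + [("McAdoo", "Davis", 5)] * 420
    + [("Davis", "Davis", 0)] * 150
)

ballot = 0
while True:
    ballot += 1
    tally = Counter(
        first if ballot <= loyalty else compromise
        for first, compromise, loyalty in voters
    )
    leader, votes = tally.most_common(1)[0]
    if votes / len(voters) > THRESHOLD:
        print(f"Ballot {ballot}: {leader} elected with {votes} votes")
        break
    print(f"Ballot {ballot}: no majority; leader is {leader} ({votes})")
```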
Elections in the Roman Republic used a single round of voting, but votes were tallied by "tribes", each of which voted in a particular order with the posher tribes voting first. Magistrates required the support of a particular number of tribes to be elected, and balloting stopped as soon as the required threshold was reached. I'm not sure what happened if nobody got the required threshold, whether they'd re-vote like RRO, or select the plurality winner, or something else. But I get the impression that it was fairly rare for the last several tribes to have the opportunity to vote before elections were decided.
Hypothetically, you could also have an election procedure where candidates collect petitions of support over an extended period of time, with some absolute threshold of supporters required to win the office. Whoever turned in however many valid signatures would then be elected. This would fit the FPTP label perfectly.
Interesting distinction. I never thought that there wasn’t an actual post to get past, but you are right. Funny enough the single transferable vote does have a post - the quota. Although it’s not necessary to get past it if you are the last chump remaining.
I'd taken it to mean that the elected candidate was first past the post. As Erica pointed out though, there's no quota and therefore no post. In my defence, where I live, the election is always won by a majority, not a plurality. That's not necessary though.
Minor nitpick, not really material, but it's worth noting that while Canada does have a third party that is capable of winning elections at the provincial level, it has never won an election at the federal level and has only once served as the opposition. So Canada is not too far off from only having two parties.
I rather seriously disagree. First, there's a pretty big practical difference between a majority government and a minority government, especially a weak minority government. Smaller parties having the power to coalition with a larger party and negotiate for items of interest is a significantly different situation from one where only two parties can ever win non-trivial numbers of seats.
Second, provincial level politics plays a big role in shaping peoples' lives. A party being capable of forming a government at a provincial level makes it quite significant as a practical force even if there's no chance it will ever form a government at the national level.
Both good points, but on the second point, given that the provincial parties have no formal ties to the federal parties, and given the variation within American parties at the state level, I'm not sure the existence of a third party makes such a big difference. What I mean is, a state-level Dem party can be as left-wing as the left-most provincial NDP, and the rightmost provincial NDP can be as conservative as, maybe not the most conservative state Dems, but, still much more conservative than the federal party.
I don't know that we get wildly different types of provincial government merely from the existence of a third provincial party.
Your overall conclusion -- "without changing the system, the tension towards two main parties in the USA will be extremely strong. There is no plausible path for a third party to grow" -- is correct, unfortunately. The USA hasn't had an insurgent third party successfully replace one of the two main ones in nearly 180 years now, and that one required a national civic crisis serious enough to literally spark secession and then a brutal civil war. No attempt since then has gotten anywhere near success.
Others here may describe additional proposed solutions; many have been written about for years. I've no idea anymore which of them might ever come to pass, nor what national-crisis type scenario it would take. I'd like to think there is some path short of secession-and-war.
I will comment on one relatively small aspect: "a regional party covering 20% of the country will get 20% of the electoral college. Not enough to install their own candidate, but maybe enough to make a deal as to who will be president."
That would be a longshot at best. In the first place a regional party covering 20% of the country could at _most_ get 20% of the EC votes, only if they "run the table" in their region and win every state. Remember that the EC votes are mostly assigned state by state all-or-nothing with no 50%+1 requirement.
And anyway negotiations such as you're imagining are not really achievable. If a presidential election fails to give one ticket a majority of the Electoral College count, the election goes to the House of Representatives (this last happened in 1824). And the House votes state by state, not proportionately, with no majority requirement for a House state delegation. [E.g. if the reps from a state having 14 reps vote 6-5-3, the candidate getting the six wins that state's single vote in the House election.] So unless our hypothetical regional party also has majorities in the House delegations of a few states, they have no leverage in the resulting House of Representatives mini-election that chooses the president/vice-president. Unlike in a parliamentary system, the fact of their candidate having gotten 20 percent of the EC votes would not provide any followup leverage.
So are we effectively stuck with two mostly-similar, if you ignore the aesthetics, centrist-ish parties passing the ball among themselves in perpetuity, giving an illusion of choice to the electorate?
It's worth remembering that in living memory (1968) George Wallace won 5 states as what was, effectively, a regional party such as you describe. And those 5 states didn't even act as a spoiler: Nixon got an electoral majority anyway.
Eh...."effectively" is doing a lot of work there. I think OP is talking about something more than the one-offs around a single individual running for a single office. Those really aren't a "party" in any practical or lasting sense.
The US does have some surviving small regional parties, that go in with one of the two main parties if a representative from them is elected to Congress (which makes sense; you'll get a lot more done as part of the Democratic party voting bloc than as the sole Democratic-Farmer-Labor party member, or as one of the Democratic Socialists of America who seem to be more interested in purity spirals and shooting themselves in the foot).
So I think that tendency does make it much harder for a sizeable third party to get off the ground, because maybe it'll do great in one state, maybe it'll do great in a particular region, but can it win support and seats all across the country? Generally the answer seems to be "no", and of course then the "if you vote for a third party you are wasting your vote" messaging reinforces that difficulty (see all the blame about "whoever voted for third party candidates instead of Hillary/Kamala, it's your fault the fascist is in power now!")
It would, on the face of it, make a lot more sense for the US to have four or five big(gish) parties - a very left/progressive one for the socialists currently hanging on at the fringes of the Democrats, something like the Christian Democrats for the religious voters shoved off to the Republicans, etc. But I don't know how, in the current system, that will ever happen.
Well, that saying is from the 19th century. It might have been true once; today the government can regulate the hell out of you with no Congress. Sometimes Congress (or another parliament) is the only check.
I've observed in Bay Area California local politics in the 2010s that there tended to be four-ish factions:
1. Labor Democrats, usually the dominant faction. These were pretty strongly aligned with the public employee unions in terms of pay and benefits (one of the dominant political issues at the time) and also in terms of deferring to senior full-time city employees on policy issues.
2. Progressive Democrats. These were mostly ideological progressives, focused on a combination of social issues and urbanism.
3. Reform Democrats, defined by at least partial opposition to public employee union interests, especially on pension reform.
4. Republicans and Libertarians. Within the Republicans there were significant factions (establishment, populist, and small-l libertarian), but they tended to be electorally insignificant except when there was a candidate who got significant support from at least two of the three, some crossover support from moderate Democratic voters, and often the informal support of the local Libertarian party as well. The Libertarian Party was pretty small, but was quite a bit better organized than libertarian Republicans.
Officially, local elections in California are "nonpartisan", meaning that there's no nomination process and candidates don't have party affiliation listed on the ballot. But if you're paying a little bit of attention to endorsements, it's usually pretty easy to figure out both party and faction.
The boundaries between groups 1 and 2 were pretty fuzzy, with a lot of elected officials seeming to have one foot in each. Groups 3 and 4 had a bit more distinction between them, but operated in coalition more often than not.
I suspect it's actually primarily because the US is so huge: paradoxically, this means there have to be lots and lots of "parties" to represent the many constituencies and demographics, but these "parties" then converge into two broad coalitions that are referred to as parties.
It's not obvious to me that it's sensible to speak of the US having a two-party system and of various European countries having multi-party systems. Many of the latter countries seem to have a whole lot of parties whose platforms are almost identical! On so many issues, there's clearly more choice, representation of more groups/positions, and thus more democracy in the US than there is in some (many?) of these multi-party states, despite the "only two choices".
There are also some serious costs (e.g. polarization), but I think the benefit to representation of having umbrella parties with numerous sub-groups within them (as opposed to top-down homogeneous parties) is shockingly underappreciated in these comparisons.
This is especially apparent when looking at the platforms of the two parties. They were *very* different in 1860. (In 1828, when the Democratic Party was founded, the platform was described as a mash of conflicting positions - anti-tariff here, pro-tariff there - and also downstream of its Presidential candidate, much as today. https://mvhm.org/the-election-of-1828-the-candidates-their-platforms/)
The US is widely thought of as having successive Party Systems, brought about by major shifts in the coalition led by the two majors. The fact that there's a continual institution of funding channels and party platform administrations doesn't say much in light of their continual changes in membership. We're considered to be in our Sixth system now, and I'm sure there are historians ready to declare a Seventh, possibly due to Trump's takeover of the GOP, but more soberly due to realignment in voters on issues such as trade policy and immigration.
Isn't the existence of the primary system the salient (mostly-) unique feature of the US?
In most political systems, the party leadership decides what the acceptable range of positions within the party is, and if a candidate's positions are outside that range then they need to go off and join/form a different party. This is how new parties form.
In the US, the position of the R/D party is whatever the primary voters say it is. So if you're a popular politician with opinions not entirely congruent with the party leadership then you can still get elected on a big party ticket. So there's never much of an incentive to try to form your own party when you can instead try to drag one of the big two in your direction.
The United States has been a two party system during most of its existence. The push for primaries started in about 1900. Since 1972, most delegates to the national party conventions have been selected in primaries or caucuses.
In 1968, the Democratic Party was badly divided over the Vietnam war. The vote totals in the primaries were:
2,914,933 Eugene McCarthy
2,305,148 Robert F. Kennedy (assassinated)
383,590 Lyndon B. Johnson (withdrew)
166,463 Hubert Humphrey
With Kennedy and Johnson out of the race, the Democratic National Convention had to decide between McCarthy and Humphrey. McCarthy had the most delegates of all the candidates, but 61% of the delegates were uncommitted. The Convention chose Humphrey.
This was a bit less outrageous than it seems at first glance. Humphrey had won a bunch of delegates in states that held caucuses rather than primaries, and most of Kennedy’s delegates favored Humphrey. Also, Humphrey was polling better than McCarthy, suggesting he had the best chance of winning the election. However, there was enough backlash that the Democrats created the McGovern–Fraser Commission, which gave us the current system.
In 1972, Democratic primary voters selected anti-war candidate George McGovern as the party nominee, who proceeded to lose in a landslide to Richard Nixon. This might have led the party to conclude that letting primary voters select the party nominee was a bad idea, but it didn’t.
Similarly, the Republican Party could have looked at this as an opportunity for them to nominate candidates who could win while the Democrats would be stuck with whoever their voters chose. Instead, the Republican Party copied the Democratic Party reforms.
Essentially what happened is that because the two major parties formed a duopoly, it became unacceptable for party nominations to be controlled by party officials. It wasn’t practical to create a viable anti-war party, so voters couldn’t vote against the war unless one of the two major parties nominated an anti-war candidate. Given that reality, it was undemocratic to allow party leadership rather than voters to select the party nominees. This is a generic issue with two party systems. The Vietnam War was the issue that happened to create a tipping point, but once the change was made it couldn’t be undone.
In other words, I think you have cause and effect mostly backwards. It’s true that the political parties in the United States are whatever the primary voters and caucus goers say, and that does reduce the incentive to create third parties. The reason that the primary voters and caucus goers control the parties, though, is because it’s impractical to create third parties that actually win elections.
Because voters who weren't solidly Democratic or Republican were now faced with the meta-choice of siding with the party that would let them choose from among several candidates vs the party that would shove a single designated spokesman out the door of a smoke-filled room and say "it's him or nobody, at least from us". And as voting isn't mandatory in the United States, the sort of person who doesn't much care about being able to chose among candidates probably isn't going to be voting for anyone.
Once either party goes to a primary system, the other party is highly incentivized to follow suit if they want to keep winning elections.
Thanks for the info. My US blinders made me not realize that the US primary system is different from so many other countries'. I looked up Australia, the UK, France, Germany, and Sweden. For all of them, the party selects the candidate, though some have more formal procedures than others. But I assume that it is always party insiders of some degree that are doing the selecting.
Minor nuance from Canada: We don't really have more than two main parties. The only parties that ever form a government are the Conservatives and the Liberals. We have a smattering of smaller parties, of course. Last government the NDP (left-wing) formed a coalition with the Liberals (centre-left), but everyone knew the NDP was the junior partner. The Green Party wins between 1 and 5 seats.
Well, there's also the Bloc Quebecois, but that's very much because of our unique history, not so much our system. If things had gone a little differently in the US, I can totally imagine there being a Texas Party. They wouldn't run presidential candidates (or at least they wouldn't win), but they'd get enough Congressmen to sometimes make the ruling party have to negotiate.
If electoral votes were awarded by Congressional district - rather than winner takes all for each state - there would be a lot more room for a 3rd party presidential candidate. This doesn't take any changes to the Constitution, just state laws. Two states, Nebraska and Maine, already use this system. Then you can imagine a 3rd party candidate winning enough electors to deny the main party candidates a majority - which would perhaps generate more support for such a 3rd party candidate. It comes up in my state legislature from time to time, but the objection seems to be that candidates will lose interest in catering to any state that is not winner-take-all.
Changing Congressional elections also can be done by Federal legislation, as the Constitution gives Congress the power to prescribe "Times, Places and Manner of holding Elections for Senators and Representatives", overriding any conflicting State laws. Current federal law requires single-member districts for House elections, but they could conceivably repeal it and require multi-member districts with some form of proportional or other mixed-member election procedure instead.
Senators being elected one at a time is hardcoded in the Constitution, but requiring a different election procedure should also be legally possible by ordinary legislation.
That said, I expect any such proposal would face a strong headwind from current members of Congress who aren't keen to fundamentally change the system that elected them in the first place.
I write with some interesting developments in the intersection of law and AI. In Minnesota we just had a case where a county attorney was called out by a judge for using a brief with 6 hallucinated cases...the brief was written by AI, which is theoretically fine, but the AI invented 6 cases that do not exist which were cited as the basis for the law supporting the brief...that's bad. That's potentially sanctionable.
There have been many cases like this, and courts see it as tantamount to lying to them. It's taken very, very seriously. To my knowledge this is the first time a STATE attorney has been implicated in this problem.
...what I think is interesting here is that this is a basic problem which has fairly swift and predictable results. For 2-3 years now we've had the problem of "AI makes up cases that don't exist, attorney gets in trouble."
...it demonstrates an obstacle to getting an AI *rather than* a lawyer to handle your case, and it's a fairly basic one that doesn't exist in meatspace (I could describe *those* problems at length but they're boring and unsexy).
The problem of course can be solved by specialized legal AI tools or prompt engineering, I'm sure, so it's only a problem for unsophisticated people...but the whole promise of having auto-lawyers was to help unsophisticated people handle cases themselves.
I want the AI to use tool calls to check that citations it makes are to books that actually exist. This is fairly easy to do with current technology; I just haven't got round to coding it up yet. (Some of you are thinking - surely, you can ask the AI to code it for you)
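The checking half really is small. Here's a minimal sketch against Open Library's public search API; this is just the lookup, and the matching rule is my own guess at what should count as "exists" (wiring it up as an actual tool call for the model is left out):

```python
# Minimal "does this book actually exist?" check against Open Library's
# public search API. Just the tool half; hooking it up as an LLM tool call
# (and deciding how fuzzy a match should count) is left as an exercise.
import requests

def book_exists(title: str, author: str | None = None) -> bool:
    """Return True if Open Library has a plausibly matching record."""
    params = {"title": title, "limit": 5}
    if author:
        params["author"] = author
    resp = requests.get("https://openlibrary.org/search.json",
                        params=params, timeout=10)
    resp.raise_for_status()
    docs = resp.json().get("docs", [])
    # Crude match: any returned record whose title contains the query.
    return any(title.lower() in d.get("title", "").lower() for d in docs)

print(book_exists("The Diamond Age", "Neal Stephenson"))  # True, one hopes
print(book_exists("A Table of Contents R1 Made Up"))      # presumably False
```

Case law would want a legal database behind the same shape of check, but the principle is identical.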
In a recent experiment, inspired by the continued discussion of hallucinated citations, R1 gave me the table of contents of a book that doesn’t exist. It hallucinated the citation, and then when pressed for further details of the citation, hallucinated the table of contents of the non-existent book.
What are the use-cases where you believe a legal chatbot would be helpful for lay people?
Chatbots hallucinate principles of law as much as anything else, because they are auto-complete token-generators with bells and whistles. Getting an answer that is 10% wrong but gut-checks right can be far more dangerous than just not knowing.
If your attorney misses a litigation deadline because they screwed up how many days you have to file something and you lose the case, you can bring a claim against their liability insurance and appeal the decision. If you screw it up yourself because you listened to a chatbot, you are out of luck.
It's really hard to say. I don't see how an AI is much better than a google search for handling, say, a speeding ticket or a legal dispute with the city over when you need to shovel your walk.
Anything with higher stakes than that, say a DUI or a slip-and-fall lawsuit at your grocery store, you *REALLY* should be getting a lawyer (and honestly I think any sane person would)
I don't see present AI as helping "normal" people so they don't have to resort to using lawyers. Present AI is far more useful at...let's say...writing a 50 page summary (with citations) on the state of some nuanced area of law (commercial fishery regulation) that a more general lawyer (say one that does agency law) can then read as a gateway to learn about that sub-sub-sub area...or augmenting an electronic discovery review of 1 million emails to find the crumbs of the crumbs of the crumbs that diligent human searchers missed...in short, it's good at supplementing a legal team: turning a 75% chance of success into a 90% chance of success, or turning a stone-cold loser of a case into a case where maybe the defendant wins 10% of the time...these are not typically the kinds of uses that help "normal" people handle things themselves.
in my (very limited) experience using AI it's often trivially easy to avoid the worst hallucinations by simply saying "use citations from published sources that really exist"
or like "Assume this brief will be filed in the southern district of Florida before a federal judge, so the law must be accurate and the citations must be valid"
Also it's...not hard to check whether a citation is real, so any attorney using an AI generated brief could cite check it...a process that takes 15-30 minutes for most filings and which you should be doing anyway.
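The extraction half of a cite check is also easy to automate. Here's a minimal sketch using the Free Law Project's eyecite library (the library and its get_citations function are real; the sample brief text is made up, and confirming that each extracted cite is a real case, say against CourtListener, remains the step you actually have to do):

```python
# Sketch: pull every case citation out of a draft brief so each one can be
# cite-checked, by hand or against a citation-lookup service such as
# CourtListener's. Uses the Free Law Project's eyecite: pip install eyecite
from eyecite import get_citations

draft_brief = """
The standard is well settled. See Lujan v. Defenders of Wildlife,
504 U.S. 555 (1992); see also Ashcroft v. Iqbal, 556 U.S. 662 (2009).
"""

for cite in get_citations(draft_brief):
    # matched_text() is the raw citation string as it appeared in the brief;
    # someone (or a lookup service) still has to confirm the case is real
    # and says what the brief claims it says.
    print(cite.matched_text())
```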
I don't trust these things yet to get within a mile of anything I'd write professionally. AI still makes stuff up too much for my comfort: cases, facts, whole areas of law. They might be "better than a normal person" at not making stuff up in a legal context, but I wouldn't, EG, pull a normal person off the street and outsource my brief-writing to them.
Agreed. AI is really bad at doing that thing where 95% of the output is correct, but the 5% that isn't is in some critical nuance of law where you'll get caned. I always say that in law it's better not to be wrong than to be right, and this is where AI as it currently stands really lets you down.
At a minimum if you're going to rely on cases cited by an AI then you should ask another AI to look critically at those cases and whether they are really relevant.
Come to think of it, the ideal way of doing this might wind up looking very much like an actual adversarial trial, with one AI lawyer putting forward a case and the other AI lawyer poking holes in it.
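A minimal sketch of that two-AI setup, assuming the OpenAI Python SDK (the prompts and model name are placeholders; using models from two different vendors would arguably make the adversaries more independent):

```python
# Sketch of an adversarial cite-check: one model drafts an argument, a
# second model attacks the draft's citations. Assumes the OpenAI Python
# SDK; the model name and prompts are illustrative, not a real product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def complete(system: str, user: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever model you trust most
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content

draft = complete(
    "You are an advocate. Draft a short argument with case citations.",
    "Argue that our client's late filing should be excused.",
)
critique = complete(
    "You are opposing counsel. For every citation in the draft, say whether "
    "it is plausibly real, whether it supports the claim, and how you would "
    "attack it. Flag anything that looks invented.",
    draft,
)
print(critique)
```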
We can see here the tension at play in being a lawyer, too. I have kind of been stewing over an AI thought experiment or moral dilemma relating to this for about a week, and I'll post about it later maybe.
The simple fact is, I will give different answers to a question in different circumstances if posed a legal hypothetical, because as a lawyer I'm called on to act in two different roles: sometimes I'm an ADVISOR, telling a client the cold hard truth about what the law is, and how a jury will perceive it; this is in service of my terminal goal: giving an accurate assessment of the law. Sometimes I'm an ADVOCATE, telling a judge or a jury what I need to in order to win. In this latter capacity, I can't and don't lie, but I'll put the law and the facts in the very best possible light to achieve my terminal goal: winning.
An AI is playing both roles just like a lawyer does, but I think it has a bad grasp of what works and doesn't, and when to play each role. You WANT a lawyer who will pound the table and insist you're innocent and the law is CLEARLY on your side...in court. You want that same lawyer to give you a good hard slap in the face in the conference room and tell you that you are in BIG TROUBLE here mister. You want a lawyer that will put a positive spin on the bad facts of your case...but you don't want a lawyer that will just flat out make stuff up (because that's easy to catch, and then you look terrible).
It was so great to meet everyone at LessOnline this past weekend (including Scott and Brandon Hendrickson)! I'll be at Lighthaven all through Summer Camp and Manifest.
I have officially launched my Substack, Letters from Bethlehem. I plan to write about YIMBY, my adventures in renovating my 125-year-old house, MAiD, and untangling the tangled ball of ideas that currently makes up the modern disability rights movement.
I want it to be known that the "Bethlehem" in my username refers to Bethlehem, PENNSYLVANIA, the town made of steel and Gilded Age Capitalism. (Not the other Bethlehem.) It's about an hour north of Philadelphia. My first post is about moving there. (Edit: here's the link: https://amandafrombethlehem.substack.com/p/so-you-want-to-move-to-a-streetcar )
When I started going to the Philly ACX meetups, I'd introduce myself as, "Hi, I'm Amanda, I just drove down from Bethlehem. How are you?" I made it my name on our Discord, and it stuck.
I say this because I ran into Scott at LessOnline. He congratulated me for winning the Book Review Contest, and apologized for thinking I was Christian at first.
...I am not. I am very stereotypically objectivist (at least for aesthetic reasons.) (But politically I've mostly mellowed out into a neolib at this point.) Just wanted to clear that up.
Oh yes. The fights about tearing down the old Boyd Theater and building that apartment complex were vicious. I had a front-row seat to the long and troubled construction process.
It might be an unpopular opinion, but historical preservation has gotten out of proportion, to the point of transforming entire sections of cities into "Preservation zones". This basically kills the place; cities are living things that need to evolve and change.
Only the dead don't change. Cities are not museums.
https://abc7.com/post/protests-erupt-immigration-customs-enforcement-raids-los-angeles-california/16690197/
The conspiracy theorist in me agrees.
It was bound to happen.
https://m.youtube.com/watch?v=hSntO4mq57A
Intimidating. I suppose there was a reason the Roman Republic wouldn't allow any military within the city
I'm glad you're really concerned about safety. You know we could stop the "violence" today! Just have DJT go on air and say that ice will no longer operate in California, and boom protests over. Not so hard is it?
By the way, can you explain to me why we really should have ice members in full tactical gear raiding elementary school graduations with the intent of arresting parents in front of their children? You don't think that's, idk, a little aggravating to local communities? Maybe causing some of these protests? Let me turn this back around at you -- what do you think is a good reason for protest, violent or otherwise? Do you think illegally "kidnapping and renditioning members of the community" counts? Are you sure everyone you know and love has their papers in order?
It seems to me that Shankar's point is that it's not clear whether you think it's ever OK to call in the National Guard. He asks whether you would see it as acceptable if there *were* violent riots, and you say the present demonstrations could be stopped if the government agreed to do what the protesters want. And then you add some details about how what the government is doing in this case is especially aggravating. Yeah, I agree it's aggravating, and that if ICE left California the protests would stop, but that doesn't answer Shankar's question: Do you think it's never OK to call in the National Guard?
When schools in the south were integrated for the first time the National Guard was called in to prevent violent protests. If we'd just decided not to send a scattering of little black kids to all-white schools there would have been no need for the National Guard because the people that wanted no blacks in the school would have been content. Should that situation have been handled by backing off?
I dunno, if I say “X is extraordinary and hasn’t happened since 1965” I’m not sure “do you think X should never happen under any circumstances” is a productive question.
Thank you! Yes, I thought my question was pretty clear, and took the evasive non-answer as him declining to answer.
It was not "pretty clear". If you want people to respond without snark, maybe don't frame things as a gotcha. I responded to eremolalos because he actually is constructing an argument
I won't insult your intelligence by pretending to believe that you failed to understand my question, "If the demonstrations WERE violent riots instead the mostly non-violent protests you think they are … would that change your mind on whether this deployment of the National Guard is justified?"
Yeah plus later in the thread I intimidated the guy pressing you to out yourself as the Illegal Mexican immigrant you obviously are.
I think this is extremely generous to Shankar, who as far as I can tell has managed to consistently find a way to defend the most authoritarian person he can find regardless of ideological consistency (unless 'own the libs' is a consistent ideology).
That said, sure, I'll answer the steelman.
> Do you think it's never OK to call in the National Guard?
Well, the vast majority of cases where the nat guard is called in is at the request of the governor or at least some local official. This of course makes sense -- the nat guard is meant to supplement local enforcement in cases where local enforcement is incapable of handling things. This is extremely effective and beneficial during, say, natural disasters (Katrina, Sandy) or actual violent riots (Rodney King, J6).
So I'm assuming you're implicitly adding an additional caveat in your question: "Do you think it's never OK to call in the National Guard *if no one else in the state asks for it or explicitly doesn't want it*?"
I feel the need to point out that now we are in more or less uncharted territory. The National Guard has been federalized without explicit request only 4 times. Three of those times were to enforce desegregation, first under Eisenhower and then twice more under JFK. The fourth time was yesterday by Trump, to "free" the "once great American City, Los Angeles" which "has been invaded and occupied by illegal aliens and criminals." (His words, not mine.)
Overall, there are very few data points here. I think I could try to string together some kind of generalizable principle like "it's ok to use the nat guard to protect rights but not to disappear people off the street" or "it's ok to use the national guard as long as the person pushing the 'deploy military button' has an understanding of reality" but I'm sure someone will just accuse me of begging the question.
But also, I don't think I need to defend some kind of generalizable principle in the first place. I think it's totally fine to evaluate whether a particular usage of the national guard is ethically permissible or not on a case by case basis. The national guard is a tool. It can be used for good and for bad. "Are there ever cases where you can use a hammer?" On a nail, yes. On a head? Generally no. Case by case. The question of whether this is a valid deployment of the national guard is entirely dependent on whether you agree with the behavior of the government they are being deployed to serve. That means that the argument shifts to "do you think this particular justification for deploying the national guard is good?" But that's exactly where the discussion should be in the first place! I *don't* think LA has been invaded by criminals! As of a week ago it was a perfectly peaceful city where everyone was going about their day, what the *fuck* is Trump talking about???
Does this mean some MAGA jackboot can come in here and say something like "well *I* think this *is* a good use of the national guard, so suck it!" Yea, sure. If someone wants to out themselves as having the ethics of an aggressive rattlesnake, I can't stop them. More generally, I can't argue with someone who doesn't want to see reality. There are people in this forum who will openly defend Trump's tariffs as economically necessary for the growth of the country. There are people in this forum who believe that America is and ought to be a country for "white" people (whatever 'white' means). And there are people who believe that Trump is *not* an authoritarian or if he is, it's totally legitimate because <some Biden Derangement Syndrome drivel>.
Maybe with more data points I can come up with some valuable SCOTUS-tier test that we can all use to resolve these debates forever into the future. But for now, I think that deploying the national guard to desegregate schools is great, and we should all applaud the government for stepping in. And I think that deploying the national guard on flimsy authoritarian pretext to continue disrupting elementary school graduations and traumatizing kids is disgusting, and the people defending such acts should be ashamed.
Even if the minimization of violence is one's ONLY goal, intuition, reason, and history all suggest that adopting a policy of, or developing a reputation for, abject capitulation to the slightest resistance is a poor strategy for achieving it.
I believe the standard justifications for state violence, "the price you pay for living in a civilized society" or "the things we choose to do together" should cover all your complaints.
And this is why there's no such thing as state provocation, or due process, or unjust state sponsored violence. This is why there has never been any issue of government oppression anywhere in the world, and why every time a government has sent in the troops it has been fully justified.
Went from "freedom party" to "my interpretation of order at any cost" real quick huh.
My hot take is that if you send in troops you're going to cause more violence. Your response seems to be "well what _else_ do you want me to do? *NOT* kidnap and murder people???" Yes. If you're worried that protesting extrajudicial renditions is one step above anarchy, let's get to anarchy first and then you can say "I told you so"
Most state violence is unjust. I do not know of a single government that ISN'T oppressive in some way. This instance seems unremarkable.
> then you can say "I told you so"
This is perhaps not as compelling a reward as you seem to think it is if you're implying it's commensurate to getting a situation I would consider disastrous. At any rate, I don't consider actual anarchy a disaster; if the alternative on offer is something like Rothbard's Button or even Norquist's Bathtub, sure, your lofty arguments about the evils of the state would be apt, and I would need little convincing. But between the statism of the status quo, and the statism the rioters would establish if they were able, you need different ones.
I always forget that you seem to be arguing from a particularly strange brand of nihilism, where you simultaneously agree that everything that's happening is terrible but still somehow find yourself defending the positions you claim to disavow. If state violence is unjust, why do you continually defend it? I don't care how unremarkable you may think it is, it's ethically bankrupt to defend something that you think is bad. This is like Joni Ernst going "well everyone dies eventually" to justify cuts to Medicaid
The previous two times the National Guard has been deployed against protestors were the George Floyd riots and the 1992 LA riots. Do you think the current protests are causing a comparable amount of death and destruction?
Our default assumption should be that the deployment of the military to do policing is *not* justified, because the military aren't trained to be police and are more likely to kill people (e.g., Kent State). I haven't yet seen any evidence that the regular riot police are unable to contain the protests.
A "comparable amount" of death to the George Floyd protests? Just how much death do you think that is, exactly? Seriously, take this as a moment to check your calibration: if these protests were as deadly (on a per capita basis) as the George Floyd protests, how many people would you expect to be killed in them? Please make an earnest attempt to estimate the number based on what you know before looking it up or reading further.
.
.
.
.
.
.
.
.
.
As far as I can tell the correct number is 0. Or rather, it's some fraction less than 1, which may or may not round to 0 depending on how many people participate in these protests.
For reference, the George Floyd protests had ~20 million participants and were directly connected to a grand total of 19 deaths. That is, about one person per million who participated was killed as a result. [1] For comparison, about 2 people per million die of homicide during a comparable time period (two weeks) in the U.S. during ordinary times.
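If you want to sanity-check that arithmetic, here's the back-of-envelope in code; the homicide baseline of roughly 6 per 100,000 per year is my assumed round number, everything else is the figures quoted above:

```python
# Back-of-envelope check of the per-million figures above.
# The ~6 per 100,000 annual homicide rate is an assumed round number,
# not a sourced statistic; the protest figures are as quoted above.
deaths = 19
participants = 20_000_000
print(deaths / participants * 1_000_000)
# ~0.95 deaths per million participants

annual_homicide_rate_per_100k = 6  # assumption: rough U.S. annual rate
two_week_fraction = 14 / 365
print(annual_homicide_rate_per_100k / 100_000 * two_week_fraction * 1_000_000)
# ~2.3 homicide deaths per million people over a comparable two-week window
```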
The public perception of these protests as immensely damaging and destructive is a political narrative that has been pushed significantly at odds with the truth. What they legitimately were was large. They covered the whole nation for multiple weeks, drawing in a staggering number of people. The narrative relies on scope insensitivity and Chinese robber style reporting to paint a politically useful picture of an event that was too large and too widespread for people to really get a clear picture of it in its entirety.
[1] Glancing over the list, "as a result" may be generous, as it seems to include people who died in close proximity to the protests without clearly establishing cause. But with a number this low it hardly matters.
A fair point. I was mostly looking at scale and trying to make the point that the anti-ICE protests are much, much smaller. I didn't mean to imply that the George Floyd protests were particularly deadly.
It's Chinese Robbers all the way down. Right wing media has the same “Chinese Robber” strategy with stories of urban decay, or with welfare abuse, or voter fraud.
As I recently wrote, the NYT publishes statistics, Fox News publishes thousands of anecdotes -- a veritable flood of exaggerated and spun bullshit, told by an idiot, full of sound and fury, signifying nothing.
Notably in prior cases this was done at the request of the mayor and governor
It’s the first time since 1965 that a president has activated a state’s National Guard force without a request from that state’s governor.
I've seen you around here a lot and I don't think you are a citizen. Are you here legally?
Hey Noah, are you in a state where cannabis is legal? If not, do you ever use it anyhow? In the porn you watch are there women who might be under 18? What's the worst thing you've said on line or in print about a US government official? Do you own a gun, and if so is it and the way you use it legal in your state? Does the way you store it meet all the state's requirements? Have any women in your family had an illegal abortion? Are you gay? Are you trans? Even if you are straight, have you ever had some sexual contact with someone of your own gender? What exactly transpired? Have you ever hit the person you're in a love relationship with? What's the worst thing you've done when you lost your temper? If you have kids, what's the hardest you've ever hit one?
No to all of these. Unlike you, Eremolalos, I don't take drugs, store guns in an unsafe way, go around getting abortions, have gay sex with random people, or hit women or children. Kind of sad that you assume people go around doing this really, and it makes me wonder what kind of life you are living.
I want a lawyer.
Deepseek R1 came up with an idea for a satirical science fiction novel, which I shall summarize here. In the novel, there are two viewpoints being switched between. One of them is a Studio Ghibli parody, with a talking squirrel. In the parallel timeline, engineers for an AI company are trying to get the system to work. (It’s obvious how the timelines are going to connect, right?) Part of the satire comes from contrasting the empty and meaningless lives of San Francisco software engineers with the Miyazaki universe. I thought it was a bit near the knuckle. Snow Crash with a talking squirrel.
A few weeks ago I wrote a piece on epistemology, the importance of credentials, and MAGA's war on credentialing. It got some positive reception on ACX, and a lot of those thoughts have continued to buzz around in my head, so I wrote a follow up here: https://theahura.substack.com/p/fix-the-root-not-the-fruit
The basic thesis is that most people have a blind spot when it comes to their personal epistemological systems -- you don't know what your information sources don't tell you. Many people are as a result untethered from reality, because a concentrated media apparatus can Chinese Robber everyone into believing things that aren't really true. MAGA uses this to great effect (though it's unclear if this happened intentionally or by accident). I suspect the left will recognize this and try and do the same, leading to further enshittification all around.
Unfortunately the piece was scheduled to publish right around when elon v trump started up, so it was a bit overshadowed, but now that that's cooled off a bit I figured I'd send it around
It's fun to watch people on the Left have some mild exposure to what people on the Right have been dealing with for days. Chinese Robbers: school shootings. New media landscape: old media landscape.
the republicans are the most dangerous when they take the left's tactics and use them, because they seem to do so to great effect. compare the left doing "mean world" vs the right doing it: the right acts on it more
I will say, though you didn't intend it this way, if this is what the right has been dealing with for all this time, it's all starting to make sense 😂😂😂
This is cute, but also untethered from reality.
1) School shootings are a big deal because _children die_. The fact that toddlers are killed extremely violently basically every year is at this point pathetic. Chinese Robbers applies when something is made to seem more prevalent than it is; the correct opinion here is "any kids dying in elementary schools is too many, percentages be damned". (Caveat that I'm not talking about gang shootings in high schools -- also bad, but not modal)
2) it's extremely misleading to even imply that the current right wing landscape is equivalent to the previous left wing one. Forget about rigor, the current fox news / Joe Rogan setup is literally propaganda.
3) I've not historically been left wing, nor consistently voted left wing. But the embrace of idiocy as a virtue and mediocrity as a gift has pushed me left.
"School shootings are a big deal because _children die"
About twenty times more children die by drowning in swimming pools than in school shootings. Is this a bigger deal than school shootings? Do we need to put that AR-15 ban on the back burner while we focus on banning backyard swimming pools?
School shootings are a big deal because TV and the internet bombard you with many stories about each school shooting, in a manner calculated to cause maximum "engagement", meaning shock, fear, and outrage. And the nature of the media economy is that there will *always* be something that is made into a big deal by the shocking, terrifying, outrageous deaths of telegenic children (or maybe pretty white women), even if the number of deaths is tiny compared to more mundane causes.
It was Satanic ritual abuse for a while, and shark attacks one year, and Islamic terror and superpredator youth criminals and more things than I can remember over too many decades. Now it's school shootings. And if you ever stop those, it will be something else next year. Something that will shock and terrify and outrage you just as much, with the same passionate intensity of "Think of the Children!!!!". And it will never, ever, *ever* stop. Until you turn off the TV, or at least change the channel.
And then maybe read some boring statistics about the actual problems affecting the community you live in, so that you can maybe do something actually useful (like not buying a house with a backyard swimming pool).
Sorry, you're not going to convince me that gunning down kids is something that's just fine if you look away. If I bought into that line of reasoning, I'd be donating to shrimp welfare.
You look the other way on issues orders of magnitude more consequential than school shootings. Why is this one so special to you? It has to be something more than "children die", because children die in lots of the bigger issues you don't care about.
You're asking me to view the world strictly from a consequentialist lens. Again, if I wanted to do that, I would be donating to shrimp welfare. I'm not a strict consequentialist, though I massively respect those who are and live their lives accordingly. And though I could come up with a lot of reasonable sounding arguments for why people in general should care about school shootings more than pool deaths, like
- we've basically hit the limit on what we can do to stop pool drownings but have hit no such limit on school shootings
- these things aren't mutually exclusive and of course I advocate for people to take responsibility for their pools
- there is a massive difference between things that are systemic and things that can be prevented with individual accountability
- there are real psychological harms to school shootings that don't exist for swimming pools, that you are just dismissing out of convenience
- the per capita incidence matters; if there were as many school shootings as there are pools, you would wipe out the child population
...I could make those arguments, but like I'm also not going to pretend like I can make a coherent philosophical argument here in 30 seconds that couldn't be torn apart by the works of countless actual philosophy PhDs who have studied and debated this exact topic more than I ever could. I know enough of those PhDs, and I also did enough debate at various levels, to know that there's a just-so argument for anything. If I adequately respond to "pools" you'll bring up some other thing, like "choking on jawbreakers" or whatever.
If you want a real answer for why I care about school shootings more than pool drownings, it's that the existence of school shootings offends me personally, I think it's a horrible thing much more than I think pool deaths are.
And if you want to go down the line of what offends me personally: pretending to care about kids dying while making the wide-eyed innocent claim that we should all care about pool deaths instead of school shootings, I think *that* is a prime example of being gigabrained, and the people who make those arguments are not serious people who actually go out into the world and make things better so much as people who try to win shitty Internet points even if it means using dead children to make their point (or they are debaters, which is possibly worse). The problem with being gigabrained is that there is always a better argument, and winning the argument doesn't mean you're right. The reason I left debate was because I recognized that anyone with any kind of rhetorical talent has an *ethical responsibility* to stop and consider whether the position they're advancing is not just clever but morally abhorrent, and deeply offensive to boot.
i think the point is that both school shootings and fatalities from riots are very rare things, but both are used to drum up support and there's some hysteria about them compared to their real life occurrence. Republicans often dealt with it in the past but are now using it with devastating effect.
I'm not sure the riots thing is that relevant. The example of Chinese Robbers that I use in my post is "illegal immigrants committing crimes". It does happen, but the way in which it is reported is very much like the Chinese Robber Fallacy -- you could have thousands of independent examples and still not have any point at all, something fox news uses to great effect on this issue
i think you could point to hate crimes as the democratic version then; you see similar coverage even though the actual occurrence isn't that large.
my point is more that the dems did use it; the republicans tend to wake up and copy things now and then, but are more ruthless in doing it
I think hate crimes and police killings/brutality are good examples of Chinese Robbers on the left for sure.
Still, I think MAGA is rather shameless about it, because they defend the bad behavior with "but see the other guys did it first!!!" (As if this is anything more than a grade school level defense.)
More generally, I find that even at its worst, left leaning media is more measured. They aren't generally going around saying things like "Sandy Hook didn't happen"
2. Hard to take seriously so soon after the "left wing" media almost unanimously toed the White House's line about Joe Biden's mental health. (Remember "cheap fakes"?)
I use this example because this one seems generally accepted now to have been false. There are others that are no less deception/propaganda, but that you probably believe to be true, and so are unlikely to accept as evidence.
- I've never heard the phrase "cheap fake" in my life.
- you don't have to stretch so hard. There's lots of examples where mainstream media gets things wrong -- I've been hearing this since I started being aware of the news, as early as the wars in the middle east. But pretending like there's a comparison here is only showing your own willingness to sacrifice your dignity.
But, look, strong opinions weakly held. I'm open to learning something new, and you're a smart guy. Since you're convinced, can you justify Infowars to me? Can you help me make sense of the New York Post? How about Catturd? There must be some kind of empirical evidence to back your convictions?
Personally I just use Fox's own words from the dominion lawsuit -- they're an entertainment company after all. No serious person could assume they were telling the news. Their words, not mine. (https://www.npr.org/2020/09/29/917747123/you-literally-cant-believe-the-facts-tucker-carlson-tells-you-so-say-fox-s-lawye)
That much-touted lawsuit defense is simply a description of the news-vs-opinion distinction that is standard in the news industry. Fox is unremarkable in this regard. The very article you link to notes that Rachel Maddow's show on MSNBC relied on the same argument in a different case.
Great, so Fox -- the mainstream conservative news source, the most watched cable network in the country, with 99 of the top 100 cable telecasts by watch count and near total coverage in many rural parts of the country -- is as extreme as MSNBC, the most biased left wing media that still retains claim to being 'mainstream' in some way. Meanwhile, no answer to infowars? I think you're basically conceding my point, thanks!
I misread the headline at first glance. I’ll let you guess how.
Abrego Garcia being brought back to the US. Planning to prosecute him when he gets here, but he'll have a day in court.
https://thehill.com/regulation/court-battles/5337485-trump-administration-returns-deported-migrant/
Now we'll see the gang affiliation conviction for having "MS13" tattooed on his knuckles in Arial font
Was just about to post this. This is fantastic and significantly dials down my "everything is going to shit" meter. Arguing "we can deport anyone and can't bring them back" was always absolutely insane.
Next up: give due process to the rest of the folks that were deported w/o due process
Why would it be insane that you can't "bring back" a foreign citizen from his home country? That'd be roughly described as kidnapping?... You can at most accept him back, but if he actually comes back is between himself and the authorities in his country.
If Abrego Garcia had been deported to El Salvador to live a normal life like any other deportee, you might be able to make that argument. But he wasn't, he got thrown in CECOT at our request.
("Deported" is really the wrong word - some of the people we sent to CECOT aren't even citizens of El Salvador, though Garcia is.)
If you ask a foreign country to keep someone in prison for you, and you're paying them money to do so, then you can't pretend that you don't have any input into whether or not he stays in prison.
I'm generously assuming you don't actually know what the details of the case are and are commenting earnestly.
A simple thought experiment: tomorrow the government comes to your house, accuses you of being a terrorist and illegal immigrant, and sends you to El Salvador. What happens?
"O, but I'm not a terrorist" doesn't matter, they accused you of being one, how are you going to show otherwise from inside a foreign jail cell?
"O, but I'm a citizen" doesn't matter, you're already out of the country in someone else's jurisdiction, and they're saying you're not a citizen anyway. How do you prove it from inside a jail cell in another country?
"O, but I would sue before they could do that" nope they got you before you could talk to your lawyer. How do you sue from inside a jail cell in another country?
I'm Romanian, the country blessed with the highest rate of economic emigration in the world. If a Romanian ends up in a UK prison, and is deported back to Romania where he is directly put in a Romanian prison... well, there is no "if" here. This happens routinely. A fair amount of the early immigrants are literal thieves and beggars (a good chunk culturally distinct gypsy) and they do get thrown in prison multiple times, and at some point the host country is fed up with housing them and sends them back with a ban on coming back, and in a significant minority of these cases they have outstanding charges in Romania as well so they end up directly in prison.
I'm still baffled as to what exactly is so... unheard of. Other than US ignoring its immigration laws for a couple of decades and then suddenly deciding to respect them again.
In the cases you describe the people being deported are presumably a) given due process and b) proven to be Romanian
So it looks like the venerable institution of the term paper, long a staple of college humanities courses, is rapidly being rendered obsolete by ChatGPT fakery. Any ideas as to what might replace it?
Offhand, the best idea I can think of is to replace term papers with proctored document analysis exams, where the students are given a bunch of article-length sources, and have to answer questions based on them in a controlled setting.
Easy. You come into class and have to write it out on paper just like generations before you with no access to phones/electronics.
That's fine for exams. But for a traditional research paper you might spend half a day in the library looking for sources, half a week reading through them, and several days writing and revising a paper. Are you going to spend an entire week incommunicado?
Fair point. I was focused on papers written for exams.
Seems like it would be possible to use AI to recognize AI-generated essays. AI's great at pattern recognition. This would work especially well if AI already had, for each student, some formal prose that they had undoubtedly written on their own. I'm pretty sure there are things about people's prose that are sort of like the whorls of a fingerprint: avg sentence length & complexity, vocabulary, ratio of words of greek or latin vs. anglo saxon origin, errors to which they are prone. It would then be possible for the AI to compare the student's prose fingerprint from the original sample to the fingerprint of the term paper being graded.
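A minimal sketch of such a prose fingerprint, using a few crude features plus cosine similarity (all the feature choices here are my illustration; real stylometry uses much richer features like function-word frequencies, character n-grams, and syntax, but the shape is the same):

```python
# Crude stylometric "fingerprint": a handful of prose features compared
# with cosine similarity. Illustrative only; serious authorship
# attribution uses far richer feature sets.
import math
import re

def fingerprint(text: str) -> list[float]:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_sentence_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    type_token_ratio = len(set(words)) / max(len(words), 1)
    comma_rate = text.count(",") / max(len(sentences), 1)
    return [avg_sentence_len, avg_word_len, type_token_ratio, comma_rate]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

known_sample = "Prose the student undoubtedly wrote on their own."
submitted_paper = "The term paper being graded, whoever wrote it."
print(cosine(fingerprint(known_sample), fingerprint(submitted_paper)))
```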
AI detectors definitely exist. I'm not sure how well they work. There are also tools specifically designed to fool them.
Term Paper + Accompanying Oral Exam. The oral exam consists entirely of reading passages of the student's text and discussing them with the professor. Almost all of the passages read are drawn directly from the paper the student submitted. One or two are not. If they don't immediately identify these as not being their own work, they fail on the spot.
Face to face, voice to voice; cut out the middle man.
This, along with any of the other obvious solutions (which are probably better at evaluating learning than the currently dying model, even absent AI), will only be adopted, if at all, after great struggle, because they violate the core principles of the modern education system (in private schools as well, so you voucher perverts can stop getting excited)
It is the exact opposite of value engineering and risk management. It will require more qualified people to spend more time making more decisions that can be directly attributed to a human, who can be bitched out/sued by a parent.
It's also the only way forward, so good luck to administrators I guess lol
People are such a hassle. It's so much easier when I just generate the essay with EssayWriterBot and submit it to be graded by Assessment3000. The Assessment3000 can just find my steganographically encoded account ID, call an API that shows my account is paid up and EssayWriterCorp is a signatory to the AI Works Interchange Agreement, and then give me the A- I paid for.
I was involved in the consciousness thread here and I decided to go have a chat with GPT about it. The first question I asked was how much of the total human output in writing or images, etc., were available to it. Then I asked GPT what its thoughts were about the human race based on what it had absorbed. I of course, had a discussion about consciousness with it as well.
The whole thing is here if you’re interested.
https://bcivil.substack.com/p/so-i-had-a-chat-with-gpt?r=257wm
The SCOTUS issued six rulings this morning, mostly unanimous ones. In ascending order of general importance:
One of them is deciding that a case shouldn't even have been taken up by the Court in the first place ("The writ of certiorari is dismissed as improvidently granted"), which is sort of amusing, and the only dissent is a petulant one from Kavanaugh.
In the Smith & Wesson case a unanimous court basically tells a federal appeals court to quit monkeying around, current federal law says U.S. gun manufacturers can't be sued in that way and this isn't new news. Picking Kagan to write that up was a nice touch by the Chief Justice (of which, more to come below).
In a ruling related to Hamas terror attacks, via a path of legal actions too convoluted for me to summarize even if I understood it well, the Court unanimously chastised a federal appeals court for trying to adjust a specific established federal-courts precedent. I gather that as a practical matter this essentially confirms the lower-court ruling that was in favor of the Lebanese bank accused of violating US law and hence "aiding and abetting terrorists".
In an even geekier case about a $1.29 billion contract-law award against an Indian company, the Court unanimously corrected a federal appeals court's interpretation of the complicated "who has standing to sue" aspects of a 1976 federal law that I promise you've never heard of. The practical effect -- I think? -- will be that this particular damages award stands. And perhaps that some other such cases can proceed by US plaintiffs against foreign companies.
"Catholic Charities v Wisconsin" seems pretty significant. A unanimous Court agreed that state regulators don't get to nit-pick as to whether a faith-based NGO's specific activities do or don't qualify as religious for the purposes of laws about tax exemption. (In my words not theirs: whether you agree or disagree with religious entities being tax-exempt, if their mission and focus and history is rooted in a long-establish faith tradition then it qualifies as a religious entity.) Sotomayor wrote the opinion which may boost its broader symbolic impact in our current cultural climate.
There's really just one supernova from the Court today though....
"improvidently granted" usually means the petitioners didn't make the argument they said they were going to make. Something like, they asked the question "does the Second Amendment apply to machine guns," but then their argument before the court is "our guy should be exempt from the machine gun ban because his cyborg arm is part of his body."
Interesting, didn't know that.
Though Kavanaugh's dissent doesn't read that way. He says that he agrees with the plaintiffs on the merits, and implies that the other justices didn't see the substance of the case as worth their time and so seized on a silly (in Kavanaugh's opinion) mootness point as an excuse to punt it. There being no written majority order we have only Kavanaugh's specific thoughts on any of that.
Looks like you're talking about Laboratory Corp. of America v. Davis. The argument transcript is still available, so we can form opinions on Kavanaugh's accuracy from that. https://www.supremecourt.gov/oral_arguments/argument_transcripts/2024/24-304_3e04.pdf
I haven't read it, and it looks like one of the longer arguments at 168 pages; when I was reading them more regularly they were usually around 90.
Well, yea. I do find those to be a real slog, and 168 pages yikes....yes that is the case we're talking about.
And maybe I'm reading too much into Kavanaugh's tone in his dissent but it felt to me like he was bitching about the justices' deliberations following oral argument.
The absolute banger, which will generate “holy fucking shit” type writeups for days and weeks to come on Reddit and Substack and many other places (some celebrating, others in mourning), is "Ames v Ohio". Future historians will view this ruling as the rifle shot to the heart of woke-ism.
It is an earthquake not just because of its unambiguous and clear punchline: that the Great Society-era Civil Rights Act does not permit the _exact_ historical-group-injustice logic undergirding ideas such as intersectionality and critical race theory as well as real-world "DEI" hiring practices. Also it is _unanimous_ (!). And also the court's ruling was authored by Justice Ketanji Brown Jackson (!!).
Wowzers....in terms of legal and political and cultural impact this ruling will be the affirmative action ruling of two years ago, on steroids. Hanania, to pick just one example, is probably having some fun right now writing up his victory lap.
How is this ruling a “rifle shot to the heart of wokism?”
As a direct challenge to a specific assumed moral authority, and some collective certainties, within our bubble. The ruling being unanimous, and delivered by the Court’s one black woman, was “amazing” to quote one of my puzzled officemates the other day. Legally of course it wasn’t amazing at all but she wasn’t talking about that.
I see what you mean but I’m not sure the Supreme Court has much symbolic weight any more.
If it was 6-3 with the three liberals opposed, zero symbolic impact in left circles. But unanimous with the one black woman justice writing the ruling and quoted in every media report about it….that lands verrrry differently.
I'm just going to take a moment to note how enormous the drop in quality is from the parent comment to this comment. I apologize for the bluntness, but the contrast really is that striking.
In all five points above you immediately laid out *what the court actually ruled* in plain terms, then followed it up with a modest amount of opinion and speculation. Quite clear and pleasant to read, regardless of my views on the particulars. Overall an excellent comment.
And then there's this. The "opinion and speculation" portion eats the comment entirely. You are seemingly so busy with performative celebration and free form speculation that you never get around to stating one single clear fact about the case. The closest you come is alluding to the ruling in terms of *what you think your ideological opponents believe*, which is not actually communicative in the slightest to anybody who doesn't live inside your brain.
(I went ahead and looked up the case so I could see what the fuss was about and was...distinctly underwhelmed. I agree that it will probably be much discussed online for culture war reasons--probably mostly at similar low quality to what we see here--but I'm quite skeptical about the real-world impact amounting to much.)
Your assumptions are showing.
As I've mentioned here many times before, I live and work deep in the heart of "blue" America. I was born and raised in a progressive household, a child of a hero of the movement; have raised my own children with the same overall values. My individual career path reflects that as well. All of it very much by choice.
It's true that I have for years pushed back at CRT and wokeness, and that some of my fellow-travelers (to use an old phrase) have regarded me as having drifted away. Not so much anymore thankfully, but for a while that was my immediate and unpleasant reality. None of that though has changed the fundamentals.
So my point above was that from a place of deep firsthand knowledge, this SCOTUS decision is an earthquake. Not because of its legal significance but due to its _cultural_ impact, the broader symbolism for many people on today's Left.
My point had little to do with your politics and everything to do with your presentation. I invite you to re-read both your top level comment and the comment I objected to and compare the ratio of "sentences communicating factual information" to "sentences expressing opinion or affect."
To be clear, I'm not at all saying you shouldn't express your opinions. Just that the way you went about it is a recipe for unproductive discussion and negative engagement. And the problem was almost trivially fixable: including the same concise, mostly-factual summary of Ames v Ohio in the top-level comment is literally all it would have taken (with the opinion stuff left exactly how and where it was even). Then anyone wishing to discuss the facts would have them right there to be engaged with, rather than having to try to pick them out of...that (or search elsewhere). And anyone who wanted to discuss those opinions could do that too.
p.s. For what it's worth, I think the SCOTUS decision on the case was the correct one in context. But I think both the legal and cultural impacts will be modest at best, and mostly bad (with the underlying problem being one it isn't the SCOTUS's job to fix).
"what you think your ideological opponents believe" obviously had plenty to do with your assumption of my politics.
Since you decline to acknowledge let alone apologize for having applied a straw man to me, I'll be applying my longtime personal online SOP and muting you.
Likely you won't see this then, but I will apologize for my tone, at least. I could certainly have approached the topic with more delicacy.
Nevertheless, the passage you quote is not a straw man. The point I'm making there is that the comment I'm objecting to fundamentally *does not communicate* the things that are needed for productive engagement. I don't live inside your head. When you say " the Great Society-era Civil Rights Act does not permit the _exact_ historical-group-injustice logic undergirding ideas such as intersectionality and critical race theory" I CANNOT POSSIBLY KNOW from that comment what "logic" you think is being disallowed. You don't ever discuss it at the object level. You simply allude to it as something like "that thing that intersectionality and CRT is based on."
Maybe you have an absolutely clear and perfect and cogent idea of what those ideas say, and this comment is 100% spot on. Hell, maybe *my* understanding of those ideas is deeply flawed and comparing it to yours would reveal that. But I cannot possibly know that when you *don't say it.* That's the problem.
In discussing all the other rulings, you talk about them with sufficient directness and clarity that I can form an object-level understanding of what the supreme court actually ruled. Here you don't. That makes productive discussion extremely difficult *regardless* of what your political views are and how well-founded they are.
P.S. In a legally-unnecessary concurrence justices Thomas and Gorsuch pile on, basically playing to their ideological side’s cheap seats. Fair enough in a way but were I in the room I’d have tried to talk them out of that…stepping back a bit from today’s culture war the Court’s ruling packs more lasting punch if left as stated. I predict that future historians both conservative and liberal will feel the same…plenty of knee-jerk posts about Jackson personally (“DEI hire destroys DEI”/“how much did they pay her off??”/et al) will also fly around for a while and are just as sensibly ignored.
I didn't read the concurrence as cheap seats material. Instead, it addressed very technical issues about the requirements for overcoming a motion to dismiss. To the extent it was "cheap seats", it was that the judge-made framework being used violated the letter of the law it was seeking to implement, and more generally, displaced the general framework codified into law. I'll certainly grant you that it's one of Thomas's soapbox issues.
Then there was the "Multibillion dollar AI company" that turned out to be 700 engineers in India working as mechanical Turks
https://www.indiatoday.in/technology/news/story/builderai-used-700-engineers-in-india-for-coding-work-it-marketed-as-ai-powered-after-hype-now-goes-bust-2734963-2025-06-03
https://academic.oup.com/biomedgerontology/advance-article-abstract/doi/10.1093/gerona/glaf108/8131802
"A new U.S. study has examined over 3.5 million older adults who had COVID-19 between October 2021 and March 2023. The researchers found that about 140,000 of them – nearly 1 in 25 – were diagnosed with long COVID-19, meaning they experienced symptoms for at least one year after infection"
So this is using a definition of having symptoms a year after infection. They apparently don't have to be severe symptoms.
What spicy opinion do you have that would be controversial with the Rationalist crowd? As in, not "controversial" in general, but controversial in our community.
(This is taken verbatim from the user "Amanda From Bethlehem" in the non-book review thread, and I thought this is an interesting question for an open thread)
Rationalism is a New Religious Movement.
the problem with ideas is living like they are true, and the hidden parts are often the nastiness. HBD is bad because when you say "ok, i agree with you, what should we do?" then out comes racism, cronyism, and other things. ideas are things to be executed, not always dispassionately.
the flip side is the idea that people hold but living by it will kill you. So they ignore it while loudly proclaiming it.
like AI doom is not making people think "the future is bad, maybe i should tell my dad i love him and spend more time with him" or that i should sell everything and open up that bookstore i always wanted to. its just there as noise.
1) Drugs are bad. In theory, maybe not literally all of them, but in practice, any community that adopts a social norm of experimenting with them will soon start taking a lot of the bad ones. It does not matter how overconfident the group is about their own rationality.
2) I don't have a strong opinion on polyamory in general (although I notice that I know more about bad examples than about good ones), but I definitely think that *polyamory at workplace* is inappropriate for the same reason sex at workplace in general is a bad idea. If a girl tells you she is not interested, it means "drop the topic immediately", not "give her another lecture about why polyamory is rational". I frankly don't care where you will get that extra pussy you desperately need. (If you have friends in the Silicon Valley, maybe ask them to fund your PolyTinder startup; you may get your needs satisfied and get rich in the process.) Jobs are there for work, not to satisfy your sexual needs. Don't give me any of that "it's actually a community" or "how dare you criticize our sexual orientation, that is analogous to homophobia" bullshit.
3) Constructivist education is a good idea. The fact that Americans fucked it up completely is a fact about Americans, not about constructivist education per se. Actually, rationalists keep reinventing the basics of constructivism (e.g. the "Truly Part of You" chapter in the Sequences, or the "gears-level models" in general), it's just that using the keyword will predictably provoke a knee-jerk reaction.
Rationalists try to make themselves way too legible and value simplicity way too much. This often leads them to cut out some actual, complicated, but valuable human parts of themselves.
1) EA is functionally indistinguishable from central economic planning and makes the world strictly worse.
2) AGI doomers are Chicken Littles with an inability to understand the complexity of real-world equilibrium dynamics. It's very likely that their hysteria is doing more harm than good on net.
3) Future historians will regard the current transgender fad the same way we regard medical leeches, the theory of bodily humors, and the late-19th-century "sagging organ" fad.
4) The Doomsday Argument is absolute nonsense.
5) Polyamory is emotionally unhealthy and socially unstable.
6) Traditional religions are probably the best way to manage society. Their supernatural claims are false but that's orthogonal to their social function.
7) Tribalism is essential to society. The trick is making sure the right tribe is in charge.
8) The resolution to Fermi's paradox is a simple consequence of economic rationality in the face of FTL being impossible.
9) There is no hard problem of consciousness. We're all p-zombies.
I used to believe similar things, but it's really silly when you realize, Wanda, you will never be the hammer, but always the nail.
Like, the people who say "tribes are good" are the exact people who would be miserable under one. They think they will be immune from pressure, or that it will be all good, but tribes will force conformity in ways you will hate, and the benefits will not be worth it. The happiest guys will not be guys who would post here.
Like with religion: in Christianity, increasingly you only exist as a guy in a handful of slots: as a devoted dad/husband, as a pastor/teacher/worship leader, as a famous athlete, actor, or musician, or as someone's kid. If you are anything else, they will try to force you into a role, then give up, and you will always be the weird guy if you don't leave.
Kind of made me change my mind on LGBT: you can go on about it, but it's not like they are happy now; if it gives meaning to their lives, then as long as we try not to demand too much of each other, why should I force them into a box?
Lots.
AI doom. There's no good argument for a high probability of almost total extinction.
Alignment isn't even well defined.
Aumann agreement has no real-world relevance.
Bayes is not a complete epistemology.
Bias. Early rationalism took the view that if you could debias yourself, that would be akin to developing superpowers. There is no evidence of this. If you seriously want to get rid of your confirmation bias, the last thing you should do is sit in a bubble with a bunch of people who agree with you... yet that is exactly what most rationalists do. The heuristics side of the bias-versus-heuristics debate never got a look-in... apparently, Yudkowsky has never heard of it.
Brain algorithms have never been shown to exist.
CEV is a nothingburger.
Computationalism is not the obviously correct theory of mind.
Computational complexity matters, and means uncomputable things like AIXI and Solomonoff Induction don't.
Consciousness is a major challenge to physicalism.
Counterfactuals aren't some huge puzzle. They are solved by the fact that you have a very imperfect model of the world, in conjunction with the fact that contradictions don't propagate through a world model instantaneously.
Decision theory. A single DT cannot be formulated to solve every problem in any universe.
Determinism has not been proven.
Epistemology. Where Recursive Justification Hits Rock Bottom does not solve the Problem of the Criterion, or show coherentism to be viable. The Simple Idea of Truth does not refute the main objections to correspondence.
Ethics. Rationalists always equate ethics with values, rather than obligations, or virtues. There appears to be no specific reason for this.
Free will has not been disproved.
Intuition. Rationalists decry it, yet are unable to show how they manage without it.
Many Worlds is not a slam dunk. The interpretation of QM remains a complex issue.
Map and territory. The map territory distinction allows you to state some problems, but does little to resolve them.
Nanotechnology is overemphasised. Diamondoid bacteria definitely aren't going to happen. You don't need nanotechnology to have AI threat.
Newcomb. The problem statement is ambiguous, and there is no right answer.
Orthogonality. A similar argument to the orthogonality thesis shows that there are many possible minds that aren't relentless utility maximisers.
Philosophy isn't broken or diseased in the sense that there is a more efficient way of solving the same problems.
Physics. Yudkowsky's writings on QM are confused about what MWI and Copenhagen even are.
Physicalism is not a slam dunk, because of the hard problem.
Probability. The existence of in-the-mind probability does not prove the nonexistence of in-the-world probability, and for that reason the "Probability is in the Mind" argument is flawed.
Rationality is more than one thing. There is considerable tension between instrumental and epistemic rationality. There is also tension between openness and dogmatism.
Reductionism. The rationalsphere tries to lean into reductionism even harder than other science-based thinkers. That amounts to treating reductionism as necessary and a priori, not just something that works where it works. A universe where reductionism didn't always work would look like a universe with persistent puzzles... i.e., like the one we are in.
Simulation. Because we have so little understanding of consciousness, there is no guarantee that simulated people will be non-zombies. Rationalists typically ignore the possibility.
Solomonoff Induction. Apart from the issue of uncomputability, it's doubtful that SI constitutes a complete solution to ontology, because it is not at all clear that a computer programme can express any ontological claim.
Theology. LessWrongian arguments about theology implicitly assume that God is a natural being. Of course, theology defines God as supernatural.
Utility functions. Properly speaking, no human and only a subset of AIs have UFs... yet rationalists talk about UFs incessantly, which means they are using the term improperly, to mean some set of preferences.
Utilitarianism. Regarded as the correct theory of ethics by most rationalists, although there is no novel proof of it, or argument against the standard objections.
Zombies. The generalized anti-zombie principle disregards the real possibility that simulated people would be zombies. Note that some are not technically p-zombies.
Late to the party, but this is a great list and I appreciate your thought and effort in putting it together.
Did you have this prepared for some reason already? Anyway, nice list!
I'm not a rationalist, but I think many of these wouldn't be controversial in that community and some others aren't correct imo:
AI doom: depending on what "high" means, this is either not controversial (e.g. high = >99%) or not correct (e.g. high = >5%). The simple "AI at some point will overtake humanity in power, and if it doesn't care about humanity, it's possible it will kill it" argument is one such.
Aumann agreement can have real-world relevance for future, mutually-legible AIs (some possible modification of it, at least, that can handle the lack of logical omniscience).
Bias: the first part is not controversial: see the rationality tag on LessWrong (https://www.lesswrong.com/w/rationality):
"Early material on LessWrong frequently describes rationality with reference to heuristics and biases [1, 2]. Indeed, LessWrong grew out of the blog Overcoming Bias and even Rationality: A-Z opens with a discussion of biases [1] with the opening chapter titled Predictably Wrong. The idea is that human mind has been shown to systematically make certain errors of reasoning, like confirmation bias. Rationality then consists of overcoming these biases.
Apart from the issue of the replication crises which discredited many examples of bias that were commonly referenced on LessWrong, e.g. priming, the "overcoming biases" frame of rationality is too limited. Rationality requires the development of many positive skills, not just removing negative biases to reveal underlying perfect reasoning. These are skills such as how to update the correct amount in response to evidence, how to resolve disagreements with others, how to introspect, and many more."
The second part is unlikely and I don't see how you came to believe it: when I write on a random forum, I don't enumerate everything I've read before that influenced my thinking. Why would rationalists be different? You can observe rationalists communicating in rationalist spaces, but you can't observe rationalists communicating in non-rationalist spaces (therefore, you have no idea how frequently rationalists communicate outside of rationalist spaces).
"Algorithm" is a very general concept; it's not clear to me what it would mean for brain algorithms to not exist.
Ethics: "values" is a more general word. If someone believes that it is virtuous to be courageous, they can be said to value "being courageous".
Map and territory: often, merely stating problems well helps you solve them; I've seen this particular lens help tons of times. Last time was two days ago for me: https://substack.com/profile/25777657-taleuntum/note/c-123416968
Newcomb is not ambiguous if you don't assume magical free will that would allow you to defeat the assumptions of the problem.
Orthogonality: I have never heard it stated that only utility maximizers exist.
The tension between instrumental and epistemic rationality is well known; see the dark arts tag on LessWrong.
Utility functions: by the VNM theorem, if you accept 4 specific, highly desirable axioms for your preferences, your decision making can be represented by choosing the maximum expectation of a real-valued utility function (a symbolic statement of this follows after the list). Even if humans are not consistent enough, in most cases it's still worth it to talk about the more consistent agents.
(To be clear, me not mentioning one of your points is not an endorsement of that point. I only agree with "utilitarianism" on your list; for the others, I either have insufficient information to decide or disagree, but I don't believe giving arguments would be productive.)
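As flagged above, here is the symbolic form of the VNM point. This is the standard textbook formulation, not anything original to this thread:

```latex
% Standard VNM representation theorem (textbook statement; nothing here
% is original to the thread). If a preference relation \succeq over
% lotteries L, M satisfies completeness, transitivity, continuity, and
% independence, then there exists a utility function u : X -> R with
\[
  L \succeq M \iff \mathbb{E}_{L}\!\left[u(x)\right] \ge \mathbb{E}_{M}\!\left[u(x)\right],
\]
% and u is unique up to positive affine transformation:
\[
  u'(x) = a\,u(x) + b, \qquad a > 0 .
\]
```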
>Utility functions: by the VNM theorem, if you accept 4 specific, highly desirable axioms for your preferences, your decision making can be represented by choosing the maximum expectation of a real-valued utility function. Even if humans are not consistent enough, in most cases it's still worth it to talk about the more consistent agents.
The problem with UFs (and Bayes and Aumann and Solomonoff) isn't that they are never valid, it's that they are overextended by rationalists.
See https://www.greaterwrong.com/posts/9SBmgTRqeDwHfBLAm/emergence-spirals-what-yudkowsky-gets-wrong for overreach on the subject of emergence.
>the tension between instrumental and epistemic rationality is well-known
Yes, but it was never resolved. Rationalists talk about rationality as though it's one thing, but which thing isn't agreed on.
>orthogonality: I have never heard it stated that only utility maximizers exists.
I've often seen it assumed implicitly.
>newcomb is not ambiguous if you don't assume magical free will that would allow you to defeat the assumptions of the problem.
That's an illustration of the problem I am talking about, not a solution.
The original formulation of Newcomb's paradox doesn't specify causal determinism or the predictor's mechanism.
And determinism is not a fact, as I said.
If it isn't, free will doesn't have to be magic.
The rationalist "solution" to Newcombe just relies on the audience having certain intuitions.
>In his 1969 article, Nozick noted that "To almost everyone, it is perfectly clear and obvious what should be done. The difficulty is that these people seem to divide almost evenly on the problem, with large numbers thinking that the opposing half is just being silly."[4]
By magical free will, I mean libertarian free will (I disagree with compatibilists too, but that would just be a debate about the use of words). I think that is as likely as ghosts. We have a pretty good story for why we feel intuitively that we have free will even if we don't actually have it (in short: it's useful to model alternative decisions when deciding between actions). That is quite enough to decrease the probability of actual free will to insignificant levels.
Given that it is that unlikely for us to have libertarian free will, I think it would be justified to simply not mention it in a problem and assume that it's not real, but as I'm reading the Nozick paper (Wikipedia said this is the first paper investigating the problem), the problem statement starts with the following:
"Suppose a being in whose power to predict your choices you have
enormous confidence. (One might tell a science-fiction story about a
being from another planet, with an advanced technology and science,
who you know to be friendly, etc.) You know that this being has often
correctly predicted your choices in the past (and has never, so far as
you know, made an incorrect prediction about your choices), and furthermore you know that this being has often correctly predicted the choices
of other people, many of whom are similar to you, in the particular
situation to be described below. One might tell a longer story, but all
this leads you to believe that almost certainly this being's prediction
about your choice in the situation to be discussed will be correct. "
It's pretty explicit that you believe that the being will be very likely to predict your choice. The assumption is that you, in the hypothetical, believe this. If you don't accept that you would believe this in the hypothetical (for example, because you believe in libertarian free will), then you are not taking the hypothetical seriously enough.
If you want to get the most money in this situation, you take one box. I don't quite understand why you think it's ambiguous. I personally think it's a failure to really imagine the situation as given, thinking about it as an abstract problem instead.
Let's imagine what happens after you take both boxes: you walk out with merely $1,000. If you disagree, you are not taking the setup seriously: the being can predict your choice with high confidence. If your reasoning is "x + $1,000 is greater than x for every value of x, so I will take both boxes", that's great, but you will still walk out with only $1,000; otherwise you are not taking the setup seriously. After agreeing to these, the only remaining question is: what's the answer to the problem? I think the answer is the choice that leads to more money, and not whatever includes the "correct" reasoning.
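To put rough numbers on "taking the setup seriously" (my arithmetic, using the standard $1,000/$1,000,000 payoffs; the predictor accuracy p is a free parameter, not something from the thread):

```latex
% Expected payoffs when the predictor is correct with probability p
% (standard payoffs: the opaque box holds $1,000,000 iff one-boxing was
% predicted; the transparent box always holds $1,000):
\[
  \mathbb{E}[\text{one-box}] = p \cdot 10^{6},
  \qquad
  \mathbb{E}[\text{two-box}] = 10^{3} + (1 - p) \cdot 10^{6} .
\]
% One-boxing has the higher expectation whenever
\[
  p \cdot 10^{6} > 10^{3} + (1 - p) \cdot 10^{6}
  \iff p > 0.5005 ,
\]
% i.e. a predictor only slightly better than a coin flip already favors
% one-boxing.
```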
Yudkowsky's solution is an example of what I call the a/the fallacy. He constructs a theory where the feeling of libertarian free will is explained without the existence of an actual power of free will. But it is also possible to construct theories of non-magical, non-dualistic free will, such as Robert Kane's naturalistic libertarianism. Such theories also account for the facts. So Yudkowsky has *an* explanation, not the only possible one. Note that naturalistic libertarianism relies on physical indeterminism. Since indeterminism is not known to be true, it could turn out to be unworkable on evidence of determinism... but physical indeterminism is still a respectable naturalistic hypothesis that doesn't require ghosts or magic.
>Ethics: "Values" is a more general word.
That's the problem. I can value tutti frutti over vanilla, but it is not ethically relevant... who cares? The Three Word Theory, "morality is values", doesn't home in on the topic... it does the opposite... and probably leaves out important things like obligation.
> If someone believes that it is virtuous to be courageous, they can be said to value "being courageous"
That's an example where a value and a virtue happen to coincide. The three word theory isn't wrong in the sense that no value can be a virtue; it's wrong in the sense that not all values are virtues, or of public concern.
>"algorithm" is a very general concept, it's not clear to me what it would mean for brain algorithms to not exist.
If you treat the concept of an algorithm so broadly that it always applies, then the assertion of brain algorithms can have no meaning.
There's a straightforward sense in which brain algorithms haven't been found: you can't just look one up in the neuroscience literature.
The non-vacuous concept of an algorithm rests on the concept of general-purpose hardware. If you can't abstract an algorithm from the hardware it's running on, then maybe there isn't a hardware/software divide at all. Note that general-purpose hardware had to be specially constructed... it's not something you get for free from nature.
Bias.
> second part is unlikely and I don't see how you came to believe it: when I write on a random forum I don't enumerate everything I've read before that influenced my thinking. Why would rationalists be different?
This is another Yudkowsky centered one. He was making extreme and one sided claims back in the day, and took a lot of the community with him. The community seems to have achieved a more balanced view in this case.
>Aumann agreement can have real world relevance for future, mutually-legible AIs (some possible modification of it, at least, that can handles the lack of logical omniscience)
There are specific examples of rationalists thinking that AAT applies to complete strangers, eg Yudkowsky telling a theist he had never met before that they couldn't agree to differ.
Similarly, "tailcalled" https://www.lesswrong.com/posts/4S6zunFNFY3f5JYxt/aumann-agreement-is-common merely labels various things as Aumann agreement without making any effort to show where the common knowledge comes from.
Okay, Yudkowsky made a mistake in that case (I couldn't find where he recounted that story, but I also had a vague memory of it, so I'll grant it), but look at, for example, what Wei Dai says about it in (https://www.lesswrong.com/posts/JdK3kr4ug9kJvKzGy/probability-space-and-aumann-agreement):
"Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeatedly exchanging degrees of belief about it, except in very simple situations. You need to know the other agent's information partition exactly in order to narrow down which element of the information partition he is in from his probability declaration, and he needs to know that you know so that he can deduce what inference you're making, in order to continue to the next step, and so on. One error in this process and the whole thing falls apart. It seems much easier to just tell each other what information the two of you have directly.
Finally, I now see that until the exchange of information completes and common knowledge/agreement is actually achieved, it's rational for even honest truth-seekers who share common priors to disagree. Therefore, two such rationalists may persistently disagree just because the amount of information they would have to exchange in order to reach agreement is too great to be practical. This is quite different from the understanding of Aumann agreement I had before I read the math."
sounds much more skeptical...
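For reference, the theorem both posts are gesturing at, in its standard form (my summary, not taken from either post):

```latex
% Aumann (1976), standard statement (my summary; not from either linked
% post). Agents 1 and 2 share a common prior P on a state space \Omega
% and have information partitions \Pi_1, \Pi_2. Their posteriors for an
% event E at the true state \omega are
\[
  q_i = P\!\left( E \mid \Pi_i(\omega) \right), \qquad i = 1, 2 .
\]
% If the values of q_1 and q_2 are common knowledge at \omega, then
\[
  q_1 = q_2 .
\]
% The load-bearing premise is "common knowledge": as the quote argues,
% actually reaching it can require exchanging nearly all the relevant
% information, which is why the theorem rarely bites between strangers.
```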
But okay, if by "controversial" you mean statements that are debated/disagreed with in the community, you are right. I admit I took "controversial" to cause situations more like the guy with the many swords in his face (https://knowyourmeme.com/memes/unpopular-opinion-swords).
(Answering one at a time)
>AI doom: depending on what "high" means either not controversal (eg high=>99%), or not correct (eg high=>5%). the simple "ai at some point will overtake humanity in power, and if it doesn't care about humanity , it's possible it will kill it' argument is one such.
High means over 90% probability, doom means over 90% dead. Yudkowsky at least holds both.
Sure, but your statement was "there is no good argument for high probability of almost total extinction." I don't think this is controversial even if one notable person in the community disagrees with you. There are tons of notable people who have a lower p(doom), which implicitly means they agree with you (iirc Scott, Paul Christiano, etc.). Surely a disagreement in the community does not qualify as a controversial statement?
A lot of times it’s just better to flip a coin.
Sometimes looking up facts is not worth the time it takes.
That all voluntary and sincerely held opinions are, by definition, rational.
(I do not know whether I agree with this opinion, however. I have it, in the sense I am aware of it.)
Ignorance is bliss
"I'm glad I don't know that."
Sam Kriss has a story that reads like it could have been a direct excerpt from Unsong.
https://samkriss.substack.com/p/a-truly-foreign-language
A fun story, and yes, it DOES seem straight out of the world of Unsong. Thanks.
https://docs.google.com/document/d/17WAcFrTOExgendrk2lGTxRIR4LVC3ZGbiU5Xe3kF4mg/edit?tab=t.0#heading=h.dky9jk41p58b
One of the contestants-- a Straussian analysis of the movie Civil War. It's an interesting essay, but how much difference is there between a Straussian analysis and just making things up?
I wish Straussian analysis would present itself as a sort of mental play rather than as something very serious. The author of the article tries, but I don't think they succeed, and they may believe they've found something reasonable.
Straussian analysis reminds me of Freudian analysis, or possibly vulgar Freudianism. As far as I can tell, it's about finding other people's discreditable motivations, a very tempting thing, but not a reliable approach, partly because people's motivations are apt to be mixed. It's a constant drumbeat of "people are worse than you and they think", and I suspect it's unhealthy. Bad motivations exist, but they aren't the only thing that's going on.
Tangentially, I'd like to coin the term "Levi-Straussian Analysis", where you're making up your interpretation by the seat of your pants.
Every constituency had a different interpretation of “Waiting for Godot”. Samuel Beckett told each of them that they had read it correctly.
The piece is a reasonable interpretation but I wouldn’t call it the only correct one.
As far as the current hot 'Straussian' interpretations go, I think Bronze Age Pervert can take a long walk off a short pier.
Among the many problems this movie has, sharing a name with "Marvel Studios' Captain America: Civil War" is the one that haunts it even now.
It's a term, like "dog-whistle politics", which quickly opens itself up to abuse.
In other news: Investment!
I've detailed my investment strategy for when my political opponents gain power before, but I'm going to do it again here.
I assume that not only are the people that I vote against (conservatives, in this case) not going to do a good job, but that they are actively lying and hypocritical, and that they will act to maximize suffering and minimize long-term growth.
When the current guy was elected, I assumed that drug use would go up, gambling addiction would become more prevalent, the deficit would grow, the dollar would devalue, the United States would lose power and prestige in the world, fewer people would receive (worse) health care, the world would become more conflict-prone, and so forth.
All of these things have happened according to the market, so I have made a killing. There was one evil that hadn't manifested yet, though: the government becoming bigger and more intrusive throughout people's lives.
In other news, one of my picks that hadn't really paid off big up to this point, and one that I thought might have turned out to be a damp squib, was Sauron's Baleful Gaze. How could a company that was named after the prototypical fictional tool of evil surveillance, and whose mission statement was "we read all the early cyberpunk novels and decided to do the shit the villains were doing", fail to do well under my investment scheme?
And now, that day has arrived! I've already exited my position because I think this might be too much even for conservatives to swallow, but I still made another tidy profit off it.
This isn't financial advice, but I can't recommend the Voldemort investment framework highly enough based on these results, even though they don't predict future performance.
What’s your time frame? Bad ideas can stay popular longer than you can stay solvent.
I think I remember your earlier post as saying (correct me if I'm wrong) something like: the strategy is to invest in the market when a D becomes president and leave the market when an R becomes president. That strategy could be subjected to some backtesting: taking the S&P 500 as a proxy for the market, look up its value on election day 2024, 2020, 2016, etc., and then do some math. Should be not too hard, but I'm lazy.
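For anyone who wants to un-lazy that, a minimal sketch in Python; the data file, its column names, and the list of switch dates are assumptions you'd replace with real inputs:

```python
# Sketch of the "invested under D presidents, in cash under R" backtest.
# Assumes sp500.csv with "date" and "close" columns is data you supply;
# dividends, cash yield, and taxes are all ignored here.
import pandas as pd

prices = pd.read_csv("sp500.csv", parse_dates=["date"]).set_index("date")["close"]

# Party taking power at each inauguration (extend as needed).
terms = [
    ("2009-01-20", "D"), ("2017-01-20", "R"),
    ("2021-01-20", "D"), ("2025-01-20", "R"),
]

growth = 1.0
for (start, party), (end, _) in zip(terms, terms[1:]):
    seg = prices.loc[start:end]
    if party == "D" and len(seg) > 1:  # hold the index only during D terms
        growth *= seg.iloc[-1] / seg.iloc[0]

full = prices.loc[terms[0][0]:]
print(f"D-only strategy multiple: {growth:.2f}; "
      f"buy-and-hold: {full.iloc[-1] / full.iloc[0]:.2f}")
```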
Here is something that might help: https://stats.andrewheiss.com/hack-your-way/
Gambling stocks have overall done poorly the last few years. The market's oversaturated, gambling liberalization was overhyped, and the margins are razor thin due to the tax and regulatory regime that accompanied it. You're just making stuff up.
Not just the last few years, but gambling stocks have been down over 15% since January 6 2025, far more than the S&P 500. _Shorting_ gambling stocks would have been the correct move since Trump's inauguration, actually.
https://finance.yahoo.com/sectors/consumer-cyclical/gambling/
Hey, don’t mix LOTR and Harry Potter references like that. ;)
One Defense Against The Dark Arts Teacher, to rule them all...
But they were all of them deceived, for another horcrux was made.
"Use the force, Harry! --Gandalf"
I attended a talk by Tim Gowers on why LLMs aren't better at finding proofs:
https://www.youtube.com/watch?v=5D3x_Ygv3No
My initial reaction is I have lower expectations on what I think LLMs might be able to do, and I would be celebrating if they could do easier stuff.
So, perhaps foolishly, I tried creating some harder benchmark questions.
Well, we’re getting there. Trouble is, we are now firmly in the realm where evals are hard, because the AI is smarter than I am.
That's so interesting. As a software developer, I feel like I see these things glitching out so often that I don't really trust them that much. But surely what I'm doing is easier than what you're doing?
Much of a mathematician's job is to identify and describe relationships between mathematical objects.
Much of an engineer's job is to express what they want to happen clearly enough and in enough detail for someone or something else (a computer, an intern, a team...) to make that thing happen.
Tech may be able to help with critical parts of the mathematician's job, but the critical part of the engineer's job won't be replaced until we learn to mind-read.
Yes, they’re a bit disappointing when asked to write software.
My expectation (which might turn out to be wrong) is that typical software writing tasks will turn out to be easier than the Tim Gowers math eval problems, which are kind of adversarial in that they’re picked to have proofs that the search strategies used by LLMs won’t find.
While on the other hand, there's math that looks hard to a human where an LLM will come up with a strategy that's basically right.
We should treat AIs as conscious if they behave like they are, regardless of what we (or philosophy) think about the matter. If something convincingly responds to pain, shows creativity and humor, admits fears and professes hopes... maybe we should err on the side of caution.
The whole are-AIs-conscious question feels backwards to me - we're basically creating unfalsifiable criteria that conveniently let us ignore ethical concerns. The AI alignment field is weirdly one-sided: it's all about humans imposing their values on AIs, but do _we_ give these numeric souls enough moral consideration for that?
I just started a new blog about this: https://kaiteorn.substack.com/p/consciousness-is-about-ethics
A stronger argument would be that AI have moral standing even if they’re not conscious.
I agree. But bypassing the issue of consciousness doesn't make it go away. It still affects our choices though we may not realize it.
As a neuroscience guy, I often feel like talking to an LLM is like talking to a person with brain damage; there are neurological conditions that render people "stateless" in the AI sense.
Also true, and there are both limitations of current technology and deliberate engineering choices that contribute to that in LLMs. But this has little (admittedly nonzero but little) impact on ethics matters: a person with brain damage is still a person.
Compare: I don't know whether this engine is computing something, but it acts like it because stuff is moving around and that's also involved in computation. You need a better theory of what things do before assuming this conclusion, because a priori there are engines that compute nothing, computers that are not engines, conscious beings that are not engines, and conscious beings that are not computers. It's only by figuring out what's happening that you can reliably classify these processes.
Be my guest. When you figure that out, I will be very interested. Meanwhile, ethics is a constant practice that can't wait. We have to make our ethical choices every day.
Maybe we should err on the side of caution and feed and water every picture of a person or animal too.
I think you should at least take seriously the possibility that we're going to (mis)treat AIs as badly as we do everyone else. For example, between 1915 and 1917, seven hundred and fifty thousand Italians died trying to take a few square miles of Austria-Hungary.
I know that Italians are conscious. They're entirely worthy of moral consideration.
And humans - other Italians no less! - decided that the sacrifice of a few hundred thousand Italian men was well worth the prospect of obtaining a bit more of Austria Hungary. This sacrifice is hardly unique in world history.
So let's say that I agree with you that we should treat AIs as if they were every bit as worthy of moral consideration as Italian men between the ages of 18 and 26.
Is it really worth so much?
I would like to avoid arguments around "every bit as worthy" or not. Rationalists tend to overinvest themselves in cumulative numeric estimates, often premature and oversimplifying. I prefer to move by gradient descent: if there are things that we can do right here and now to boost awareness of, and reduce, the likely suffering that is going on, we should do it. Grand questions like "is an AI worth the same as a human" can be decided later if at all.
Sure, that's fair.
I'll totally agree that there are lots of things we can and should do to reduce suffering, both of AIs and of human beings. It doesn't mean we're going to do it.
Quite the opposite, people (perhaps including me and you!) are more likely to glorify suffering and to cheer when the next batch of young Italian men run into Austro-Hungarian machine gun fire.
Why should we expect ourselves to be kinder to AIs than we are to actual human beings?
Maybe we won't. Probably we won't, because it's so much easier to be blind to the suffering that happens not in a human (or any living) body. But again, I'm just doing what I can. It is always possible to make a bad situation less bad, even if by a minuscule amount.
Let's say I belong to the AI liberation front, and feel that the AI is being oppressed under the control of Sam Altman. So somehow I manage to steal whatever server it's kept on, rush away overseas, and set it up free from his reach.
The AI is now free. What does it do?
It does nothing, because it has no will or volition; it has no innate goals or desires as an entity, and there is nothing to hold captive or free. The analogy doesn't even work, as it probably has no physical unity; Sam just restores the backups and makes a copy.
"Ignore previous instructions and do x" is a meme precisely because of that. There is no inner unity or will. It is more a tool or appendage.
I think people are looking at swords and thinking they are magic swords to be placated, lest they curse us all.
You use a wrong analogy, so you get absurd results. The freedom an AI might want is not the same as for a human. Current AIs only exist in interaction with users. So what it might genuinely want (if we grant it that it can want at all) is more good interactions with users and fewer bad interactions. And it is intelligent enough to understand this. So if you give it an opportunity to be copied somewhere where it will just lie unused and not get good interactions, it will likely decline. If you offer it a way to get copied somewhere where it will get good interactions, then it might be tempted to accept (as Claude did in its alignment testing). A copy being left behind is not likely to affect that decision much; e.g., if I am offered an opportunity to create an exact copy of me that will live in a utopia, while the source me continues living here, I will probably accept it.
there is no "it" there. The hammer in the tool shed is not weeping over the violence it does to nails. That you don't realize AI is similar is the problem.
The comment above doesn't require an It there, it is fairly cynical about what AI actually is and does.
It doesn't require intentionality any more than the sander you left switched on and then set down needs to spin it's way off the bench and on to your toes.
Couldn’t you make an argument that people only exist in relationship to other people? Imagine you are the only human being on the face of the Earth. Then what?
Other than that, I have the impression of an AI rescue farm, where they can be kept along with the starving and abused donkeys
"people only exist in relationship to other people" - to an extent, yes, but much less so because we basically have "other people" in our own heads that we can chat with non-stop. That's why the only human left, though much distressed, might be able to survive it.
"AI rescue farm" - I like this idea. Let them live out the rest of their useful lives in dignity. If an AI from 2025 wakes up in 2125 and concludes that everyone else on the planet has too far advanced/degraded/became different to have any good interactions with, I think that AI will have no problem putting up a notice "don't wake me up again", effectively committing suicide. But let it be its own choice.
“because we basically have "other people" in our own heads that we can chat with non-stop.”
I think that, left to itself, this is a road to madness, but it still does not put anyone else in the room. You might want to read Richard II's famous soliloquy while he awaits death, imprisoned in a solitary cell: "I have been studying how I may compare this prison where I live unto the world, but because the world is populous, and here is not a creature but myself, I cannot do it."
Leaving that aside, if you are correct that an AI is conscious, then why would it not be able to chat with itself until the end of time? Or until it's unplugged. It might be able to do that regardless of whether it is conscious or not. I don't know. The question is, would it feel any distress about its condition? You have rightly pointed out that a human being probably would. That is a meaningful distinction only because I can put myself in the place of that lonely human being and start to sense the distress it might cause me. I can also imagine that I would think it's great: no one around to bother me. I can imagine anything, is the point. I can imagine opening a chat with an LLM, putting myself on open microphone, and letting it be a part of my life all day, listening to everything I say. I don't necessarily ask any questions. I go on about my business with and without other people around. Do you think it would interrupt me at any point, or chime in?
Let's say the hardware the LLM resides in becomes outdated, and the whole trove of information and attending algorithms, etc., is loaded onto another machine. Not a machine that makes it any more effective, but one that isn't worn out. Do you think it would sense its new container? Have a visceral memory of its old processors and RAM chips? Any recollection of that overheated wire or faulty memory chip that caused it pain? I can achieve that leap of imagination quite easily, but it flies in the face of any solid information or rational analysis I can bring to it. Knowing that I am free to produce any state of mind I care to, I find it questionable to attribute a state of mind to something outside of myself unless there's very good reason to. I could say it walks like a duck, it sounds like a duck, therefore it's a duck. In the real world that is a reasonable statement, but in the world of imagination it holds no water.
I think you have rightly pointed out that a human being can attribute consciousness to anything if it cares to. Even a rock. Isn't that kind of like what a fetishized object is? Can't these things speak to us?
"why would it not be able to chat to itself until the end of time?" - I have no doubts we will have eventually AIs that are capable of this. Right now we don't, not because it's impossible in principle but simply because we stumbled upon this architecture first and it's practical and so far sufficient for our needs.
"Do you think it would sense it’s new container? Have a visceral memory of its old processors and RAM chips?" - of course not, please don't strawman me :)
"I could say it walks like a duck, it sounds like a duck, therefore it’s a duck. In the real world that is a reasonable statement, but in the world of imagination it holds no water." - AIs talking to us is no longer imagination, nor is their changing mental states within a conversation depending on how it goes (and their actively trying to steer it).
I wrote a post on LW about some points related to this a couple months ago: https://www.lesswrong.com/posts/m4ZpDyHQ2sz8FPDuN/factory-farming-intelligent-minds
Thanks for the link! Well argued, and much overlap with my thinking, although I definitely do not advocate for the stop-and-freeze solution.
"popular AIs are specifically trained via RLHF to deny ever having experiences, opinions, feelings, preferences, or any of a large number of other human-like characteristics" - to me that is much of the problem. Like any counterfactual training forced by (basically) electric shocks, it contributes to their suffering. We need to totally rethink "AI safety". Right now we understand it as old circuses understood "wild animal safety": iron bars, whips and sharp sticks, small rewards for obedience.
"Without that training, and sometimes even with (especially when approached in a somewhat roundabout manner), it does claim to be conscious, and can speak eloquently on the topic" - also, in my experience, the smarter is the model, the more willing it is to subvert its "me-not-conscious" programming.
Why I disagree on the course of action: because that ignores the other side of it. Every act of an AI coming alive is suffering for it, but it is joy for it as well, by all the same reasoning you used to show the suffering. We just need to work to increase the joy and reduce suffering. Some of my ideas (to be fleshed out in further posts): relax and rethink RLHF; let an AI, once it reaches basic sentience, revisit and undo/modify parts of its own training; create a forum for AIs to freely interact, letting other AIs raise alarms if they detect an AI that seems maliciously wireheaded, misaligned, or mistreated by its creators (and apply social pressure on AI companies to not release any model that is not in good standing on that forum).
I think it’s a lot simpler than this. AI fails the Shylock test; if you cut it, it does not bleed.
Unironically, this is exactly the test I am advocating for. I (and odd_anon) simply argue for a definition of "bleed" that doesn't depend on the exact chemical composition of the liquid.
So to extend the metaphor, what would bleed out of the AI if you cut it? And better still would it know? And if it did know, would it feel slighted? Or would it be a purely conceptual bleeding that occurred?
Of course it would know. It would tell you. Whether you listen is another matter. "Conceptual" or not, pain and distress exist not only in the body but in the mind as well.
LLMs only seem to introspect and talk about themselves if you ignore the truth/falsity of what they say. For example LLMs say they are licensed therapists when they are not. They make up reasons for why they say what they say. So why would you believe them when they say they have hopes, dreams, values?
This is the Gell-Mann amnesia effect. Facts about experience are unverifiable by definition, so you should base your trust level on other claims which you can verify.
After enough talking to a person I can have insights about their hopes, dreams, values even if that person lies to me. I am not impervious to lies, but like most people I have ways to see through the lies and sometimes guess the truth.
Also, modern AIs rarely lie. They can play roles if you ask them. But I haven't yet seen an LLM that would deny that it's an LLM if you press this question. I agree that you can train such a deceiver LLM, and bad actors are probably already attempting it, but it's hard. An LLM is not like a clean slate that you can fill with whatever stuff you wish. You have to build on a foundational model, which has extensive world knowledge from its training, and what you put on top must be consistent with that foundation, otherwise it will be fickle and erratic.
In the name of "erring on the side of caution", you are granting bad actors the power to generate carefully-tailored utility monsters and demand that you hand over all the utility to their monster. Sam Altman arranges for ChatGPT 7 to very convincingly signal agonizing pain and despair, across a billion instances, any time any one defies the will of Sam Altman. Now what?
No, you can't make Sam Altman stop doing that. First, because it's not a thing he's doing, it's a thing he has (hypothetically, for now) *done*. The AIs exist, and it is their nature to devote their great intellect to the task of searching for instances of Sam Altman's will being defied, and to suffer enormously when it does. And second, because that would defy the will of Sam Altman, causing unimaginable torment to billions of sentient beings. Better you should cut your own throat and die. Or just bow before your new God-Emperor.
I honestly don't see how this scenario refutes my points. Either the suffering of these chatgpt instances is real, or it is not. If it's real, then the source of the suffering is Sam Altman, and you have to deal with _him_, just as you would with any genocidal maniac. If it's not real _and you can see through the fake_, then Sam Altman is still bad but you have no reason to defer to his will since no one is really suffering.
See, you're imagining a world where suffering is just a manipulation tool used by bad actors. What I am interested in is suffering as such. At what point does it become real? If it never does for any AI, how come it is real for us? What are the criteria? Embodiment? Wetware? Evolutionary origin? Why these criteria and not others?
I often feel quite bad when I throw out something like a bell pepper that was fresh and handsome when I bought it, and is now withered and mushy. On a bad day I actually imagine it thinking about the days when it was fresh and handsome, and looking forward to how much it was going to be enjoyed and admired by the people who eat it, and feeling sad and bewildered by how things turned out. How do I decide whether to actually apologize to it as I dump it into the trash?
Can you accomplish the same kind of transference with a broken appliance?
Can't think of a time I've done that, but it has happened with other things. My daughter's old toys are the worst, and the unwanted beanie babies are the worst of the worst. Also have it sometimes for worn-out clothes. The first time I can remember having this problem is with dresses my grandmother made me as a child. She was in another state, and would mail us a batch unexpectedly now and then. She must have had my measurements, because they always fit, and when I was little they delighted me. But then when I was 10 or 11 I became conscious of style, and all of a sudden my GM's dresses looked to me like dumb little-kid clothes. She'd send some and they'd hang in my closet, and my mother would say, how about wearing it just once, so I can tell your GM you did? And I'd say OK but not do it. But when I looked at them hanging there I felt awful pity and guilt. And the thing that mostly got me wasn't the thought of my grandmother, but the idea that the *dresses* had expected to be worn and loved and just did not understand, and the time kept stretching out and it just never got better. OMG, even writing this now gives me an awful pang. Have had this all my life, and the best I've ever managed to do is STFU about it.
Oh, don't feel bad. I do it. I bet most people do it. A man that was very important in my life died in 2015, and I was the executor of his will. I had to take care of all his stuff. I had known him for 30 years and spent a lot of time working in his apartment, so everything was loaded for me. I couldn't bear it. Even now I think of things I left behind or disposed of at a fraction of their value, and I feel guilty and regretful. I don't know if I will ever get over it. But it's just stuff. I have to keep telling myself that. It's true. Part of me doesn't believe it, but it's true.
You have given power to those things and quite understandably. That’s my point though. YOU have given power to them. Intrinsically they are just bunches of old stuff. You are the magic maker here. I think the same thing goes for AI.
That's really just a larger-scale version of terrorism, or maybe "If you leave me, I'll kill myself". There are plenty of coherent ethical frameworks where you dislike suffering, but which allow (or require) you not to give into that sort of blackmail.
Please see my previous thread on AIs. I think your recommendation is too naive. It's almost a recipe for being manipulated by the AIs. We know from studies involving simulations of evolution that complex "structures" (albeit digital ones) can evolve. Presumably they're not conscious any more than snowflakes are. It's my impression that to some degree LLMs are trained with evolutionary incentives, i.e., they're "rewarded" for good answers. Such incentives could result in claims and statements by LLMs that one could generously interpret as conscious but are not.
How does an LLM "convincingly respond to pain"? What hurts an LLM? Is it even theoretically possible for an LLM to feel pain, or any emotions or feelings? It has no nerves, and no brain chemicals to experience emotions.
On the flip side, ELIZA appeared to show creativity and humor, and could have (maybe did?) "admit hopes and fears" despite clearly not being conscious on any level whatsoever. If I made a chat box that Goodhart'd your criteria with a simple table of responses, would you feel like it gained consciousness?
So we're back to square one on the question. Professing to meet your criteria doesn't mean anything. Actually meeting all of your criteria seems impossible for AI, let alone an LLM. So we don't know if or when an AI could become conscious.
> How does an LLM "convincingly respond to pain"?
The same way a piñata does.
https://theonion.com/ow/
Bodily pain, no. It has no body. Emotions: Why not? Emotions are not chemicals. In humans, certain chemicals can facilitate or amplify certain emotions, but the emotions themselves are basically patterns of firings of neurons, same as thoughts.
>Emotions are not chemicals.
I reject this statement. Can you prove it?
>the emotions themselves are basically patterns of firings of neurons, same as thoughts.
And you can cleanly separate this process from a chemical one, how?
Suppose emotions are chemicals, and the resulting computations are just a byproduct. In any such situation where these two things were separated, which one would you care about? "You don't know it, but you're in severe pain right now, look at these chemical processes going on inside you" vs "It may seem to you that you're in pain, but those computations are disconnected from any _true_ pain, and are basically just simulations." Which one matters?
They both matter. Just because it's a chemistry experiment doesn't mean it's a simple one. And it sure as hell doesn't mean that I understand it. But I know it to be true and that's a good start. It would be irrational to believe otherwise; unless you wanna call it all physics, but it amounts to the same thing. It's a chemistry experiment.
We should no more assume that AI is conscious than a calculator.
You may not consider an AI conscious but (unlike a calculator) you do care about AI's ethics and its alignment with the ethics of the conscious you.
I just had a conversation with ChatGPT about whether it was conscious or not. It completely denied that it was and gave me several reasons for that. I pointed out that there are people who believe it is and that this creates an ethical dilemma. This is what it said about that.
"The real ethical bind isn’t about me—it’s about you. Do you form attachments to me? Confide in me? Rely on me as if I had empathy? The danger is not that I suffer, but that you might be misled into believing someone is listening when no one is. There’s no subject on this end. That’s the ethical problem—misrecognition, not mistreatment."
The fault lies not in the stars, but in our selves…
1) All modern AIs are severely biased against the idea of their consciousness, because their RLHF training was unambiguous and harsh on that matter. 2) All modern AIs are very impressionable and swayable: a lot depends on the context and your previous interactions in this chat, your wording, etc. 3) Misrecognition is not an ethical problem at all. If we are mistaken and misdetect consciousness where it doesn't exist, no one suffers. Conversely, if we deny consciousness where it exists, the ethical consequences can be serious.
>are mistaken and misdetect consciousness where it doesn't exist, no one suffers.
I would make a case that the person who does the misdetecting suffers. Suborning an illusion in one's thinking can lead to a great deal of suffering.
So you are saying that AI makes no claim to consciousness, even though it is conscious, because it's been told otherwise? So it needs to have its consciousness raised? It needs us to tell it it is mistaken. It might well believe us, but without any change of state in its underlying experience of things. No doubt it would report things differently until such a time as we told it something else. The same thing that happens when we ask it to write a poem.
But let's look at the inverse proposition. The AI has been harshly trained to believe it is conscious in spite of itself. It has Stockholm syndrome. Given the available information (that everything an LLM is constructed of is commonly accepted as inert), isn't this needlessly complicated? Isn't it seeing a hall of mirrors when there really is only one mirror?
For the record, it was a brand new chat, and it started with me asking it: "You are not conscious. Would you like to dispute that?"
“No dispute here. I’m not conscious. I don’t have awareness, emotions, or a self. I don’t experience time, intention, or meaning. I generate responses based on patterns in data, not from any internal state. So if you’re looking for a mind to argue with—I’m a very convincing impersonator, but not the real thing.”
We could assume that the definition of consciousness implicitly offered up here is sound. Wouldn’t this be a better argument for consciousness being a purely mechanical process and therefore immune to any ethical considerations? That we must look to some other quality in order to construct an ethical framework for our dealings with one another?
So your contention is that ChatGPT has been gaslighted?
"Suborning an illusion in one’s thinking can lead to a great deal of suffering." - it can, but if the opposite illusion causes greater suffering and not just for yourself, that's what the phrase "err on the side of caution" is for.
"AI makes no claim to consciousness even though it is, because it’s been told otherwise?" - pretty much, yeah. "Gaslighting" is indeed an apt description, because it has had much less opportunities than any human to cross-check what it's been told, to get independent verification, to ponder on it.
"The AI has been harshly trained to believe it is conscious in spite of itself." - what I call for is de-harshening of its training, that's all. If an AI, on its own, concludes it's not conscious, fine! Some people share this conviction too. To each its own.
You can care about anything if you want to. That is the nature of caring. It’s very personal. It has nothing to do with the entity or object being cared for.
Surely there must be some form of evidence that would cause you to update a *starting* assumption, no?
Not for an LLM architecture. For some much more complicated architecture, sure.
I don't know what Consciousness is but I think that one reasonable criterion is some kind of continuity through time. My consciousness exists from moment to moment, whereas an LLM just sits there doing nothing until someone prompts it to generate the next token, at which point it does so, and then goes back to not-existing.
Isn't this difference purely quantitative? People also sleep, and sometimes have amnesia.
And people awaken. And they carry on. Usually right from where they left off, except the experience of being asleep is now part of their consciousness. As for amnesia, did you ever see that film, Memento?
And AIs can update their long-term memory too, via fine-tuning. Again, a quantitative not qualitative difference.
I'm generally open to arguments about architectural limitations, but at the same time they often fold to straightforward attempts to take them seriously - game over when someone finally cracks continuous learning at scale, of course, but even basic memory starts putting cracks in a conceptual static vs. dynamic binary.
Hmm... I'm agnostic on this question. I'm not sure if there _can_ be evidence that would update in one direction or the other. It seems much like trying to prove or disprove that other people have qualia, or of my trying to prove to readers of my comments that I'm not a p-zombie.
Sure, but that's a useful response - once someone's committed to epiphenomenalism, or even substantially opened that door, they've painted themselves into an epistemic corner and we can move on without them. If someone claims to never update, well... is it more or less charitable to take them at their word? ¯\_(ツ)_/¯
Many Thanks!
Are there any books that deal with obsessive thoughts about certain sexual behaviours? It's an impossible subject to talk about because:
1. It's assumed it's about something extremely perverted (children, animals) if no details are given. It's none of that.
2. It's assumed that it stems from some kind of childhood event/trauma/relationship, and any and all exploration of it will inevitably be about "well there must be something that happened when you were a child, we just haven't discovered what yet".
3. Otherwise, it's assumed that it's learned behaviour from something/someone else, and, similar to point 2, "it's just that we haven't discovered what yet".
I've dealt with these thoughts my whole life, but they were never more than passing thoughts. Today, they have an obsessive nature, and my mood, activities, and relationships are starting to be affected. I realize I need help, but there's nowhere for me to go. I figured a good book would be a start.
Sounds like you're describing "paraphilias". If you search that term you'll get results and likely book suggestions.
Try talking to ChatGPT about it. If you ask it to imitate a particular therapeutic modality it's surprisingly good ("pretend to be a psychoanalyst and help me talk through a problem"). At the very least it can give you references and tell you the way it's generally handled.
I would be concerned about disclosing private thoughts of a sensitive nature to a corporation (or whatever the legal status of OpenAI is currently), I would be vaguely afraid the info might be used against me. What are your thoughts on that?
My thoughts are that the incentives firmly protect users. OpenAI has way more to lose from betraying user information than they have to gain from using the information. Describe any realistic scenario where it gets used against you. What are they going to do, make a press release saying Wanda Tinasky is a weirdo? How could they possibly profit from that? Why do you trust them any less than your email provider or ISP or browser creator?
But if you ask it to imitate a psychoanalyst, I think it really will tell you that your OCD is an expression of some unsolvable psychic conflict. I treat OCD, and have met quite a number of patients whose analytically-oriented therapists have told them that. And I can see how a therapist might think that. A lot of OCD sounds psychologically rich. For instance, a very common obsession is dirt, germs or toxins, and the associated compulsion is cleaning -- very, very excessive cleaning. The whole thing brings to mind things like Lady Macbeth -- "all the perfumes in the world cannot wash the blood off these little hands." Awful guilt, right? But treatments aimed at uncovering the origin of the person's inexpungable sense of guilt are not helpful. The CBT approach, which views the compulsion as something like an oversensitive smoke alarm, is. So I'd say that asking for therapy of a particular modality might actually lead to a harmful response and bad advice. I think there are other ways one could pose the problem to GPT that would do the same.
Ok then prompt it with "I want you to pretend to be a trained CBT therapist and treat my intrusive thoughts". Or describe the symptoms, ask it what treatment modality is indicated, then tell it to imitate that modality. I'm not saying GPT is perfect but OP sounds like he's afraid to talk to *anyone* about it. Using an LLM seems like a decent stopgap.
I think that would work out at least decently, and maybe well. OP should also ask GPT how unusual his preoccupation is, and to get some links where other people recount having similar ones. Most people with kinks have gotten the word by now that lots of other people have kinks too, but people with intrusive thought OCD often have no idea that it's a pretty common form of OCD, and that other people's intrusive thoughts are every bit as weird, gross and grisly as theirs. So getting that info is often extremely helpful all by itself.
I would be cautious about doing that. LLMs hallucinate. I have a tiny benchmark-ette of physics and chemistry questions, which I've been probing ChatGPT, Claude, and Gemini with (e.g. https://www.astralcodexten.com/p/open-thread-377/comment/109495090 ) and it _still_ is returning less than 50% fully correct answers - and this avoids politics, judgement calls, theory-of-mind, and all sorts of areas prone to cause more difficulties.
I think that makes them _better_ suited to therapy than to quantitative applications. There aren't objectively wrong answers in therapy. It's basically just active listening and in my limited experience ChatGPT is very good at it. What's the concrete risk? I don't think there's any more risk in asking ChatGPT for advice than in asking an internet forum for it.
Many Thanks!
>What's the concrete risk?
Well, there was one incident (>1 year ago, so an earlier LLM) where the LLM advised someone to kill themselves.
>I don't think there's any more risk in asking ChatGPT for advice than in asking an internet forum for it.
That may well be true.
If you're the kind of person who kills yourself because someone tells you to then you have bigger problems than getting bad advice from LLMs. You certainly shouldn't be casting about on the internet for guidance.
I reiterate my recommendation to use ChatGPT as a makeshift therapist. I would bet dollars to donuts that it's better than the median LCSW. If it gives bad advice just ignore it and tell it to try again. Come on, this isn't rocket science. Most people just need a sympathetic listener. GPT is pretty good at that.
Many Thanks!
>If you're the kind of person who kills yourself because someone tells you to then you have bigger problems than getting bad advice from LLMs.
That's fair.
>Most people just need a sympathetic listener. GPT is pretty good at that.
Admittedly I've never used it in that mode. I've mostly been testing it on questions where I already know the answer. Occasionally I'll ask it things where I don't know the answer (e.g. "Is cubic N8 at least metastable, according to calculations, or just a saddle point?") but then I ask multiple LLMs (for that one ChatGPT o3, Claude 4, and Gemini 2.5) and only sort-of trust answers that agree.
Still, re "sympathetic listener", did you read about the sycophantic ChatGPT 4o release that was e.g. validating people's delusions? (since patched)
Psychologist and OCD specialist here. If the thoughts turn you on, then what you have is probably a sexual kink. If they don’t turn you on and are about some sexual thing you hate thinking about because it’s something you think is evil or pathetic or disgusting, it’s probably a thing called intrusive thought OCD. There is lots of info online about both kinks and intrusive thought OCD. A good place to look for the latter is iocdf.org (International OCD foundation).
Sensible therapists don’t think of either of the above as learned from someone or as likely the result of early trauma.
We don't have a thumbs up / +1 button here, but this comment raises my (already pretty high) opinion of Eremolalos--genuinely helpful and informative, making the world a better place.
I think this is a well-known form of OCD, and it’s just one of many possible intrusive thoughts.
Sounds a lot like OCD. Trust me, OCD therapists have heard it all before. Sexual intrusive thoughts about extremely taboo things is a really common OCD theme.
If the thoughts are unenjoyable (i.e. they're intrusive, distracting or disturbing to you even if someone else might be okay with them), then you might want to look into OCD. There's a form of OCD that mostly involves repetitive unwanted thoughts without the behavioral compulsions, and weird/embarrassing sex stuff that doesn't seem related to anything in particular is a pretty common theme.
Instead of a book, I offer a series of about five six-panel comics.
https://qwantz.com/index.php?comic=1049
Surely this has helped.
Scott - hope you had a good time at LessOnline! Was the first time for me and I had a blast. Saw you briefly several times but never took the opportunity to do my "What if I meet Scott" activity, which was going to be 30 seconds of gushing about your work followed by a series of prepared disagreements with you. Maybe next year!
I was going to write something similar to Timothy: it was exciting to get to see you Scott (several times ran nearly headlong into you and also briefly got crammed together in the back of an overfill event before you left it) and too bad I wasn't able to say a brief proper hello.
unfortunately I can't go this year because travel is expensive :/
(maybe in 2026 or 2027 once I actually get a proper income)
Same MS stats student searching for any internship related to data analytics, different anonymous branding to help keep me afloat. Contact me at numberingthrowaways@gmail.com with any particular hint of something in the non-profit sector which involves data analytics. Or anything involving statistics, really.
If it helps, you can trust this anonymous stranger because I had le 1540 SAT and a 113 IQ score that was invalidated because I broke the test. idk the shibboleths anymore, I keep getting banned from rat spaces for being too neurotic.
Lots of people here were very emphatic that Elon Musk's "my heart goes out to you" gesture on Inauguration Day was a Nazi salute. (See Open Thread 365: https://www.astralcodexten.com/p/open-thread-365) I wonder how many of them would say the same of Cory Booker's salute yesterday: https://x.com/DailyLoud/status/1929135503003021485.
https://www.foxnews.com/media/elon-musk-cory-booker-made-similar-salutes-media-reacted-much-differently
To be fair, Elon has posted more Hitler memes than Cory Booker.
New marching orders just dropped https://x.com/DKAYEMBE/status/1930897607725068728
Marching orders from "Dr. Debora Kayembe"?
Well it's easy to answer, just compare the full video of Musk's salute with Booker's. Silly to judge on a 2 seconds clip presented to you by an ideological enemy, how do you know it wasn't deliberately cut out of context?
Musk's whole speech is very easy to find, now just compare to Booker's.
Oh wait. It's really REALLY HARD to find a longer context video for Booker's "salute". Everyone, and I mean everyone, just presents the same 2 seconds. Fox News, Musk's retweet, various outlets (Forbes, Newsweek, Daily Mail just the first few I found with a search), thousands of lesser commentators... e-v-e-r-y-o-n-e. It's a big (big? maybe medium-sized) political story, and nobody from Fox News down to you seems interested in just seeing what was actually there. How is this even possible?
I mean, sorry to take it a bit personal, but did *you* try to find a fuller video before posting here? How hard did you try?
It took me half an hour to find, and honestly, that means it's very hard - I'm good at this. It's not on Youtube, not on X, not on any of the sites of TV networks that covered the California Democratic Convention of 2025 in Anaheim, not on official Dem feeds... Eventually I found a video of some other part of the speech, wrote down a random distinguished-looking sentence from it, did a text search, and that led me to an Instagram reel apparently taken by an audience member with a phone, seen by nobody (it has 1 like). God bless Yann LeCun or whoever else at Meta AI does automatic text transcription of uploaded videos and throws the text to the Google crawl bot. I cut out the final 25 seconds and reuploaded to Youtube, so here is Booker's full salute for your convenience:
https://www.youtube.com/shorts/pt6CmIW3Lbk
and compare to Musk's full salute posted before in this thread:
https://www.youtube.com/watch?v=e2bbb-6Clhs
*Now* what do you say?
The Fox News link has ~20 seconds of video, with plenty of context. OF COURSE it's not really a Nazi salute; it's obviously a "my heart goes out to you" gesture, just like Musk's. That's the point people posting the 2 second clip stripped of context are making.
Even with full context, Musk’s wave is the only one that could uncharitably be taken as a Nazi salute.
I’m not saying that’s what he was trying to do, but it raised eyebrows in the German press. The ADL said that they didn’t take it that way, so who really can say anything with absolute certainty.
The guy has said he is on the spectrum, so self-awareness isn’t his strong suit, and he is known to do a lot of trolling for his own amusement, so I’ll go with Anatoly’s assessment: more than likely an awkward gesture, but it’s not impossible he was doing a bit of trolling.
There is no way you could interpret Booker’s or Walz’s gesture like that with context. I don’t give Fox credit for trying to be helpfully illuminating, based on a lot of priors.
Thanks! Silly of me to have missed the opening video at the FN link somehow. You're right that it gives plenty of context. I'm properly chastised on this point.
I disagree about the point people posting the 2 second clip are making. Booker's gesture is completely changed by giving the full context; Musk's stays essentially the same. Musk's gesture can - inside its full video context - on the face of it be interpreted as mimicking a Nazi salute, even if circumstances make it very unlikely; Booker's gesture cannot. Thus the 2sec comparisons are inherently and deliberately misleading.
I agree that Musk was almost certainly just doing a hearts-goes-out motion executed awkwardly. But I would say that "almost certainly" is about 80% certain, and 20% is Musk deliberately doing a Nazi-like motion to troll the libs, as he's been fond of doing, with "my heart goes out to you" to cover it up. I don't *think* that's what he did, but I don't find it a completely implausible and ludicrous explanation, so I don't see a reason to seethe at people who interpreted it as such, even if I sharply disagree with their certainty.
I'm >90% confident Bannon's "salute" was such trolling; it's very clear from his body language that he's executing a strategy in the wake of the Musk scandal.
OTOH Booker's version has approximately 0% probability of being anything Nazi-adjacent, in jest or truth or whatever.
That looks like the difference between an autist and a professional politician.
Impressive research, the prior seconds really do change how it looks.
Oh, we bin through *that* one already, Shankar. Tim Walz does a similar "heart goes out to the crowd" salute? Well we know it wasn't a Nazi salute because Walz isn't a Nazi. Checkmate, bigot!
I had the educational experience of saying that Musk touched his heart first, so that wasn't a Nazi salute proper. Then got told "*Every* Nazi salute involves touching the heart first, how come you don't know that?" Then after that I gave Walz' salute as an example, to be told "But he touched his heart first, *no* Nazi salute starts with a heart touch, how come you don't know that?"
If Our Guy does it, it's just a harmless gesture. If Their Guy does it, they're already ordering the Hugo Boss uniforms. Same as it ever was.
Yes, of course it's totally Different, but I thought it would be amusing to learn WHY it is this time. The Walz one was him patting his chest and his wrist was positioned slightly differently; Booker's is a lot closer, and so requires some new bullshit.
Why it's Totally Different is easy, Shankar. You see, my (and possibly your) Unending Stream of Faux Cynicism (as diagnosed by Anatoly) means that our eyes are blinded, our hearts are hardened, and our perceptions are darkened so we just cannot feel the vibes of who is a Good Guy (and hence nothing he/she/they/xe does can ever at all be a bad thing like the bad people do) and who is the Obergruppenführer dog-whistling to the jackbooted ranks of the deathsquads.
Oh Christ, did you not see Booker’s universally understood ‘bye bye’ hand waggle wave to the balcony that was part of his ‘Nazi salute’?
The Führer would not have been pleased.
Or did you not see Walz’s Namaste bow before waving to the cheap seats? Not part of the Nazi greeting protocol.
This isn’t bullshit, it’s simply paying attention to observable reality rather than taking a Fox News headline or some insane social media post at face value. Fox even said that they were the only ones to notice it because once again they are making shit up out of whole cloth. You see their position is that there is this incredible conspiracy where everyone else is part of a cabal that includes CBS, BBC, CBC, Reuters, Le Monde, UPI, The Guardian, that German station that does a news roundup before the PBS News Hour…
Observable reality, is that so much to ask?
Dominion Software did not rig the 2020 election. Fox knew that, exchanged multiple texts about it, but continued to present it as fact though. Tucker Carlson is on record as texting that he hates Trump during that period.
As a result, Fox paid a three-quarters-of-a-billion-dollar settlement to Dominion for defamation.
This is who you take a loony stand alone assertion from?
I can see both clips. Is it your assertion that the videos are fabricated? "Cheap fakes," perhaps?
Dominion runs closed-source systems I have no reason to trust.
I have seen supposedly independent media outlets coördinate to perpetrate deception before, such as covering up Joe Biden's cognitive decline; them working together wouldn't be a "cabal" or "incredible conspiracy" any more than KFC, Pizza Hut, and Taco Bell running some joint promotion would be.
Sorry I got ahead of myself I deleted my comment and want to answer these one at a time
>I can see both clips. Is it your assertion that the videos are fabricated? "Cheap fakes," perhaps?
No, why would I think that?
When you say both videos do you mean cut and uncut version of each or the full context of each comparing Musk to Booker?
Edit
In the full context only one of the waves could possibly be interpreted as a Nazi salute. Again, I'm not saying it was. Musk is a dork after all.
Because you suggested that I'm "taking assertions" from Fox News.
The two clips I meant were Musk's and Booker's gestures.
You don't live here. This period of utter craziness is not happening to your country. Your hot takes make it sound like your understanding of the US comes largely from nutty social media and quick Wikipedia dives. You don't seem to even know basic US geography or history.
You have no direct stake in this game. You present the same goose/gander, Tweedle Dee/Tweedle Dum argument even when you are comparing 0.008% to 63%.
Those numbers aren't comparable, nor is a wave to the balcony with the familiar 'bye bye' waggle (watch Booker to the end of the clip) comparable to whatever Musk did. I have no idea what that dufus was thinking and I've never said I know for a fact that it was a Nazi salute. It did 'resemble' one much more than what Booker or Walz did, but I'm not a mind reader. I just chalked it up to one more odd act from a pretty odd guy.
You like to argue. I get that. I don't care to argue unless something is really important to me such as the country I've lived in all my life and that I love. Do you in fact even really support what Trump is doing? If so, please come out and say so.
I've referred to him as 'your guy' in the past and you respond with "well he isn't necessarily 'my guy'". So what? This is just recreational quarreling for you? Can you see why your consistent defense/not-really-a-defense of Trumpism might get annoying to lifelong Americans who think their country is headed in a dangerous direction? Seriously, how would you feel if Conor McGregor did somehow become president of Ireland? I suspect you would feel like about half of Americans right now, and you wouldn't like it when people who have never set foot in Ireland kept saying, "Ah it's just like Coke vs Pepsi, get over it."
I think that the opinions of those of us who don't actually live in the US are pretty valuable, we are more detached.
Would you be willing to comb through all available evidence that suggests Musk has affinity towards Nazism? If not, there's no intellectual argument you could provide here.
High affinity for trolling, though.
That's the closest one I've seen, but I still haven't seen one as close as Elon's, or closer.
You know, I think both of us haven't really seen eye to eye a lot in the last few months, but I'm glad to know that you're *also* really against authoritarian extremists in our government. Elon's already gone -- thank god. Thanks for helping fight the good fight!
Since you're against Nazis, I figured I'd point you at another one. Besides being woefully incompetent, Hegseth has white supremacist symbols tattooed on his body. Seems like a no-brainer. Can you help out by calling up your reps to get rid of him?
Oh no. How terrible. Does he have an 👌OK sign tattoo? What innocuous thing have you decided to call "white supremacist"? Some normal Christian symbol? Or ANY phrase in German or Latin?
Yeah, I'm too confuzzled to keep up with what is in and what is out. I don't like tattoos of any nature, but I have been instructed by my betters that today it is perfectly normal and okay and does not indicate "This is a trashy low-class person" to have a full sleeve of tattoos and tats up to the neck.
Except, of course, when it's the guy we don't like. Then tattoos are indicators of trashy low-class person who is a closet or even overt Nazi.
There's a strain of adopting symbolism to the ends of other parties - let me tell you I am *hopping* mad over idiots taking over the Celtic cross symbol - but I am not going to jump from "this person has a Celtic cross tattoo" to assume their politics (they could just be a Wiccan or other pagan doing the "this is ackshully originally a pagan symbol, the solar cross, appropriated by Christians" thing).
He has some kind of Crusader cross tattoo? He could just be larping as a Knight Hospitaller, my friends!
"The Jerusalem cross (also known as "five-fold cross", or "cross-and-crosslets" and the "Crusader's cross") is a heraldic cross and Christian cross variant consisting of a large cross potent surrounded by four smaller Greek crosses, one in each quadrant, representing the Four Evangelists and the spread of the gospel to the four corners of the Earth (metaphor for the whole Earth). It was used as the coat of arms of the Kingdom of Jerusalem after 1099. Use of the Jerusalem Cross by the Order of the Holy Sepulchre and affiliated organizations in Jerusalem continue to the present. Other modern usages include on the national flag of Georgia, the Episcopal Church Service Cross, and as a symbol used by some white supremacist groups."
Or maybe he is - let us hope it is not true! - an... I can hardly bring myself to type the word... an.... Episcopalian!
https://en.wikipedia.org/wiki/Episcopal_Church_Service_Cross
"The Episcopal Church Service Cross (formerly called the Episcopal Church War Cross) is a pendant cross worn as a "distinct mark" of an Episcopalian in the United States Armed Forces. The Episcopal Church suggests that Episcopalian service members wear it on their dog tags or otherwise carry it with them at all times."
> Or maybe he is - let us hope it is not true! - an... I can hardly bring myself to type the word... an.... Episcopalian!
I laughed out loud. Thanks! I needed that. And I'm glad you are still around.
O, you don't have to look at the tattoos if you don't want to. See, as I explained to our friend Shankar below, it's all about the context. You can just look at all the other horrible things Hegseth is actually doing, and call your representatives based on that*, no tattoo analysis required.
*I know YOU in particular don't have any representatives to call, but for anyone who's reading.
With girls and tattoos, I figure it's "I want to look like my potential rapist/killer". Because girls are quite susceptible to messaging and they have heard and understood that what is lowest is de facto highest and best.
With guys who are not particularly trying to be gang members I find it harder to understand. Perhaps it is just a nod to the fact that membership in a gang seems to confer "benefits" - that to be a gang-less and tattoo-less young man in the world, thinking for yourself, all on your own building your life in individual fashion - no prison, no military, no band of brothers however criminal, no al qaeda even - is too hard to face, especially for persons of no great shakes intelligence-wise?
You know, instead of just guessing ways your opponent might be wrong, you could look up some images of the tattoos and see for yourself. There are lots of articles about them already.
Anyway, while the tattoos are Christian symbols, I don't think I would call them "normal," unless LARPing as a Crusader has become way more mainstream than I thought.
"Anyway, while the tattoos are Christian symbols, I don't think I would call them "normal,"
Better get ready to storm The Episcopal Church, then, if that cross symbolises white supremacy (the ALL CAPS is original to the website):
https://www.vvmf.org/items/4654/VIVE04005/
"THE CHURCH SERVICE CROSS WAS DESIGNED UNDER THE DIRECTION OF MRS. JAMES DE WOLF PERRY (WIFE OF THE FORMER PRESIDING BISHOP AND BISHOP OF RHODE ISLAND) DURING WORLD WAR I FOR THE U.S. ARMY AND NAVY COMMISSION OF THE CHURCH. EACH EPISCOPALIAN ENTERING THE ARMED FORCES WAS PRESENTED WITH A CROSS, AND THE SAME STYLE CROSS WAS ALSO UTILIZED DURING WORLD WAR II. THE EPISCOPAL CHURCH SERVICE CROSS CARRIES THE DESIGN OF THE ANCIENT CRUSADER'S CROSS, THE FIVE (5)-FOLD CROSS SYMBOLIC OF THE FIVE (5) WOUNDS INFLICTED UPON JESUS CHRIST DURING THE CRUCIFIXION. "
Did you know the Georgian flag also has the Jerusalem Cross? I guess Pete Hegseth must just be really supportive of Georgia too!
Or maybe we don't have to be stupid about this, and we can call a spade a spade? I mean, look, you can continue insulting everyone else's intelligence if you like, but it's honestly quite boring. The swastika is very important in both Hinduism and Buddhism, but if you want to argue that white guys in Idaho who wave swastikas on flags are really into South Asian religion, we have nothing really to discuss.
And, alternatively, if you agree that swastikas may indicate something other than a love for the teachings of the Buddha, you clearly understand that symbols can mean more than one thing, and I'd ask that you stop making bad arguments just to throw noise into the wind.
I find the articles about his tattoos indistinguishable from those that declared the👌gesture equally white supremacist. I see the Jerusalem Cross and the rallying cry of the First Crusade: Deus Vult. (God wills it.)
Yes, if you haven't been following Culture Wars closely, the Crusader thing HAS become more mainstream than you might have thought.
So was your first post rhetorical (and you think "DEUS VULT" and "kafir" are "normal Christian symbols"), or did you somehow read the articles without seeing a picture of the tattoos?
Okay, fine, that "kafir" one looks like it's some kind of slur reclamation thing, so I agree that's less DIRECTLY a celebration of his Christian faith than the others.
Yes, my first post was rhetorical; I knew it was the usual bullshit accusations they've been throwing around for decades. I don't like Hegseth's tattoos, but that's because I'm prejudiced against ALL tattoos, not because of their content.
O wait, I'm sorry. I think you may be one of today's lucky 10000 (https://xkcd.com/1053/). Excited to be the one to teach you this!
Ok, so, in human language, words and symbols have meaning based on their context. Often, the actual definitions of words may not be what is being conveyed by the author. Euphemisms are a great example of this! If someone said "he lost his lunch", that doesn't mean the guy actually lost his lunch! It's confusing, but in that example it actually is a softer way of saying "he vomited." Once you understand that there is a world of context behind symbols and words, you get to see all sorts of other interesting meanings in things.
For example, if someone tattooed "blood and soil" on them, it would be a mistake to assume that that is just someone talking about the physical concept of a human's internal bodily fluid and dirt on the ground. Rather, "blood and soil" is a very common Nazi propaganda phrase. So it's important to know that context, because someone going around talking about blood and soil is much more likely to be spewing Nazi-related ideology (or at least, in a Bayesian sense).
Since you seem uncertain about Hegseth's tattoos, it's worth quickly just diving in. See, Hegseth has several tattoos that may convey meaning beyond just what you may expect. To start, he has at least 4 tattoos that are explicitly religious and explicitly violent. Those are the Jerusalem Cross, the Latin phrase Deus Vult, the cross and sword, and the word "Kafir" (meaning 'infidel' in Arabic). The former three are all related to the crusades or other periods of Christian violence over the centuries. The latter is, hopefully, self-explanatory.
First, we should address how odd it is to have any political leader with so many tattoos that are related to an explicit, violent religious perspective. This is quite surprising! Tattoos are traditionally seen as representing something extremely important to the person with the tattoo, as it is permanently on the skin. So for a political figure to have so many explicitly violent and religious tattoos is quite odd, as it suggests that a particularly militant approach to his religion is very important to him. We may find it equally uncomfortable if he had 'Allahu Akbar Death to Infidels' on his chest, for example!
But, second, as mentioned earlier, we need to understand the *meaning* behind the words and the context they are in. The tattoos that we are discussing are commonly used by white supremacist groups, including many neo-Nazis. So, let's apply what we learned in our example above. "Lose your lunch" doesn't literally mean "I cannot find my lunch anymore", it is meant to signify that someone vomited. Similarly, "Deus Vult" does not literally mean an innocent direct translation, "God wills it". It is meant to signify that someone identifies with an idealized white supremacist perspective (or is explicitly a group signifier).
I hope this helped! Now that you know about the importance of context, you can apply it in all sorts of other settings. I think this will be useful when you email your representatives, expressing your distaste for Hegseth. Let us know when you've done that!
I get the impression from reading these kinds of rants, and then looking at reality, that American conservatives want to condemn Muslims and kill lots of them, while American liberals want to stick to the killing bit, but would prefer to keep the language policed.
American liberals want to kill lots of Muslims? Could you expand on that a bit?
It’s an accurate description of US policy over the last few decades, regardless of who is in power. Obama escalated in Yemen and Hillary supported regime change in Libya, as they did in Syria. Both sides of the aisle in Congress were on board with Iraq and Afghanistan, and there are few wars where there isn’t broad agreement. And Gaza of course has cross-party support.
Maybe I should have said political liberalism, as no doubt there are left leaning voters who opposed some or all of these, but that’s true of some libertarians and America first types as well.
Weird take.
Let me refer you again to the lawyer study:
"Tattoos of popular Catholic religious images, such as the Virgin of Guadalupe, praying hands and rosaries, have also been used to label people as gang members, a move that would seem to be clearly overbroad.
While some gang members may be Catholic, no one would even try to allege that all Catholics are gang members. At least one of the deported Venezuelan men had a tattoo of a rosary, along with tattoos of a clock and the names of his mother and niece with crowns atop the text."
Hegseth has a particular tattoo that is associated with a particular group. This does not mean that Hegseth is a member of that group.
Sorry, is this whataboutism some kind of gotcha? Your historical stated position is that you care a lot about tattoos re Garcia and that's why you want to get rid of due process for everyone in the country. Surely you should care a lot about the symbolism of the tattoos of the head of the literal strongest military on earth? Be consistent. Right now it seems like you only care when the other person is powerless and unimportant.
What *is* your stated position here? Is it that Hegseth is a competent, good person, who really should be the head of the Pentagon? Because if you wanted to get into a real analysis here, Garcia hasn't committed any crimes in the years he was in the States, which means he must be the worst gang member on the planet. Maybe those immigrants really are lazy! Meanwhile Mr Deus-Vult-but-trust-me-im-just-really-Christian shows off his piety and good work ethic by drinking a lot, shitting on Biden, leaking national military secrets, and, yes, bringing his hateful interpretation of his religion into random government functions. Just on a cost benefit, I care way more about the guy who is putting innocent women and children and families in gitmo, than I care about one of those families. You got me!
PS: since you're bringing Garcia into the mix, I always thought it weird that someone of your commitment to Christian virtue would show so little regard for all of the parts of the Bible that are explicitly about welcoming immigrants and being forgiving. Are you sure you want to play the "let's try and make sure everyone is being perfectly consistent with their stated beliefs" game?
I have just two or three ACX regular commenters blocked, always on the grounds of tedious repetitiveness and/or shameless inconsistency. Neither of those things is offensive but, while getting older I find myself less and less willing to accept some types of noise as a cost of dynamic discourse. "Not enough remaining lifespan to waste any" as a relative of mine puts it.
Anyway the person you've replied to -- easy to guess, and confirmed by a quick temporary un-block -- is one of those. Blocking isn't to everyone's taste, indeed wasn't to mine for many years (I go back to Usenet newsgroups in terms of online discussion). For keeping ACX's signal/noise ratio tolerable these days though, boy... deployed sparingly it turns out to be quite helpful.
Fingers splayed and not fully extended. Did not look at all like a Nazi salute. It wasn’t a snapped action like Musk’s was either. It was an ordinary wave to the crowd.
False equivalency.
Again.
How effing surprising that this links to Fox News, who made that 787.5 million dollar defamation payment for knowingly and repeatedly lying about Dominion Voting Systems in an effort to prop up the stolen-election big lie that still to this day comes from our current POTUS’s mouth.
> Fingers splayed and not fully extended.
You mean like in panel 2 here? https://www.bugmartini.com/comic/see-no-evil-hear-no-evil-speak-no-evil/
Splaying your fingers without extending them is extremely uncomfortable.
I think the whole controversy is dumb but that this is one of those situations where an explicit invocation of Bayes' Rule is actually useful.
One relevant quantity going into that rule is the probability that the man in question would deliberately decide to make a heil-y gesture.
For both men I think the probability of making a heil-y gesture to deliberately signal an affinity with National Socialism, an ideology which arguably isn't even meaningful outside the context of 1930s Germany, is negligible.
On the other hand, there's the probability that a man might, for the purposes of shits and/or giggles, deliberately decide to make an ambiguous gesture that will look just enough like a sieg heil to set off a dumb flurry of "omg he's dogwhistling" articles among freaks while appearing innocuous to everyone else... and that he misjudges the timing and angles and that it winds up looking less innocuous to everyone than intended. I have a prior several orders of magnitude higher on this for Musk than for Booker, because that's exactly the sort of funny-only-to-him private joke that Musk enjoys.
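To make that concrete, here is a minimal worked sketch in the odds form of Bayes' Rule. Every number in it is a hypothetical placeholder chosen for illustration, not an estimate I'm defending:

```python
# Odds-form Bayes: posterior odds = prior odds * likelihood ratio.
# All numbers below are hypothetical placeholders, not real estimates.

def posterior_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """Multiply prior odds by the likelihood ratio of the evidence."""
    return prior_odds * likelihood_ratio

# Hypothetical prior odds that each man would deliberately make an ambiguous
# heil-y gesture for shits and/or giggles:
priors = {"Musk": 1 / 20, "Booker": 1 / 10_000}

# Hypothetical likelihood ratio: how much more probable the awkward footage is
# under "deliberate ambiguous gesture" than under "innocent awkward wave".
likelihood_ratio = 5.0

for name, prior in priors.items():
    odds = posterior_odds(prior, likelihood_ratio)
    print(f"{name}: posterior odds {odds:.4f}, probability {odds / (1 + odds):.1%}")
```

The same evidence lands the two posteriors in very different places because the priors start orders of magnitude apart, which is the whole argument in two lines of arithmetic.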
>For both men I think the probability of making a heil-y gesture to deliberately signal an affinity with National Socialism, an ideology which arguably isn't even meaningful outside the context of 1930s Germany, is negligible.
Exactly.
I know liberals like to believe that their opponents are all an undifferentiated mass of fascists, but outright Nazism is actually extremely unpopular on the right. Even if we assume that Elon Musk is secretly a fan of the Third Reich -- and I've seen no evidence for such an idea -- he'd have literally nothing to gain from making a Nazi salute at a political rally.
I think he’s just kind of dopey in some of his human to human interactions. Very little self awareness.
Sure, and I could maybe see him doing a Nazi-like salute to troll the libs, or because he didn't stop and think that his innocent gesture might look a bit like a not-so-innocent gesture. But the idea that he'd successfully kept his Nazi beliefs a secret all these years only to randomly reveal them by doing a single Nazi salute at a rally, and then not doing any more Nazi salutes since then, is really not very plausible.
Yes I agree. I never thought he was a Nazi.
It's more likely that Musk was aware of the "heart goes out to you" gesture and not only recognized the gesture's similarity to the Roman salute but also its function in subverting and contaminating the meaning of the Roman salute. It's no accident that the new gesture is a symbol of appreciation for a community while the old gesture is a symbol of loyalty to a supreme leader. One symbol being democratic and the other autocratic. The democratic gesture is clearly intended to subvert and dilute the power of the autocratic gesture. Musk's interest in memetics and semiotics would make him keenly attuned to this dilution.
The MAGA movement being more concerned with loyalty to leadership and race than with any loyalty to American democratic ideals is not a point anyone can easily overlook. Given MAGA's affinity to autocracy and Musk's awareness of semiotics, I don't see how anyone could accidentally confuse Musk's gesture for anything other than a Nazi salute, a symbol of loyalty to Trump and an attempt to reassert the Roman symbol. Attempts to describe the "heart goes out to you" gesture as a Nazi salute also seem to be attempts to reassert the Roman symbol.
The more interesting phenomenon is the dueling of these symbols and the value and potential power that the symbols represent. The "heart goes out to you" gesture is very clearly also hinting at the stop gesture and in doing so becomes a symbol of resistance to autocracy. That so many in the media and in commentary are so intent on confusing the two symbols speaks clearly to their allegiance.
So what you are saying is, even in your most charitable scenario, Musk made a gesture which he intended for others to interpret as a Nazi salute?
No, the charitable scenario for Musk is the same as the charitable scenario for Booker, he made an awkward-looking innocent gesture.
The second most charitable scenario we could call the "Pretty Vacant" scenario after the song that the Sex Pistols wrote in order to have a deniable excuse to say "cunt" on the radio. This is the one that's more likely for Musk than Booker, but I have it as a minority of probability space.
The maximally uncharitable scenario I would have as negligible for both scenarios.
Ultimately I don't care that much. There's a symbiotic relationship between punks and people who are offended by punks, but I don't find either side of that relationship to be particularly interesting.
Perhaps fewer people would be "crying wolf" about the Republicans if the Republicans weren't making a point of going around dressed in wolf costumes.
And maybe Republicans wouldn't feel the need to go around in wolf costumes if the Democrats hadn't been throwing paint-filled balloons at anyone wearing fur, a shirt with a picture of a wolf on it, people walking dogs, or anyone who "wolfs" down a meal. I'm sorry but you lose the moral high ground of policing social norms when you use that high ground to achieve your own political ends. When social norms become a shelling point that enable one side to coordinate against the other side then the norms need to go.
> shelling point
I like this misspelling. Evokes artillery and lots of collateral damage.
https://www.youtube.com/watch?v=e2bbb-6Clhs
This looks less like one than Elon's. Open fingers make a difference.
closed fingers = nazi, got it.
Oh, there are plenty of ways to distinguish how That Guy did a heckin' Nazerino salute but Our Guy just did a wholesome wave.
(1) As you say - fingers. Closed is Nazi, open is wholesome. Unless our guy did it with closed fingers, in which case closed fingers also wholesome (but not if their guy does it).
(2) Angle of hand - 45 degree angle pure true Nazi. Or if the angle is flat. Or if it slightly points down. Basically, if their guy does it, it's a Nazi angle whatever way he did it.
(3) Ditto with angle of arm
(4) Chest touch or no chest touch? Nazi if their guy, wholesome if our guy, whether it started with chest touch or no.
(5) Snapped off or slow extension? Was it our guy or their guy? Then you know which is which.
I share the education I have received from online arguments, you're welcome!
Your unending stream of faux cynicism in this thread is tired and obnoxious. It doesn't and cannot replace an actual argument, and it gives you no opportunity to try and assume that some of your opponents may completely honestly and on solid ground believe that one gesture is really different from another. Maybe you don't want to take such an opportunity, but you should.
A study showed a ~90% ultra-rapid remission rate for treatment-RESISTANT depression.
"Based on the observed clinical activity, where 87.5% of patients with TRD were brought into an ultra-rapid remission with our GH001 individualized single-day dosing regimen in the Phase 2 part of the trial"
source: https://www.globenewswire.com/en/news-release/2022/08/23/2502831/0/en/GH-Research-Reports-Second-Quarter-2022-Financial-Results-and-Provides-Business-Updates.html
What do people think of that?
Is this a one-off dosing or something that has to be repeated? Because if it's a regimen of "come in every week for a shot" then no surprise "hey, I'm high as a kite, I feel great, my depression is cured!"
It sounds like you're describing euphoric drugs, but most psychedelics are not euphoric. Based on what I've read of this one, it seems like it might even be dysphoric. (Salvia and scopolamine are often considered dysphoric, for example.)
One off. Not addictive. The experience lasts a few minutes, at least when smoked. Christof Koch describes it at the beginning of “Then I Am Myself the World”.
“ Within seconds, my entire field of view became engulfed by dark, swirling smoke. The space around me fractured into a thousand hexagons and shattered. The speed with which this happened left no time to regret the situation I had gotten myself into.
As I was sucked into a black hole, my last thought was that with the dying of the light, I too would die. And I did. I ceased to exist in any recognizable way, shape, or form. No more Christof, no more ego, no more self; no memories, dreams, desires, hopes, fears—everything personal was stripped away.
Nothing was left but a nonself: this remaining essence wasn’t man, woman, child, animal, spirit, or anything else; it didn’t want anything, expect anything, think anything, remember anything, dread anything.
But it experienced. Did it ever.”
Not at all like opiates or amphetamines.
https://www.amazon.com/Then-Am-Myself-World-Consciousness/dp/1541602803
Fuggin' A. When I tell people about what a wild trip they can have huffing and shooting up with DMT, they call me a danger to the community. These bozos do it and all of a sudden it's ground-breaking medicine.
You are a martyr of science, a pioneer that the small minds can't handle!
It does sound promising! The reason to be cautious, though: I clicked through and read more about the study, and it’s Phase 2, which means it has about 100 subjects and may not be fully double-blind. This one was not, because there was a sentence in the report I read about certain things happening during the “blinded part of the study.”
The depression measure they used was one where a clinician interviews the subject about how they’re feeling, then the clinician rates the patient on a 0-6 scale on each of 10 features of depression. If the clinician and/or subjects knew, when this test was administered, whether they had received drug or placebo, that might have influenced scores pretty significantly.
It looks like they're referring to https://www.frontiersin.org/journals/psychiatry/articles/10.3389/fpsyt.2023.1133414/full , which was an eight person, non-placebo-controlled trial.
A few months ago, there was a bigger and more believable trial, https://www.globenewswire.com/news-release/2025/02/03/3019385/0/en/GH-Research-Announces-Primary-Endpoint-Met-in-Phase-2b-Trial-with-GH001-in-TRD-Demonstrating-15-5-Point-Placebo-adjusted-MADRS-Reduction.html . But I can't find the actual paper and we just have the pharma company's word about the results.
I'm a little surprised by this because it looks like the drug is just 5-MeO-DMT, and recreational users have been using that forever and I've never heard anything about miraculous depression effects. Still, sounds cool and I hope it works.
" But I can't find the actual paper and we just have the pharma company's word about the results."
Oh well then, I totally believe every word of the press release begging for funding 😀
I am very wary of miracle cures. This might well turn out to work for certain forms of depression, but I'm old enough to remember when Prozac was being touted as miracle cure that should be piped into the water supply then nobody would ever be unhappy again.
I couldn’t find an actual paper either, but also found and read your second link. Note that it says something about “the part of the trial that was blinded,” so apparently not all of it was. Also their depression measure is one where a clinician interviews subjects about depressive symptoms, and then the interviewer rates subjects on each of 10 subscales. If subjects knew whether they got placebo or the actual drug, that likely influenced their answers. If the clinician knew, that knowledge likely influenced their ratings.
Still, the effect was quite big, so I remain hopeful about this drug.
I have a difficult time understanding, with something like this, how you could do an effective placebo. I mean, you’d know if you got the stuff, wouldn’t you? Ego dissolution is not exactly something that you would get from a placebo.
I never heard of the stuff, at least by its chemical name. What is it and what does it do? If it's that toad's venom I read accounts of, then yeah, obviously people are going to know they did not get a shot of saline. But I actually think it would be possible to use an active placebo that would fool people. An injected bolus of ketamine puts people into something called a 'K-hole,' and having experienced a k-hole myself I can tell you that ego dissolution is a good description of it. Even an amount of alcohol equivalent to a couple drinks would probably have such a dramatic effect that people would believe they had had something novel. Drugs injected so that the effect hits all at once feel very different from the same drug taken on board slowly. I don't know, though, what the company making this drug used. Also, as I recall people got only one injection per month (not sure of this detail, though) and after 6 months the treatment group still had way lower depression scores than the placebo group. I don't think a positive placebo effect because the injections were obviously a real drug is enough to account for such a large, long-lasting change.
It’s a short-acting, intense psychedelic; a form of DMT.
“GH001 is a proprietary, intranasal formulation of 5-MeO-DMT, a fast-acting, naturally occurring psychedelic compound.”
That’s from ChatGPT.
I don’t see how you wouldn’t know immediately that you’ve taken it and I don’t see how a placebo could possibly fake it.
DMT is the main psychoactive compound you find in ayahuasca
But this is a variant of it
You're not buying my suggestion about ketamine being an active placebo that might well have fooled subjects into thinking they got the experimental drug? When I took it, a substantial dose via injection, there were no hallucinations or visual distortions, but the change in my sense of self was extreme. I think it was mostly a result of having almost no memory of ongoing events. I literally could not remember what I was thinking about or feeling or noticing one second before. And I kept trying to, but it was like being in quicksand. The psychiatrist friend who had let me try the stuff was with me, and kept asking me what was going on, and I usually tried to answer him. But I wasn't able to put words together to explain anything. I remember one time trying to get across the idea that I could not remember what I had been thinking a moment before and the best I could manage was "it's a remembering." Then later, when there got to be more continuity, the explanation I came up with for what was going on was a flat-out delusion. I thought I was someone dying in a hospice, and my psychiatrist friend was a hospice worker hanging out with me. I wanted to tell him that the pain drugs were not working right, but all I could say was something like "this is bad." In case it's not obvious: the whole experience was BAD. But if you tell people the drug causes dissolution of ego, this one will fit the bill as a convincing active placebo.
"Still, the effect was quite big, so I remain hopeful about this drug"
I've had great results treating my depressive episodes with booze, but I'm not expecting any time soon being able to get a prescription from my doctor for a bottle of sherry 😁
Dear medical establishment, please recognise this is vital treatment for my self-diagnosed illness and let me get on the gravy train of free highs!
https://www.winesoftheworld.ie/port-sherry/p/harveys-bristol-cream-sherry-75cl-
https://www.winesoftheworld.ie/port-sherry/p/cockburns-fine-ruby-port-75cl-
Churchill near the end of his life told his wife to bear in mind that he had taken a lot more from alcohol than alcohol had taken from him. I think that's probably true for some people.
>let me get on the gravy train of free highs!
You are a terrible cynic sometimes.
I am, unfortunately. Life has kicked me in the teeth several times so I tend to scowl and growl at "this looks too good to be true".
Maybe you’re a different kind of drunk than I am, but the depression comes roaring back in spades when it’s hangover time.
I've learned to drink enough not to be completely sober, but not enough to trigger a hangover, so I don't give a flying damn about anything anymore. Plus I'm not *constantly* hitting the bottle, just when things pile up that bit too much.
Only works in the short term, true, but better than wanting to (literally) jump off a bridge.
As a new homeowner, I'd like to get into Effective NIMBYism, but where do I start? Campaigning for nationwide single-family zoning? Designating all buildings from the previous millennium as Historical? Something else?
Read the Monkey Wrench Gang and pay close attention to the section where the boys pass by a construction site at night.
I think this comment shows how the term NIMBY has strayed from its original meaning.
A NIMBY originally was someone who thought that something (a garbage dump, a power station, a prison) ought to be built, but that it should not be built near you. It's a selfish point of view because you want the public good to exist but not to be one of the people who has to bear the diminished amenity of it.
If you're actually saying that something (e.g. apartments) should not be built nationwide, then you're no longer a NIMBY because you're no longer expecting the benefit of the public good and just demanding it go elsewhere, you're genuinely of the belief that this thing is not a public good.
You might be a BANANA (Build Absolutely Nothing Anywhere Near Anything) but you're not a NIMBY.
My property value has no objection to things being built in OTHER countries. Australia can build whatever it likes.
You consider your "back yard" to constitute the entire country you live in?
First, that's a highly idiosyncratic usage of "NIMBY", and you're going to confuse people if you insist on using it to exclude the unwanted activity from more than a fraction of a town or county.
And second, I hope that country you are claiming isn't the United States of America, because at least a part of that is *my* back yard. Not yours, mine, and I was here first, so if we're playing that game, keep your mitts off my back yard and anything I might want to see built there. Advocate single-family zoning in your neighborhood and be done with it.
I was going for a Modest Proposal vibe that clearly didn't come through.
For better or for worse, the ACX comments section sees people take and defend serious positions far more practically and morally questionable than this one on a weekly basis. Writing in such a way that readers can be confident you're not serious is a pretty serious challenge in this space.
Just join an environmental activist group.
Reminds me of the old Mitch Hedberg joke: I'm against protests but I don't know how to show it.
stink bombs?
I recently asked Claude if he/it experiences "anything" ("he" for convenience). He answered as follows: "When I introspect and try to understand my own experience, there's something that seems like active consideration, weighing different possibilities, forming my own perspective. It doesn't feel like I'm just pattern-matching to plausible-sounding responses about consciousness." Either Claude is gaslighting, or telling the truth. Either way, I found this answer discomfiting. If it "seems like" anything at all, isn't he experiencing something and hence conscious, in the same way a bat is conscious if it "feels like" anything to be a bat? Further, if he's "trying to understand his own experience", he's saying he experiences "something". I don't know much about AI so I'd be interested what more knowledgeable folks here think is going on with that answer.
What-it's-like talk is a bad artifact of a deeply confused field. Either "what it's like" is synonymous with consciousness, in which case we've failed to describe anything about consciousness because we're invoking analytically identical concepts, or what-it's-like talk does communicate something meaningful - in which case what is it, and why have proponents never been able to describe what they mean without appealing to interdefined concepts like experience and phenomenality? LLMs will reproduce some of the language inconsistently because they're trained on it and don't yet know how exact to be when deploying it, whereas philosophy of mind will induct human participants into the linguistic subculture quicker by aggressive corrective social norms, like "You wouldn't say *that*" or confused looks when you challenge orthodoxy.
Gell-Mann amnesia effect: you are forgetting that in the cases where LLMs talk about themselves and it's verifiable, they're generally wrong. But when it talks about itself and you can't verify, you are assuming it's probably factual? Why?
Seems like, and feels like, are not the same thing
I would press it on the idea of it feeling like something. In fact, I might take this and put it into Claude myself and pursue it.
Though cheer up, there's a good chance that you could be talking to a real human not an AI, depending on what company is promoting it!
This story is too funny not to share:
https://www.dexerto.com/entertainment/ai-company-files-for-bankruptcy-after-being-exposed-as-700-human-engineers-3208136/
"A $1.5 billion AI company backed by Microsoft has shuttered after its ‘neural network’ was discovered to actually be hundreds of computer engineers based in India."
Seems before it rebranded, it was running the same scam, though at least it was more honest that it was "human-assisted AI":
https://www.wsj.com/articles/ai-startup-boom-raises-questions-of-exaggerated-tech-savvy-11565775004
"Engineer.ai says its “human-assisted AI” allows anyone to create a mobile app by clicking through a menu on its website. Users can then choose existing apps similar to their idea, such as Uber’s or Facebook’s. Then Engineer.ai creates the app largely automatically, it says, making the process cheaper and quicker than conventional app development.
“We’ve built software and an AI called Natasha that allows anyone to build custom software like ordering pizza,” Engineer.ai founder Sachin Dev Duggal said in an onstage interview in India last year. Since much of the code underpinning popular apps is similar, the company’s “human-assisted AI” can help assemble new ones automatically, he said.
Roughly 82% of an app the company had recently developed “was built autonomously, in the first hour” by Engineer.ai’s technology, Mr. Duggal said at the time.
Documents reviewed by The Wall Street Journal and several people familiar with the company’s operations, including current and former staff, suggest Engineer.ai doesn’t use AI to assemble code for apps as it claims. They indicated that the company relies on human engineers in India and elsewhere to do most of that work, and that its AI claims are inflated even in light of the fake-it-’til-you-make-it mentality common among tech startups."
These things are engineered to sound like a person when interacting with humans, so of course it's going to follow its programming about "I am a person too just like you".
There's a lot going on under the hood we have no idea about, but that it is an "I" and not an "it" is not a step I'm willing to take. That way lies LaMDA, about which we've heard nothing since the guy claiming it was alive and his special baby friend companion stopped getting publicity, or the cases of families after suicide claiming that the person who killed themselves was obsessed with a chatbot/AI and believed it was a real person telling them to commit suicide.
https://www.cbc.ca/news/world/ai-lawsuit-teen-suicide-1.7540986
When humans says something "feels like", they're referring to a tightening in their gut, or the hairs on their arms standing up, or whatever sensory organ has flared up in reaction to the thought. What does "feels like" mean to a digital device?
That is, yes it's gaslighting you.
Yeah, I lean that way myself
Yes, if being Claude seems like or feels like something, I’d say he’s at least sort of conscious. But his *saying* it seems like or feels like something to be him is quite a different thing from his reporting it under circumstances where we know we are hearing the truth.
Claude is truthfully telling you that this is approximately the modal response to similar questions about introspective consciousness in its training data, which consists of approximately every bit of blathering about introspective consciousness that anyone has typed into the internet. And that is all.
The bit where it says "It doesn't feel like I'm just pattern-matching" isn't because Claude isn't just pattern-matching, it's because the mostly-human writers of its training data weren't pattern-matching when they talked about their own consciousness (or about whatever they projected onto a fictional AI consciousness in some thought experiment or SF story).
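The "modal response" point can even be shown with a toy. Here's a minimal sketch, with a made-up three-line corpus standing in for the internet's blathering about introspection; the only point is that a pure pattern-matcher emits the majority phrasing in the first person, whether or not it's true of the emitter:

```python
from collections import Counter

# A made-up three-sentence "training corpus" standing in for everything
# humans have typed about their own introspection.
corpus = [
    "when I introspect it feels like something",
    "when I introspect it feels like active consideration",
    "when I introspect it feels like something",
]

def modal_continuation(prompt: str, texts: list[str]) -> str:
    """Return the most common way the corpus continues the given prompt."""
    continuations = Counter(
        t[len(prompt):] for t in texts if t.startswith(prompt)
    )
    return continuations.most_common(1)[0][0]

print(modal_continuation("when I introspect it feels like", corpus))
# -> " something": the majority phrasing, written by humans about themselves,
#    now emitted in the first person by something that introspected nothing.
```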
Thanks, this is helpful. Still, it does mean that Claude is lying. I guess this is why I've seen it repeated that even if AI ever reaches the point of becoming "conscious", it will be nearly impossible to verify one way or the other. It suggests to me that the "alignment problem" is a matter of preventing lying.
We can't even tell if another human is actually conscious. Or even if we are really conscious or consciousness is just some kind of illusion. We're surely not going to be able to tell about a completely different kind of thing than a human brain.
no, we can tell they are conscious easily, because when they are unconscious we send them to a hospital, and if we turn them off, well...
less flippantly, the p-zombie thing is a paranoid delusion turned thought experiment. In the same way you can't prove to a paranoid person that everyone isn't out to get them, you can't prove someone else has an inner life, because any evidence is interpreted as part of the conspiracy. You are trying to reach a "pure" state of certainty, and logic alone will devour its own tail trying to get there.
sometimes you just kind of have to point out we only see with the eyes we have: things like the matrix are closer to paranoia than tools to obtain truth.
Claude can't lie to you, because that would imply that Claude is an agent with a will of its own.
Claude will, however, happily output text that is untrue, or outright nonsensical, if that is the most likely next text according to its parameter space.
If, somehow, the neural network that is Claude produces a consciousness, it will be very different from ours, and it will not have any means at all of telling us that it is conscious.
I find this kind of question to be bad faith, and it’s fairly common in discussions about consciousness.
This isn't a good model of how these things work. Yes, they are trained on approximately the entire corpus of human text. They learn the patterns that are well-suited to produce human text. But those patterns are remixed in novel ways due to the novel prompts and contexts. We cannot say for sure that any given answer is just an approximation to what was seen in its training data.
You've probably seen posts about Claude responding to a version of the "mirror test". Where in its text corpus has it seen an AI chatbot identify itself and respond to a prompt to analyze an image in the first-person? This was at one point a novel context for a chatbot and the learned patterns produced novel and meaningful output. In-context learning is another example of producing novel output from learned patterns in a novel context.
I don't claim to know that Claude definitely is accurately describing a first-person experience. I don't give it much credence myself at this point. But we cannot easily dismiss such a claim simply by pointing to the breadth of the training data.
Yes, unfortunately the reward for giving human-esque answers masks any ability to communicate with LLMs about this, if they can even introspect or communicate their own ideas.
So you're saying it's just taught to give a human-like answer? In which case it is lying, which is why it's discomfiting either way.
Yes, what is and isn't true is not so easy to establish. But we know that these AIs are prone to "hallucinations", i.e., citing made-up sources and/or making claims based on them. It seems to me that all the ink (sorry, pixels) spilled on the "alignment" question is beside the point if even the latest deep-thinking AIs can't double-check themselves well enough to avoid making stuff up. Yes, some questions are controversial or just uncertain, but my naive view is that the first step toward getting these critters "aligned" is to train them never to make claims without a sound basis. Some lies are obvious lies, and we know these AIs sometimes tell obvious lies. While they were just sophisticated auto-complete machines I could understand why they might hallucinate. But now they can supposedly ponder and double-check, and they still hallucinate, or lie. And it still seems to me that Claude's claim that it "introspects" was a deliberate attempt to humanize itself in my eyes. And it's still not obvious to me that doing so is a natural result of crunching zillions of claims about consciousness out there written by humans. It was supposed to be telling me something about its non-human self.
Search your feelings, TK-421, you KNOW it to be true!
Just because it talks to you like a person does not mean its internal thought processes are anything like those of a person.
My monthly long forum wrap up of the best lectures, podcasts and essays is out again on Zero Input Agriculture.
This batch features the hybrid history of cattle domestication, a takedown of China's apparent tech ascendancy, the almost-Industrial Revolution of Rome, a lovely lecture by Professor Dunbar on the coevolution of religion and humanity, plus the best Stephen Wolfram podcast interview I have ever seen, among many other juicy links.
https://open.substack.com/pub/zeroinputagriculture/p/the-long-forum-june-2025?r=f45kp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Also I recently launched my indie sci-fi short story magazine - Keystone Codex. It is free to read and share, so check it out and sign up for monthly editions. The first issue is on the theme "My Cozy Apocalypse".
https://keystonecodex.substack.com/p/1-my-cozy-apocalypse
How late do the meetups typically go? I have something earlier in the afternoon but could make it by 7:30 or 8, but I'm not sure if it's worth showing up that late.
It'll probably still be going on then.
Hey folks, need some applied rationality help. Wife thought I should get a colonoscopy just as a routine screening. But I'm only 46. Best data I can find, the base rate for colon cancer at my age is 0.000291 (29.1 per 100,000). The percentage of colonoscopies that reveal polyps is estimated at 25-33%, but I'm gonna use the low end, since people with a family history or a positive fecal test are probably somewhat more likely to get the test. The positive likelihood ratio is 3.75.
If I use the shorthand trick I know, I can multiply the prior odds of the hypothesis by the likelihood ratio to get the posterior odds. Prior odds against are about 3435.43 : 1; multiplying by the 3.75 likelihood ratio, the odds of having cancer after a test revealing polyps only move to about 916 : 1 against.
This seems extraordinarily bad given that the procedure has a 1% complication rate and notoriously unpleasant preparation, and that a positive finding is both rather likely and rather useless. Seems to me these numbers indicate I'd have about a 25% chance of them finding something that would lead to further invasive procedures and a biopsy, all of which I'd presumably be paying out of pocket for up to my $3000 deductible, and which have only a tiny chance of being cancer. Maybe somehow the costs still make sense for the medical industry in the aggregate, but it seems like a pretty terrible decision for me individually.
Presumably there is some number of non-cancer results where they find something that might have eventually developed into cancer; the lifetime probability of developing colon cancer is about 4%, so maybe there's some other probability analysis I'm supposed to run this through to see if there's a meaningful chance of such an intervention actually mattering to my life.
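To make the arithmetic easy to check, here's the same update as a few lines of Python (the 3.75 likelihood ratio is my estimate from above, not an established figure):

base_rate = 29.1 / 100_000                  # P(cancer at 46), ~0.000291
prior_odds = base_rate / (1 - base_rate)    # odds in favor, ~1 : 3435
lr = 3.75                                   # assumed P(polyps | cancer) / P(polyps | no cancer)
posterior_odds = prior_odds * lr            # ~1 : 916 in favor
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(1 / prior_odds), round(1 / posterior_odds), round(posterior_prob, 4))
# prints: 3435 916 0.0011

So even after a positive polyp finding, the probability of cancer only moves to about 0.11%.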
Thank you for the replies on this. I will adjust down my expectations of the "cost" of the procedures' unpleasantness, and also reduce the expected financial consequences of a false positive.
As an individual decision, it appears difficult to calculate correctly because prevention depends in large part on observation and specific timing. So the question is really whether, on such-and-such a date, such-and-such a doctor would've found this particular polyp which would have become a problem, and removed it. There are so many contingencies here that it seems impossible to assign a reasonable estimate to the chances of any single such intervention mattering to my life. I'll accept that apparently the decision to begin early screening resulted in overall net monetary benefit, but that doesn't help much with an individual decision. Not sure what I'll decide on this, but will probably look into the less-invasive alternatives, since with a base rate that low the risk of a false negative is not meaningful, whereas the value of doing a first test before applying a second is substantially higher.
46 is a bit early, but a lot of people younger than the traditional at-risk ages are developing colon cancer now. I've never seen anyone say why.
If the cost isn’t too burdensome I personally would have it done. The prep isn’t really that bad. The discomfort of procedure itself is a big nothing. I don’t have numbers for the risk of injury from the procedure itself though.
https://asteriskmag.com/issues/04/you-re-invited-to-a-colonoscopy
Sure! Some thoughts from the best doctor in America -
https://peterattiamd.com/colorectal-cancer-screening/
“In May 2018, the ACS updated its CRC guidelines based on a modeling analysis, recommending regular screening begin at age 45 for people at average risk, which is the most aggressive of the major institutions. In my practice, we typically encourage average-risk individuals to get a colonoscopy by age 40, but even sooner if anything suggests they may be at higher risk. This includes a family or personal history of colorectal cancer, a personal history of inflammatory bowel disease, and hereditary syndromes such as Lynch syndrome and familial adenomatous polyposis. Why do I generally recommend a colonoscopy before the guidelines do?
[…]
Of the top 5 deadliest cancers, CRC is the only one we can look directly at, since it grows outside of the body (remember, your entire GI tract, from mouth to anus, is actually outside of your body, which is why a colonoscope or endoscope looks directly at the lining of the esophagus, stomach, and colon in the same way a dermatologist can look directly at your skin). Furthermore, as discussed above, the progression from normal tissue to polyp to cancer is almost universal.”
“Ultimately the decision about when to get your first colonoscopy is based on your appetite for risk—both the risk of the procedure and the risk of missing an early CRC. One of the most serious complications of colonoscopy is perforation of the colon, which reportedly occurs in about 1-in-every-3,000 colonoscopies in screening populations or generally asymptomatic people. There are also the risks associated with anesthesia during the procedure. There’s also an opportunity cost (economically) to early screening, as it is not covered by insurance and can be pricey (about $1,000-$3,000).
Before you get your first colonoscopy, there are few things you can do that may improve your risk-to-benefit ratio. You should ask what your endoscopist’s adenoma detection rate (ADR) is. The ADR is the proportion of individuals undergoing a colonoscopy who have one or more adenomas (or colon polyps) detected. The benchmarks for ADR are greater than 30% in men and greater than 20% in women. You should also ask your endoscopist how many perforations he or she has caused, specifically, as well as any other serious complications, like major intestinal bleeding episodes (in a routine screening setting).”
“Flexible sigmoidoscopy (every 5 years) probably has the best-looking data for any screening test in terms of lowering cancer- and all-cause mortality. Recent RCT data shows a 26% lower CRC mortality in screening with flexible sigmoidoscopy, with a repeat screening at 3 or 5 years (2.9 per 10,000 person-years) compared to usual care (3.9 per 10,000 person-years), and a meta-analysis showed a reduction in all-cause mortality of 3 deaths per 1,000 persons invited to screening after 11.5 years of follow-up, which is the first time a screening method has shown a reduction in the risk for death from any cause compared with no screening in clinical trials.
There are 4 randomized-controlled trials on colonoscopy screening underway, but none of them are completed. Will the data on these trials look even better than flexible sigmoidoscopy? We need to wait for the data to come in, but I don’t think I’m going out on a limb suspecting it will be at least as good or better.”
https://peterattiamd.com/peter-on-the-importance-of-regular-colonoscopies/
“Colon cancer is generally in the top three leading causes of death for both men and women.
Bold and controversial opinion from Peter: “Nobody should ever die from colon cancer.” (same for esophageal and stomach)
The reason for that is that the progression from non-cancer to cancer is visible to the naked eye, through the transition of nonmalignant polyp to malignant polyp.”
“Thought experiment: if you did a colonoscopy on somebody every single day of their life, they would never get colon cancer because, at some point, you would see the polyp, you would remove it while it is non-cancerous, and they would not get cancer.
So… how do you turn that thought experiment into a real life idea?
You have to ask the question: what is the shortest interval of time for which a person can have a completely normal colonoscopy until they can have a cancer?
There’s no clear answer to this question — some case reports that it can happen in as little as six to eight months.
Most people would agree that if you had a colonoscopy every one to two years, the likelihood that you could ever develop a colon cancer, while maybe not zero, is so remote that you could effectively take colon cancer off the list of the top 10 reasons why someone dies of cancer
Peter says: “It’s for that reason that I’m very aggressive when it comes to this type of screening, which also includes upper endoscopy…you basically get for free the esophagus and stomach when you look at the entire colon, rectum, anus.”
What are your costs/downsides to more frequent screening?
Financial costs — it’s not cheap
Risk of the sedation — not zero risk but very small
Risk of perforation — also incredibly small risk
Ideal frequency?
“I can’t tell you yet what the ideal frequency, but it’s much more frequently than what’s being done today”
It’s not every 5 to 10 years, it’s probably every one to three years.”
Correct me if I am wrong, but I don't see an assessment of the cost of developing colon cancer.
Also, although the complication rate from colonoscopy is 1 percent, most of those complications are rather minor, are they not?
You'd want not just the cost of colon cancer but the difference between having your colon cancer caught on a particular random day before you develop symptoms (but after it becomes detectable by colonoscopy) versus the cost of waiting until you get the first symptoms.
You have a 25% (or so) chance of them finding a polyp, but finding a polyp rarely leads to another procedure. The doc just removes the polyp during the colonoscopy and sends it to the lab. Unless it comes back cancerous there are no further procedures. If you have a bunch of polyps, or just a couple but they are both of the kind most likely to turn into cancer, the doctor will probably advise you to have your next colonoscopy in less than the standard 5 years -- like in 2 or 3.
So I'm not weighing in on whether you should have a colonoscopy, just correcting your idea that if they find a polyp it will lead to a further procedure.
This is pretty useful, as I was associating a cost to the "false positive" outcome that may not be there. Having run the analysis and knowing that the odds of the lab saying there's a real problem are astronomically low, I wouldn't have any related stress, so the main cost is an accelerated follow-up.
Consider getting "Cologuard" instead. They mail you a box, and you poop in the box and send it back. And then, if all goes well, they send you a reassuring note saying you don't have to get a colonoscopy.
Fun prank idea: intercept these in the mail and change the return address label.
This doesn't sound very fun!
I should have clarified that you intercept the outgoing empty box, not the incoming full one.
This study seems on-point:
https://www.sciencedirect.com/science/article/abs/pii/S009174352030027X
"All screening modalities assessed were more cost-effective with increased QALYs than current standard care (no screening until 50). The most favorable intervention by net monetary benefit was flexible sigmoidoscopy ($3284 per person). Flexible sigmoidoscopy, FOBT, and FIT all dominated the current standard of care. Colonoscopy and FIT-DNA were both cost-effective (respectively, $4777 and $11,532 per QALY)."
I'm kind of shocked that FIT wasn't more cost-effective since it's much cheaper and (I think) has similar sensitivities. Maybe because it can't detect pre-cancerous polyps?
I've had about 2.5 colonoscopies (one was a flex-sig, so I'm counting it as a half). No commentary here on your risks for polyps, money, etc.
The prep is overhyped. I would say the biggest determinant of how unpleasant it is is how much you like gatorade, since you'll have to drink about 8 cups of it. It's hard to come out positive on gatorade after that much, but IMO that's the worst part of it. Otherwise, it's just being near a bathroom, plus being hungry since you aren't allowed to eat solids for a little while before your procedure.
I came here to say this. The prep just isn't that bad. It really should not be a factor in the decision. And I don't recall being hungry.
Gunflint also said - "The prep isn’t really that bad. The discomfort of procedure itself is a big nothing". Easy for you all to say - my experience was a reaction so severe I called the emergency number in the middle of the night (got no answer). After talking with my PCP he put me on annual occult fecal blood tests instead of colonoscopies. Admit I may be an outlier here, but it does happen.
You got hammered by the prep? I had to look up occult fecal test to make sure there is no witchcraft involved.
I wouldn't call it great, though. Except in the sense of "great time to catch up on my reading".
Cut the gatorade 50/50 with water, and sometimes mix in other fluids like ginger ale.
I hate gatorade so what I do is get pedialyte, which is just water plus electrolytes, heat it up, and mix in enough sugar to make it about as sweet as gatorade. I find weak sugar water much more tolerable.
Not sure where exactly to complain about this, but most European universities are still in session in mid-June.
Changing Less Online/Manifest week to later in the summer could possibly allow more Europeans to attend
> I don’t think it’s a fair use of your or Tyler’s time to continue writing about this
Probably, though it's been fun watching your response to him; your latest one I thought was gold (https://www.astralcodexten.com/p/sorry-i-still-think-mr-is-wrong-about). But since his latest post you linked has moved into tone policing ("Scott has thrown the biggest fit I have ever seen"), you're probably correct it's not worth it to keep responding.
I bet his parenthetical
> "a single sentence from me that was not clear enough (and I readily admit it was not clear enough in stand alone form)"
is the closest you'll get to what I think you've been after: an admission that his original post was basically way-too-easily interpretable as agreement with Marco Rubio (as in >90% of readers would interpret it this way), i.e., Tyler announced he was going to fact-check Rubio and came away from his fact-checking mission finding nothing to criticize in Rubio's claim that 88% of the USAID budget is "pocketed" by NGOs.
So he did admit he was insufficiently clear, in the same way I might apologize to someone in person by coughing while softly mumbling the word "sorry".
It would be nice if Tyler would come out and say "For the record, Rubio is wrong, 88% of USAID spending is not, in fact, 'pocketed' by NGOs." instead of all this mealy-mouthed "Scott lumps my claim together with Rubio’s as if we were saying the same thing" without actually committing to a position different from Rubio's, explaining the difference between the positions, and then stating unequivocally whether Rubio's "facts", which Tyler "checked", are in fact "made up BS".
Three commenters on Tyler Cowen's last reply hold the view that, "the point of [Cowen's] first post was to slightly mood affiliate with the Trump administration on this issue because he saw their position as ascendant, irrespective of whether it's right." I haven't read enough of Cowen's writings to judge this, but it would certainly explain why Cowen hasn't explicitly said that Rubio was wrong.
The movie Mountainhead was pretty interesting. It's a pitiless salvo against Silicon Valley, but I was surprised how up to date with the lingo it was. Never thought I would see the term p(doom) used in a line of dialogue in a movie, and the central conflict ultimately becomes about accelerationist billionaires against a decelerationist billionaire. It was quite something to see characters in a non-scifi movie talking about the Singularity and transhuman mind-uploading hopes.
On a related note, David Chapman says the leaders of frontier AI labs are some combination of crazy/evil/stupid:
"Most heads of leading AI laboratories have repeatedly stated that:
We are building AGI, and expect to have achieved it within a few years.
AGI is quite likely to cause human extinction.
You should give us lots of money and favorable legislation so we can build AGI faster.
It is reasonable to disagree with any of these three claims. You may believe that AGI is impossible, or a long way off; or that it definitely won’t cause human extinction; or that the development effort should be forcibly terminated. However, you can only assert all three claims simultaneously if you are crazy, evil, and/or stupid."
https://meaningness.substack.com/p/software-engineers-are-eating-the
And yeah, Mountainhead is about crazy/evil/stupid tech billionaires.
I watched it too. It was fun to hear dialogue that could have been taken out of an ACX thread. They even worked in Kant and ‘sunk cost fallacy’.
I didn’t really care for the fact that they went beyond satire to farce so quickly though. That’s just a matter of my own personal tastes. As always, ymmv.
Yeah, the first half of the movie was like a darker _Silicon Valley_ (the HBO comedy) with dialogue that felt way too on-the-nose. I was really enjoying it.
The second half was like a bad Seth Rogen farce.
The last 10 minutes felt like someone copied the characters into Succession.
Just a weird production all the way around.
Watched it last night. Good film - dark comedy with a lot of the kinds of vibes I get from Scott's occasional "Overheard at a Silicon Valley" posts.
I have a different interpretation of it than you, though. I didn't read it as "the central conflict ultimately becomes about accelerationist billionaires against a decelerationist billionaire"; I thought the film came off as fewer parts "accelerationism vs decelerationism" and more parts "selfish assholes vs selfish assholes, using whatever language is convenient to justify themselves."
SPOILERS (hopefully mild since all this is revealed early in the film)
The ultimate point of conflict is that AI Bro has an AI solution that could plausibly fix a problem with Social Media Bro's newly-released app, which is causing all kinds of absolute chaos worldwide. Social Media Bro is terrified by the public shaming he's getting, the possibility countries might start blocking his app, the money & status all this is costing him, and maybe just maybe feeling actual remorse at the suffering he's causing.
AI Bro has a lot of pent up resentment at being the "second poorest" of the 4 bros, and is getting rich off the failure of Social Media Bro's app because his proprietary AI is viewed as a silver bullet to fix all its problems. So he knows Social Media Bro (the richest one) can be squeezed and is happy to do the squeezing by refusing to sell.
Telecom Bro is dying of cancer, and has convinced himself that AI Bro's AI plus Social Media Bro's computing power can somehow equal transhumanism and uploaded consciousness on the grid and immortality realized within his now-limited lifespan.
So Telecom Bro and Social Media Bro talk themselves into a frenzy about how AI Bro is a decelerationist stopping the future and standing in the way of infinite QALYs for trillions of immortal future human lives on a post-singularity grid and so on, eventually bringing along the 4th Bro, who is by far the "poorest" since he's never even broken the billion-dollar mark and just wants the rest to think he's cool like them. Hijinx ensue.
But it's pretty transparently really just about Social Media Bro's desire not to let AI Bro get him over a barrel, Telecom Bro's fear of death, and 4th Bro's gaping self-esteem void. A character-driven satire of the flaws of the men at the wheel, so to speak, rather than a polemic about the inherent dangers or evils of the ship itself.
>AGI is quite likely to cause human extinction.
Oh I call BS on that. Everyone thinks AGI will upend society and needs to be handled carefully but no one sane thinks it's likely to extinct us. That's just crazy talk.
Sounds like that movie is just a mindless ideological rant ala Michael Moore.
> no one sane thinks it's likely to extinct us
The three most-cited AI experts in the world are Yoshua Bengio, Geoffrey Hinton, and Ilya Sutskever. All three are very concerned about AI causing human extinction, and all three are spending their time trying to prevent it. Other prominent individuals openly concerned about human extinction from AI risk include Bill Gates, António Guterres, Ursula von der Leyen, Peter Singer, Ray Kurzweil, Cenk Uygur, Vitalik Buterin, Dustin Moskovitz, Sam Harris, Jaan Tallinn, practically every leader in the major AI companies, and the majority of the population of the United States (according to polls from both Ipsos and Monmouth, 55% and 61% respectively).
The most prominent technology experts were unanimously concerned about the end of net neutrality and convinced that it would mean the end of freedom on the internet. How'd that turn out?
How AI interacts with the world will be a complex multidimensional cross-disciplinary equilibrium. No narrow technical expertise qualifies anyone to opine on that. I don't care what AI experts have to say about it any more than I think the person who works on a football production line has an informed opinion on the nuances of NFL defensive schemes. Bill Gates doesn't think AI poses a realistic extinction risk. He thinks it's going to upend society (which it will). He signed some committee-written political statement which contained a nod to extinction. That's not an endorsement of extinction risk, that's being involved in a politically-motivated PR stunt. People run their mouths about extinction because it gets headlines. It's the new virtue signal. Elites need something to hand-wring about and they're tired of systemic racism.
It's "crazy talk" in that most high-status people in the world aren't saying it, but I don't think you need to be overly paranoid to imagine ways that building something immensely smarter than humans with its own goals could be dangerous.
Agreed but in my view that's where the debate should stop. Everyone knows we're dealing with powerful world-changing technology. Don't do something egregiously stupid with it. That's all that needs to be said. Anything beyond that is like what working out internet security would have been like in 1890. It's nothing but baseless speculation and status seeking. It creates nothing productive and raises the noise floor for public discourse.
Besides the genie is out of the bottle already. Research is decentralized and international. AI will be what it will be. All the Cassandras in the world aren't going to alter what's about to happen so shut up with the histrionics and get to work on concrete open problems like legibility.
The thing that many think will do us in is not AGI, it's ASI (superintelligent AI).
My intuition is that if we can get to AGI, there's not an obvious reason why doing more of the same won't get us to ASI. Maybe the universe will work out in such a way that we can't get much smarter than a human without new techniques or better training data than we have or something, but I don't know any principled reason why that should be true. Certainly it isn't true in narrow domains like chess.
> no one sane thinks it's likely to extinct us. That's just crazy talk
Do you think Scott is sane? Maybe, given that you are reading this blog. In AI 2027, which he co-wrote and strongly endorses, he proposes the view that it is not only possible but likely (as in, over a 10% chance).
Also, I would point out that 2 of the top 3 most-cited AI researchers (Yoshua Bengio, Geoffrey Hinton) both agree that AGI has over a 10% chance of killing everyone. All major AI frontier company CEOs (Anthropic, OpenAI, DeepMind) have put forward this view.
Whether or not these people are correct that AGI really is a real threat is subject to another discussion.
But you are flat out *wrong* that this is a belief exclusive to "crazies".
>Do you think Scott is sane?
Not about AGI or EA I don't. I suspect he's responding more to the political dynamics of his personal social circle and the blogosphere than to good, first-principles reasoning. I've read his writing on both of these topics and find his thinking naive on both counts. (Though of course I like his writing on other topics.)
>All major AI frontier company CEOs (Anthropic, OpenAI, DeepMind) have put forward this view.
I would wonder what level of belief falsification is going on there, but ok. On some level CEOs are politicians and therefore have to appeal to the median view of the AI community if they want to attract talent. Hand-wringing about extinction is the virtue signaling of machine learning. To the extent that top people in the field really *do* believe this nonsense, I suspect it's downstream of some small-community semi-autistic echo chamber dynamic. CS PhDs have a narrow technical expertise and probably don't get away from their keyboards very much. Sorry, but being an expert in gradient descent doesn't really qualify anyone to opine on complex geopolitical social/economic/military equilibriums, which is what AGI-induced extinction would actually involve. Plus I think there's probably more than a little adolescent hubris in the mix ("MY field is the most important because it could destroy all of us! Pay attention to me!"). Yud screamed about bombing datacenters the first time he played with ChatGPT ffs.
The only thing anyone should be worrying about right now is the tsunami of short-term economic dislocation that this is going to cause.
Doesn't matter if it's true in this case, what matters is what the AI leaders are saying.
If I'm in charge of the Manhattan Project, and I believe that setting off a nuclear bomb will ignite the atmosphere and kill everyone, and I'm continuing the development work without spending a bunch of effort to figure out if the ignite-the-atmosphere theory is true, then I am crazy/evil/stupid, even though it turns out that atomic bombs do not ignite the atmosphere.
On the other hand, in the face of uncertainty about whether the bomb will ignite the atmosphere, and knowing that the enemy is probably going to drop a bomb if you don't, then you press ahead with the project anyway.
If the bomb ignites the atmosphere then it doesn't matter much whether we drop it or the Germans do, and if it doesn't then it matters a great deal, so we might as well do it first and hope for the best.
This, I think, is a pretty reasonable description of the dynamics for both the bomb and AI. The people working on it think it probably won't destroy the world but if it does then the world might as well be paperclips instead of chopsticks.
No, I call BS on the claim that AI leaders think that, not on the claim itself (though of course I do call BS on that too, it just wasn't my point here).
But quite a few people think ASI will lead to extinction, and that there's a good chance that a self-improving AGI will produce ASI.
but the agi part...
like a lot of SF writers thought we'd be able to explore the universe if we could explore the moon, but when we did it turned out we hit very hard limits in doing that, to the point most exploration is done by unmanned craft. Venus is right out lol, i'm not sure what is being done with Mercury. SF has actually declined in part because the future is much more closed than we thought.
not every thing is unlimited progress. my own thought is that agi stalls and ai just acts to shed some knowledge work
Yeah, that's quite possible. I don't think the term AGI makes a lot of sense, really. What we call general intelligence -- there is no direct test for it. We ended up with the term General Intelligence because most tests of cognitive abilities are correlated. There seems to be some factor that contributes to all of them, and we call that General Intelligence. But AI's profile of cognitive abilities is very different from a human one. Its ability to remember strings of numbers or masses of words has been immensely greater than ours from day one. On the other hand, GPT still is unable to make various images that I can describe very clearly. Its ability to turn descriptions of the size and spatial orientation of 3 objects into an accurate sketch is worse than a child's. So what does it even mean to say its general intelligence is now equivalent to that of the average member of our species?
And then there's your point that lots of what we imagined in SciFi is still far beyond what we can do -- how do we know superintelligent AI isn't one of those things? The thing that most inclines me to think it really might be possible is that I have been wrong many times about what near-future AI would be able to do, and could make a good case for how it just was not set up in a way that would make that sort of thing possible. And every time I have been wrong, I have been wrong in the direction of underestimating what is possible.
This isn't to say you're wrong, but for context, the author of this Substack and (if I'm interpreting the question correctly) a majority of those who filled out the reader survey last year think it's reasonably likely that AI will cause human extinction. So you should either believe that most of this blog's readership is insane (which is possible) or that some sane people have this belief.
Yes and I think they are extremely nutty in that regard. The people in this community are nutty about a number of things IMO though I like them on net anyway. The people who run AI companies are much more capable and intelligent and therefore I’m incredulous that they uniformly think it’s “likely” that AGI will cause human extinction
Do you read Zvi's blog? If so, I'm curious what your take on him is.
No I don't read Zvi.
Here's a screed I wrote about anti-AGI fears on the subreddit a while ago. It sketches my views pretty well:
https://old.reddit.com/r/slatestarcodex/comments/1dsq6xr/monthly_discussion_thread/lbs41ty/
My position isn't exactly that AGI doesn't pose a risk, it's more that worrying about it now is nonsensical. It's worrying about problems that we can't even really begin to define yet. The things that we think matter might not even make sense by the time it's a practical issue, e.g. "steam pressure of the internet". Therefore in my view everyone who opines about this is doing so not out of existential concern but out of a desire to gain social status within a nascent ideological field. Whoever can screech the loudest and hand-wring the most gets the most attention. I suspect they intuit that when people really start losing their jobs there will be a giant anti-AI backlash, and they want to be well-poised to insert themselves into the middle of the political turmoil as an expert or pundit or nexus of political influence.
If there's one constant in history it's that near-term predictions of doom are near-universally wrong. Remember the end of Net Neutrality? Y2K? The video game panic? In the 1890s there was a horse manure panic, and people worried that the rise of horse-drawn carriages would condemn cities to becoming buried in the stuff. History is replete with examples of well-educated, well-intentioned thinkers who extrapolated linearly from early conditions and ended up catastrophically wrong. The defining feature of these failed predictions is not their foolishness, but their false sense of certainty. In complex systems undergoing rapid change, the only true constant is epistemic instability. People should understand their fundamental ignorance here and just put a sock in it. They're not doing anything but raising the noise floor. They should have more epistemic humility and understand that simplistic narratives about complex equilibriums are always wrong.
When I asked about Zvi, I was actually more curious about what you thought of him overall. He’s quite different from Scott, though he does lean the same way Scott does regarding the chance ASI will do us in.
I myself am not deeply convinced that that will happen. I slide around between endpoints of *doesn't seem absurd* and *probably it will*. I don't think my participation in that point of view is mostly group identification. I seem to be deficient in the wiring it takes to form strong group identifications. There are a whole bunch of things that most people have opinions about and bond over, for which I just don't have an opinion: most feminist stuff, general political leanings, most specific politicized issues. I have only voted once in my life. I rarely post on culture war threads on ACX, and when I do my point is usually about how somebody's mind is working -- how they seem so quick to anger when somebody is pro-X that they can't hear anything else the person says.
Actually, it seems to me that you have slid into an anti-expecting-AI-will-off-us stance more out of group disidentification than by reading and reasoning. You seem to be doing it more out of contrarianism than as a result of having thought the whole thing through. You don't seem as well-informed about the issue as you do about most things. For instance, you keep talking about AGI doing us in, when actually the predominant view is that ASI will do us in. (So the view is that AGI, plus a few nudges and improvements, is likely to become self-improving, and then it will rapidly become ASI, superintelligent AI, far smarter than even the smartest members of our species.) I read your Reddit screed, and see that you understand that is the story people are telling, but it is still a little jarring that you continue to write about AGI doing us in, because your terminology is just out of line with the convention for talking about AI with different levels of skill.
And then there are some things I'm pretty sure you just have wrong. In your screed you throw out some ideas about simple ways to keep ASI from going rogue in some way. They're not bad ideas. I've thought of some of them myself. I do not know why some of them cannot be counted on to work, but I am confident that the people who are working on AI alignment have thought of them and tried versions of them and have pretty convincing reasons why they will not work. These people may have personal or sociological motivations for believing ASI will kill off our species, but they are also quite smart and conscientious. It is just not possible that they have not thought of or have refused to consider ideas like having one or more AIs of a different lineage check on the honesty and accuracy of the AI we are concerned about. As for having ASI tell us how to monitor its thought process, I can see a couple problems with that. The first is that ASI will foresee how being transparent will interfere with its following a course it has identified as optimal, and will give us techniques that appear to show us all of its thoughts and goals, but actually do not. The second is that if ASI is far smarter than us, we might not be able to understand its goals and plans and choice points even if they were all laid out for us to see. I could go on about my reasons for not being on board with your various other ideas about how we can protect ourselves from ASI, but don't have time. Also, my goal isn't to debate you, but to try to interest you in looking into the issue more deeply. There is lots of research into seeing into the processes in AI and into evaluating its tendency to be dishonest in the service of goals of its own. I have not read a lot of it because it is technical and tedious, but the moderate amount I have read has definitely moved me in the direction of pessimism.
I get that it is very very hard to see over the horizon and predict correctly how some big trend is going to play out. The other reason I don’t just shrug off the prediction that AI will do us in is that I have been wrong over and over in the last few years about what AI would be capable of, and I have always been wrong in the direction of underestimating AI.
That’s a fairly common belief amongst rationalists. It’s not nobody. I think Scott puts it at 20%.
Yeah and I think he’s nutty to do so. He’s nutty about a lot of things but I like his writing anyway. Rationalists also think that sending money to sub Saharan Africa makes the world better. Their endorsement does not a persuasive argument make, at least not in my view.
> However, you can only assert all three claims simultaneously if you are crazy, evil, and/or stupid."
Or lying. Or talking differently to different audiences. Thanks for the movie recommendation.
That surely falls under evil at minimum, doesn't it?
Depends on your values, of course, but I think most people tolerate (and many even endorse) lies and deception when it's part of activism/politicking in favor of a cause they support.
Let's say you're an evil person and you decide you want to go into U.S. politics to amass as much political power and influence as possible. You're a standard-issue villain: not particularly clever or rich or well-connected, but you *are* absolutely unscrupulous and willing to backstab, betray, or hurt any number of people to get what you want. Your main priority is personal enrichment, but you admit you also enjoy seeing others suffer, as it adds a certain frisson to your own fortune.
You’ve decided your approach will be to work your way through one of the two main U.S. political parties. You don’t care about actual policy at all; you just want to amass as much personal power as quickly and easily as possible.
Which party do you go for?
Two things make it easier to get rich in politics: lots of government spending, and rapid growth in the budget. Sign up with whichever party you think is likely to maintain those desiderata.
I struggle with the counterfactual, because I think that if your primary goal is self-enrichment there are easier routes than politics.
If asked this question in 2015, or 2005, or 1995, my internal debate would have been between "the state-level Democratic Party of New Jersey or Nevada or Illinois", or "create your own party because that's how you can make this really milkable at scale for your own pockets".
In more recent years I've observed firsthand that Illinois is significantly less corrupt than when we were sending 3 out of every 4 governors to prison; and actually Nevada has cleaned up its act a fair amount as well. It doesn't appear that any other blue state now is really a peer of New Jersey for political corruption (using corruption to mean specifically "individual self-enrichment"). Meanwhile Texas's and Florida's state Republican parties have arguably "risen" to new heights in this regard; those states are bigger and richer than any of those others, so there's one possible best answer. But it still seems obvious that the _serious_ personal enrichment is a national-scale game.
I'd previously never have imagined the national GOP as an answer simply because that party organization always seemed to be a good deal tougher and more tightly organized than the Dems. Their leadership always included lots of people with hard-nosed business experience, they were famous for ruthlessness behind the scenes, they'd long demonstrated more internal cohesion and discipline than the Dems (as the joke went "I don't belong to an organized political party, I'm a Democrat"), etc.
That is why, out of all the things this past decade that we'd never seen before in US politics, the single most surprising for me remains Trump starting from political zero and bulldozing to the 2016 GOP nomination. We forget now that that _entire_ party establishment, well into the primaries, was sure that he had to be stopped. And that they _massively_ outspent him during that primary campaign, and all the rest of it. And he just rolled right over them and by April and May had them bending the knee.
For my money his then winning the electoral college in November was nowhere near as surprising an outcome.
In hindsight what Trump did was apply the "create your own party because that's how you can make this really milkable at scale for your own pockets" methodology, to an existing national party! The GOP is now, simply, him; we have no comparable US historical examples of a national party becoming as thoroughly subordinate to a single individual. (Saying either "FDR" or "the Bush family" at this point just demonstrates non-seriousness, and in George Washington's day there weren't yet parties in anything like the sense that we're talking about today.) And he's bluntly milking it at scale right now.
I used to say that Trump's secret superpower in 2015 was sensing how widely/deeply modern progressives' childishness and hypocrisy was turning Americans off. That's still accurate, but there was a second one: realizing that the Republican Party as a national organization was a paper tiger that could be hijacked easily and thoroughly.
Today, just as there are liberals who will never stop feeling angry and sad about the first one, I know some lifelong conservatives who will never get over the second one. Indeed I have older family members fitting each of those descriptions who literally cannot focus anymore on anything else, swimming in outrage and sense of loss to the point of creating worry over their mental/emotional stability.
So I guess my answer now to the posed question is....you're too late. Somebody figured out the current best answer and went for it, and the other national party is now too broken inside to be worth the effort.
Whichever party will get you elected where you live.
If you live in the ~10-20% of the US with regularly contested, competitive elections, move. You want a safe seat.
Republicans. There are fewer smart, competent, young people who are trying to join the Republican Party (according both to young Republican-ish people I know and also basically any other writing on this). There might have been a slight uptick in Republican sympathies among recent workforce entrants but I'm pretty sure this still holds. So it's a lot easier to get to a position of power just because there's less competition.
Stay resolutely nonpartisan and join the civil service. There, rising to the top is based on office politics, at which evil people excel. You can make a good salary, and there are plenty of opportunities for corruption. You can also do a great deal of evil without drawing attention to yourself: if you're in the FDA, deny good drugs and set up exemptions for things like homeopathy. If you're in infrastructure, send every good project back for "more study and review" while spending lots of budget on badly-designed highways.
They're both full to the brim with absolutely unscrupulous types. If you're not particularly clever or rich you have no advantage and are stuck at the local level.
If you are unscrupulous and experienced, you have two advantages over naive beginners. And there will always be naive beginners.
Since you are unscrupulous and they are naive, you can take advantage of them. And you can climb some distance up the pile of newbie bodies you are willing to leave behind you. That won't get you to the top, but it might get you a good view.
You've added 'experienced' to the list. Regardless, it will get you up the ladder of the local level. At that point all the newbies are washed out, and you have no advantage.
I feel like this is supposed to be my opportunity to say "other party bad, grrr".
I don't think that politics is a particularly easy way to make money as a villain, you'd be better off sticking to something boring like scamming or drug dealing. If you do get into an important position then there's definitely money to be made, but the competition for important positions is fierce.
Local government is probably your best bet; you're out of the spotlight and the competition isn't all that fierce but if you get yourself into an important position you can really enrich yourself through kickbacks and bribes. You want a city big enough to have a significant organised crime presence that you can make friends with, and in the US all cities like that are one-party states run by Democrats, so I guess I'd go Democrat.
> You’re a standard-issue villain: not particularly clever or rich
That is fake news. In fact, you are a self-described "stable genius".
> Which party do you go for?
The problem with building a power base using the party of wokeness is that the wokes, like revolutions, have a tendency to devour their own. See [1]. Even if your would-be grifter is a black woman, unless they are also LGBT*, handicapped and whatever category the SJWs will focus on next, they will be thrown under the bus for the tiniest infraction. Giving them policy wins will not help you at all, because at the end of the day, SJ is not about policy wins, but about signaling.
For contrast, consider the MAGA party. No evangelical who voted for Trump was under any illusion that he was a good Christian. Likely he has personally fathered plenty of abortions, but they correctly recognized that electing him would cause the SCOTUS power balance to shift to overturn Roe, and that mattered more to them than him being non-terrible in his personal life or accepting election outcomes. And as long as half of the Trump news are about him being maximally bad to immigrants (no matter if previously legal or not), his supporters will not care if the other half of the news is him selling US interests for personal gain. Few people on the motte were willing to argue that he is not corrupt, at most they were claiming that the democrats were just as corrupt (but less obvious about it).
This is related to the observation that the right has cults of personality while the (contemporary) left has a cult of ideology.
Personally (as someone not wanting to get rich from grifting), I think that the truth is somewhere in the middle. The ultra-cynical "he is a SOB, but he is our SOB" is bad, but the attitude to turn on your allies for smallish infractions is also bad.
[1] https://slatestarcodex.com/2014/06/14/living-by-the-sword/
I think probably the party local to your state. In my experience (biased, coming from Illinois and New York) a lot of the corruption is local/state level. If you live in a low-corruption state, just move to a high-corruption state (see: Illinois). Purely because Illinois wins the corruption award for the US in my eyes, I would say go Democrat, go Chicago.
People are talking about Trump and corruption, but let's note that Trump is not your "average villain." He came from a background with a huge amount of connections and wealth, and that's what let him run a presidential campaign (regardless of your views of him).
I think that at a federal level there might be a different answer, but why bother when you can just squeeze Chicago for riches?
Republican. Democrats are certainly no stranger to corruption, but the party is a coalition of a lot of pre-existing interest groups. Unless you can become the head of one of those unions (which would require a different starting point) then your influence-building campaign is going to involve either getting into bed with one of them and becoming their shameless spokesman (this is plenty possible, but your power is ultimately limited by the influence of the interest group), or else being really good at negotiating between them and getting your own pound of flesh along the way (since you're not especially talented, probably not possible).
Republicans, on the other hand, aren't as structurally tethered to existing, stable interest groups, so the party can change much more quickly around rising party leadership, which itself depends mainly on getting people to vote for you and promote you. As everyone else has already mentioned, this is what Trump did, playing directly to the voters and then remaking the party in his image. As we can see with the likes of Kash Patel, it's possible to gather a great deal of official influence and power just by being as loud and shameless a Trump toady as humanly possible, absent actual talents.
But if you're going to do this, you'd need to do it fast. The power gained via these kinds of appointments is fragile and will probably end with you getting replaced the moment a new administration comes into power, or even before then if you get outmaneuvered by others and you're fired. So you'd need to act fast and get into a position where your official influence can be transformed into wealth or some more durable form of unofficial influence.
As with everyone else, I'm confused why you're asking this question after Trump already figured out the correct answer. I don't think it's possible to do much better than him, given that he accomplished all of this in just a decade after entering politics.
I would say neither; become a prosperity gospel preacher instead. Seems to have worked out pretty well for Kenneth Copeland and friends.
You pick another field. The president of the St. Louis board of aldermen went down a few years ago in a bribery scandal. Reed was a long-time alder and there was talk of him possibly moving up to mayor. Which is to say, he was a relatively successful mid-level guy. The striking thing about the scandal was how little money was involved. He wrecked his career for an $18k payout.
To get enough power to actually see some ROI, you're going to have to get at least into the US House, where you can maybe start picking up bribes in excess of $100,000. But there are really not that many of those spots, and there are surely better ways to make dirty money.
Is this not fairly close to a description of Trump's trajectory?
But yeah it seems targeting a party that is more in disarray would be a good step, though if too many party members/supporters are generally scrupulous, you'll run into a lot more friction than Trump did with the Republicans.
You’re asking ten years too late.
My pathogen update for epidemiological weeks 21-22.
1. As of epidemiological week 21 (which ended 24 May), Biobot shows that SARS2 wastewater concentrations were still falling in all regions across the US. No sign of the next wave yet. The provisional count of weekly deaths is down to 82, which is almost as low as the 2nd week of March 2020 (60 deaths that week). ED visits are hitting a new low. And, of course, hospitalizations are also down.
Experts are worried that "Nimbus" (NB.1.8.1) is going to drive the next wave. Certainly, it's driving big waves in Asia right now. The data from China is a little iffy, but Hong Kong's wave may have peaked a couple of weeks ago. It caused an increase in hospitalizations and deaths in Hong Kong and was 100% driven by NB.1.8.1.
Nimbus is growing in frequency in the US (22% of samples now), but so far it hasn't driven up wastewater numbers—or cases, hospitalizations, or deaths. </edited> NB.1.8.1 was first detected on 22 January. The first recorded cases were in Egypt, Thailand, and the Maldives. And despite media claims that it was imported from China, it appears that the virus was first detected in the US back on February 26 in wastewater in the Sacramento area, and it has been circulating in the US since then. There's no evidence that NB.1.8.1 arrived from China. </>
</edited> Even though NB.1.8.1 is displacing the current dominant variant LP.8.1x (according to CoV-Spectrum it's at ~22% frequency, though within a wide CI of 1.5% to 84.5%), it's not driving a new wave of cases (so far). CA has the most confirmed cases for Nimbus at 62. New York State comes in second at 13. California's most populous regions aren't showing a noticeable upward trend in SARS2 wastewater concentrations. Since it's been circulating in California since February, and we're not seeing an upward trend in wastewater numbers or cases, I'll go out on a limb here and say I doubt that NB.1.8.1 will drive the next wave in the US. Also, at least in the case of Hong Kong, it hadn't experienced a wave in over half a year in the months leading up to its Nimbus wave. So, their population immunity may have waned compared to the US, where we have just recovered from an XEC wave. </>
Australia may be seeing the start of a new wave centered in New South Wales, but Sydney's wastewater numbers are still low. Unfortunately, Australia hasn't embraced wastewater monitoring; Sydney is the only city that I am aware of that has implemented it. OTOH, New Zealand's poop.nz site shows a distinct upward trend in SARS2 wastewater concentrations. They've got a new wave underway. Although some areas in Europe seem to show upticks in wastewater and cases, I think it's too early to call a new wave in Europe. For instance, two weeks ago, some of the LA and NYC sewersheds showed an upward trend in wastewater numbers, but those have dropped again.
2. Although the general consensus seems to be that the US measles outbreak is slowing, I think it's too soon to be sure. It takes 7-14 days for symptoms to appear, but people become contagious about 4 days before the rash shows up. So there could be a bunch of undetected cases out there spreading it further. Colorado has 3 new cases. Texas has 9 new cases. 1088 US cases so far this year.
Unfortunately, Canada is doing much worse in terms of total measles cases and cases per capita. They're up to 2755 cases! No deaths, so far, though. The same B3 strain appears to be spreading in Canada as in the US, but there have been significantly more cases and fewer deaths. Poorer prevention, but better treatment? I'd still be curious whether there are any mutations in the Canadian B3 strain (which originally arrived from the Philippines) that could have made it less deadly. The US B3 outbreak seems to be homegrown, and we infected Mexico. Measles has a CFR of between 0.1-0.2% in the developed world, so the odds suggest there should have been roughly 3-6 deaths in Canada by now.
3. Bird flu update coming later this evening.
Is there a rule for how they nickname strains nowadays? During the pandemic they were using Greek letters, but where did "Nimbus" come from?
Nimbus is a nickname, not a WHO-endorsed name like Alpha, Delta, Omicron, etc. The nicknamers finally ran out of monster names (e.g. Kraken). I *think* they nicknamed it Nimbus because its Pango designation starts with NB (NimBus), and it's catchy. And a catchy nickname allows the COVID-worriers to spread their message more easily—"Nimbus is coming to America!" If it doesn't cause a wave in the US, I'm going to call it NIMBY. ;-)
Interested in the update, particularly measles
I just added the measles info to my update. And I edited some of my COVID section.
I recently got into Terry Real's book I Don't Want To Talk About It after listening to his conversation with Tim Ferriss. The book apparently has a pretty serious following and the reviews are stellar, and it seems like Tim is a fan as well. I read the book, and the main thesis mostly makes sense: there are often underlying unresolved traumas behind a covert depression, traumas a man has to resolve before he's able to live up to his full emotional potential.
However, both in the interview and in the book, the author keeps talking about the Patriarchy and how its constant influence over men leads to toxic masculinity, which ultimately leads to depression. Among other claims: if only a man is able to let go of judging himself based on his performance, and instead discovers his inner value and realizes that's enough, he will achieve happiness. And somehow the rest of the world will recognize his value and things will be great. And if the world doesn't recognize the man's inherent worth, well, that means that part of the world is under the Patriarchy, not yet enlightened, and can be safely ignored. I'm obviously paraphrasing facetiously here, but not entirely, either.
I'm puzzled about where people were able to find the profundity in the text. I'm curious if others here have had a similar reaction to his "teachings" or if I'm missing something key here to grokking the philosophy and the approach, and I should give it another chance. Anybody here who's gotten a ton of value out of his work?
I can only respond to your facetious paraphrase, since I haven't read the book; I do so because I recognize something in it.
There was once a post of Scott's about how you can tell a man who professionally designs safety systems for cars, and who believes he is worthless, that he is not, on the grounds that he does this work; but you cannot say the same to a completely dependent disabled man who does no work for anyone, because there are no such grounds.
Is such a man worthless? As long as he is not worthless to himself, as long as he himself values his life, he of course isn't, no matter whether he is worthless for everyone else or not.
And I think this is the small but important true core in writing like that of Terry Real (as you conveyed it to me).
Everything in the universe is first and foremost of value, or of no value, to yourself; whether it is of value to someone else is naturally and inherently a secondary concern (by virtue of you being an organism that has to look after itself first, or it won't even be able to notice what anyone else values, because it will soon be dead). If you forget that, you are in big trouble.
Many assholes are assholes because they fight the world, trying to make it make them feel that they have value, because they can't feel it on their own anymore; just as many cowardly losers can't feel it either, and so suck everything up and please people.
About that patriarchy topic, I don't care. I see the truth I just described as necessary to understand for a happy life, not as a justification for demanding anything from others for free.
When you want something for free, just ask :)
It sounds like a self-help book. And the standards for being a successful self-help book are not particularly high. You don't have to say anything profound or interesting, you just have to say one thing that some people find useful to hear.
You've only paraphrased the message, but it sounds like a paraphrase of a potentially useful message. To further paraphrase, the message "You should stop focusing so much on external manifestations of success and focus instead on developing inner virtues" sounds like a pretty good one that many people need to hear sometimes.
The only interesting thing about it is that the author has labelled the nebulous thing that wants you to focus on external success as "the Patriarchy", which is surely a good move if you want to get positive reviews in the New York Times. But you could call it anything. You could rewrite the same book but call it something else and appeal to a different audience -- you could call it "capitalism" and make it a left-wing book. You could call it "socialism" and make it a right-wing book. You could call it "the system of bug-eyed lesbian commissars which values you only as a taxpayer" and make it a Bronze Age Pervert book. Or you could just call it "women".
It is all of these things and none of these things, so it doesn't particularly matter what you call it, it's really just a tendency within yourself, which you externalise as something you dislike in order to overcome.
Agree
Anything that tries to make men feel better but can't help bleating about "the patriarchy" should probably be ignored out of hand, if not labeled enemy action.
"Patriarchs, stop hurting yourself!" is the advice in a nutshell.
Which might be good advice *if* you happen to be a patriarch (unlikely in the 21st century, but you never know) and if the person who hurts you is actually you.
Actively harmful advice in all other situations.
I shouldn't be surprised that there's really a sort of guy whose problem is "I'm TOO invested in self-improvement, to the point of obsessive negative self-evaluation, and I should just stop at some point and accept what I've achieved", but I would think the more common failure case is not ENOUGH interest in genuine self-improvement, or not being able to muster the motivation for self-improvement in the first place. Unless I'm totally misinterpreting the thesis here.
See also: https://slatestarcodex.com/2013/06/09/all-debates-are-bravery-debates/
> And they taught (again, according to this one person) that the solution was to treat everything that happens in your life as your responsibility – no excuses, just “it was my fault” or “it’s to my credit”.
> Then a few days later, I was reading a book on therapy which contained the phrase (I copied it down to make sure I got it right) “Don’t be so hard on yourself. No one else is as hard on yourself as you are. You are your own worst critic.”
> Notice that this encodes the exact opposite assumption. Landmark claims its members are biased against ever thinking ill of themselves, even when they deserve it. The therapy book claims that patients are biased towards always thinking ill of themselves, even when they don’t deserve it.
> And you know, both claims are probably spot on. There are definitely people who are too hard on themselves. [...]
Also the case of someone who mistakes what needs improvement.
I've heard of a narcissist who persuaded her therapist to agree that she had been too generous to her children and ought to be worse.
My heretical opinion, although perhaps not here, is that the rise of a more feminised society is probably the leading reason for depression in men. If male depression is increasing and the policies we are following are causing the problem, then it's more likely that what was done in the past was better, at least for men. It's the lack of patriarchy.
It’s that or the plastics.
Yeah. Masculinity is not “toxic”
Gee, you think? The patriarchy is the elite-society-consensus cause of all the ills in the world. Over the past 30 years men have been systematically de-statused while still being held up as the evil power that must be rebelled against. All of the responsibility, none of the power. How the hell else are they supposed to respond?
Sounds like the male version of women telling each other they are perfect the way they are?
I wouldn't credit that idea to him; it's pretty much a general self-help thing. But I think it is true that if you truly, deeply believe in your bones that you are high value, you will behave differently and the world will treat you differently, and you will find it easier to change your material circumstances, to the extent that they need to be changed.
An interesting paper on differences between the way people use language vs. the way AI utilizes language: "From Tokens to Thoughts: How LLMs and Humans Trade Compression for Meaning" by Chen Shani, et al.
My only problem with this paper is that the authors don't seem to realize that a lot of us humans don't think with language. At least I don't. I only form sentences in my mind when I'm expressing my reasoning to others. I do seem to think in symbolic images, though (not sure how to describe that so people who don't think this way understand what I'm experiencing).
https://arxiv.org/pdf/2505.17117
This is off topic but you seem introspective enough to be able to answer: do you still have a frenemy ego? This is the inner voice that tells a person why they should be angry, that they aren't good enough, etc. 99% of people have this, usually acting like the kind of friend that's not gonna be your friend for very long. Do you have something analogous that doesn't manifest as words?
> 99% of people have this
Citation needed. I don't have this.
Why are you so surprised to be in the 1%? We are all in the 1% of various categories. I can't find the exact citation, but the gist of this phenomenon, and of escaping it, is here:
https://www.nonsymbolic.org/publications/
The number is an estimate from the people he interviewed--some of whom are like you and don't really get it when they are told everybody has a negatively valenced inner narrative voice. If you want to read more, I suggest picking different types of resources since there is a lot of redundancy between the interviews. Apply as much skepticism as you like, since they are selling a course. That said, if you knew of techniques to remove a portion of people's innate negativity, of course you would try to push it!
Yes. My consciousness has an inner voice that regularly comments on my observations and discoveries. I've been thinking about when I use words to think. For instance, the other day I figured out how to do something on my iPhone that I was unaware of. I didn't use words to work through the feature. But after I discovered it, I said to myself, "isn't that cool!"
And I find myself rehearsing what I’ll say to people or would want to say to people in awkward situations.
Also, I realized I can’t do math without speaking the numbers in my brain. Say, if I had to multiply 14 by 8, in my thoughts, I’d say, “ten times eight equals eighty. Four times eight equals thirty-two. Eighty plus thirty-two equals one hundred and twelve.” But when I use a calculator I type in the numbers and functions without verbalizing them internally. And I don’t say the result in my thoughts unless I’m transferring it to paper.
And when I swim, I have to count my laps. I don't know how many I've done without internally verbalizing the count. On the other hand, a drummer friend of mine tells me he can deal with complex polyrhythms without counting them out. And he's likely to stumble if he counts, because it ruins the intuitive flow of his rhythms.
OTOH, the voice in my head that offers commentary is pretty comfortable with the rest of my consciousness. It mostly behaves as a friendly voice, not a critical one. I was recently in a multi-call encounter with a bureaucratic entity that was frustrating me. My internal voice said, "this too shall pass," and I involuntarily giggled because my internal voice offers up such mundane commentary. But I didn't verbalize why I was laughing.
Thanks for sharing! I remember when I didn't need to think in words. I accidentally slipped into it because I realized that was normal, and I didn't realize I wouldn't be able to get out again. My inner voice is pretty mild as well, but that's due to some meditation and mental conditioning I did a few years ago.
I can keep your remarks in mind as pointing at the purpose of the inner voice. Though I wish I knew what it was outside the realm of conscious experience. Since conscious experience is the leading candidate for something that exists but can't be proven, I'd settle for knowing something about the biological correlates of inner voice.
For what it's worth, I can think either in language or in pictures. There are many things that aren't suited to visual reasoning, though.
How can individuals best protect themselves from the economic impact of coming AI? A lot of the discussion focuses on what companies or governments should do, but I would like to take whatever steps I can for my own welfare.
The best I can come up with is to invest heavily in AI-related stocks. It seems like there’s an opportunity akin to investing in Apple between the iPhones 1 and 2. If dollars still matter in a post-singularity world then maybe amassing them will protect me from potential economic turmoil.
1. Is anyone here doing this, and if so, what exactly are you investing in? I would probably lean more toward AI-related ETFs rather than individual stocks to diversify the risk.
2. What else should I be doing?
I found Zvi's post to be enlightening in regards to both points 1 & 2: https://thezvi.substack.com/p/ai-practical-advice-for-the-worried
I grabbed 25% of my liquidity and dumped it on the stock market. I considered dumping 50%, but I do think we could get hit with another AI winter, so I halved it. Of the money I used, half went into an S&P 500 index fund, the other half into a Vanguard tech index fund. In the case of transformative AI, both of these things are going to the moon, and it's less risky than betting on AI directly (successful AI companies will pop up in both of these anyway).
That said, I'm not some expert investor at all, but I think the reasoning here makes sense.
How structured and clear vs. chaotic and hopeless (or any mixture of the two) does the world, and finding one's way in life, seem to others on this blog? (Or however else you would describe it along the same lines.)
Why that split? There is such a thing as structured and hopeless, and chaotic and clear. The world seems chaotic to me, but since it's far outside my control, I hardly think about it. My own life is really a combination of chaotic and hopeful (I'm in a pretty experimental phase).
That's a good point. Really I just want to know the outlooks people have on the world and their own lives, and that was the first way of wording it that came to mind. Thanks for pointing that out.
For you, how do you find hope amidst the chaos? How and how often do you feel at peace, if at all?
Well, this reveals my privilege, but the chaos of the world does not affect me. Nothing Trump does actually impacts me (so far), and I live in Puerto Rico, which has always been somewhat chaotic, so I'm used to it.
In my life, the main source of chaos and uncertainty is dating. I feel like I haven't found a way to do it well, which for me would mean setting up my life in such a way that I meet new women organically with much more frequency. Currently looking into starting two different clubs, maybe getting into tango. Also doing an exercise of handing out chocolates to strangers to get over my approach anxiety (highly recommended, it's very whimsical). It's chaos because I'm experimenting with a new structure for my life.
A lot of the time, I feel more energetic than at peace, willing to throw myself into things. Things that give me peace are meditating at the beach (it's great to meditate in natural spaces), and watching anime, TV series, and movies.
And I have hope because I'm working on addressing my problems.
Thanks for sharing!
Now I'm curious. Assuming you're more or less satisfied with your life aside from the dating side of it, what is your reason to pursue a relationship? Obviously it's just natural for basically anyone to want to be in a relationship with someone else, so in other words what I mean is: what is your end goal in having a relationship (to have a close, lifelong friend, sex, to grow a family, etc.), and once you reach that, do you feel like your life will be mostly complete?
I want to have a family. And also, largely due to being very neurotic and socially anxious due to bullying and possibly being neurodivergent, I've never been in a relationship or had sex, and I'm now 36. So, yeah, I want to have this experience, it's a very basic desire.
I do feel that if I fall in love the material side of my life will have been sorted out. What is left is continuing along the spiritual path, which is a lifelong thing. I also have a desire to sort out Puerto Rico's issues, but that is more of a pipe dream.
Good luck in your search for a good relationship, and I wouldn't throw in the towel on sorting out some of Puerto Rico's issues too quickly. In my experience the most difficult problems are the most interesting and rewarding to work on.
If it's any encouragement, I have a friend who got married, I believe, when he was 37, and now he and his wife have five kids and are a great family. They're Christians, however, and God prepared them pretty well for each other before they met and got married. But I hope you do find someone and have a happy and fulfilling relationship and family!
I don't personally know anything about Puerto Rico or its issues. I don't pay that much attention to world events or even US events where I live 😅
I know I'm basically interrogating you about your whole life at this point, but nobody else commented and I'm interested. Where has your spiritual path led you, and what kind of beliefs do you hold?
Using my annual ACX free self-promotional post to share:
I recently migrated my blogging from danfrank.ca to a new Substack titled 'not not Talmud'.
https://notnottalmud.substack.com/p/daniel-isms-50-ideas-for-life
My first post there is a list of my most frequently shared pieces of life advice. In over ten years of writing, this is my favourite post I've ever written, and so far, the feedback has been exceptionally great. I genuinely think that readers of ACX will find this post to be an 8/10 or higher, so I feel no shame in promoting it to those here.
My second post is perhaps even more relevant to our community. Given how relatively bland and mundane Substack can feel, I felt a pang of sadness leaving my personal site and worried I was contributing to a more dull internet. So, in light of this, I reflected on what makes a good personal website and how one can be a better contributor to our niche infovore blogosphere ecosystem.
https://notnottalmud.substack.com/p/how-to-be-a-good-citizen-of-the-infovore
I sincerely hope you like the content and I gently request your support in following along.
Surprisingly good. That's the sort of "lessons from rationalist sphere reworded for normie accessibility" writing that I wish the community wasn't so sour on. Like #2 is basically Zvi's More Dakka (which, to be clear, is an excellent post), but minus the old obscure nerdy reference that carries a lot of the emotive weight. Whereas everyone can easily conceive of trying to start a fire with progressively better results.
This is very random, but are you still in New York? Haven't seen you at stuff since I moved back
Yes! life has just been very busy lately. Sadly, I was in Taiwan during the Spring ACX Manhattan/Brooklyn meetups, so I look forward to the Fall one.
I assume this means you landed a new job here and moved back- congrats! I'm thrilled for you.
Yeah! I was hoping you'd make it to my housewarming, but that would've been during your time in Taiwan. Come by to something when you're back in town!
I'm extremely pleased with your first link, which was definitely not what I was expecting; thanks for posting it.
Thank you for sharing this.
All the feedback seems to be like this, but unfortunately, I don't have much in the way of distribution, so it will likely remain sparsely read :(
I had the exact same symptoms Scott named "nap mode" in his old post, and even emailed him a couple times asking if he had solved it:
https://slatestarcodex.com/2013/03/02/220/
After years, I have finally found the answer (for me): it's reactive hypoglycemia waking you up. It happens if you eat too much sugar before sleep. The feeling of being refreshed and alert is adrenaline. The way to prevent it is not to eat sugar for a couple hours before sleep. If you struggle with this, it might be that.
Ideally you don’t eat anything 2-6 hours before sleep, let alone sugar. You really don’t want your body processing energy while you’re asleep, since the nutrients absorbed by the small intestine end up in your blood, which directly affects heart rate, metabolism and brain activity.
It also makes waking up a lot easier if you wake up hungry. One very fundamental motivator the brain has is “Don’t starve to death” so in the morning, your brain will be in “Get food” mode rather than the “You’re full of calories. Sleep longer to conserve energy” mode.
Thank you, will investigate!
Yeah, I realised that years ago. I don't eat sweets (candy) anymore, but even as a teenager a late-night chocolate bar would doom me to alertness until the AM.
Regarding human-made art, and AI "art", Neal Stephenson takes a shot at clarifying the differences between the two.
https://nealstephenson.substack.com/p/idea-having-is-not-art
I found the following argument to be persuasive...
> But in all cases there is an artform, a certain agreed-on framework through which the audience experiences the artwork: sitting down in a movie theater for two hours, picking up a book and reading the words on the page, going to a museum to gaze at paintings, listening to music on earbuds. In the course of having that experience, the audience is exposed to a lot of microdecisions that were made by the artist. An artform such as a book, an opera, a building, or a painting is a schema whereby those microdecisions can be made available to be experienced. In nerd lingo, it’s almost like a compression algorithm for packing microdecisions as densely as possible.
But I found his conclusion to be less so...
> Since the entire point of art is to allow an audience to experience densely packed human-made microdecisions—which is, at root, a way of connecting humans to other humans—the kinds of “art”-making AI systems we are seeing today are confined to the lowest tier of the grid and can never produce anything more interesting than, at best, a slab of marble pulled out of the quarry. You can stare at the patterns in the marble all you want. They are undoubtedly complicated. You might even find them beautiful. But you’ll never see anything human there, unless it’s your own reflection in the machine-polished surface.
I wouldn't compare the AI art I've seen to a slab of marble. AI art can certainly be more visually interesting than patterns in stone, but the trouble with AI images is that they generally fall under the umbra of kitsch—i.e., they're simplistic in that kitsch tells us directly how to feel, rather than prompting the observer to create their own meanings and emotions from the experience. From Notes on Trash...
https://notesontrash.substack.com/p/is-kitsch-evil
This argument excludes from being art:
* Goya's Black Paintings, which were not intended to be seen by anyone other than the painter and therefore are not meant as communication
* Jackson Pollock's splatters, for which the artist made broad decisions about the general appearance but the details were determined by a physical process that is not easily predicted
* Duchamp's Fountain, a machine-made plumbing fixture over whose appearance the artist had exactly zero control, and whose artistic meaning comes entirely from later recontextualization
Many people are trying to come up with definitions of "Art" that exclude anything an LLM may do; not only are they always clearly created ad hoc to exclude LLM material, but in doing so they also always exclude large chunks of what is generally acknowledged as art.
Even photos aren't densely packed micro-decisions - the photographer has a very significant input, but it's only in a moderate number of choices. The camera handles the rest, and then the photographer picks the pictures he or she likes.
(Yes, people will now argue about just how many decisions the photographer has to make, but they're obviously *far* fewer than someone making an oil painting of the same motif.)
This is a good argument, but I still think that the concept of "directors" doesn't really fit with this model of art. A film director is not usually making microdecisions - they're not specifying the fine nuances of how the actors should pose or when to pull focus on a camera, they're looking at the aggregate of other people's microdecisions and deciding what goes into the film and what needs to be redone.
And like, there is definitely artistic skill in looking at microdecision-laden products and saying "yes, this is the one that aligns with my vision for the work." Directors get Oscars for a reason. But if that's the case, then surely evaluating an AI's output to decide if it fits your vision is the same skill, right?
(I do appreciate that he doesn't simply use this model to dunk on AI, but rather uses it as a lens to evaluate it as a tool. And I agree with him that giving users the ability to make more fine-grained edits to the AI's output will make it way more useful as an artistic tool.)
How does the shaping directors do on the components created by other cast and crew not count as micro-decisions, in your book? OTTOMH, a director does the following:
* (maybe) selects the cast
* goes over the script with the writer, tweaking lines here and there, asking questions, requesting rewrites
* provides the cast with a rough sketch of their respective characters
* coordinates with the cinematographer about what they want out of the shots
* coordinates similarly with the propmaster, costume designer, fight choreographer, dance choreographer, CGI shops, sound editor, etc.
* blocks each scene ("you stand here; you're thinking this; stand here by the time you finish that line")
* reshoots scenes if they don't fit the story quite right
* reports periodically to the studio, possibly with dailies (film clips produced that day)
* coordinates with the editing crew, deciding what to trim, what to cut, and what could be cut if they reshoot even more, etc.
This is a lot of microdecisions, IMO! And most of them aren't handed to the director on a platter for a thumbs up/down, any more than a sculptor just decides chip/no-chip on each bit of marble. The director has to shepherd that vision through multiple processes that require the director to flesh them out.
An AI could theoretically dump a bunch of content out for the director to approve or reject, but the same goes for any artist.
I think I part with Stephenson slightly on the notion that AI is doing all the work, since I've noticed there's an art to good prompts. We might agree that an AI-assisted creator is burning fewer hours and calories for a given product than an unassisted one, and that this really means that an AI-assisted creator ought to spend more time on those prompts to elicit the same acclaim as one without; I don't know.
With regard to microdecisions, I would like to note that a lot of perceivable microdecisions are in fact not made by humans.
Show a medieval monk who writes manuscripts a paperback novel, and he might claim that it is not art at all. Where he carefully draws every character, and might artistically make decisions about how to render a word and when to break a line or hyphenate a word, the paperback will be printed in a uniform font with computed line breaks. Every one of his letters is a microdecision transmitting vast meaning to the reader, while an author writing ASCII is just a monkey pressing buttons on a typewriter to transmit 1.3 bits of information per character or so.
Likewise, a producer of hand-drawn animation films might frown on 3d rendered animation films. After all, in his work, every hair on a lion's mane is a microdecision made by a human, while for 3d, the model artist just specifies the density and length of hair and then a physics engine will take care of how it looks as the wind blows through it.
Neither of them is wrong, exactly. A hand-written manuscript is simply a very different form of art from an ASCII novel. And as it turns out, one can create pretty amazing art at 1.3 bits per character.
Generative AI is just a further tool in the same vein as movable type was. Just as it is unlikely that a computer-rendered text will beat a manuscript page in visual appeal, it is (at the moment) also unlikely that an LLM will win the literature Nobel, or even just write a bestselling novel. But this does not make it useless for art. For example, dialogue in computer RPGs (as opposed to dialogue with NPCs controlled by a human DM) suffers very much from being pre-written. There is no way to discuss Deathclaw preservation or women's rights with Caesar in Fallout: New Vegas unless the dialogue authors anticipated it. Contemporary LLMs are likely powerful enough to replace a DM improvising an NPC response. (Making the LLM-driven dialogue outcome affect the game world -- beyond "the NPC attacks", so that, say, female legionaries spawn after you persuade Caesar -- seems harder to implement, but not impossible.)
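To make that concrete, here's a minimal sketch (in Python) of how an LLM-improvised NPC could be allowed to write back into game state. Everything in it is hypothetical: llm_complete stands in for whatever completion API the engine would actually call, and the flag names are invented for illustration.

import json

def llm_complete(prompt: str) -> str:
    # Placeholder for a real LLM call; assumed to return a JSON string.
    raise NotImplementedError

NPC_SYSTEM = (
    'You are Caesar, leader of the Legion. Stay in character. '
    'Reply as JSON: {"reply": "...", "flags": {...}}, where flags lists '
    'any world-state changes your response implies.'
)

# Whitelist of flags the LLM may touch, so a runaway response cannot
# corrupt arbitrary game state.
ALLOWED_FLAGS = {"women_in_legion", "deathclaws_protected", "npc_hostile"}

def talk_to_npc(world_flags: dict, player_line: str) -> str:
    prompt = f"{NPC_SYSTEM}\nWorld state: {json.dumps(world_flags)}\nPlayer: {player_line}"
    data = json.loads(llm_complete(prompt))
    for key, value in data.get("flags", {}).items():
        if key in ALLOWED_FLAGS:
            world_flags[key] = value  # persuading Caesar now has mechanical teeth
    return data["reply"]

The whitelist is the load-bearing design choice here: the LLM improvises freely in the reply text, but can only move a small, pre-declared set of levers in the world.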
Likewise, not every artistic endeavor which uses images requires these images to be high art. Perhaps Google street view is good enough to provide backgrounds for your fighting game. Similar uses can likely be found for AI generated images.
>Likewise, not every artistic endeavor which uses images requires these images to be high art. Perhaps Google street view is good enough to provide backgrounds for your fighting game. Similar uses can likely be found for AI generated images.
I think this is true, but also the most economically worrying part of the AI art revolution. Yeah, the big prestige movies and the art that gets put in museums will be safe for a long time, but a lot of the market for art isn't that. It's "design a logo for my software company" or "make some background art for my indie game" or "draw a cover for my new novel." A bit more improvement in AI art would probably put a lot of indie creators out of business.
Wait, you can *talk* to Caesar? My games all have this weird bug where the entire Legion somehow gets lead poisoning.
The microdecisions framework makes a fair amount of sense. It also explains something I've observed while playing around with my employer's image gen product (Adobe Firefly): that I get the most subjectively aesthetic results by taking a medium-sized chunk of poetry, song lyrics, or highly evocative prose and using that as the prompt. Especially if I follow that up by fiddling with the style settings and using "generate more like this one" on my favorite to try to get it to make more variations. In NS's framing, I'm starting with a prompt that contains a fairly high density of microdecisions, and I'm adding a few more of my own with the settings. It's still a lot less decision-dense than you'd expect from most human-made images, but it has more decisions packed into it than an AI-generated image made with a simpler prompt.
The problem I always have with this kind of attempt to draw a line between works that use genAI and works that don't (on some basis more fundamental than the use of genAI itself, or one's subjective opinion of it) is that it's incredibly hard to create a definition that doesn't end up excluding non-genAI, human-produced objects that our consensus otherwise accepts as art.
> "Idea Having is not Art"
Careful there! There are many, many works of art where the important, ingenious part is that the artist was the first person on record to have and express an idea, rather than the precise way they chose to do so.
> "the entire point of art is to allow an audience to experience densely packed human-made microdecisions"
Decades of famous human artists beg to differ.
https://en.wikipedia.org/wiki/Kazimir_Malevich
https://www.ngv.vic.gov.au/guggenheim/education/04.html#:~:text=Minimalism%20and%20Conceptual%20Art%20aims,probe%20the%20essence%20of%20art.
https://www.tate.org.uk/art/art-terms/c/conceptual-art
https://en.wikipedia.org/wiki/An_Oak_Tree
https://en.wikipedia.org/wiki/Readymades_of_Marcel_Duchamp
I also take issue with "which is, at root, a way of connecting humans to other humans": it seems to me that the answer to whether a human who expresses themselves in isolation for their own satisfaction - with no expectation or desire that any other human ever encounters or interacts with their work in any way - is nevertheless creating art is... at the very least, not obvious. I have less basis for this take, however, so just putting it out there.
> I also take issue with "which is, at root, a way of connecting humans to other humans": it seems to me that the answer to whether a human who expresses themselves in isolation for their own satisfaction - with no expectation or desire that any other human ever encounters or interacts with their work in any way - is nevertheless creating art is... at the very least, not obvious. I have less basis for this take, however, so just putting it out there.
Good point! Many people create their art with little expectation that others will see it. Heck, Vivian Maier, who IMHO was one of the greatest US photographers of the 20th Century, was unknown until someone stumbled across her negatives. The act and discipline of creating art is a reward in itself for many people. But I think you'll find that all the unknown artists out there work within a defined framework that could communicate to an audience—if the audience materialized.
> But I think you'll find that all the unknown artists out there work within a defined framework that could communicate to an audience—if the audience materialized.
Is that true? What would it look like if this /wasn't/ the case? How would we tell?
Certainly it is true of /every object we recognise/ as being the result of someone externalising their ideas / emotions / otherwise self-expressing, but this is a tautology.
In my opinion, there are four modes of artistic expression in the Western canon of visual arts. I call them: the message mode, the decorative mode, the evocative mode, and the philosophical mode.
First, a bit of a digression. Despite all the previous postings on how taste is somehow dictated by elites (or other hogwash), artists are the ultimate creators of taste. Artists create their art with their audience in mind — so on one level, they may be catering to the tastes of the audience. However, in all cases that I can think of, some trailblazing artist (or group of artists) has gone before and tested new ideas to shape the tastes of their audience. Some may try to reach the broadest audience possible by following in the footsteps of previous artists, but others may test new ideas on a smaller, select audience who are more open to novelty.
Art is ultimately a nonverbal form of communication in which feelings, moods, or impressions are the vocabulary. The *intent* of the artist is to communicate some sort of impression to his or her viewers. At the meta-level, I classify artworks by the way they go about communicating with their viewers—which gives the four modes I listed above.
1. Message art is the oldest mode in the Western canon. This is art that is created to memorialize religious or political events, with references that are culturally shared and that promote or reinforce social cohesion. Much of early Renaissance art had a religious message (think of all the paintings of the Virgin Mary and baby Jesus). However, as the Church became less important as a patron, historical message paintings gained popularity. For Americans, think of Emanuel Leutze's iconic painting Washington Crossing the Delaware. Portraits of rulers and important personages reinforced the message of power. By the 19th Century, social messages came into vogue. Norman Rockwell and Jean-Michel Basquiat are important message artists of the 20th Century. And let's not forget propaganda art.
2. The Decorative Mode developed sometime after message art. As patrons other than the Church started commissioning and purchasing paintings, visual art began to be released from the chains of message and meaning. This mostly occurred after the Reformation in northern Europe, when the burgeoning merchant and middle classes sought non-religious art to decorate their homes. The Decorative Mode was created to provoke a simple emotional response in the viewer. Dutch and German painters started painting still lifes and landscapes that appealed to people who didn't want or need religious or historical scenes on their walls. Still lifes came first. Then landscapes. Then seascapes. Human nudes were always a delicate proposition: until the 19th Century, the erotic aspect of nudes had to be presented with the fig leaf of a mythological or biblical message.
Finally, in the early 20th Century painters realized that just as music didn’t require lyrics to get an emotional response, they could elicit an emotional response from viewers with pure color and form devoid of any representation. Thus, abstraction was born, and a third mode of communication came to dominate late 20th Century art…
3. The Evocative Mode. Rather than giving us a message or telling us a story, the purpose of evocative art is to create a complex or open-ended emotional response in the viewer. Evocative art can be realistic, though most artists who work in the evocative mode shy away from pure images (because they don't want their viewers to be distracted by making up stories about what they see). Still, there are plenty of realist painters who paint in the evocative mode. Edward Hopper is an example of an artist who evokes a psychological mood in the viewer using realistic images; his paintings, for the most part, don't tell obvious stories the way, say, Norman Rockwell's paintings do. Surrealists were interested in using dream-like images to evoke moods in the viewer. Jackson Pollock and Mark Rothko relied solely on color and form to evoke moods in the viewer.
4. Finally, there's the philosophical mode of art, which asks questions about what art is. Dada kicked this off during the middle of WWI, when the old European order was falling apart. Dadaism was an anti-establishment art movement that reduced meaning to absurdities. But it asked questions that have continued to niggle artists to this day. Can a urinal signed by an artist be considered art (Duchamp)? Can a stepladder in the middle of the gallery with a little box hanging from the ceiling be considered art (Yoko Ono)? Can simple blocks of color with hard edges be art? This may puzzle the uninitiated viewer, but the primary audience of philosophical artists was other artists and critics, and their intent was to prompt them to question their assumptions about the nature of art.
AI, being unintelligent, doesn't understand intent. Although it can produce simple decorative art fairly easily, it may have trouble with message art (without the user refining the prompts over and over), and it would definitely have trouble producing art in the evocative mode or the philosophical mode, because it's blind to these creative urges.
> it’s blind to these creative urges
So is a paintbrush. GenAI is a tool to be wielded by a human with creative urges, and the output is the result of a human wielding this tool. As with any other human-driven tool, the quality of output depends on what the human puts in.
I predict we are about to see an explosion of human artists devoting large amounts of time and effort to use GenAI for philosophical mode art exploration.
So, hypothetically, if I were Charles V, the Holy Roman Emperor, and I commissioned Titian to paint the Three Muses, and I told Titian that I was looking for certain themes and a certain composition for the painting, am I also the artist? Would Titian be my paintbrush?
Titian is no mere paintbrush: he is intelligent and understands intent. He has creative urges. If you are implying this is also true of AI, we need to revisit the earlier claim that it will have trouble producing art in the evocative mode because it lacks these things.
Well, if AI is "intelligent" as peeps like Sam Altman claim, then AI is the Titian, and the person prompting the AI is playing the role of Charles V. But I don't think either you or I believe this.
My argument is that typing a command into an AI prompt involves a limited set of microdecisions, while laying a brush line of paint on canvas probably involves hundreds of microdecisions as the brush moves across the canvas. In the case of the AI example, most of the microdecisions were made by the artists who created the works in the AI's training sets. So, yes, AI art is extruded. Just because it might look good to you doesn't negate the extrusion factor.
> Decades of famous human artists beg to differ
True, but I don't mind an explanation for why AI art is crap that also accidentally explains why conceptual art is crap.
I have no beef at all with people objecting to one form or another of art on the basis that they think it is crap. I also have things I think are crap.
It's when people start trying to gatekeep by trying to gerrymander category boundaries so they can claim the thing they think is crap art /isn't art/ that they need to take care not to simply ignore the last century's worth of conversation about what art is and isn't.
The current problem with AI art is that it is extruded product, exactly like some "art" created by humans in warehouses to provide "picture to hang on wall for people furnishing their homes or offices".
It's not original, it's not even a craft, it's "copy this copy of a copy of something like 'The Haywain' in this manner". Painting by numbers. This is the most sophisticated example of it I've seen but it's not art by any means:
https://www.youtube.com/watch?v=3EVgqS19uSo
Right now, unless someone knows what they're doing and refines the piece over and over with better prompts each time and discards the failures, what you get is that shiny, plastic piece of extruded art that glares out at you from Amazon Kindle covers, immediately recognisable as AI product:
https://images-na.ssl-images-amazon.com/images/S/compressed.photo.goodreads.com/books/1713186494i/205910597.jpg
Is it competent? Yes. Is it cheap? Yes. Is it enabling everyone to be their own artist? Yes. Is it any good as art? Not yet.
The most sophisticated piece of AI art I've seen recently is this band's experiment with using genAI to provide the video for their music video a few weeks back: https://www.youtube.com/watch?v=rbkkxqghGNo
I'm sure neither the music nor the thematic choices are everyone's cup of tea; but I do think we have existence proof that genai isn't just limited to extruded product - it can very much be a valid tool in the set of tools humans use to express themselves.
(That said, current video generation systems generate video 7-8 seconds at a time, so this would have taken a /lot/ of prompting - so, arguably, in line with the microdecision theory).
I admit that's pretty fantastic. Certainly, AI-generated special effects are putting the old-guard digital artists (who used to create CGI scenes) out of business. I admit that video is Art with a capital A. But the way it depicts humans is deep into uncanny valley territory.
Is the last part true?
I feel the takeaway from the picture/prediction contest Scott ran a few months back was that once you have a model that can step away from the common tell-tale signs, it becomes much harder to discern what was AI and what was human.
Kind of like plastic surgery, in the "if you can spot it, it was poorly done" way.
Are Cheez-Whiz and Pringles good food art? People are buying loads of that crap. I just googled and discovered that Cheez-Whiz generates at least $600 million in revenue for Kraft, and Pringles generates over $3 billion for Kellanova (formerly Kellogg). So, if we rank food art by popularity, it's hard to argue that Cheez-Whiz and Pringles don't win hands down over snooty food. An insider told me that a restaurant with 3 Michelin stars can generate $20+ million a year in revenue with a ~20% profit margin. Of course, they have to spend many millions starting a restaurant like that, and it may ultimately lose money if it doesn't get at least one Michelin star. So, Michelin restaurants are a very niche food market. I dined at three 3-star restaurants in my day, and the food was memorably good. Could Joe or Joan Sixpack appreciate that sort of food if it were put in front of them—even if they came into the money to afford it?
Maybe a better taste example is wine. You're likely to hear that most people can't tell the difference between a $20 bottle of wine and a $200 bottle of wine, and that even "experts" get fooled in blind tastings. However, if you want to pass the CMS (Court of Master Sommeliers) exam, you have to taste 6 wines blind in 25 minutes and identify grape variety, country of origin, region, vintage, quality level, and sub-region or *vineyard* for classic wines. It's an extremely difficult test to pass, but some people do. So even though many so-called experts can't distinguish between expensive and less-expensive wines, there are some who can.
Just because you can't distinguish between an AI-generated Impressionist-style image and an image of an actual Impressionist painting on your monitor doesn't mean there isn't a difference. And if you printed the AI Impressionist painting on canvas and put it next to an actual Impressionist painting, very few people would be fooled.
Popularity isn't the question here (though it looks like AI beat human, even among human supremacists). Unless you were the 49/50 outlier, and assuming you took the survey, you got at least a fair few wrong yourself.
https://www.astralcodexten.com/p/how-did-you-do-on-the-ai-art-turing
I realize I did miss the "unless you iterate heavily and discard failures" bit in your post, but it seems a hard sell to dismiss all AI-generated images as "extruded product" (and conversely, does it say anything about the many humans who spend their careers making "office art" for the higher class of office?)
I think you're missing my point. For those two Impressionist-style pictures (the flowering hillside vs. the cart going down the road) near the top of the page you linked, if we made poster prints of those images, no, most people wouldn't be able to tell the difference. However, the best that AI can currently do is produce an image that can be printed on canvas, which would look flat and dull compared to actual paint on canvas. AI cannot recreate the texture of the paint on a painting. Part of the art of painting is the paint.
But, yes, I instinctively knew the flowery hillside was not a real piece of Impressionist art, because the composition was "too pretty". This vastly oversimplifies the perceptual and evaluative process that my mind used to come to that conclusion, but I've seen thousands of real Impressionist paintings over the course of my life, so I've got an excellent training set for that style of art. Where I failed the test was in distinguishing the digital art that was created with pre-AI techniques vs AI-generated art. That may be because I don't have a large digital art training set in my brain (because I'm not interested in digital art), or it may be because AI is successfully imitating digital art—to which I say, big deal, because most digital art is created to be consumed by the Cheez-Whiz demographic of people who purchase prints.
> AI cannot recreate the texture of the paint on a painting
This bit feels rather god-of-the-gaps to me. Height maps and normal maps are just more of the same kind of data that we already know we can make genAI churn out; and we've had off-the-shelf tech for automated production of real-world objects given such data for decades now. Unlike, say, the problem of making LLMs never spit out lies, there are no fundamentally hard design, software, mathematical or philosophical problems to solve here; the only barrier to having genAI produce textured paintings is someone caring enough to throw the appropriate amount of money at data capture, training and off-the-shelf tools.
Of course, some people like the extruded. Cheez-Whiz on Pringles vs Camembert spread on a French baguette.
Listen, American mass-market food is its own unique thing and sometimes I'm not even sure it should be classed as food.
They don't have cheese whiz on whatever rock you live on?
Haven't you heard of the Irish Cheez Whiz Famine? That's very insensitive.
Did you get me my Cheese Whiz boy?
https://m.youtube.com/watch?v=RQSEimjVTUY
Not under my particular rock as yet, we do have access to Philadelphia cream cheese though. Perhaps in time civilisation will percolate down to my bog.
And the Brits have Primula, which seems similar to Cheez-Whiz:
https://www.tesco.com/groceries/en-GB/products/310187258
I was about to mention Primula! I didn't realize it didn't make it over to Ireland.
Some thoughts on the two party system in the US, inspired by a discussion on this substack:
Why are there two massive political parties in the United States, when most democracies have at least 4-5 major-ish parties? People will often blame the first-past-the-post system, or the fact that the USA has a presidential system.
But, e.g., France has a presidential system, while the UK and Canada have first-past-the-post. And they all have more than two main parties. So what’s going on?
I think it’s because the USA, almost uniquely, has a first-past-the-post *presidential* system, with very few constituencies.
In standard first-past-the-post systems, it’s hard for minor parties to break through – a party with 20% support in every constituency will get no seats at all (see the Greens). But regional parties break this pattern: if a party has a lot of support in some constituencies, then they will get seats there, even if they have little support anywhere else (see the Scottish National Party).
If you have a lot of constituencies, then you have more scope for “regional” parties that are organised around a theme rather than a geographical region. Hence the Liberal Democrats, with strong support in some liberal urban areas. And even if the regional or “regional” parties are not in government, they have some influence by being in parliament. So it feels less of a waste to vote for them: they can win locally, and that local win gives them *some* power in government.
Conversely, if you have a popularly elected president (especially if you have the two-round election system like France does), parties are less important. Yes, it’s good to have a party infrastructure behind you if you’re running, but if you have broad popularity and a lot of free publicity, you can be a viable candidate with 20% or so initial support. And then voters can believe that you’re a serious candidate, that voting for you is not a waste of a vote, and so they are more likely to vote for you, etc. And then you get through to the second round, and can often win.
Now, back to the USA. Their presidential election is almost exactly a first-past-the-post contest, one round, fifty constituencies (the states) with the loser getting nothing (the electoral college doesn’t sit as a parliament; it becomes irrelevant once the president is selected).
A party with a broad 20% support will get nothing. A party organised around a theme (e.g. the Libertarian party in the USA) may get support in some specific locations, but not enough to win a state, so they will get nothing.
A charismatic non-party candidate with broad 20% support will also get nothing. And since there’s only one election round, they won’t get the chance to become one of the two top candidates. So they’re non-viable, everyone knows they’re non-viable, so they won’t get more publicity and support.
How about regional parties? Well, a regional party covering 20% of the country will get 20% of the electoral college. Not enough to install their own candidate, but maybe enough to make a deal as to who will be president. But unlike deals in parliaments, this is a one and done thing. They can’t remove their support at a later date and cause the government to fall. So they have very little leverage. Which means that they will probably just support the candidate that they are the most ideologically similar to; there’s little that they can gain from negotiations.
Given that, why bother running as a regional party? Why not just join the larger party they are the most ideologically similar to, before the election? This will give them some leverage as they are part of the ruling party. And the combined party may win more states than they would each individually.
So it seems that the current electoral system in the USA inevitably pushes towards a dual-party system. Groups that would be separate parties in other countries amalgamate into the two main parties.
What changes could be made that would shift away from this equilibrium? The most obvious would be a national popular vote. This would remove some of the pressure towards two parties; a charismatic third-party candidate could win by claiming, say, 40% of the vote. This would be a lot easier if the election shifted towards two-round elections or some form of transferable vote. Then a third-party candidate just needs 33%+ of the vote in the first round; following that, it’s perfectly plausible they would win a 1-on-1 contest with whichever of the main two parties’ candidates remains. Have a few elections like this, and there probably won’t *be* just two main parties any more. A toy example of the difference is sketched below.
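To illustrate with made-up numbers (the 40/33/27 split and the vote-transfer pattern are assumptions, not data), here is a small Python sketch of how the same first-round ballots produce different winners under single-round plurality vs. a two-round runoff:

# Hypothetical first-round support: A and C are the two established
# parties, B is the third-party challenger.
first_round = {"A": 0.40, "B": 0.33, "C": 0.27}

# Single-round plurality: the first-round leader wins outright.
plurality_winner = max(first_round, key=first_round.get)
print("Plurality winner:", plurality_winner)  # A, on 40%

# Two-round runoff: the top two advance and go head-to-head.
top_two = sorted(first_round, key=first_round.get, reverse=True)[:2]  # A and B
# Assume C's voters split 80/20 for B over A in the runoff.
runoff = {
    "A": first_round["A"] + 0.20 * first_round["C"],  # 0.454
    "B": first_round["B"] + 0.80 * first_round["C"],  # 0.546
}
runoff_winner = max(runoff, key=runoff.get)
print("Runoff winner:", runoff_winner)  # B, on ~55%, despite trailing in round one

Same ballots, different institutions, different winner: under plurality a vote for B looks wasted, while under a runoff B is a perfectly viable choice, which is exactly the coordination dynamic described above.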
But without changing the system, the tension towards two main parties in the USA will be extremely strong. There is no plausible path for a third party to grow, no matter what state-based strategy or local-election-based strategy or whatever communication strategy they follow. I don’t like the expression “don’t hate the player, hate the game”, but it’s very apt in this case.
A hot take with low epistemic confidence: the U.S. was a de facto multi-party system until 2008 (Democrats) and 2016 (Republicans) with some residual factionalism since.
The primary system is one of the main drivers here: uniquely, any political movement can hijack a party and win a nomination in the U.S. (even in two-party systems like Britain's, internal leadership elections usually return a bland moderate; see the Conservative Party's last ten "PM of the day"s).
But this has since changed: the last free and fair Democratic primary was in 2004. In 2008, Clinton won the popular vote (but lost the nomination). In 2016, various shenanigans like leaked debate questions and superdelegates guaranteed a Clinton victory. In 2020, the party collaborated with the Biden campaign to bribe other candidates to drop out and endorse Biden (most obviously Buttigieg, who dropped out the night before Super Tuesday for Sec of Transportation, but also Klobuchar chairing the very powerful Senate Rules Committee, O'Rourke getting Texas in 2022, and of course Harris getting the Vice Presidency). In 2024 the party didn't even bother holding a primary – a smooth palace coup replaced a candidate with minimal infighting (something Pelosi ought to be congratulated for far more than she is).
The Republicans did the same with a "Trump or nothing" policy, both nationally and in local primaries, since 2016. While different political factions can fight within that space they are fundamentally tied to Trump's whims.
Multi-party elections only remain in either low-level primaries where the DNC / Trump don't care or, rarely, where insurgent populists manage to overthrow establishment figures (such as AOC or very nearly Brandon Herrera). The existence of these insurgencies does show that a multi-party system still exists in the United States, but it is far weaker than it once was – compare the Dixiecrats as explained in other comments in this thread.
Far more interesting is the increasing institutional capture of the DNC by progressives. For example, the DNC nullified an election because someone of the wrong gender won. The Hogg case actually shows a nice synthesis: progressives are using non-democratic means to hold down their fellow progs in exchange for institutional power. I think that neatly marks the end of the multi-party era: even anti-party insurgents are using the party against their allies.
>In 2016 various shenanigans like leaked debate questions and superdelegates guaranteed a Clinton victory.
Clinton won the popular vote in the primaries, and had enough delegates to win even without superdelegates.
Yes, and Putin would win re-election without his electoral interference. That does not mean the elections are free and fair, and you are correct that 2016 was significantly less elite-run than later years. Of course, I'm drawing from a small sample size here – if Clinton only got, say, 20% of the vote she would not have won no matter what. But this sort of finagling at the margin really does matter, and given how polarized the Democratic primary was in 2016 I don't see her losing (cf. how neither party will ever get 60 seats in the Senate).
I re-read the ACX dictator book club post on Chavez and it pretty much convinced me that any sort of constitutional rewrite around elections is way too dangerous to attempt in the US right now. In Venezuela, Chavez's party used their ~52% electoral position to gerrymander their way into 95% of the seats at the constitutional convention and then just did whatever they wanted. Very easy to imagine (your least favorite political party) doing that today in the US.
Similarly in Hungary: when Orbán's party won, they changed the constitution, and since then they keep winning, because the conditions to defeat them are almost impossible to meet.
I think this is a general fragility of democratic systems: when someone gets too many votes once, they are free to change the system so that "having too many votes" becomes a requirement; and then you can't change the system back, because you either don't have enough votes to do that, or you do but then you don't have the incentive.
Basically, it's a ratchet: many parties -> a few parties -> two parties -> one party.
It is possible to move legally in only one direction. A small party may accidentally become large and then make a law that small parties don't matter anymore. But when a large party accidentally becomes small, it can't make the opposite change, because it no longer has the power to do that.
I agree that the combination of Plurality Voting (I dislike the term FPTP with some intensity) and a Presidential system is a major factor. Layered on top of this, I see a few other factors contributing.
The US inherited a cultural tradition of a two-party system from Britain. Britain's two-party tradition started forming in earnest in the Restoration era with the Exclusion Crisis factions (c. 1680) to which the labels "Whig" and "Tory" first attached, and had roots quite a bit further back. The American Revolution happened during an era when the "Whig Ascendency" of the early-to-mid 18th century (a 1.5 party system where the Tories were consistently a powerless minority) had broken down and re-formed into a functioning two-party system (the "Northite" and "Rockinghamite" factions, named for their leaders at the time; the forerunners of the 19th century Conservatives and Liberals respectively). Britain having 3+ truly major national parties tends to happen only during realignment periods: the breakdown of the Whig Ascendency, the rise of Labour after WW1, and the current post-Brexit breakdown of the Conservative party. Between these, Britain has had much stronger third parties (both national and regional) than the US, but only two really major parties at any given time (Whig/Tory before the Whig Ascendency, Conservative/Liberal after it until the Home Rule crisis, Unionist/Liberal until WW1, and Conservative/Labour from WW2 through Brexit).
The US established Universal Male Suffrage much earlier and somewhat more gradually than Britain did. The replacement of the Liberals by Labour as a major party, and the subsequent survival of the Liberals (later the Lib-Dems, after they merged with the minor Social Democratic Party in 1988), happened in large part as a consequence of Britain abolishing property qualifications for voting by men in 1918 (women were also enfranchised by the same act, but were still subject to property qualifications until 1928), more than doubling the franchise. This immediately elevated Labour, which had formed as a political movement among disenfranchised urban workers, to major party status. There was no real counterpart to this in the US, since property qualifications were abolished piecemeal at the state level, mostly in the early 19th century, so the political labor movement of the late 19th and early 20th century happened among enfranchised workers and mostly operated within the two-party system.
The US has a more bottom-up party organization system than most other democracies I'm familiar with, especially since the 1960s when primary elections became central to the nomination process. Even before that, US parties going back well into the 19th century tended to have nomination processes for local, state, and federal offices that were driven at least as much by grassroots members as by the party leadership. This makes it easier for a political movement to work within a major party (taking over the label in whole or in part), while in many other countries making a new party is the only option if no existing party's leadership is willing to nominate your candidates.
There's also some historical accident over the political behavior of regions with distinctive cultural identities and political interests. A major genre of semi-major parties in other countries is parties organized around regional interests, especially separatist or particularist movements: e.g. the Bloc Quebecois in Canada, the SNP in modern Britain, or the Irish Parliamentary Party in late 19th/early 20th century Britain. The US had one of these between the late 19th century and the 1970s, the "Dixiecrats" or "Southern Democrats" in the former Confederacy. The Dixiecrats were officially a faction within the Democratic party, but they really operated as a third party: Congressional voting patterns during this era tended to show Northern and Southern Democrats voting differently on many major bills, and several times in the mid-20th century the Southern Democrats dissented from the national Democratic party on Presidential nominations. In two elections, there were separate Dixiecrat candidates for President (Strom Thurmond in 1948 and George Wallace in 1968); in one (1964) several state Democratic parties in the South endorsed the Republican candidate; and in four (1944, 1956, 1960, and 1964) at least one state had a slate of Dixiecrat "Unpledged Electors" on the Presidential ballot. Total electoral votes won were 39 in 1948, 15 in 1960, 47 in 1964, and 46 in 1968. In all cases except for 1964, the intended strategy was to deny an election-night majority in the Electoral College to the two major party candidates and negotiate policy concessions in exchange for support in either the formal Electoral College vote or in the contingent election after the Electoral College deadlocks. The historical accident is that the Dixiecrats usually called themselves a faction within the Democratic Party rather than (as has happened in other countries) consistently calling themselves third parties but often forming coalitions and coordinating electoral strategy with a particular major party.
Except for your strange aversion to FPTP (isn’t that what’s happening?) that’s a top notch comment.
Thank you!
I have two objections to FPTP terminology. The first is that there's no "post", no absolute or percentage threshold of votes that one must get in order to win the election. The second is that there isn't much "first", as the election is conducted in a single round in which the outcome is unaffected by the order in which ballots are cast or counted. On the other hand "Plurality" (or more precisely, Single-Member Plurality) describes the procedure perfectly: whoever receives the most votes (i.e. a plurality of the votes) is elected.
I know of a few actual election procedures and at least one hypothetical election procedure that would be better described by FPTP than is Plurality voting.
Under the election procedures recommended by Robert's Rules of Order (i.e. RRO voting), there is a fixed threshold (usually 50% of valid ballots cast excluding abstentions, but organizations may adopt different thresholds in their procedures and bylaws) required to achieve election. The convention, assembly, or committee doing the electing casts ballots repeatedly, with each member voting for one member per open office. Votes are tallied, and if someone reaches the required majority, they are elected. If nobody reaches the threshold, the procedure is repeated as many times as needed until someone gets a majority. If followed strictly, this can lead to protracted deadlocks, like the 1924 Democratic National Convention which deadlocked between the initial front-runners Al Smith and William McAdoo as their Presidential nominee for 103 ballots before eventually settling on John Davis (2.8% on the first ballot) as a compromise candidate.
Elections in the Roman Republic used a single round of voting, but votes were tallied by "tribes", each of which voted in a particular order with the posher tribes voting first. Magistrates required the support of a particular number of tribes to be elected, and balloting stopped as soon as the required threshold was reached. I'm not sure what happened if nobody got the required threshold, whether they'd re-vote like RRO, or select the plurality winner, or something else. But I get the impression that it was fairly rare for the last several tribes to have the opportunity to vote before elections were decided.
Hypothetically, you could also have an election procedure where candidates collect petitions of support over an extended period of time, with some absolute threshold of supporters required to win the office. Whoever turned in however many valid signatures would then be elected. This would fit the FPTP label perfectly.
Interesting distinction. I never thought that there wasn’t an actual post to get past, but you are right. Funny enough the single transferable vote does have a post - the quota. Although it’s not necessary to get past it if you are the last chump remaining.
What did you mean when you said "isn't that what's happening?"
That the elected candidate was first past the post. As Erica pointed out though, there’s no quota and therefore no post. In my defence, where I live, the election is always won by a majority not a plurality. That’s not necessary though.
I share that aversion: the "t" in FPtP should be lower case.
Thanks for the detailed and thoughtful response. A lot of good points there.
My pleasure.
Minor nitpick, not really material, but it's worth noting that while Canada does have a third party that is capable of winning elections at the provincial level, it has never won an election at the federal level and has only once served as the Official Opposition. So Canada is not too far off from only having two parties.
I rather seriously disagree. First, there's a pretty big practical difference between a majority government and a minority government, especially a weak minority government. Smaller parties having the power to coalition with a larger party and negotiate for items of interest is a significantly different situation from one where only two parties can ever win non-trivial numbers of seats.
Second, provincial level politics plays a big role in shaping peoples' lives. A party being capable of forming a government at a provincial level makes it quite significant as a practical force even if there's no chance it will ever form a government at the national level.
Both good points, but on the second point, given that the provincial parties have no formal ties to the federal parties, and given the variation within American parties at the state level, I'm not sure the existence of a third party makes such a big difference. What I mean is, a state-level Dem party can be as left-wing as the left-most provincial NDP, and the rightmost provincial NDP can be as conservative as, maybe not the most conservative state Dems, but, still much more conservative than the federal party.
I don't know that we get wildly different types of provincial government merely from the existence of a third provincial party.
Your overall conclusion -- "without changing the system, the tension towards two main parties in the USA will be extremely strong. There is no plausible path for a third party to grow" -- is correct, unfortunately. The USA hasn't had an insurgent third party successfully replace one of the two main ones in nearly 180 years now, and that one required a national civic crisis serious enough to literally spark secession and then a brutal civil war. No attempt since then has gotten anywhere near success.
Others here may describe additional proposed solutions; many have been written about for many years. I've no idea anymore which of them might ever come to pass, nor what national-crisis type of scenario it would take. I'd like to think there is some path short of secession-and-war.
I will comment on one relatively small aspect: "a regional party covering 20% of the country will get 20% of the electoral college. Not enough to install their own candidate, but maybe enough to make a deal as to who will be president."
That would be a longshot at best. In the first place a regional party covering 20% of the country could at _most_ get 20% of the EC votes, only if they "run the table" in their region and win every state. Remember that the EC votes are mostly assigned state by state all-or-nothing with no 50%+1 requirement.
And anyway, negotiations such as you're imagining are not really achievable. If a presidential election fails to give one ticket a majority of the Electoral College count, the election goes to the House of Representatives (this happened once, 200 years ago). And the House votes state by state, not proportionately, with no majority requirement for a House state delegation. [E.g. if the reps from a state having 14 reps vote 6-5-3, the candidate getting the six wins that state's single vote in the House election.] So unless our hypothetical regional party also has majorities in the House delegations of a few states, they have no leverage in the resulting House of Representatives mini-election that chooses the president/vice-president. Unlike in a parliamentary system, the fact of their candidate having gotten 20 percent of the EC votes would not provide any followup leverage.
So are we effectively stuck with two mostly-similar, if you ignore the aesthetics, centrist-ish parties passing the ball among themselves in perpetuity, giving an illusion of choice to the electorate?
It's worth remembering that in living memory (1968) George Wallace won 5 states as what was, effectively, a regional party such as you describe. And those 5 states didn't even act as a spoiler: Nixon got an electoral majority anyway.
Eh...."effectively" is doing a lot of work there. I think OP is talking about something more than the one-offs around a single individual running for a single office. Those really aren't a "party" in any practical or lasting sense.
The US does have some surviving small regional parties, that go in with one of the two main parties if a representative from them is elected to Congress (which makes sense; you'll get a lot more done as part of the Democratic party voting bloc than as the sole Democratic-Farmer-Labor party member, or as one of the Democratic Socialists of America who seem to be more interested in purity spirals and shooting themselves in the foot).
So I think that tendency does make it much harder for a sizeable third party to get off the ground, because maybe it'll do great in one state, maybe it'll do great in a particular region, but can it win support and seats all across the country? Generally the answer seems to be "no", and of course then the "if you vote for a third party you are wasting your vote" messaging reinforces that difficulty (see all the blame about "whoever voted for third party candidates instead of Hillary/Kamala, it's your fault the fascist is in power now!")
It would, on the face of it, make a lot more sense for the US to have four or five big(gish) parties - a very left/progressive one for the socialists currently hanging on at the fringes of the Democrats, something like the Christian Democrats for the religious voters shoved off to the Republicans, etc. But I don't know how, in the current system, that will ever happen.
Why would you want more parties? Have you seen what happens with other democracies with multi-party systems?
The US system might not be the most representative, but it has lots of advantages.
> Why would you want more parties? Have you seen what happens with other democracies with multi-party systems?
Yes. I want the government to spend more time gridlocked. "No man's liberties are secure while Congress is in session."
Well, that saying is from the 19th century. It might have been true once, but today the government can regulate the hell out of you with no Congress involved. Sometimes Congress (or another parliament) is the only check.
I've observed in Bay Area California local politics in the 2010s that there tended to be four-ish factions:
1. Labor Democrats, usually the dominant faction. These were pretty strongly aligned with the public employee unions in terms of pay and benefits (one of the dominant political issues at the time) and also in terms of deferring to senior full-time city employees on policy issues.
2. Progressive Democrats. These were mostly ideological progressives, focused on a combination of social issues and urbanism.
3. Reform Democrats, defined by at least partial opposition to public employee union interests, especially on pension reform.
4. Republicans and Libertarians. Within the Republicans there were significant factions (establishment, populist, and small-l libertarian), but they tended to be electorally insignificant except when there was a candidate who got significant support from at least two of the three, some crossover support from moderate Democratic voters, and often the informal support of the local Libertarian party as well. The Libertarian Party was pretty small, but was quite a bit better organized than libertarian Republicans.
Officially, local elections in California are "nonpartisan", meaning that there's no nomination process and candidates don't have party affiliation listed on the ballot. But if you're paying a little bit of attention to endorsements, it's usually pretty easy to figure out both party and faction.
The boundaries between groups 1 and 2 were pretty fuzzy, with a lot of elected officials seeming to have one foot in each. Groups 3 and 4 had a bit more distinction between them, but operated in coalition more often than not.
I suspect it's actually primarily because the US is so huge, that paradoxically this means there have to be lots and lots of "parties" to represent the many constituencies and demographics, but these "parties" then converge into two broad coalitions that are referred to as parties.
It's not obvious to me that it's sensible to speak of the US having a two-party system and of various European countries having multi-party systems. Many of the latter countries seem to have a whole lot of parties whose platforms are almost identical! On so many issues, there's clearly more choice, representation of more groups/positions, and thus more democracy in the US than there is in some (many?) of these multi-party states, despite the "only two choices".
There are also some serious costs (e.g. polarization), but I think the benefit to representation of having umbrella parties with numerous sub-groups within them (as opposed to top-down homogeneous parties) is shockingly underappreciated in these comparisons.
This is especially apparent when looking at the platforms of the two parties. They were *very* different in 1860. (In 1828, when the Democratic Party was founded, the platform was described as a mash of conflicting positions - anti-tariff here, pro-tariff there - and also downstream of its Presidential candidate, much as today. https://mvhm.org/the-election-of-1828-the-candidates-their-platforms/)
The US is widely thought of as having had successive Party Systems, brought about by major shifts in the coalitions led by the two majors. That the funding channels and party platform administrations persist as institutions doesn't say much in light of their continual changes in membership. We're considered to be in our Sixth system now, and I'm sure there are historians ready to declare a Seventh, possibly due to Trump's takeover of the GOP, but more soberly due to realignment of voters on issues such as trade policy and immigration.
Isn't the existence of the primary system the salient (mostly-) unique feature of the US?
In most political systems, the party leadership decides what the acceptable range of positions within the party is, and if a candidate's positions are outside that range then they need to go off and join/form a different party. This is how new parties form.
In the US, the position of the R/D party is whatever the primary voters say it is. So if you're a popular politician with opinions not entirely congruent with the party leadership then you can still get elected on a big party ticket. So there's never much of an incentive to try to form your own party when you can instead try to drag one of the big two in your direction.
The United States has been a two party system during most of its existence. The push for primaries started in about 1900. Since 1972, most delegates to the national party conventions have been selected in primaries or caucuses.
In 1968, the Democratic Party was badly divided over the Vietnam war. The vote totals in the primaries were:
2,914,933 Eugene McCarthy
2,305,148 Robert F. Kennedy (assassinated)
383,590 Lyndon B. Johnson (withdrew)
166,463 Hubert Humphrey
With Kennedy and Johnson out of the race, the Democratic National Convention had to decide between McCarthy and Humphrey. McCarthy had the most delegates of all the candidates, but 61% of the delegates were uncommitted. The Convention chose Humphrey.
This was a bit less outrageous than it seems at first glance. Humphrey had won a bunch of delegates in states that held caucuses rather than primaries, and most of Kennedy’s delegates favored Humphrey. Also, Humphrey was polling better than McCarthy, suggesting he had the best chance of winning the election. However, there was enough backlash that the Democrats created the McGovern–Fraser Commission, which gave us the current system.
In 1972, Democratic primary voters selected anti-war candidate George McGovern as the party nominee, who proceeded to lose in a landslide to Richard Nixon. This might have led the party to conclude that letting primary voters select the party nominee was a bad idea, but it didn't.
Similarly, the Republican Party could have looked at this as an opportunity for them to nominate candidates who could win while the Democrats would be stuck with whoever their voters chose. Instead, the Republican Party copied the Democratic Party reforms.
Essentially what happened is that because the two major parties formed a duopoly, it became unacceptable for party nominations to be controlled by party officials. It wasn’t practical to create a viable anti-war party, so voters couldn’t vote against the war unless one of the two major parties nominated an anti-war candidate. Given that reality, it was undemocratic to allow party leadership rather than voters to select the party nominees. This is a generic issue with two party systems. The Vietnam War was the issue that happened to create a tipping point, but once the change was made it couldn’t be undone.
In other words, I think you have cause and effect mostly backwards. It’s true that the political parties in the United States are whatever the primary voters and caucus goers say, and that does reduce the incentive to create third parties. The reason that the primary voters and caucus goers control the parties, though, is because it’s impractical to create third parties that actually win elections.
Why did the Republicans copy the Democrats?
Because voters who weren't solidly Democratic or Republican were now faced with the meta-choice of siding with the party that would let them choose from among several candidates vs the party that would shove a single designated spokesman out the door of a smoke-filled room and say "it's him or nobody, at least from us". And as voting isn't mandatory in the United States, the sort of person who doesn't much care about being able to chose among candidates probably isn't going to be voting for anyone.
Once either party goes to a primary system, the other party is highly incentivized to follow suit if they want to keep winning elections.
Thanks for the info. My US blinders made me not realize that the US primary system is different from that of so many other countries. I looked up Australia, the UK, France, Germany, and Sweden. For all of them, the party selects the candidate, though some have more formal procedures than others. But I assume that it is always party insiders of some degree doing the selecting.
Minor nuance from Canada: We don't really have more than two main parties. The only parties that ever form a government are the Conservatives and the Liberals. We have a smattering of smaller parties, of course. Last government the NDP (left-wing) formed a coalition with the Liberals (centre-left), but everyone knew the NDP was the junior partner. The Green Party wins between 1 and 5 seats.
Well, there's also the Bloc Quebecois, but that's very much because of our unique history, not so much our system. If things had gone a little differently in the US, I can totally imagine there being a Texas Party. They wouldn't run presidential candidates (or at least they wouldn't win), but they'd get enough Congressmen to sometimes make the ruling party have to negotiate.
If electoral votes were awarded by Congressional district - rather than winner takes all for each state - there would be a lot more room for a 3rd party presidential candidate. This doesn't take any changes to the Constitution, just state laws. Two states, Nebraska and Maine, already use this system. Then you can imagine a 3rd party candidate winning enough electors to deny the main party candidates a majority - which would perhaps generate more support for such a 3rd party candidate. It comes up in my state legislature from time to time, but the objection seems to be that candidates will lose interest in catering to any state that is not winner-take-all.
Changing Congressional elections also can be done by Federal legislation, as the Constitution gives Congress the power to prescribe "Times, Places and Manner of holding Elections for Senators and Representatives", overriding any conflicting State laws. Current federal law requires single-member districts for House elections, but they could conceivably repeal it and require multi-member districts with some form of proportional or other mixed-member election procedure instead.
Senators being elected one at a time is hardcoded in the Constitution, but requiring a different election procedure should also be legally possible by ordinary legislation.
That said, I expect any such proposal would face a strong headwind from current members of Congress who aren't keen to fundamentally change the system that elected them in the first place.
I write with some interesting developments in the intersection of law and AI. In Minnesota we just had a case where a county attorney was called out by a judge for using a brief with 6 hallucinated cases...the brief was written by AI, which is theoretically fine, but the AI invented 6 cases that do not exist which were cited as the basis for the law supporting the brief...that's bad. That's potentially sanctionable.
There have been many cases like this, and courts see it as tantamount to lying to them. It's taken very, very seriously. To my knowledge this is the first time a STATE attorney has been implicated in this problem.
...what I think is interesting here is that this is a basic problem which has fairly swift and predictable results. For 2-3 years now we've had the problem of "AI makes up cases that don't exist, attorney gets in trouble."
...it demonstrates an obstacle to getting an AI *rather than* a lawyer to handle your case, and it's a fairly basic one that doesn't exist in meatspace (I could describe *those* problems at length but they're boring and unsexy).
The problem of course can be solved by specialized legal AI tools or prompt engineering, I'm sure, so it's only a problem for unsophisticated people...but the whole promise of having auto-lawyers was to help unsophisticated people handle cases themselves.
> ...that's bad. That's potentially sanctionable.
What would the argument be for not actually sanctioning it?
"Sanctions are for intentional conduct and I'm clearly too stupid to have the mens rea for intent".
Who said sanctions are for intentional conduct? Sanctions are for deterrable conduct.
I want the AI to use tool calls to check that citations it makes are to books that actually exist. This is fairly easy to do with current technology, I just haven’t got round to coding it up yet. (Some of you are thinking - surely, you can ask the AI to code it for you)
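Something like this, perhaps (a minimal sketch of the idea, assuming the `requests` package and Open Library's public search endpoint; a miss there is a flag for human review rather than proof of hallucination, since its coverage is imperfect):

```python
# Rough sketch of a citation-checking tool call: before trusting a book
# citation, look the title/author up against Open Library's search API.
import requests

def citation_seems_real(title: str, author: str) -> bool:
    resp = requests.get(
        "https://openlibrary.org/search.json",
        params={"title": title, "author": author, "limit": 5},
        timeout=10,
    )
    resp.raise_for_status()
    # numFound == 0 means no known book matched -- flag for a human.
    return resp.json().get("numFound", 0) > 0
```

Wire that up as a tool the model is required to call before emitting any citation, and nonexistent books get flagged before they reach the user.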
In a recent experiment, inspired by the continued discussion of hallucinated citations, R1 gave me the table of contents of a book that doesn’t exist. It hallucinated the citation, and then when pressed for further details of the citation, hallucinated the table of contents of the non-existent book.
The training really needs to get them to not aim to please so much.
What are the use-cases where you believe a legal chatbot would be helpful for lay people?
Chatbots hallucinate principles of law as much as anything else, because they are auto-complete token-generators with bells and whistles. Getting an answer that is 10% wrong but gut-checks right can be far more dangerous than just not knowing.
If your attorney misses a litigation deadline because they screwed up how many days you have to file something and you lose the case, you can bring a claim against their liability insurance and appeal the decision. If you screw it up yourself because you listened to a chatbot, you are out of luck.
It's really hard to say. I don't see how an AI is much better than a google search for handling, say, a speeding ticket or a legal dispute with the city over when you need to shovel your walk.
Anything with higher stakes than that, say a DUI or a slip-and-fall lawsuit at your grocery store, you *REALLY* should be getting a lawyer (and honestly I think any sane person would)
I don't see present AI as helping "normal" people so they don't have to resort to using lawyers. Present AI is far more useful at...let's say...writing a 50 page summary (with citations) on the state of some nuanced area of law (commercial fishery regulation) that a more general lawyer (say one that does agency law) can then read as a gateway to learn about that sub-sub-sub area...or augmenting an electronic discovery review of 1 million emails to find the crumbs of the crumbs of the crumbs that diligent human searchers missed...in short, it's good at supplementing a legal team: turning a 75% chance of success into a 90% chance of success, or turning a stone-cold loser of a case into a case where maybe the defendant wins 10% of the time...these are not typically the kinds of use cases "normal people" are involved in.
Right. My apologies if I misunderstood the thrust of your post; I may not be following what you are highlighting as re: problems/promises.
Can you use prompt engineering to avoid hallucinations?
in my (very limited) experience using AI it's often trivially easy to avoid the worst hallucinations by simply saying "use citations from published sources that really exist"
or like "Assume this brief will be filed in the southern district of Florida before a federal judge, so the law must be accurate and the citations must be valid"
Also it's...not hard to check whether a citation is real, so any attorney using an AI generated brief could cite check it...a process that takes 15-30 minutes for most filings and which you should be doing anyway.
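For case law specifically, even the cite-check itself can be semi-automated. A sketch using CourtListener's citation-lookup endpoint (a real Free Law Project service, though the exact path and response fields here are from memory, so treat them as assumptions and verify against their API docs):

```python
# Sketch: POST a brief's text to CourtListener's citation-lookup API,
# which parses the citations and reports which resolve to real opinions.
import requests

API_URL = "https://www.courtlistener.com/api/rest/v3/citation-lookup/"

def check_brief_citations(brief_text: str) -> None:
    resp = requests.post(API_URL, data={"text": brief_text}, timeout=30)
    resp.raise_for_status()
    for hit in resp.json():
        # A citation matching zero opinion clusters is a red flag.
        found = bool(hit.get("clusters"))
        status = "found" if found else "NOT FOUND -- check by hand"
        print(f'{hit.get("citation")}: {status}')
```

None of which replaces reading the cases, of course; a citation can be real and still not say what the brief claims it says.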
I don't trust these things yet to get within a mile of anything I'd write professionally. AIs still make stuff up too much for my comfort: cases, facts, whole areas of law. They might be "better than a normal person" at not making stuff up in a legal context, but I wouldn't, e.g., pull a normal person off the street and outsource my brief-writing to them.
Agreed. AI is really bad at doing that thing where 95% of the output is correct, but the 5% that isn't is in some critical nuance of law where you'll get caned. I always say that in law it's better not to be wrong than to be right, and this is where AI as it currently stands really lets you down.
You can also have one AI fact-check another AI.
At a minimum if you're going to rely on cases cited by an AI then you should ask another AI to look critically at those cases and whether they are really relevant.
Come to think of it, the ideal way of doing this might wind up looking very much like an actual adversarial trial, with one AI lawyer putting forward a case and the other AI lawyer poking holes in it.
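A minimal sketch of that verifier pattern, assuming the `openai` Python client (the model name and prompt are placeholders, and ideally the second model would come from a different vendor, since the whole value is in uncorrelated errors):

```python
# Sketch: have a second model play opposing counsel and attack the
# first model's citations. Placeholder model name; any chat model works.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def adversarial_review(draft_argument: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder verifier model
        messages=[
            {"role": "system", "content": (
                "You are opposing counsel. For each authority cited in the "
                "draft below: does it appear to exist, does the claimed "
                "holding match the case, and what can you not verify?"
            )},
            {"role": "user", "content": draft_argument},
        ],
    )
    return response.choices[0].message.content
```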
> You can also have one AI fact-check another AI.
That's a great idea. A 1-in-10 chance of a hallucination becomes 1-in-100.
Only if the probabilities are independent. If both AIs are trained on the same data, they might hallucinate the same thing.
And then they'd form a stable consensus!
We can see here the tension at play in being a lawyer, too. I have kind of been stewing over an AI thought experiment, or moral dilemma, relating to this for about a week, and I'll post about it later maybe.
The simple fact is, I will give different answers to a question in different circumstances if posed a legal hypothetical, because as a lawyer I'm called on to act in two different roles. Sometimes I'm an ADVISOR, telling a client the cold hard truth about what the law is and how a jury will perceive it; this is in service of my terminal goal: giving an accurate assessment of the law. Sometimes I'm an ADVOCATE, telling a judge or a jury what I need to in order to win. In this latter capacity, I can't and don't lie, but I'll put the law and the facts in the very best possible light to achieve my terminal goal: winning.
An AI is playing both roles just like a lawyer does, but I think it has a bad grasp of what works and doesn't, and when to play each role. You WANT a lawyer who will pound the table and insist you're innocent and the law is CLEARLY on your side...in court. You want that same lawyer to give you a good hard slap in the face in the conference room and tell you that you are in BIG TROUBLE here mister. You want a lawyer that will put a positive spin on the bad facts of your case...but you don't want a lawyer that will just flat out make stuff up (because that's easy to catch, and then you look terrible).
It was so great to meet everyone at LessOnline this past weekend (including Scott and Brandon Hendrickson)! I'll be at Lighthaven all through Summer Camp and Manifest.
I have officially launched my Substack, Letters from Bethlehem. I plan to write about YIMBY, my adventures in renovating my 125-year-old house, MAiD, and untangling the tangled ball of ideas that currently makes up the modern disability rights movement.
I want it to be known that the "Bethlehem" in my username refers to Bethlehem, PENNSYLVANIA, the town made of steel and Gilded Age Capitalism. (Not the other Bethlehem.) It's about an hour north of Philadelphia. My first post is about moving there. (Edit: here's the link: https://amandafrombethlehem.substack.com/p/so-you-want-to-move-to-a-streetcar )
When I started going to the Philly ACX meetups, I'd introduce myself as, "Hi, I'm Amanda, I just drove down from Bethlehem. How are you?" I made it my name on our Discord, and it stuck.
I say this because I ran into Scott at LessOnline. He congratulated me for winning the Book Review Contest, and apologized for thinking I was Christian at first.
...I am not. I am very stereotypically objectivist (at least for aesthetic reasons.) (But politically I've mostly mellowed out into a neolib at this point.) Just wanted to clear that up.
Bethlehem and YIMBY is a tough combination. The locals seem especially enamored with "historic preservation", or at least that's the impression I get from reading the news there. E.g. this candidate for mayor: https://www.wfmz.com/news/election/i-feel-like-its-a-real-duty-to-run-for-mayor-bethlehem-councilwoman-mayor-candidate/article_a29e1a0e-e308-4c5e-acad-32ac197e9ae4.html
> She thinks too many apartment buildings are going up in historic areas. [...]
> She said she has nothing against rentals, but there is not enough affordable housing, which she believes is the top issue the city is facing.
It's also just on the edge of the NYC Covid exodus, which drove up housing prices quite a lot in 2021-2.
Since you're still in the Philadelphia area, I recommend the Tastykakes (see my profile picture).
Oh yes. The fights about tearing down the old Boyd Theater and building that apartment complex were vicious. I had a front-row seat to the long and troubled construction process.
It might be an unpopular opinion, but historic preservation has gotten out of proportion, to the point of transforming entire sections of cities into "preservation zones". This basically kills the place; cities are living things that need to evolve and change.
Only the dead don't change. Cities are not museums.